commit a77889d1b091dd6783db3e1b059cc378d37f9982
Author: Kyle McMartin <kyle@dreadnought.i.jkkm.org>
Date:   Wed Jun 16 15:06:54 2010 +0100

    provide a knob to {en,dis}able radeon_pm

commit ef24e3e0e644621e2c98d38f27f4b25d23875256
Author: Kyle McMartin <kyle@dreadnought.i.jkkm.org>
Date:   Wed Jun 16 14:51:26 2010 +0100

    Merge local branch 'drm-since-1067b6c'
    
    % git log --no-merges --oneline 1067b6c..v2.6.35-rc3 -- drivers/gpu/drm
    8d86dc6 Revert "drm/i915: Don't enable pipe/plane/VCO early (wait for DPMS on)."
    b62e948 drm/radeon: don't poll tv dac if crtc2 is in use.
    d294ed6 drm/radeon: reset i2c valid to avoid incorrect tv-out polling.
    4eb3033 drm/nv50: fix iommu errors caused by device reading from address 0
    7504794 drm/nouveau: off by one in init_i2c_device_find()
    55a4c5c nouveau: off by one in nv50_gpio_location()
    6d69630 drm/nouveau: completely fail init if we fail to map the PRAMIN BAR
    1eb3810 drm/nouveau: match U/DP script against SOR link
    f712d0c drm/radeon/kms/pm: resurrect printing power states
    0fcbe94 drm/radeon/kms: add trivial debugging for voltage
    a081a9d drm/radeon/kms/r600+: use voltage from requested clock mode (v3)
    4d60173 drm/radeon/kms/pm: track current voltage (v2)
    aa1df0f drm/radeon/kms/pm: Disable voltage adjust on RS780/RS880
    cbd4623 drm/radeon/kms: fix typo in printing the HPD info
    c9e75b2 drm/radeon/kms/pm: add mid profile
    f8ed8b4 drm/radeon/kms/pm: Misc fixes
    8de016e drm/radeon/kms/combios: fix typo in voltage fix
    148a03b drm/radeon/kms/evergreen: set accel_enabled
    9b8eb4d drm/vmwgfx: return -EFAULT for copy_to_user errors
    e902a35 drm/drm_crtc: return -EFAULT on copy_to_user errors
    fc2362a drm/fb: use printk to print out the switching to text mode error.
    9bad145 drm/radeon: fix PM on non-vram cards.
    5a79395 drm: Propagate error from drm_fb_helper_init().
    a3524f1 drm/i915: fix oops on single crtc devices.
    e7b526b drm/i915: Move non-phys cursors into the GTT

commit e1442526a8b1b9a0ffd3f8778d2ff40597ef4662
Author: Kyle McMartin <kyle@dreadnought.i.jkkm.org>
Date:   Mon May 31 12:38:09 2010 +0100

    nouveau is not in staging on Fedora

commit fcd86a22bc88817a417185602e90451a3c5a25b8
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Thu Jun 3 07:19:45 2010 -0700

    Merge branch 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6
    
    * 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (41 commits)
    % git log --oneline --no-merges a652883..1067b6c
    d8dcaa1 drm/radeon/kms: make sure display hw is disabled when suspending
    d8bd19d drm/vmwgfx: Allow userspace to change default layout. Bump minor.
    991b7b4 drm/vmwgfx: Fix framebuffer modesetting
    7c4f778 drm/vmwgfx: Fix vga save / restore with display topology.
    c0db9cb vgaarb: use MIT license
    2d6e9b9 vgaarb: convert pr_devel() to pr_debug()
    ce04cc0 drm: fix typos in Linux DRM Developer's Guide
    84d88f4 drm/radeon/kms/pm: voltage fixes
    9264587 drm/radeon/kms/pm: radeon_set_power_state fixes
    c5e8ce6 drm/radeon/kms/pm: patch default power state with default clocks/voltages on r6xx+
    9349d5c drm/radeon/kms/pm: enable SetVoltage on r7xx/evergreen
    7ac9aa5 drm/radeon/kms/pm: add support for SetVoltage cmd table (V2)
    cb5fcbd drm/radeon/kms/evergreen: add initial CS parser
    fbf8176 drm/kms: disable/enable poll around switcheroo on/off
    fc5ea29 drm/nouveau: fixup confusion over which handle the DSM is hanging off.
    afeb3e1 drm/nouveau: attempt to get bios from ACPI v3
    8b281db drm/nv50: cast IGP memory location to u64 before shifting
    4abe438 drm/ttm: Fix ttm_page_alloc.c
    e8613c0 drm/ttm: Fix cached TTM page allocation.
    1ca14e7 drm/vmwgfx: Remove some leftover debug messages.
    316ab13 drm/vmwgfx: Print warnings in kernel log about bo pinning that fails.
    792778e drm/vmwgfx: Unpause overlay on update.
    259600d drm/vmwgfx: Some modesetting cleanups and fixes.
    d451f62 drm/vmwgfx: Don't use SVGA_REG_ENABLE in modesetting code.
    bbfad33 drm/vmwgfx: Remove duplicate member from struct vmw_legacy_display_unit.
    22ee861 drm/vmwgfx: Reserve first part of VRAM for framebuffer.
    d7e1958 drm/vmwgfx: Support older hardware.
    1ae1ddd drm/vmwgfx: Get connector status from detection function.
    1925d45 drm/vmwgfx: Add kernel throttling support. Bump minor.
    04e9e94 drm/vmwgfx: Make sure to unpin old and pin new framebuffer.
    6a591a9 drm/vmwgfx: Fix single framebuffer detection.
    7e71f8a drm/vmwgfx: Assume larger framebuffer max size.
    becd214 drm/nv50: use alternate source of SOR_MODE_CTRL for DP hack
    26099a7 drm/nouveau: fix dual-link displays when plugged into single-link outputs
    2c58077 drm/nv50: obey dcb->duallink_possible
    2348487 drm/nv50: fix duallink_possible calculation for DCB 4.0 cards
    73db4be drm/nouveau: don't execute INIT_GPIO unless we're really running the table
    f50c0b9 drm/nv40: allow cold-booting of nv4x chipsets
    d13102c drm/nouveau: fix POST detection for certain chipsets
    7fc74f1 drm/nouveau: Add getparam for current PTIMER time.
    b334f2b drm/nouveau: allow cursor image and position to survive suspend

commit 663568ea6a7503a12898c7f1ba8192c8d42a28ac
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Tue Jun 1 14:12:27 2010 -0700

    Merge branch 'drm-intel-next' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel
    
    * 'drm-intel-next' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel: (41 commits)
    % git log --oneline --no-merges 08a6685..709d015
    e3a815f drm/i915: add HAS_BSD check to i915_getparam
    9bc3549 drm/i915: Honor sync polarity from VBT panel timing descriptors
    a1786bd drm/i915: Unmask interrupt for render engine on Sandybridge
    ca76482 drm/i915: Fix PIPE_CONTROL command on Sandybridge
    ab34c22 drm/i915: Fix up address spaces in slow_kernel_write()
    99a03df drm/i915: Use non-atomic kmap for slow copy paths
    9b8c4a0 drm/i915: Avoid moving from CPU domain during pwrite
    68f95ba drm/i915: Cleanup after failed initialization of ringbuffers
    654fc60 drm/i915: Reject bind_to_gtt() early if object > aperture
    85cd461 drm/i915: Check error code whilst moving buffer to GTT domain.
    3d1cc47 drm/i915: Remove spurious warning "Failure to install fence"
    ac0c6b5 drm/i915: Rebind bo if currently bound with incorrect alignment.
    a7faf32 drm/i915: Include pitch in set_base debug statement.
    a939406 drm/i915: Only print "nothing to do" debug message as required.
    808b24d drm/i915: Propagate error from unbinding an unfenceable object.
    b118c1e drm/i915: Avoid nesting of domain changes when setting display plane
    468f0b4 drm/i915: Hold the spinlock whilst resetting unpin_work along error path
    35aed2e drm/i915: Only print an message if there was an error
    e20f9c6 drm/i915: Clean up leftover bits from hws move to ring structure.
    9553426 drm/i915: Add CxSR support on Pineview DDR3
    d8201ab i915: remove unneeded null checks
    90a78e8 i915/intel_sdvo: remove unneeded null check
    467b200 drm/i915: Fix HDMI mode select for Cougarpoint PCH
    778c354 drm/i915: combine all small integers into one single bitfield
    a7de64e drm/i915/dp: Add DPCD data to debug output
    9962c92 drm/i915/dp: Only enable enhanced framing if the sink supports it
    9908ff7 drm/i915: Kill dangerous pending-flip debugging
    f1befe7 agp/intel: Restrict GTT mapping to valid range on i915 and i945
    9a7e849 drm/i915: Storage class should be before const qualifier
    7648fa9 drm/i915: add power monitoring support
    7a772c4 drm/i915/gen4: Extra CRT hotplug paranoia
    734b415 drm/i915: Add support for interlaced display.
    f953c93 i915: fix lock imbalance on error path...
    f41275e drm/i915: Convert more trace events to DEFINE_EVENT
    9517a92 drm/i915: add timeout to FBC disable waits
    d1b851f drm/i915: implement BSD ring buffer V2
    852835f drm/i915: convert some gem structures to per-ring V2
    8187a2b drm/i915: introduce intel_ring_buffer structure (V2)
    d3301d8 drm/i915: Rename dev_priv->ring to dev_priv->render_ring.
    62fdfea drm/i915: Move ringbuffer-related code to intel_ringbuffer.c.
    79a78dd drm/i915: Fail to load driver if KMS request without GEM

commit 30f0d753b32570886e6b98812d33df30229dcf87
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Fri May 28 16:14:40 2010 -0700

    Merge branch 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6
    
    * 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (1 commit)
    % git log --oneline --no-merges e4f2e5e..24010e4
    cf22f20 drm/radeon: fix the r100/r200 ums block 0 page fix

commit 5bf8778218d6085190bed41b729f6001e712b057
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 26 12:30:09 2010 -0700

    Merge branch 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6
    
    * 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (6 commits)
    % git log --oneline --no-merges 91803b4..31f4671
    2101d6f agp: amd64, fix pci reference leaks
    4a638b4 drm/edid: Allow non-fatal checksum errors in CEA blocks
    921d98b drm/radeon/kms: suppress a build warning (unused variable)
    f49d273 drm: Fixes linux-next & linux-2.6 checkstack warnings:
    5797660 nouveau: fix acpi_lid_open undefined
    10b0612 drm/radeon/kms: release AGP bridge at suspend

commit 019d6c44898a414e7d6ef16fce1950577163cccb
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Fri May 21 11:14:52 2010 -0700

    Merge branch 'drm-for-2.6.35' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6
    
    * 'drm-for-2.6.35' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (207 commits)
    % git log --oneline --no-merges ac3ee84..59534f7
    b486787 drm/radeon/kms/pm/r600: select the mid clock mode for single head low profile
    5d9b7e2 drm/radeon: fix power supply kconfig interaction.
    e865275 drm/radeon/kms: record object that have been list reserved
    365048f drm/radeon: AGP memory is only I/O if the aperture can be mapped by the CPU.
    4573744 drm/radeon/kms: don't default display priority to high on rs4xx
    c43ae47 drm/edid: fix typo in 1600x1200@75 mode
    893887ed drm/nouveau: fix i2c-related init table handlers
    04f542c drm/nouveau: support init table i2c device identifier 0x81
    f8b0be1 drm/nouveau: ensure we've parsed i2c table entry for INIT_*I2C* handlers
    92b9618 drm/nouveau: display error message for any failed init table opcode
    9170a82 drm/nouveau: fix init table handlers to return proper error codes
    e9ebb68 drm/nv50: support fractional feedback divider on newer chips
    7e99a9b drm/nv50: fix monitor detection on certain chipsets
    07fee3d drm/nv50: store full dcb i2c entry from vbios
    afa3b4c drm/nv50: fix suspend/resume with DP outputs
    17b96cc drm/nv50: output calculated crtc pll when debugging on
    4c389f0 drm/nouveau: dump pll limits entries when debugging is on
    25908b7 drm/nouveau: bios parser fixes for eDP boards
    90af89b drm/nouveau: fix a nouveau_bo dereference after it's been destroyed
    de1f46a drm/nv40: remove some completed ctxprog TODOs
    f23d4cf drm/nv04: Implement missing nv04 PGRAPH methods in software.
    a0e6544 drm/nouveau: Use 0x5f instead of 0x9f as imageblit on original NV10.
    6698998 drm/radeon: fix hdmi offset bug reported by smatch.
    8e36ed0 drm/radeon/kms: hpd cleanup
    2bfcc0f drm/radeon/kms: reset ddc_bus in object header parsing
    6fd0248 amd64-agp: Probe unknown AGP devices the right way
    26481fb drm/radeon/pm: fix device_create_file return value checks.
    4bff517 drm/radeon/kms/pm: fix r6xx+ profile setup
    ce8a3eb drm/radeon/kms/pm: make pm spam debug only
    ce8f537 drm/radeon/kms/pm: rework power management
    d731117 drm/radeon/kms/pm: add support for no display power states
    ca2af92 drm/radeon/kms: fix lock ordering in ring, ib handling
    01434b4 radeon: Use fences to gate entry to reclocking on <r600
    91700f3 radeon: Split out ring locking and allocation
    78930b1 drm/radeon/kms: enable misc pm power state features on r1xx-r4xx
    536fcd5 drm/radeon/kms: enable misc pm power state features on r5xx, rs6xx
    4f3218c drm/radeon/kms: re-enable gui idle interrupts on r6xx+
    f4b7fb9 drm/radeon/kms: take vram mutex pointer before derefing object.
    539d241 drm/radeon/kms: more pm fixes
    68adac5 drm: move radeon_fixed.h to shared drm_fixed.h header
    15a7df8 radeon: Enable memory reclocking on r100-500
    f81f202 radeon: Try harder to ensure we reclock in vblank
    612e06c radeon: Fix locking in power management paths
    c37d230 radeon: Make sure that we determine the correct PM state before transition
    956ac86 radeon: Enable memory reclockong on r600
    d9932a3 radeon: Stop the ttm workqueue while reclocking
    7c5ee53 ttm: Provide an API for starting and stopping the delayed workqueue
    8f5b5e6 radeon: Take drm struct_mutex over reclocking
    5876dd2 radeon: Unmap vram pages when reclocking
    2aba631 radeon: Unify PM entry paths
    a424816 drm/radeon/kms/pm: rework power management
    49e02b7 drm/radeon/kms/pm: add additional asic callbacks
    58e21df drm/radeon/kms/pm: restore default power state on exit
    79daedc drm/radeon/kms: minor pm cleanups
    d91eeb7 drm/radeon/kms/pm: clean power state printing
    90c3905 drm/radeon/kms/pm: don't enable pm if there is only on power state
    678e7df drm/radeon/kms/atom: load hwmon drivers
    c00f53b drm/radeon/kms/pm: update display watermarks with power state changes
    a48b9b4 drm/radeon/kms/pm: add asic specific callbacks for getting power state (v2)
    bae6b56 drm/radeon/kms/pm: add asic specific callbacks for setting power state (v2)
    03214bd drm/radeon/kms/pm: move pm state update to crtc functions
    8a56df6 drm/radeon/kms/pm: interate across crtcs for vblank
    02b17cc drm/radeon/kms/atom/pm: rework power mode parsing
    ef6e6cf drm/radeon/kms: wait for gpu idle before changing power mode
    2031f77 drm/radeon/kms: add support for gui idle interrupts (v4)
    def9ba9 drm/radeon/kms: add gui_idle callback
    1d42bbc drm/fbdev: fix cloning on fbcon
    eb1f8e4 drm/fbdev: rework output polling to be back in the core. (v4)
    0ddfa7d drm: off by one in drm_edid.c
    3b9676e vga16fb, drm: vga16fb->drm handoff
    06415c5 fbmem, drm/nouveau: kick firmware framebuffers as soon as possible
    1471ca9 fbdev: allow passing more than one aperture for handoff
    3da1f33 drm: Prefix info printk about registering panic notifier with 'drm'
    bc35afd drm/radeon/kms: add query for crtc hw id from crtc id to get info V2
    61dd98f drm/edid: Fix 1024x768@85Hz
    6ebc22e drivers/gpu/drm: Use kzalloc
    96525a2 drm_edid: There should be 6 Standard Timings
    f405a1a drivers/gpu/drm: Use kmemdup
    ca117d6 vga: fix kconfig text typos
    0bcad4c drm/edid: remove an unneeded variable
    68b61a7 drm/radeon/kms/combios: match lvds panel info parsing to ddx
    1ff26a3 drm/radeon/kms/atom: fix typo in LVDS panel info parsing
    8bf3aae drm/radeon/kms: fix copy pasto in disable encoders patch
    a7c5427 drm/i915: Fix out of tree builds
    007cc8a drm/i915: move fence lru to struct drm_i915_fence_reg
    31770bd drm/i915: don't allow tiling changes on pinned buffers v2
    149c36a drm/i915: Be extra careful about A/D matching for multifunction SDVO
    b108333 drm/i915: Fix DDC bus selection for multifunction SDVO
    aa96139 drm/radeon/kms/atom: disable the encoders in encoder_disable
    3d8620c drm/i915: cleanup mode setting before unmapping registers
    ee5382a drm/i915: Make fbc control wrapper functions
    1637ef4 drm/i915: Wait for the GPU whilst shrinking, if truly desperate.
    0a31a44 drm/i915: Use spatio-temporal dithering on PCH
    9e51159 drm/ttm: fix, avoid iomapping system memory
    a1e9ada drm/radeon/kms: R3XX-R4XX fix GPU reset code
    f259493 drm/radeon/kms: HDMI irq support
    58bd086 drm/radeon/kms: rework audio polling timer
    61cf059 agp: use scratch page on memory remove and at GATT creation V4
    2d2ef82 drm: add initial DRM developer documentation
    10fd883 agp/intel: put back check that we have a driver for the bridge.
    d4b74bf Revert "drm/i915: Configure the TV sense state correctly on GM45 to make TV detection reliable"
    6b8b178 drm/radeon/kms: enable use of unmappable VRAM V2
    0c321c7 drm/ttm: remove io_ field from TTM V6
    96bf8b8 drm/vmwgfx: add support for new TTM fault callback V5
    f32f02f drm/nouveau/kms: add support for new TTM fault callback V5
    0a2d50e drm/radeon/kms: add support for new fault callback V7
    82c5da6 drm/ttm: ttm_fault callback to allow driver to handle bo placement V6
    a8089e8 drm/i915: drop pointer to drm_gem_object
    62b8b21 drm/i915: don't use ->driver_private anymore
    c397b90 drm/i915: embed the gem object into drm_i915_gem_object
    ac52bc5 drm/i915: introduce i915_gem_alloc_object
    fd632aa drm: free core gem object from driver callbacks
    1d39704 drm: extract drm_gem_object_init
    e158316 agp/intel-gtt: kill previous_size assignments
    1ca46bd agp/intel-gtt: kill intel_i830_tlbflush
    22dd82a agp/intel: split out gmch/gtt probe, part 1
    059efc6 agp/intel: kill mutli_gmch_chip
    e5a04d5 agp/intel: uncoditionally reconfigure driver on resume
    f51b766 agp/intel: split out the GTT support
    ff7cdd6 agp/intel: introduce intel-agp.h header file
    6e0032f drm/i915: Don't touch PORT_HOTPLUG_EN in intel_dp_detect()
    77ffb59 drm/i915/pch: Use minimal number of FDI lanes (v2)
    7f8a856 drm/i915: Add the support of memory self-refresh on Ironlake
    d429434 drm/i915: Move Pineview CxSR and watermark code into update_wm hook.
    a2c459e drm/i915: Only save/restore FBC on the platform that supports FBC
    8a1837c drm/i915: Fix the incorrect argument for SDVO SET_TV_format command
    461ed3c drm/i915: Add support of SDVO on Ibexpeak PCH
    cfecde4 drm/i915: Don't enable pipe/plane/VCO early (wait for DPMS on).
    ea059a1 drm/i915: do not read uninitialized ->dev_private
    a1f4b7f Revert "drm/i915: Use a dmi quirk to skip a broken SDVO TV output."
    14571b4 drm/i915: implement multifunction SDVO device support
    409608b drm/i915: remove unused intel_pipe_get_connector()
    1f254ec drm/i915: remove connector object in old output structure
    0c41ee2 drm/i915: convert TV driver to new encoder/connector structure
    d2a82a6 drm/i915: convert SDVO driver to new encoder/connector structure
    599be16 drm/i915: convert DVO driver to new encoder/connector structure
    55f78c4 drm/i915: convert DP/eDP driver to new encoder/connector structure
    674e2d0 drm/i915: convert HDMI driver to new encoder/connector structure
    bb8a356 drm/i915: convert LVDS driver to new encoder/connector structure
    454c1ca drm/i915: convert VGA driver to new encoder/connector structure
    9c9e792 drm/i915: Set sync polarity correctly on DisplayPort
    ab00a9e drm/i915: Un-magic a DPCD register write
    e3421a1 drm/i915: enable DP/eDP for Sandybridge/Cougarpoint
    0f22906 drm/i915: enable HDMI on Cougarpoint
    b3b095b drm/i915: enable LVDS on Cougarpoint
    a4a6b90 drm/i915: Fix CRT force detect on Cougarpoint
    8db9d77 drm/i915: Support for Cougarpoint PCH display pipeline
    3bad078 drm/i915: Probe for PCH chipset type
    7da9f6c drm/i915: Sandybridge has no integrated TV
    edcb49c drm/i915: Fix legacy BLC event for pipe A
    d275f66 drm/i915: Clear the LVDS pipe B select bit when moving the LVDS to pipe A.
    0f3ee80 drm/i915: Allow LVDS on pipe A on gen4+
    6443170 drm/i915: Remove dead KMS encoder save/restore code.
    522032d drm/edid: When checking duplicate standard modes, walked the probed list
    335af9a drm/i915: change intel_ddc_get_modes() function parameters
    c1c4397 drm/i915: passing drm connector param for load detection
    f1c79df drm/i915: Add new helper to return current attached encoder for connector
    5daa55e drm/i915: Add new 'intel_connector' structure
    c5e4df3 drm/i915: more conversion from connector_list walk to encoder_list
    5bf4c9c drm/i915: use encoder_list for hotplug callback
    903cf20 drm/i915: Convert some trace events to DEFINE_TRACE
    fb8b5a3 drm/i915: Configure the TV sense state correctly on GM45 to make TV detection reliable
    a743374 drm/radeon: fix cypress firmware typo.
    0ca2ab5 drm/radeon/kms/evergreen: add hpd support
    45f9a39 drm/radeon/kms/evergreen: implement irq support
    fe251e2 drm/radeon/kms/evergreen: setup and enable the CP
    32fcdbf drm/radeon/kms/evergreen: implement gfx init
    747943e drm/radeon/kms/evergreen: add soft reset function
    0fcdb61 drm/radeon/kms/evergreen: add gart support
    49f6598 drm/radeon/kms: add support for evergreen power tables
    08c5c51 drm/radeon/kms: update atombios.h power tables for evergreen
    c385e50c drm/edid: Fix sync polarity for secondary GTF curve
    2125b8a drm/ttm: using kmalloc/kfree requires including slab.h
    9d87fa2 drm/ttm: split no_wait argument in 2 GPU or reserve wait
    b1f2019 drm/fb: remove drm_fb_helper_setcolreg
    4cdc840 drm/ttm: include linux/seq_file.h for seq_printf
    4abe352 drm/kms/fb: use slow work mechanism for normal hotplug also.
    5c4426a drm/kms/fb: add polling support for when nothing is connected.
    19b4b44 drm/kms/fb: provide a 1024x768 fbcon if no outputs found.
    0b4c0f3 drm/kms/fb: separate fbdev connector list from core drm connectors
    8be48d9 drm/kms/fb: move to using fb helper crtc grouping instead of core crtc list
    3865167 drm/fb: fix fbdev object model + cleanup properly.
    c96af79 drm/ttm: Add sysfs interface to control pool allocator.
    975efdb drm/ttm: Use set_pages_array_wc instead of set_memory_wc.
    4f64625 arch/x86: Add array variants for setting memory to wc caching.
    bf62acd drm/nouveau: Add ttm page pool debugfs file.
    8d7cddc drm/radeon/kms: Add ttm page pool debugfs file.
    0745866 drm/ttm: Add debugfs output entry to pool allocator.
    1403b1a drm/ttm: add pool wc/uc page allocator V3
    90aca4d drm/radeon/kms: simplify & improve GPU reset V2
    a2d07b7 drm/radeon/kms: rename gpu_reset to asic_reset
    225758d drm/radeon/kms: fence cleanup + more reliable GPU lockup detection V4
    171fdd8 drm/modes: Fix interlaced mode names
    7a37435 drm/edid: Add secondary GTF curve support
    7ca6adb drm/edid: Strengthen the algorithm for standard mode codes
    a0910c8 drm/edid: Fix the HDTV hack.
    b17e52e drm/edid: Extend range-based mode addition for EDID 1.4
    d1ff640 drm/edid: Add test for monitor reduced blanking support.
    a327f6b drm/edid: Fix preferred mode parse for EDID 1.4
    59d8aff drm/edid: Remove some silly comments
    7466f4c drm/edid: Remove arbitrary EDID extension limit
    2255be1 drm/edid: Add modes for Established Timings III section
    c867df7 drm/edid: Reshuffle mode list construction to closer match the spec
    2b470ab drm/edid: Remove a redundant check
    fbcc06b drm/edid: Remove some misleading comments
    61e57a8 drm/edid: Fix secondary block fetch.

 Documentation/DocBook/Makefile                  |    2 +-
 Documentation/DocBook/drm.tmpl                  |  839 ++++++++++
 arch/x86/include/asm/cacheflush.h               |    2 +
 arch/x86/mm/pageattr.c                          |   53 +-
 drivers/char/agp/agp.h                          |   80 -
 drivers/char/agp/ali-agp.c                      |    1 +
 drivers/char/agp/amd-k7-agp.c                   |    9 +
 drivers/char/agp/amd64-agp.c                    |   56 +-
 drivers/char/agp/ati-agp.c                      |    8 +
 drivers/char/agp/efficeon-agp.c                 |    1 +
 drivers/char/agp/intel-agp.c                    | 1883 ++---------------------
 drivers/char/agp/intel-agp.h                    |  239 +++
 drivers/char/agp/intel-gtt.c                    | 1548 +++++++++++++++++++
 drivers/char/agp/nvidia-agp.c                   |    1 +
 drivers/char/agp/sis-agp.c                      |    9 +-
 drivers/char/agp/uninorth-agp.c                 |   16 +-
 drivers/char/agp/via-agp.c                      |    2 +
 drivers/gpu/drm/Kconfig                         |    4 +
 drivers/gpu/drm/drm_auth.c                      |    3 +-
 drivers/gpu/drm/drm_crtc.c                      |   13 +-
 drivers/gpu/drm/drm_crtc_helper.c               |  506 ++-----
 drivers/gpu/drm/drm_dma.c                       |    4 +-
 drivers/gpu/drm/drm_edid.c                      |  807 +++++++---
 drivers/gpu/drm/drm_fb_helper.c                 |  910 ++++++++----
 drivers/gpu/drm/drm_fops.c                      |    3 +-
 drivers/gpu/drm/drm_gem.c                       |   49 +-
 drivers/gpu/drm/drm_modes.c                     |  105 +-
 drivers/gpu/drm/drm_sysfs.c                     |    2 +-
 drivers/gpu/drm/i915/Makefile                   |    3 +
 drivers/gpu/drm/i915/dvo.h                      |   10 -
 drivers/gpu/drm/i915/dvo_ch7017.c               |   46 +-
 drivers/gpu/drm/i915/dvo_ch7xxx.c               |   44 +-
 drivers/gpu/drm/i915/dvo_ivch.c                 |   21 -
 drivers/gpu/drm/i915/dvo_sil164.c               |   38 -
 drivers/gpu/drm/i915/dvo_tfp410.c               |   32 -
 drivers/gpu/drm/i915/i915_debugfs.c             |  110 +-
 drivers/gpu/drm/i915/i915_dma.c                 |  745 ++++++++--
 drivers/gpu/drm/i915/i915_drv.c                 |   99 +-
 drivers/gpu/drm/i915/i915_drv.h                 |  246 ++--
 drivers/gpu/drm/i915/i915_gem.c                 | 1045 ++++++--------
 drivers/gpu/drm/i915/i915_gem_debug.c           |    2 +-
 drivers/gpu/drm/i915/i915_gem_tiling.c          |    5 +
 drivers/gpu/drm/i915/i915_irq.c                 |  205 ++--
 drivers/gpu/drm/i915/i915_reg.h                 |  225 +++-
 drivers/gpu/drm/i915/i915_suspend.c             |   41 +-
 drivers/gpu/drm/i915/i915_trace.h               |  112 +-
 drivers/gpu/drm/i915/intel_bios.c               |   11 +
 drivers/gpu/drm/i915/intel_crt.c                |  116 +-
 drivers/gpu/drm/i915/intel_display.c            | 1350 ++++++++++++-----
 drivers/gpu/drm/i915/intel_dp.c                 |  263 ++--
 drivers/gpu/drm/i915/intel_drv.h                |   31 +-
 drivers/gpu/drm/i915/intel_dvo.c                |  103 +-
 drivers/gpu/drm/i915/intel_fb.c                 |  223 ++--
 drivers/gpu/drm/i915/intel_hdmi.c               |   76 +-
 drivers/gpu/drm/i915/intel_lvds.c               |  111 +-
 drivers/gpu/drm/i915/intel_modes.c              |   21 +-
 drivers/gpu/drm/i915/intel_overlay.c            |   60 +-
 drivers/gpu/drm/i915/intel_ringbuffer.c         |  849 ++++++++++
 drivers/gpu/drm/i915/intel_ringbuffer.h         |  124 ++
 drivers/gpu/drm/i915/intel_sdvo.c               | 1009 ++++++-------
 drivers/gpu/drm/i915/intel_tv.c                 |  185 +--
 drivers/gpu/drm/nouveau/Makefile                |    3 +-
 drivers/gpu/drm/nouveau/nouveau_acpi.c          |   71 +-
 drivers/gpu/drm/nouveau/nouveau_bios.c          |  594 +++++---
 drivers/gpu/drm/nouveau/nouveau_bios.h          |    1 +
 drivers/gpu/drm/nouveau/nouveau_bo.c            |  116 +-
 drivers/gpu/drm/nouveau/nouveau_connector.c     |   49 +-
 drivers/gpu/drm/nouveau/nouveau_crtc.h          |    2 +
 drivers/gpu/drm/nouveau/nouveau_debugfs.c       |    3 +
 drivers/gpu/drm/nouveau/nouveau_display.c       |   42 +-
 drivers/gpu/drm/nouveau/nouveau_drv.c           |   48 +-
 drivers/gpu/drm/nouveau/nouveau_drv.h           |   15 +
 drivers/gpu/drm/nouveau/nouveau_encoder.h       |    2 +
 drivers/gpu/drm/nouveau/nouveau_fb.h            |    6 +-
 drivers/gpu/drm/nouveau/nouveau_fbcon.c         |  265 ++--
 drivers/gpu/drm/nouveau/nouveau_fbcon.h         |   19 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c           |    5 +-
 drivers/gpu/drm/nouveau/nouveau_grctx.c         |    6 +-
 drivers/gpu/drm/nouveau/nouveau_i2c.c           |   21 +-
 drivers/gpu/drm/nouveau/nouveau_irq.c           |   10 +-
 drivers/gpu/drm/nouveau/nouveau_mem.c           |    3 +-
 drivers/gpu/drm/nouveau/nouveau_reg.h           |    1 +
 drivers/gpu/drm/nouveau/nouveau_state.c         |   83 +-
 drivers/gpu/drm/nouveau/nv04_cursor.c           |    1 +
 drivers/gpu/drm/nouveau/nv04_fbcon.c            |   18 +-
 drivers/gpu/drm/nouveau/nv04_graph.c            |  566 +++++++-
 drivers/gpu/drm/nouveau/nv40_graph.c            |    8 +-
 drivers/gpu/drm/nouveau/nv40_grctx.c            |    5 -
 drivers/gpu/drm/nouveau/nv50_calc.c             |   87 ++
 drivers/gpu/drm/nouveau/nv50_crtc.c             |   46 +-
 drivers/gpu/drm/nouveau/nv50_cursor.c           |    1 +
 drivers/gpu/drm/nouveau/nv50_display.c          |   36 +
 drivers/gpu/drm/nouveau/nv50_fb.c               |   10 +-
 drivers/gpu/drm/nouveau/nv50_fbcon.c            |   16 +-
 drivers/gpu/drm/nouveau/nv50_gpio.c             |    2 +-
 drivers/gpu/drm/nouveau/nv50_sor.c              |   18 +-
 drivers/gpu/drm/radeon/Makefile                 |    7 +-
 drivers/gpu/drm/radeon/atombios.h               |   76 +-
 drivers/gpu/drm/radeon/atombios_crtc.c          |   23 +-
 drivers/gpu/drm/radeon/atombios_dp.c            |    2 +-
 drivers/gpu/drm/radeon/evergreen.c              | 1562 ++++++++++++++++++-
 drivers/gpu/drm/radeon/evergreen_cs.c           | 1356 ++++++++++++++++
 drivers/gpu/drm/radeon/evergreen_reg.h          |    7 +
 drivers/gpu/drm/radeon/evergreend.h             | 1020 ++++++++++++
 drivers/gpu/drm/radeon/r100.c                   |  739 +++++++---
 drivers/gpu/drm/radeon/r100d.h                  |  164 ++
 drivers/gpu/drm/radeon/r300.c                   |  151 +-
 drivers/gpu/drm/radeon/r300d.h                  |   47 +-
 drivers/gpu/drm/radeon/r420.c                   |   46 +-
 drivers/gpu/drm/radeon/r500_reg.h               |    3 +
 drivers/gpu/drm/radeon/r520.c                   |    7 +-
 drivers/gpu/drm/radeon/r600.c                   |  693 ++++++++-
 drivers/gpu/drm/radeon/r600_audio.c             |   58 +-
 drivers/gpu/drm/radeon/r600_blit_kms.c          |    3 +
 drivers/gpu/drm/radeon/r600_hdmi.c              |   65 +-
 drivers/gpu/drm/radeon/r600_reg.h               |   57 +-
 drivers/gpu/drm/radeon/radeon.h                 |  265 +++-
 drivers/gpu/drm/radeon/radeon_agp.c             |    5 +
 drivers/gpu/drm/radeon/radeon_asic.c            |  144 ++-
 drivers/gpu/drm/radeon/radeon_asic.h            |   45 +-
 drivers/gpu/drm/radeon/radeon_atombios.c        |  321 +++-
 drivers/gpu/drm/radeon/radeon_bios.c            |    3 +-
 drivers/gpu/drm/radeon/radeon_combios.c         |   71 +-
 drivers/gpu/drm/radeon/radeon_connectors.c      |   63 +-
 drivers/gpu/drm/radeon/radeon_cs.c              |    4 -
 drivers/gpu/drm/radeon/radeon_device.c          |   72 +-
 drivers/gpu/drm/radeon/radeon_display.c         |  135 +-
 drivers/gpu/drm/radeon/radeon_drv.c             |   12 +-
 drivers/gpu/drm/radeon/radeon_encoders.c        |   44 +-
 drivers/gpu/drm/radeon/radeon_fb.c              |  364 +++--
 drivers/gpu/drm/radeon/radeon_fence.c           |  107 +-
 drivers/gpu/drm/radeon/radeon_fixed.h           |   67 -
 drivers/gpu/drm/radeon/radeon_gart.c            |    2 +-
 drivers/gpu/drm/radeon/radeon_gem.c             |    6 +-
 drivers/gpu/drm/radeon/radeon_irq_kms.c         |    5 +-
 drivers/gpu/drm/radeon/radeon_kms.c             |   25 +
 drivers/gpu/drm/radeon/radeon_legacy_crtc.c     |   14 +-
 drivers/gpu/drm/radeon/radeon_legacy_encoders.c |   26 +-
 drivers/gpu/drm/radeon/radeon_mode.h            |   49 +-
 drivers/gpu/drm/radeon/radeon_object.c          |   44 +-
 drivers/gpu/drm/radeon/radeon_object.h          |    2 +-
 drivers/gpu/drm/radeon/radeon_pm.c              |  834 +++++++----
 drivers/gpu/drm/radeon/radeon_reg.h             |    4 +-
 drivers/gpu/drm/radeon/radeon_ring.c            |   66 +-
 drivers/gpu/drm/radeon/radeon_state.c           |    5 +-
 drivers/gpu/drm/radeon/radeon_ttm.c             |  122 +-
 drivers/gpu/drm/radeon/reg_srcs/evergreen       |  611 ++++++++
 drivers/gpu/drm/radeon/rs400.c                  |    9 +-
 drivers/gpu/drm/radeon/rs600.c                  |  232 +++-
 drivers/gpu/drm/radeon/rs600d.h                 |   80 +
 drivers/gpu/drm/radeon/rs690.c                  |  289 ++--
 drivers/gpu/drm/radeon/rv515.c                  |  287 ++---
 drivers/gpu/drm/radeon/rv515d.h                 |   46 +
 drivers/gpu/drm/radeon/rv770.c                  |   39 +-
 drivers/gpu/drm/savage/savage_bci.c             |    3 +-
 drivers/gpu/drm/ttm/Makefile                    |    2 +-
 drivers/gpu/drm/ttm/ttm_bo.c                    |   98 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c               |  122 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c                 |   41 +-
 drivers/gpu/drm/ttm/ttm_memory.c                |    7 +-
 drivers/gpu/drm/ttm/ttm_page_alloc.c            |  855 ++++++++++
 drivers/gpu/drm/ttm/ttm_tt.c                    |   44 +-
 drivers/gpu/drm/vmwgfx/Makefile                 |    2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c          |   50 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c             |   24 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.h             |   49 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c         |   14 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_fb.c              |  101 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_fence.c           |  173 +++
 drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c            |   23 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_irq.c             |   17 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.c             |  209 ++-
 drivers/gpu/drm/vmwgfx/vmwgfx_kms.h             |    4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c             |  189 ++-
 drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c         |    4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_resource.c        |    8 +-
 drivers/gpu/vga/Kconfig                         |    6 +-
 drivers/gpu/vga/vgaarb.c                        |   61 +-
 drivers/staging/Kconfig                         |    2 -
 drivers/video/efifb.c                           |   11 +-
 drivers/video/fbmem.c                           |   74 +-
 drivers/video/fbsysfs.c                         |    1 +
 drivers/video/offb.c                            |   28 +-
 drivers/video/vesafb.c                          |   11 +-
 drivers/video/vga16fb.c                         |   26 +-
 include/drm/drmP.h                              |    3 +
 include/drm/drm_crtc.h                          |   43 +-
 include/drm/drm_crtc_helper.h                   |   12 +-
 include/drm/drm_edid.h                          |    5 +-
 include/drm/drm_fb_helper.h                     |   67 +-
 include/drm/drm_fixed.h                         |   67 +
 include/drm/i915_drm.h                          |    5 +-
 include/drm/nouveau_drm.h                       |    1 +
 include/drm/radeon_drm.h                        |    2 +
 include/drm/ttm/ttm_bo_api.h                    |   46 +-
 include/drm/ttm/ttm_bo_driver.h                 |   57 +-
 include/drm/ttm/ttm_page_alloc.h                |   74 +
 include/drm/vmwgfx_drm.h                        |   26 +
 include/linux/fb.h                              |   19 +-
 include/linux/vgaarb.h                          |   21 +
 200 files changed, 21571 insertions(+), 8636 deletions(-)

diff --git a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile
index 325cfd1..c7e5dc7 100644
--- a/Documentation/DocBook/Makefile
+++ b/Documentation/DocBook/Makefile
@@ -14,7 +14,7 @@ DOCBOOKS := z8530book.xml mcabook.xml device-drivers.xml \
 	    genericirq.xml s390-drivers.xml uio-howto.xml scsi.xml \
 	    mac80211.xml debugobjects.xml sh.xml regulator.xml \
 	    alsa-driver-api.xml writing-an-alsa-driver.xml \
-	    tracepoint.xml utrace.xml media.xml
+	    tracepoint.xml utrace.xml media.xml drm.xml
 
 ###
 # The build process is as follows (targets):
diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl
new file mode 100644
index 0000000..910c923
--- /dev/null
+++ b/Documentation/DocBook/drm.tmpl
@@ -0,0 +1,839 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
+	"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
+
+<book id="drmDevelopersGuide">
+  <bookinfo>
+    <title>Linux DRM Developer's Guide</title>
+
+    <copyright>
+      <year>2008-2009</year>
+      <holder>
+	Intel Corporation (Jesse Barnes &lt;jesse.barnes@intel.com&gt;)
+      </holder>
+    </copyright>
+
+    <legalnotice>
+      <para>
+	The contents of this file may be used under the terms of the GNU
+	General Public License version 2 (the "GPL") as distributed in
+	the kernel source COPYING file.
+      </para>
+    </legalnotice>
+  </bookinfo>
+
+<toc></toc>
+
+  <!-- Introduction -->
+
+  <chapter id="drmIntroduction">
+    <title>Introduction</title>
+    <para>
+      The Linux DRM layer contains code intended to support the needs
+      of complex graphics devices, usually containing programmable
+      pipelines well suited to 3D graphics acceleration.  Graphics
+      drivers in the kernel can make use of DRM functions to make
+      tasks like memory management, interrupt handling and DMA easier,
+      and provide a uniform interface to applications.
+    </para>
+    <para>
+      A note on versions: this guide covers features found in the DRM
+      tree, including the TTM memory manager, output configuration and
+      mode setting, and the new vblank internals, in addition to all
+      the regular features found in current kernels.
+    </para>
+    <para>
+      [Insert diagram of typical DRM stack here]
+    </para>
+  </chapter>
+
+  <!-- Internals -->
+
+  <chapter id="drmInternals">
+    <title>DRM Internals</title>
+    <para>
+      This chapter documents DRM internals relevant to driver authors
+      and developers working to add support for the latest features to
+      existing drivers.
+    </para>
+    <para>
+      First, we'll go over some typical driver initialization
+      requirements, like setting up command buffers, creating an
+      initial output configuration, and initializing core services.
+      Subsequent sections will cover core internals in more detail,
+      providing implementation notes and examples.
+    </para>
+    <para>
+      The DRM layer provides several services to graphics drivers,
+      many of them driven by the application interfaces it provides
+      through libdrm, the library that wraps most of the DRM ioctls.
+      These include vblank event handling, memory
+      management, output management, framebuffer management, command
+      submission &amp; fencing, suspend/resume support, and DMA
+      services.
+    </para>
+    <para>
+      The core of every DRM driver is struct drm_device.  Drivers
+      will typically statically initialize a drm_driver structure,
+      then pass it to drm_init() at load time.
+    </para>
+
+  <!-- Internals: driver init -->
+
+  <sect1>
+    <title>Driver initialization</title>
+    <para>
+      Before calling the DRM initialization routines, the driver must
+      first create and fill out a struct drm_driver structure.
+    </para>
+    <programlisting>
+      static struct drm_driver driver = {
+	/* don't use mtrr's here, the Xserver or user space app should
+	 * deal with them for intel hardware.
+	 */
+	.driver_features =
+	    DRIVER_USE_AGP | DRIVER_REQUIRE_AGP |
+	    DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_MODESET,
+	.load = i915_driver_load,
+	.unload = i915_driver_unload,
+	.firstopen = i915_driver_firstopen,
+	.lastclose = i915_driver_lastclose,
+	.preclose = i915_driver_preclose,
+	.save = i915_save,
+	.restore = i915_restore,
+	.device_is_agp = i915_driver_device_is_agp,
+	.get_vblank_counter = i915_get_vblank_counter,
+	.enable_vblank = i915_enable_vblank,
+	.disable_vblank = i915_disable_vblank,
+	.irq_preinstall = i915_driver_irq_preinstall,
+	.irq_postinstall = i915_driver_irq_postinstall,
+	.irq_uninstall = i915_driver_irq_uninstall,
+	.irq_handler = i915_driver_irq_handler,
+	.reclaim_buffers = drm_core_reclaim_buffers,
+	.get_map_ofs = drm_core_get_map_ofs,
+	.get_reg_ofs = drm_core_get_reg_ofs,
+	.fb_probe = intelfb_probe,
+	.fb_remove = intelfb_remove,
+	.fb_resize = intelfb_resize,
+	.master_create = i915_master_create,
+	.master_destroy = i915_master_destroy,
+#if defined(CONFIG_DEBUG_FS)
+	.debugfs_init = i915_debugfs_init,
+	.debugfs_cleanup = i915_debugfs_cleanup,
+#endif
+	.gem_init_object = i915_gem_init_object,
+	.gem_free_object = i915_gem_free_object,
+	.gem_vm_ops = &amp;i915_gem_vm_ops,
+	.ioctls = i915_ioctls,
+	.fops = {
+		.owner = THIS_MODULE,
+		.open = drm_open,
+		.release = drm_release,
+		.ioctl = drm_ioctl,
+		.mmap = drm_mmap,
+		.poll = drm_poll,
+		.fasync = drm_fasync,
+#ifdef CONFIG_COMPAT
+		.compat_ioctl = i915_compat_ioctl,
+#endif
+		},
+	.pci_driver = {
+		.name = DRIVER_NAME,
+		.id_table = pciidlist,
+		.probe = probe,
+		.remove = __devexit_p(drm_cleanup_pci),
+		},
+	.name = DRIVER_NAME,
+	.desc = DRIVER_DESC,
+	.date = DRIVER_DATE,
+	.major = DRIVER_MAJOR,
+	.minor = DRIVER_MINOR,
+	.patchlevel = DRIVER_PATCHLEVEL,
+      };
+    </programlisting>
+    <para>
+      In the example above, taken from the i915 DRM driver, the driver
+      sets several flags indicating what core features it supports.
+      We'll go over the individual callbacks in later sections.  Since
+      flags indicate which features your driver supports to the DRM
+      core, you need to set most of them prior to calling drm_init().  Some,
+      like DRIVER_MODESET can be set later based on user supplied parameters,
+      but that's the exception rather than the rule.
+    </para>
+    <variablelist>
+      <title>Driver flags</title>
+      <varlistentry>
+	<term>DRIVER_USE_AGP</term>
+	<listitem><para>
+	    Driver uses AGP interface
+	</para></listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_REQUIRE_AGP</term>
+	<listitem><para>
+	    Driver needs AGP interface to function.
+	</para></listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_USE_MTRR</term>
+	<listitem>
+	  <para>
+	    Driver uses MTRR interface for mapping memory.  Deprecated.
+	  </para>
+	</listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_PCI_DMA</term>
+	<listitem><para>
+	    Driver is capable of PCI DMA.  Deprecated.
+	</para></listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_SG</term>
+	<listitem><para>
+	    Driver can perform scatter/gather DMA.  Deprecated.
+	</para></listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_HAVE_DMA</term>
+	<listitem><para>Driver supports DMA.  Deprecated.</para></listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_HAVE_IRQ</term><term>DRIVER_IRQ_SHARED</term>
+	<listitem>
+	  <para>
+	    DRIVER_HAVE_IRQ indicates whether the driver has an IRQ
+	    handler; DRIVER_IRQ_SHARED indicates whether the device &amp;
+	    handler support shared IRQs (note that this is required of
+	    PCI drivers).
+	  </para>
+	</listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_DMA_QUEUE</term>
+	<listitem>
+	  <para>
+	    If the driver queues DMA requests and completes them
+	    asynchronously, this flag should be set.  Deprecated.
+	  </para>
+	</listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_FB_DMA</term>
+	<listitem>
+	  <para>
+	    Driver supports DMA to/from the framebuffer.  Deprecated.
+	  </para>
+	</listitem>
+      </varlistentry>
+      <varlistentry>
+	<term>DRIVER_MODESET</term>
+	<listitem>
+	  <para>
+	    Driver supports mode setting interfaces.
+	  </para>
+	</listitem>
+      </varlistentry>
+    </variablelist>
+    <para>
+      In this specific case, the driver requires AGP and supports
+      IRQs.  DMA, as we'll see, is handled by device specific ioctls.
+      It also supports the kernel mode setting APIs, though
+      unlike in the actual i915 driver source, this example unconditionally
+      exports KMS capability.
+    </para>
+  </sect1>
+
+  <!-- Internals: driver load -->
+
+  <sect1>
+    <title>Driver load</title>
+    <para>
+      In the previous section, we saw what a typical drm_driver
+      structure might look like.  One of the more important fields in
+      the structure is the hook for the load function.
+    </para>
+    <programlisting>
+      static struct drm_driver driver = {
+      	...
+      	.load = i915_driver_load,
+        ...
+      };
+    </programlisting>
+    <para>
+      The load function has many responsibilities: allocating a driver
+      private structure, specifying supported performance counters,
+      configuring the device (e.g. mapping registers &amp; command
+      buffers), initializing the memory manager, and setting up the
+      initial output configuration.
+    </para>
+    <para>
+      Note that the tasks performed at driver load time must not
+      conflict with DRM client requirements.  For instance, if user
+      level mode setting drivers are in use, it would be problematic
+      to perform output discovery &amp; configuration at load time.
+      Likewise, if pre-memory management aware user level drivers are
+      in use, memory management and command buffer setup may need to
+      be omitted.  These requirements are driver specific, and care
+      needs to be taken to keep both old and new applications and
+      libraries working.  The i915 driver supports the "modeset"
+      module parameter to control whether advanced features are
+      enabled at load time or in legacy fashion.  If compatibility is
+      a concern (e.g. with drivers converted over to the new interfaces
+      from the old ones), care must be taken to prevent incompatible
+      device initialization and control with the currently active
+      userspace drivers.
+    </para>
+
+    <sect2>
+      <title>Driver private &amp; performance counters</title>
+      <para>
+	The driver private hangs off the main drm_device structure and
+	can be used for tracking various device specific bits of
+	information, like register offsets, command buffer status,
+	register state for suspend/resume, etc.  At load time, a
+	driver can simply allocate one and set drm_device.dev_private
+	appropriately; at unload the driver can free it and set
+	drm_device.dev_private to NULL.
+      </para>
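+      <para>
+	As a minimal sketch (the foo_* names are placeholders, not a
+	real driver), the allocation and teardown might look like:
+      </para>
+      <programlisting>
+	static int foo_driver_load(struct drm_device *dev, unsigned long flags)
+	{
+		struct foo_private *dev_priv;
+
+		dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
+		if (!dev_priv)
+			return -ENOMEM;
+		dev->dev_private = dev_priv;
+		return 0;
+	}
+
+	static int foo_driver_unload(struct drm_device *dev)
+	{
+		kfree(dev->dev_private);
+		dev->dev_private = NULL;
+		return 0;
+	}
+      </programlisting>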
+      <para>
+	The DRM supports several counters which can be used for rough
+	performance characterization.  Note that the DRM stat counter
+	system is not often used by applications, and supporting
+	additional counters is completely optional.
+      </para>
+      <para>
+	These interfaces are deprecated and should not be used.  If performance
+	monitoring is desired, the developer should investigate and
+	potentially enhance the kernel perf and tracing infrastructure to export
+	GPU related performance information to performance monitoring
+	tools and applications.
+      </para>
+    </sect2>
+
+    <sect2>
+      <title>Configuring the device</title>
+      <para>
+	Obviously, device configuration will be device specific.
+	However, there are several common operations: finding a
+	device's PCI resources, mapping them, and potentially setting
+	up an IRQ handler.
+      </para>
+      <para>
+	Finding &amp; mapping resources is fairly straightforward.  The
+	DRM wrapper functions, drm_get_resource_start() and
+	drm_get_resource_len() can be used to find BARs on the given
+	drm_device struct.  Once those values have been retrieved, the
+	driver load function can call drm_addmap() to create a new
+	mapping for the BAR in question.  Note you'll probably want a
+	drm_local_map_t in your driver private structure to track any
+	mappings you create.
+<!-- !Fdrivers/gpu/drm/drm_bufs.c drm_get_resource_* -->
+<!-- !Finclude/drm/drmP.h drm_local_map_t -->
+      </para>
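+      <para>
+	As a rough sketch, assuming a hypothetical "foo" driver whose
+	registers live in BAR 0 (the BAR number and flags are device
+	specific), the load function might map them like this:
+      </para>
+      <programlisting>
+	drm_local_map_t *map;
+	int ret;
+
+	ret = drm_addmap(dev, drm_get_resource_start(dev, 0),
+			 drm_get_resource_len(dev, 0), _DRM_REGISTERS,
+			 _DRM_KERNEL | _DRM_DRIVER, &amp;map);
+	if (ret)
+		return ret;
+	dev_priv->mmio_map = map;	/* track it in the driver private */
+      </programlisting>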
+      <para>
+	If compatibility with other operating systems isn't a concern
+	(DRM drivers can run under various BSD variants and OpenSolaris),
+	native Linux calls can be used for the above, e.g. pci_resource_*
+	and iomap*/iounmap.  See the Linux Device Drivers book for more
+	info.
+      </para>
+      <para>
+	Once you have a register map, you can use the DRM_READn() and
+	DRM_WRITEn() macros to access the registers on your device, or
+	use driver specific versions to offset into your MMIO space
+	relative to a driver specific base pointer (see I915_READ for
+	example).
+      </para>
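+      <para>
+	For example, assuming the mapping created above is stored in the
+	driver private (FOO_* below are hypothetical register names), a
+	read/modify/write cycle would look like:
+      </para>
+      <programlisting>
+	u32 tmp;
+
+	tmp = DRM_READ32(dev_priv->mmio_map, FOO_CONTROL);
+	tmp |= FOO_CONTROL_ENABLE;
+	DRM_WRITE32(dev_priv->mmio_map, FOO_CONTROL, tmp);
+      </programlisting>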
+      <para>
+	If your device supports interrupt generation, you may want to
+	set up an interrupt handler at driver load time as well.  This
+	is done using the drm_irq_install() function.  If your device
+	supports vertical blank interrupts, it should call
+	drm_vblank_init() to initialize the core vblank handling code before
+	enabling interrupts on your device.  This ensures the vblank related
+	structures are allocated and allows the core to handle vblank events.
+      </para>
+<!--!Fdrivers/char/drm/drm_irq.c drm_irq_install-->
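+      <para>
+	A typical sequence near the end of the load function would be
+	(sketch only; the CRTC count is device specific):
+      </para>
+      <programlisting>
+	ret = drm_vblank_init(dev, 2);	/* e.g. a device with two CRTCs */
+	if (ret)
+		return ret;
+	/* ... device specific interrupt state setup ... */
+	ret = drm_irq_install(dev);
+	if (ret)
+		return ret;
+      </programlisting>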
+      <para>
+	Once your interrupt handler is registered (it'll use your
+	drm_driver.irq_handler as the actual interrupt handling
+	function), you can safely enable interrupts on your device,
+	assuming any other state your interrupt handler uses is also
+	initialized.
+      </para>
+      <para>
+	Another task that may be necessary during configuration is
+	mapping the video BIOS.  On many devices, the VBIOS describes
+	device configuration, LCD panel timings (if any), and contains
+	flags indicating device state.  Mapping the BIOS can be done
+	using the pci_map_rom() call, a convenience function that
+	takes care of mapping the actual ROM, whether it has been
+	shadowed into memory (typically at address 0xc0000) or exists
+	on the PCI device in the ROM BAR.  Note that once you've
+	mapped the ROM and extracted any necessary information, be
+	sure to unmap it; on many devices the ROM address decoder is
+	shared with other BARs, so leaving it mapped can cause
+	undesired behavior like hangs or memory corruption.
+<!--!Fdrivers/pci/rom.c pci_map_rom-->
+      </para>
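+      <para>
+	A sketch of the map/parse/unmap pattern:
+      </para>
+      <programlisting>
+	void __iomem *bios;
+	size_t size;
+
+	bios = pci_map_rom(dev->pdev, &amp;size);
+	if (bios) {
+		/* ... parse panel timings, device flags, etc. ... */
+		pci_unmap_rom(dev->pdev, bios);
+	}
+      </programlisting>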
+    </sect2>
+
+    <sect2>
+      <title>Memory manager initialization</title>
+      <para>
+	In order to allocate command buffers, cursor memory, scanout
+	buffers, etc., as well as support the latest features provided
+	by packages like Mesa and the X.Org X server, your driver
+	should support a memory manager.
+      </para>
+      <para>
+	If your driver supports memory management (it should!), you'll
+	need to set that up at load time as well.  How you initialize
+	it depends on which memory manager you're using, TTM or GEM.
+      </para>
+      <sect3>
+	<title>TTM initialization</title>
+	<para>
+	  TTM (for Translation Table Manager) manages video memory and
+	  aperture space for graphics devices. TTM supports both UMA devices
+	  and devices with dedicated video RAM (VRAM), i.e. most discrete
+	  graphics devices.  If your device has dedicated RAM, supporting
+	  TTM is desirable.  TTM also integrates tightly with your
+	  driver specific buffer execution function.  See the radeon
+	  driver for examples.
+	</para>
+	<para>
+	  The core TTM structure is the ttm_bo_driver struct.  It contains
+	  several fields with function pointers for initializing the TTM,
+	  allocating and freeing memory, waiting for command completion
+	  and fence synchronization, and memory migration.  See the
+	  radeon_ttm.c file for an example of usage.
+	</para>
+	<para>
+	  The ttm_global_reference structure is made up of several fields:
+	</para>
+	<programlisting>
+	  struct ttm_global_reference {
+	  	enum ttm_global_types global_type;
+	  	size_t size;
+	  	void *object;
+	  	int (*init) (struct ttm_global_reference *);
+	  	void (*release) (struct ttm_global_reference *);
+	  };
+	</programlisting>
+	<para>
+	  There should be one global reference structure for your memory
+	  manager as a whole, and there will be others for each object
+	  created by the memory manager at runtime.  Your global TTM should
+	  have a type of TTM_GLOBAL_TTM_MEM.  The size field for the global
+	  object should be sizeof(struct ttm_mem_global), and the init and
+	  release hooks should point at your driver specific init and
+	  release routines, which will probably eventually call
+	  ttm_mem_global_init and ttm_mem_global_release respectively.
+	</para>
+	<para>
+	  Once your global TTM accounting structure is set up and initialized
+	  (done by calling ttm_global_item_ref on the global object you
+	  just created), you'll need to create a buffer object TTM to
+	  provide a pool for buffer object allocation by clients and the
+	  kernel itself.  The type of this object should be TTM_GLOBAL_TTM_BO,
+	  and its size should be sizeof(struct ttm_bo_global).  Again,
+	  driver specific init and release functions can be provided,
+	  likely eventually calling ttm_bo_global_init and
+	  ttm_bo_global_release, respectively.  Also like the previous
+	  object, ttm_global_item_ref is used to create an initial reference
+	  count for the TTM, which will call your initialization function.
+	</para>
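+	<para>
+	  Modeled on the radeon driver (the foo_* names are placeholders),
+	  the driver specific hooks and the initial reference for the
+	  memory accounting object might look like this:
+	</para>
+	<programlisting>
+	  static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
+	  {
+	  	return ttm_mem_global_init(ref->object);
+	  }
+
+	  static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
+	  {
+	  	ttm_mem_global_release(ref->object);
+	  }
+
+	  /* at memory manager init time */
+	  global_ref = &amp;dev_priv->mem_global_ref;
+	  global_ref->global_type = TTM_GLOBAL_TTM_MEM;
+	  global_ref->size = sizeof(struct ttm_mem_global);
+	  global_ref->init = &amp;foo_ttm_mem_global_init;
+	  global_ref->release = &amp;foo_ttm_mem_global_release;
+	  ret = ttm_global_item_ref(global_ref);
+	</programlisting>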
+      </sect3>
+      <sect3>
+	<title>GEM initialization</title>
+	<para>
+	  GEM is an alternative to TTM, designed specifically for UMA
+	  devices.  It has simpler initialization and execution requirements
+	  than TTM, but has no VRAM management capability.  Core GEM
+	  initialization consists of a basic drm_mm_init call to create
+	  a GTT DRM MM object, which provides an address space pool for
+	  object allocation.  In a KMS configuration, the driver will
+	  need to allocate and initialize a command ring buffer following
+	  basic GEM initialization.  Most UMA devices have a so-called
+	  "stolen" memory region, which provides space for the initial
+	  framebuffer and large, contiguous memory regions required by the
+	  device.  This space is not typically managed by GEM, and must
+	  be initialized separately into its own DRM MM object.
+	</para>
+	<para>
+	  Initialization will be driver specific, and will depend on
+	  the architecture of the device.  In the case of Intel
+	  integrated graphics chips like 965GM, GEM initialization can
+	  be done by calling the internal GEM init function,
+	  i915_gem_do_init().  Since the 965GM is a UMA device
+	  (i.e. it doesn't have dedicated VRAM), GEM will manage
+	  making regular RAM available for GPU operations.  Memory set
+	  aside by the BIOS (called "stolen" memory by the i915
+	  driver) will be managed by the DRM memrange allocator; the
+	  rest of the aperture will be managed by GEM.
+	  <programlisting>
+	    /* Basic memrange allocator for stolen space (aka vram) */
+	    drm_memrange_init(&amp;dev_priv->vram, 0, prealloc_size);
+	    /* Let GEM Manage from end of prealloc space to end of aperture */
+	    i915_gem_do_init(dev, prealloc_size, agp_size);
+	  </programlisting>
+<!--!Edrivers/char/drm/drm_memrange.c-->
+	</para>
+	<para>
+	  Once the memory manager has been set up, we can allocate the
+	  command buffer.  In the i915 case, this is also done with a
+	  GEM function, i915_gem_init_ringbuffer().
+	</para>
+      </sect3>
+    </sect2>
+
+    <sect2>
+      <title>Output configuration</title>
+      <para>
+	The final initialization task is output configuration.  This involves
+	finding and initializing the CRTCs, encoders and connectors
+	for your device, creating an initial configuration and
+	registering a framebuffer console driver.
+      </para>
+      <sect3>
+	<title>Output discovery and initialization</title>
+	<para>
+	  Several core functions exist to create CRTCs, encoders and
+	  connectors, namely drm_crtc_init(), drm_connector_init() and
+	  drm_encoder_init(), along with several "helper" functions to
+	  perform common tasks.
+	</para>
+	<para>
+	  Connectors should be registered with sysfs once they've been
+	  detected and initialized, using the
+	  drm_sysfs_connector_add() function.  Likewise, when they're
+	  removed from the system, they should be destroyed with
+	  drm_sysfs_connector_remove().
+	</para>
+	<programlisting>
+<![CDATA[
+void intel_crt_init(struct drm_device *dev)
+{
+	struct drm_connector *connector;
+	struct intel_output *intel_output;
+
+	intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL);
+	if (!intel_output)
+		return;
+
+	connector = &intel_output->base;
+	drm_connector_init(dev, &intel_output->base,
+			   &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
+
+	drm_encoder_init(dev, &intel_output->enc, &intel_crt_enc_funcs,
+			 DRM_MODE_ENCODER_DAC);
+
+	drm_mode_connector_attach_encoder(&intel_output->base,
+					  &intel_output->enc);
+
+	/* Set up the DDC bus. */
+	intel_output->ddc_bus = intel_i2c_create(dev, GPIOA, "CRTDDC_A");
+	if (!intel_output->ddc_bus) {
+		dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration "
+			   "failed.\n");
+		return;
+	}
+
+	intel_output->type = INTEL_OUTPUT_ANALOG;
+	connector->interlace_allowed = 0;
+	connector->doublescan_allowed = 0;
+
+	drm_encoder_helper_add(&intel_output->enc, &intel_crt_helper_funcs);
+	drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
+
+	drm_sysfs_connector_add(connector);
+}
+]]>
+	</programlisting>
+	<para>
+	  In the example above (again, taken from the i915 driver), a
+	  CRT connector and encoder combination is created.  A
+	  device-specific i2c bus is also created for fetching EDID data
+	  and performing monitor detection.  Once the process is
+	  complete, the new connector is registered with sysfs to make
+	  its properties available to applications.
+	</para>
+	<sect4>
+	  <title>Helper functions and core functions</title>
+	  <para>
+	    Since many PC-class graphics devices have similar display output
+	    designs, the DRM provides a set of helper functions to make
+	    output management easier.  The core helper routines handle
+	    encoder re-routing and the disabling of unused functions
+	    following a mode set.  Using the helpers is optional, but they
+	    are recommended for devices with PC-style architectures (i.e. a
+	    set of display planes for feeding pixels to encoders which are
+	    in turn routed to connectors).  Devices with more complex
+	    requirements needing finer-grained management can opt to use
+	    the core callbacks directly.
+	  </para>
+	  <para>
+	    [Insert typical diagram here.]  [Insert OMAP style config here.]
+	  </para>
+	</sect4>
+	<para>
+	  For each encoder, CRTC and connector, several functions must
+	  be provided, depending on the object type.  Encoder objects
+	  need to provide a DPMS (basically on/off) function, a mode fixup
+	  function (for converting requested modes into native hardware
+	  timings), and prepare, set and commit functions for use by the
+	  core DRM helper functions.  Connector helpers need to provide
+	  mode fetch and validity functions, as well as an encoder matching
+	  function that returns an ideal encoder for a given connector.
+	  The core connector functions include a DPMS callback, (deprecated)
+	  save/restore routines, detection, mode probing, property handling,
+	  and cleanup functions.
+	</para>
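+	<para>
+	  As an illustration, a hypothetical driver might fill in its
+	  encoder helper vtable as follows (the foo_* callbacks are
+	  placeholders, not functions from a real driver):
+	</para>
+	<programlisting>
+<![CDATA[
+static const struct drm_encoder_helper_funcs foo_encoder_helper_funcs = {
+	.dpms = foo_encoder_dpms,
+	.mode_fixup = foo_encoder_mode_fixup,
+	.prepare = foo_encoder_prepare,
+	.mode_set = foo_encoder_mode_set,
+	.commit = foo_encoder_commit,
+};
+]]>
+	</programlisting>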
+<!--!Edrivers/char/drm/drm_crtc.h-->
+<!--!Edrivers/char/drm/drm_crtc.c-->
+<!--!Edrivers/char/drm/drm_crtc_helper.c-->
+      </sect3>
+    </sect2>
+  </sect1>
+
+  <!-- Internals: vblank handling -->
+
+  <sect1>
+    <title>VBlank event handling</title>
+    <para>
+      The DRM core exposes two vertical blank related ioctls:
+      DRM_IOCTL_WAIT_VBLANK and DRM_IOCTL_MODESET_CTL.
+<!--!Edrivers/char/drm/drm_irq.c-->
+    </para>
+    <para>
+      DRM_IOCTL_WAIT_VBLANK takes a struct drm_wait_vblank structure
+      as its argument, and is used to block or request a signal when a
+      specified vblank event occurs.
+    </para>
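+    <para>
+      For example, a client might wait for the next vertical blank with
+      a relative request; this is an illustrative userspace sketch, not
+      code from the DRM tree:
+    </para>
+    <programlisting>
+<![CDATA[
+union drm_wait_vblank vbl;
+
+memset(&vbl, 0, sizeof(vbl));
+vbl.request.type = _DRM_VBLANK_RELATIVE;
+vbl.request.sequence = 1;	/* one vblank from now */
+ioctl(fd, DRM_IOCTL_WAIT_VBLANK, &vbl);
+]]>
+    </programlisting>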
+    <para>
+      DRM_IOCTL_MODESET_CTL should be called by application-level
+      drivers before and after mode setting, since on many devices the
+      vertical blank counter is reset at that time.  Internally,
+      the DRM snapshots the last vblank count when the ioctl is called
+      with the _DRM_PRE_MODESET command, so that the counter won't go
+      backwards (the difference is accounted for when _DRM_POST_MODESET
+      is used).
+    </para>
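+    <para>
+      A sketch of the expected usage from userspace (illustrative only):
+    </para>
+    <programlisting>
+<![CDATA[
+struct drm_modeset_ctl ctl = { .crtc = 0, .cmd = _DRM_PRE_MODESET };
+
+ioctl(fd, DRM_IOCTL_MODESET_CTL, &ctl);
+/* ... perform the mode set ... */
+ctl.cmd = _DRM_POST_MODESET;
+ioctl(fd, DRM_IOCTL_MODESET_CTL, &ctl);
+]]>
+    </programlisting>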
+    <para>
+      To support the functions above, the DRM core provides several
+      helper functions for tracking vertical blank counters, and
+      requires drivers to provide several callbacks:
+      get_vblank_counter(), enable_vblank() and disable_vblank().  The
+      core uses get_vblank_counter() to keep the counter accurate
+      across interrupt disable periods.  It should return the current
+      vertical blank event count, which is often tracked in a device
+      register.  The enable and disable vblank callbacks should enable
+      and disable vertical blank interrupts, respectively.  In the
+      absence of DRM clients waiting on vblank events, the core DRM
+      code will use the disable_vblank() function to disable
+      interrupts, which saves power.  They'll be re-enabled when a
+      client calls the vblank wait ioctl described above.
+    </para>
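+    <para>
+      A driver with a hardware frame counter might implement
+      get_vblank_counter() along these lines (foo_priv and
+      FOO_PIPE_FRMCOUNT are hypothetical names, not a real driver's):
+    </para>
+    <programlisting>
+<![CDATA[
+u32 foo_get_vblank_counter(struct drm_device *dev, int crtc)
+{
+	struct foo_priv *priv = dev->dev_private;
+
+	/* Read the free-running frame counter for this pipe */
+	return readl(priv->regs + FOO_PIPE_FRMCOUNT(crtc));
+}
+]]>
+    </programlisting>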
+    <para>
+      Devices that don't provide a count register can simply use an
+      internal atomic counter incremented on every vertical blank
+      interrupt, and can make their enable and disable vblank
+      functions into no-ops.
+    </para>
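+    <para>
+      A sketch of that fallback, again with hypothetical driver names:
+    </para>
+    <programlisting>
+<![CDATA[
+static atomic_t foo_vblank_count = ATOMIC_INIT(0);
+
+static irqreturn_t foo_irq_handler(int irq, void *arg)
+{
+	struct drm_device *dev = arg;
+
+	atomic_inc(&foo_vblank_count);
+	drm_handle_vblank(dev, 0);	/* wake any waiting clients */
+	return IRQ_HANDLED;
+}
+
+u32 foo_get_vblank_counter(struct drm_device *dev, int crtc)
+{
+	return atomic_read(&foo_vblank_count);
+}
+]]>
+    </programlisting>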
+  </sect1>
+
+  <sect1>
+    <title>Memory management</title>
+    <para>
+      The memory manager lies at the heart of many DRM operations, and
+      is also required to support advanced client features like OpenGL
+      pbuffers.  The DRM currently contains two memory managers, TTM
+      and GEM.
+    </para>
+
+    <sect2>
+      <title>The Translation Table Manager (TTM)</title>
+      <para>
+	TTM was developed by Tungsten Graphics, primarily by Thomas
+	Hellström, and is intended to be a flexible, high-performance
+	graphics memory manager.
+      </para>
+      <para>
+	Drivers wishing to support TTM must fill out a drm_bo_driver
+	structure.
+      </para>
+      <para>
+	TTM design background and information belongs here.
+      </para>
+    </sect2>
+
+    <sect2>
+      <title>The Graphics Execution Manager (GEM)</title>
+      <para>
+	GEM is an Intel project, authored by Eric Anholt and Keith
+	Packard.  It provides simpler interfaces than TTM, and is well
+	suited for UMA devices.
+      </para>
+      <para>
+	GEM-enabled drivers must provide gem_init_object() and
+	gem_free_object() callbacks to support the core memory
+	allocation routines.  They should also provide several driver
+	specific ioctls to support command execution, pinning, buffer
+	read &amp; write, mapping, and domain ownership transfers.
+      </para>
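+      <para>
+	For instance, the relevant fields of a GEM-enabled driver's
+	drm_driver structure might be wired up as follows (a sketch; the
+	foo_* functions are placeholders):
+      </para>
+      <programlisting>
+<![CDATA[
+static struct drm_driver foo_driver = {
+	.driver_features = DRIVER_GEM,
+	.gem_init_object = foo_gem_init_object,
+	.gem_free_object = foo_gem_free_object,
+	/* ... driver specific ioctls for execution, pinning, etc. ... */
+};
+]]>
+      </programlisting>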
+      <para>
+	On a fundamental level, GEM involves several operations: memory
+	allocation and freeing, command execution, and aperture management
+	at command execution time.  Buffer object allocation is relatively
+	straightforward and largely handled by Linux's shmem layer, which
+	provides memory to back each object.  When mapped into the GTT
+	or used in a command buffer, the backing pages for an object are
+	flushed to memory and marked write-combined so as to be coherent
+	with the GPU.  Likewise, if the CPU accesses an object after the
+	GPU has finished rendering to it, the object must be made coherent
+	with the CPU's view of memory, usually involving GPU cache flushing
+	of various kinds.  This core CPU&lt;-&gt;GPU coherency management is
+	provided by the GEM set domain function, which evaluates an object's
+	current domain and performs any necessary flushing or synchronization
+	to put the object into the desired coherency domain (note that the
+	object may be busy, i.e. an active render target; in that case the
+	set domain function blocks the client and waits for rendering to
+	complete before performing any necessary flushing operations).
+      </para>
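+      <para>
+	As an example, the i915 flavour of the set domain call can be used
+	from userspace to make an object coherent with the CPU before
+	reading it back; this is an illustrative sketch, with bo_handle
+	standing in for a previously created object handle:
+      </para>
+      <programlisting>
+<![CDATA[
+struct drm_i915_gem_set_domain sd = {
+	.handle = bo_handle,			/* GEM object handle */
+	.read_domains = I915_GEM_DOMAIN_CPU,
+	.write_domain = 0,			/* read-only access */
+};
+
+ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd);
+]]>
+      </programlisting>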
+      <para>
+	Perhaps the most important GEM function is providing a command
+	execution interface to clients.  Client programs construct command
+	buffers containing references to previously allocated memory objects
+	and submit them to GEM.  At that point, GEM will take care to bind
+	all the objects into the GTT, execute the buffer, and provide
+	necessary synchronization between clients accessing the same buffers.
+	This often involves evicting some objects from the GTT and re-binding
+	others (a fairly expensive operation), and providing relocation
+	support which hides fixed GTT offsets from clients.  Clients must
+	take care not to submit command buffers that reference more objects
+	than can fit in the GTT, or GEM will reject them and no rendering
+	will occur.  Similarly, if several objects in the buffer require
+	fence registers to be allocated for correct rendering (e.g. 2D blits
+	on pre-965 chips), care must be taken not to require more fence
+	registers than are available to the client.  Such resource management
+	should be abstracted from the client in libdrm.
+      </para>
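+      <para>
+	A single i915 relocation entry illustrates the mechanism: the
+	kernel patches the batch buffer at the given offset with the
+	target object's final GTT address (field values here are
+	illustrative):
+      </para>
+      <programlisting>
+<![CDATA[
+struct drm_i915_gem_relocation_entry reloc = {
+	.target_handle = target_bo,	/* object the batch refers to */
+	.offset = batch_offset,		/* location to patch in the batch */
+	.delta = 0,			/* byte offset within the target */
+	.read_domains = I915_GEM_DOMAIN_RENDER,
+	.write_domain = 0,
+};
+]]>
+      </programlisting>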
+    </sect2>
+
+  </sect1>
+
+  <!-- Output management -->
+  <sect1>
+    <title>Output management</title>
+    <para>
+      At the core of the DRM output management code is a set of
+      structures representing CRTCs, encoders and connectors.
+    </para>
+    <para>
+      A CRTC is an abstraction representing a part of the chip that
+      contains a pointer to a scanout buffer.  Therefore, the number
+      of CRTCs available determines how many independent scanout
+      buffers can be active at any given time.  The CRTC structure
+      contains several fields to support this: a pointer to some video
+      memory, a display mode, and an (x, y) offset into the video
+      memory to support panning or configurations where one piece of
+      video memory spans multiple CRTCs.
+    </para>
+    <para>
+      An encoder takes pixel data from a CRTC and converts it to a
+      format suitable for any attached connectors.  On some devices,
+      it may be possible to have a CRTC send data to more than one
+      encoder.  In that case, both encoders would receive data from
+      the same scanout buffer, resulting in a "cloned" display
+      configuration across the connectors attached to each encoder.
+    </para>
+    <para>
+      A connector is the final destination for pixel data on a device,
+      and usually connects directly to an external display device like
+      a monitor or laptop panel.  A connector can only be attached to
+      one encoder at a time.  The connector is also the structure
+      where information about the attached display is kept, so it
+      contains fields for display data, EDID data, DPMS &amp;
+      connection status, and information about modes supported on the
+      attached displays.
+    </para>
+<!--!Edrivers/char/drm/drm_crtc.c-->
+  </sect1>
+
+  <sect1>
+    <title>Framebuffer management</title>
+    <para>
+      In order to set a mode on a given CRTC, encoder and connector
+      configuration, clients need to provide a framebuffer object, which
+      supplies a source of pixels for the CRTC to deliver to the encoder(s)
+      and ultimately the connector(s) in the configuration.  A framebuffer
+      is fundamentally a driver-specific memory object, made into an opaque
+      handle by the DRM addfb function.  Once an fb has been created this
+      way, it can be passed to the KMS mode setting routines for use in
+      a configuration.
+    </para>
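+    <para>
+      A userspace sketch of framebuffer creation (illustrative values;
+      bo_handle names a previously allocated driver memory object):
+    </para>
+    <programlisting>
+<![CDATA[
+struct drm_mode_fb_cmd fb;
+
+memset(&fb, 0, sizeof(fb));
+fb.width = 1280;
+fb.height = 800;
+fb.pitch = 1280 * 4;	/* bytes per scanline */
+fb.bpp = 32;
+fb.depth = 24;
+fb.handle = bo_handle;	/* driver-specific buffer handle */
+
+ioctl(fd, DRM_IOCTL_MODE_ADDFB, &fb);
+/* fb.fb_id now names this framebuffer in mode set calls */
+]]>
+    </programlisting>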
+  </sect1>
+
+  <sect1>
+    <title>Command submission &amp; fencing</title>
+    <para>
+      This should cover a few device specific command submission
+      implementations.
+    </para>
+  </sect1>
+
+  <sect1>
+    <title>Suspend/resume</title>
+    <para>
+      The DRM core provides some suspend/resume code, but drivers
+      wanting full suspend/resume support should provide save() and
+      restore() functions.  These are called at suspend,
+      hibernate, or resume time, and should save or restore any device
+      state needed across suspend or hibernate.
+    </para>
+  </sect1>
+
+  <sect1>
+    <title>DMA services</title>
+    <para>
+      This should cover how DMA mapping etc. is supported by the core.
+      These functions are deprecated and should not be used.
+    </para>
+  </sect1>
+  </chapter>
+
+  <!-- External interfaces -->
+
+  <chapter id="drmExternals">
+    <title>Userland interfaces</title>
+    <para>
+      The DRM core exports several interfaces to applications,
+      generally intended to be used through corresponding libdrm
+      wrapper functions.  In addition, drivers export device-specific
+      interfaces for use by userspace drivers and device-aware
+      applications through ioctls and sysfs files.
+    </para>
+    <para>
+      External interfaces include: memory mapping, context management,
+      DMA operations, AGP management, vblank control, fence
+      management, memory management, and output management.
+    </para>
+    <para>
+      Cover generic ioctls and sysfs layout here.  Only need high
+      level info, since man pages will cover the rest.
+    </para>
+  </chapter>
+
+  <!-- API reference -->
+
+  <appendix id="drmDriverApi">
+    <title>DRM Driver API</title>
+    <para>
+      Include auto-generated API reference here (need to reference it
+      from paragraphs above too).
+    </para>
+  </appendix>
+
+</book>
diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 634c40a..d92d63a 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -139,9 +139,11 @@ int set_memory_np(unsigned long addr, int numpages);
 int set_memory_4k(unsigned long addr, int numpages);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
+int set_memory_array_wc(unsigned long *addr, int addrinarray);
 int set_memory_array_wb(unsigned long *addr, int addrinarray);
 
 int set_pages_array_uc(struct page **pages, int addrinarray);
+int set_pages_array_wc(struct page **pages, int addrinarray);
 int set_pages_array_wb(struct page **pages, int addrinarray);
 
 /*
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 28195c3..532e793 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -997,7 +997,8 @@ out_err:
 }
 EXPORT_SYMBOL(set_memory_uc);
 
-int set_memory_array_uc(unsigned long *addr, int addrinarray)
+int _set_memory_array(unsigned long *addr, int addrinarray,
+		unsigned long new_type)
 {
 	int i, j;
 	int ret;
@@ -1007,13 +1008,19 @@ int set_memory_array_uc(unsigned long *addr, int addrinarray)
 	 */
 	for (i = 0; i < addrinarray; i++) {
 		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
-					_PAGE_CACHE_UC_MINUS, NULL);
+					new_type, NULL);
 		if (ret)
 			goto out_free;
 	}
 
 	ret = change_page_attr_set(addr, addrinarray,
 				    __pgprot(_PAGE_CACHE_UC_MINUS), 1);
+
+	if (!ret && new_type == _PAGE_CACHE_WC)
+		ret = change_page_attr_set_clr(addr, addrinarray,
+					       __pgprot(_PAGE_CACHE_WC),
+					       __pgprot(_PAGE_CACHE_MASK),
+					       0, CPA_ARRAY, NULL);
 	if (ret)
 		goto out_free;
 
@@ -1025,8 +1032,19 @@ out_free:
 
 	return ret;
 }
+
+int set_memory_array_uc(unsigned long *addr, int addrinarray)
+{
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_UC_MINUS);
+}
 EXPORT_SYMBOL(set_memory_array_uc);
 
+int set_memory_array_wc(unsigned long *addr, int addrinarray)
+{
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_WC);
+}
+EXPORT_SYMBOL(set_memory_array_wc);
+
 int _set_memory_wc(unsigned long addr, int numpages)
 {
 	int ret;
@@ -1153,26 +1171,34 @@ int set_pages_uc(struct page *page, int numpages)
 }
 EXPORT_SYMBOL(set_pages_uc);
 
-int set_pages_array_uc(struct page **pages, int addrinarray)
+static int _set_pages_array(struct page **pages, int addrinarray,
+		unsigned long new_type)
 {
 	unsigned long start;
 	unsigned long end;
 	int i;
 	int free_idx;
+	int ret;
 
 	for (i = 0; i < addrinarray; i++) {
 		if (PageHighMem(pages[i]))
 			continue;
 		start = page_to_pfn(pages[i]) << PAGE_SHIFT;
 		end = start + PAGE_SIZE;
-		if (reserve_memtype(start, end, _PAGE_CACHE_UC_MINUS, NULL))
+		if (reserve_memtype(start, end, new_type, NULL))
 			goto err_out;
 	}
 
-	if (cpa_set_pages_array(pages, addrinarray,
-			__pgprot(_PAGE_CACHE_UC_MINUS)) == 0) {
-		return 0; /* Success */
-	}
+	ret = cpa_set_pages_array(pages, addrinarray,
+			__pgprot(_PAGE_CACHE_UC_MINUS));
+	if (!ret && new_type == _PAGE_CACHE_WC)
+		ret = change_page_attr_set_clr(NULL, addrinarray,
+					       __pgprot(_PAGE_CACHE_WC),
+					       __pgprot(_PAGE_CACHE_MASK),
+					       0, CPA_PAGES_ARRAY, pages);
+	if (ret)
+		goto err_out;
+	return 0; /* Success */
 err_out:
 	free_idx = i;
 	for (i = 0; i < free_idx; i++) {
@@ -1184,8 +1210,19 @@ err_out:
 	}
 	return -EINVAL;
 }
+
+int set_pages_array_uc(struct page **pages, int addrinarray)
+{
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_UC_MINUS);
+}
 EXPORT_SYMBOL(set_pages_array_uc);
 
+int set_pages_array_wc(struct page **pages, int addrinarray)
+{
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_WC);
+}
+EXPORT_SYMBOL(set_pages_array_wc);
+
 int set_pages_wb(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);
diff --git a/drivers/char/agp/agp.h b/drivers/char/agp/agp.h
index 870f12c..1204909 100644
--- a/drivers/char/agp/agp.h
+++ b/drivers/char/agp/agp.h
@@ -178,86 +178,6 @@ struct agp_bridge_data {
 #define PGE_EMPTY(b, p)	(!(p) || (p) == (unsigned long) (b)->scratch_page)
 
 
-/* Intel registers */
-#define INTEL_APSIZE	0xb4
-#define INTEL_ATTBASE	0xb8
-#define INTEL_AGPCTRL	0xb0
-#define INTEL_NBXCFG	0x50
-#define INTEL_ERRSTS	0x91
-
-/* Intel i830 registers */
-#define I830_GMCH_CTRL			0x52
-#define I830_GMCH_ENABLED		0x4
-#define I830_GMCH_MEM_MASK		0x1
-#define I830_GMCH_MEM_64M		0x1
-#define I830_GMCH_MEM_128M		0
-#define I830_GMCH_GMS_MASK		0x70
-#define I830_GMCH_GMS_DISABLED		0x00
-#define I830_GMCH_GMS_LOCAL		0x10
-#define I830_GMCH_GMS_STOLEN_512	0x20
-#define I830_GMCH_GMS_STOLEN_1024	0x30
-#define I830_GMCH_GMS_STOLEN_8192	0x40
-#define I830_RDRAM_CHANNEL_TYPE		0x03010
-#define I830_RDRAM_ND(x)		(((x) & 0x20) >> 5)
-#define I830_RDRAM_DDT(x)		(((x) & 0x18) >> 3)
-
-/* This one is for I830MP w. an external graphic card */
-#define INTEL_I830_ERRSTS	0x92
-
-/* Intel 855GM/852GM registers */
-#define I855_GMCH_GMS_MASK		0xF0
-#define I855_GMCH_GMS_STOLEN_0M		0x0
-#define I855_GMCH_GMS_STOLEN_1M		(0x1 << 4)
-#define I855_GMCH_GMS_STOLEN_4M		(0x2 << 4)
-#define I855_GMCH_GMS_STOLEN_8M		(0x3 << 4)
-#define I855_GMCH_GMS_STOLEN_16M	(0x4 << 4)
-#define I855_GMCH_GMS_STOLEN_32M	(0x5 << 4)
-#define I85X_CAPID			0x44
-#define I85X_VARIANT_MASK		0x7
-#define I85X_VARIANT_SHIFT		5
-#define I855_GME			0x0
-#define I855_GM				0x4
-#define I852_GME			0x2
-#define I852_GM				0x5
-
-/* Intel i845 registers */
-#define INTEL_I845_AGPM		0x51
-#define INTEL_I845_ERRSTS	0xc8
-
-/* Intel i860 registers */
-#define INTEL_I860_MCHCFG	0x50
-#define INTEL_I860_ERRSTS	0xc8
-
-/* Intel i810 registers */
-#define I810_GMADDR		0x10
-#define I810_MMADDR		0x14
-#define I810_PTE_BASE		0x10000
-#define I810_PTE_MAIN_UNCACHED	0x00000000
-#define I810_PTE_LOCAL		0x00000002
-#define I810_PTE_VALID		0x00000001
-#define I830_PTE_SYSTEM_CACHED  0x00000006
-#define I810_SMRAM_MISCC	0x70
-#define I810_GFX_MEM_WIN_SIZE	0x00010000
-#define I810_GFX_MEM_WIN_32M	0x00010000
-#define I810_GMS		0x000000c0
-#define I810_GMS_DISABLE	0x00000000
-#define I810_PGETBL_CTL		0x2020
-#define I810_PGETBL_ENABLED	0x00000001
-#define I965_PGETBL_SIZE_MASK	0x0000000e
-#define I965_PGETBL_SIZE_512KB	(0 << 1)
-#define I965_PGETBL_SIZE_256KB	(1 << 1)
-#define I965_PGETBL_SIZE_128KB	(2 << 1)
-#define I965_PGETBL_SIZE_1MB	(3 << 1)
-#define I965_PGETBL_SIZE_2MB	(4 << 1)
-#define I965_PGETBL_SIZE_1_5MB	(5 << 1)
-#define G33_PGETBL_SIZE_MASK    (3 << 8)
-#define G33_PGETBL_SIZE_1M      (1 << 8)
-#define G33_PGETBL_SIZE_2M      (2 << 8)
-
-#define I810_DRAM_CTL		0x3000
-#define I810_DRAM_ROW_0		0x00000001
-#define I810_DRAM_ROW_0_SDRAM	0x00000001
-
 struct agp_device_ids {
 	unsigned short device_id; /* first, to make table easier to read */
 	enum chipset_type chipset;
diff --git a/drivers/char/agp/ali-agp.c b/drivers/char/agp/ali-agp.c
index d2ce68f..fd79351 100644
--- a/drivers/char/agp/ali-agp.c
+++ b/drivers/char/agp/ali-agp.c
@@ -204,6 +204,7 @@ static const struct agp_bridge_driver ali_generic_bridge = {
 	.aperture_sizes		= ali_generic_sizes,
 	.size_type		= U32_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= ali_configure,
 	.fetch_size		= ali_fetch_size,
 	.cleanup		= ali_cleanup,
diff --git a/drivers/char/agp/amd-k7-agp.c b/drivers/char/agp/amd-k7-agp.c
index a7637d7..b6b1568 100644
--- a/drivers/char/agp/amd-k7-agp.c
+++ b/drivers/char/agp/amd-k7-agp.c
@@ -142,6 +142,7 @@ static int amd_create_gatt_table(struct agp_bridge_data *bridge)
 {
 	struct aper_size_info_lvl2 *value;
 	struct amd_page_map page_dir;
+	unsigned long __iomem *cur_gatt;
 	unsigned long addr;
 	int retval;
 	u32 temp;
@@ -178,6 +179,13 @@ static int amd_create_gatt_table(struct agp_bridge_data *bridge)
 		readl(page_dir.remapped+GET_PAGE_DIR_OFF(addr));	/* PCI Posting. */
 	}
 
+	for (i = 0; i < value->num_entries; i++) {
+		addr = (i * PAGE_SIZE) + agp_bridge->gart_bus_addr;
+		cur_gatt = GET_GATT(addr);
+		writel(agp_bridge->scratch_page, cur_gatt+GET_GATT_OFF(addr));
+		readl(cur_gatt+GET_GATT_OFF(addr));	/* PCI Posting. */
+	}
+
 	return 0;
 }
 
@@ -375,6 +383,7 @@ static const struct agp_bridge_driver amd_irongate_driver = {
 	.aperture_sizes		= amd_irongate_sizes,
 	.size_type		= LVL2_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= amd_irongate_configure,
 	.fetch_size		= amd_irongate_fetch_size,
 	.cleanup		= amd_irongate_cleanup,
diff --git a/drivers/char/agp/amd64-agp.c b/drivers/char/agp/amd64-agp.c
index fd50ead..70312da 100644
--- a/drivers/char/agp/amd64-agp.c
+++ b/drivers/char/agp/amd64-agp.c
@@ -210,6 +210,7 @@ static const struct agp_bridge_driver amd_8151_driver = {
 	.aperture_sizes		= amd_8151_sizes,
 	.size_type		= U32_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= amd_8151_configure,
 	.fetch_size		= amd64_fetch_size,
 	.cleanup		= amd64_cleanup,
@@ -383,7 +384,7 @@ static int __devinit uli_agp_init(struct pci_dev *pdev)
 {
 	u32 httfea,baseaddr,enuscr;
 	struct pci_dev *dev1;
-	int i;
+	int i, ret;
 	unsigned size = amd64_fetch_size();
 
 	dev_info(&pdev->dev, "setting up ULi AGP\n");
@@ -399,15 +400,18 @@ static int __devinit uli_agp_init(struct pci_dev *pdev)
 
 	if (i == ARRAY_SIZE(uli_sizes)) {
 		dev_info(&pdev->dev, "no ULi size found for %d\n", size);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto put;
 	}
 
 	/* shadow x86-64 registers into ULi registers */
 	pci_read_config_dword (k8_northbridges[0], AMD64_GARTAPERTUREBASE, &httfea);
 
 	/* if x86-64 aperture base is beyond 4G, exit here */
-	if ((httfea & 0x7fff) >> (32 - 25))
-		return -ENODEV;
+	if ((httfea & 0x7fff) >> (32 - 25)) {
+		ret = -ENODEV;
+		goto put;
+	}
 
 	httfea = (httfea& 0x7fff) << 25;
 
@@ -419,9 +423,10 @@ static int __devinit uli_agp_init(struct pci_dev *pdev)
 	enuscr= httfea+ (size * 1024 * 1024) - 1;
 	pci_write_config_dword(dev1, ULI_X86_64_HTT_FEA_REG, httfea);
 	pci_write_config_dword(dev1, ULI_X86_64_ENU_SCR_REG, enuscr);
-
+	ret = 0;
+put:
 	pci_dev_put(dev1);
-	return 0;
+	return ret;
 }
 
 
@@ -440,7 +445,7 @@ static int nforce3_agp_init(struct pci_dev *pdev)
 {
 	u32 tmp, apbase, apbar, aplimit;
 	struct pci_dev *dev1;
-	int i;
+	int i, ret;
 	unsigned size = amd64_fetch_size();
 
 	dev_info(&pdev->dev, "setting up Nforce3 AGP\n");
@@ -457,7 +462,8 @@ static int nforce3_agp_init(struct pci_dev *pdev)
 
 	if (i == ARRAY_SIZE(nforce3_sizes)) {
 		dev_info(&pdev->dev, "no NForce3 size found for %d\n", size);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto put;
 	}
 
 	pci_read_config_dword(dev1, NVIDIA_X86_64_1_APSIZE, &tmp);
@@ -471,7 +477,8 @@ static int nforce3_agp_init(struct pci_dev *pdev)
 	/* if x86-64 aperture base is beyond 4G, exit here */
 	if ( (apbase & 0x7fff) >> (32 - 25) ) {
 		dev_info(&pdev->dev, "aperture base > 4G\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto put;
 	}
 
 	apbase = (apbase & 0x7fff) << 25;
@@ -487,9 +494,11 @@ static int nforce3_agp_init(struct pci_dev *pdev)
 	pci_write_config_dword(dev1, NVIDIA_X86_64_1_APBASE2, apbase);
 	pci_write_config_dword(dev1, NVIDIA_X86_64_1_APLIMIT2, aplimit);
 
+	ret = 0;
+put:
 	pci_dev_put(dev1);
 
-	return 0;
+	return ret;
 }
 
 static int __devinit agp_amd64_probe(struct pci_dev *pdev,
@@ -499,6 +508,10 @@ static int __devinit agp_amd64_probe(struct pci_dev *pdev,
 	u8 cap_ptr;
 	int err;
 
+	/* The Highlander principle */
+	if (agp_bridges_found)
+		return -ENODEV;
+
 	cap_ptr = pci_find_capability(pdev, PCI_CAP_ID_AGP);
 	if (!cap_ptr)
 		return -ENODEV;
@@ -562,6 +575,8 @@ static void __devexit agp_amd64_remove(struct pci_dev *pdev)
 			   amd64_aperture_sizes[bridge->aperture_size_idx].size);
 	agp_remove_bridge(bridge);
 	agp_put_bridge(bridge);
+
+	agp_bridges_found--;
 }
 
 #ifdef CONFIG_PM
@@ -709,6 +724,11 @@ static struct pci_device_id agp_amd64_pci_table[] = {
 
 MODULE_DEVICE_TABLE(pci, agp_amd64_pci_table);
 
+static DEFINE_PCI_DEVICE_TABLE(agp_amd64_pci_promisc_table) = {
+	{ PCI_DEVICE_CLASS(0, 0) },
+	{ }
+};
+
 static struct pci_driver agp_amd64_pci_driver = {
 	.name		= "agpgart-amd64",
 	.id_table	= agp_amd64_pci_table,
@@ -734,7 +754,6 @@ int __init agp_amd64_init(void)
 		return err;
 
 	if (agp_bridges_found == 0) {
-		struct pci_dev *dev;
 		if (!agp_try_unsupported && !agp_try_unsupported_boot) {
 			printk(KERN_INFO PFX "No supported AGP bridge found.\n");
 #ifdef MODULE
@@ -750,17 +769,10 @@ int __init agp_amd64_init(void)
 			return -ENODEV;
 
 		/* Look for any AGP bridge */
-		dev = NULL;
-		err = -ENODEV;
-		for_each_pci_dev(dev) {
-			if (!pci_find_capability(dev, PCI_CAP_ID_AGP))
-				continue;
-			/* Only one bridge supported right now */
-			if (agp_amd64_probe(dev, NULL) == 0) {
-				err = 0;
-				break;
-			}
-		}
+		agp_amd64_pci_driver.id_table = agp_amd64_pci_promisc_table;
+		err = driver_attach(&agp_amd64_pci_driver.driver);
+		if (err == 0 && agp_bridges_found == 0)
+			err = -ENODEV;
 	}
 	return err;
 }
diff --git a/drivers/char/agp/ati-agp.c b/drivers/char/agp/ati-agp.c
index 3b2ecbe..dc30e22 100644
--- a/drivers/char/agp/ati-agp.c
+++ b/drivers/char/agp/ati-agp.c
@@ -341,6 +341,7 @@ static int ati_create_gatt_table(struct agp_bridge_data *bridge)
 {
 	struct aper_size_info_lvl2 *value;
 	struct ati_page_map page_dir;
+	unsigned long __iomem *cur_gatt;
 	unsigned long addr;
 	int retval;
 	u32 temp;
@@ -395,6 +396,12 @@ static int ati_create_gatt_table(struct agp_bridge_data *bridge)
 		readl(page_dir.remapped+GET_PAGE_DIR_OFF(addr));	/* PCI Posting. */
 	}
 
+	for (i = 0; i < value->num_entries; i++) {
+		addr = (i * PAGE_SIZE) + agp_bridge->gart_bus_addr;
+		cur_gatt = GET_GATT(addr);
+		writel(agp_bridge->scratch_page, cur_gatt+GET_GATT_OFF(addr));
+	}
+
 	return 0;
 }
 
@@ -415,6 +422,7 @@ static const struct agp_bridge_driver ati_generic_bridge = {
 	.aperture_sizes		= ati_generic_sizes,
 	.size_type		= LVL2_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= ati_configure,
 	.fetch_size		= ati_fetch_size,
 	.cleanup		= ati_cleanup,
diff --git a/drivers/char/agp/efficeon-agp.c b/drivers/char/agp/efficeon-agp.c
index 793f39e..aa109cb 100644
--- a/drivers/char/agp/efficeon-agp.c
+++ b/drivers/char/agp/efficeon-agp.c
@@ -28,6 +28,7 @@
 #include <linux/page-flags.h>
 #include <linux/mm.h>
 #include "agp.h"
+#include "intel-agp.h"
 
 /*
  * The real differences to the generic AGP code is
diff --git a/drivers/char/agp/intel-agp.c b/drivers/char/agp/intel-agp.c
index aa4248e..d836a71 100644
--- a/drivers/char/agp/intel-agp.c
+++ b/drivers/char/agp/intel-agp.c
@@ -11,1531 +11,13 @@
 #include <linux/agp_backend.h>
 #include <asm/smp.h>
 #include "agp.h"
+#include "intel-agp.h"
+
+#include "intel-gtt.c"
 
 int intel_agp_enabled;
 EXPORT_SYMBOL(intel_agp_enabled);
 
-/*
- * If we have Intel graphics, we're not going to have anything other than
- * an Intel IOMMU. So make the correct use of the PCI DMA API contingent
- * on the Intel IOMMU support (CONFIG_DMAR).
- * Only newer chipsets need to bother with this, of course.
- */
-#ifdef CONFIG_DMAR
-#define USE_PCI_DMA_API 1
-#endif
-
-#define PCI_DEVICE_ID_INTEL_E7221_HB	0x2588
-#define PCI_DEVICE_ID_INTEL_E7221_IG	0x258a
-#define PCI_DEVICE_ID_INTEL_82946GZ_HB      0x2970
-#define PCI_DEVICE_ID_INTEL_82946GZ_IG      0x2972
-#define PCI_DEVICE_ID_INTEL_82G35_HB     0x2980
-#define PCI_DEVICE_ID_INTEL_82G35_IG     0x2982
-#define PCI_DEVICE_ID_INTEL_82965Q_HB       0x2990
-#define PCI_DEVICE_ID_INTEL_82965Q_IG       0x2992
-#define PCI_DEVICE_ID_INTEL_82965G_HB       0x29A0
-#define PCI_DEVICE_ID_INTEL_82965G_IG       0x29A2
-#define PCI_DEVICE_ID_INTEL_82965GM_HB      0x2A00
-#define PCI_DEVICE_ID_INTEL_82965GM_IG      0x2A02
-#define PCI_DEVICE_ID_INTEL_82965GME_HB     0x2A10
-#define PCI_DEVICE_ID_INTEL_82965GME_IG     0x2A12
-#define PCI_DEVICE_ID_INTEL_82945GME_HB     0x27AC
-#define PCI_DEVICE_ID_INTEL_82945GME_IG     0x27AE
-#define PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB        0xA010
-#define PCI_DEVICE_ID_INTEL_PINEVIEW_M_IG        0xA011
-#define PCI_DEVICE_ID_INTEL_PINEVIEW_HB         0xA000
-#define PCI_DEVICE_ID_INTEL_PINEVIEW_IG         0xA001
-#define PCI_DEVICE_ID_INTEL_G33_HB          0x29C0
-#define PCI_DEVICE_ID_INTEL_G33_IG          0x29C2
-#define PCI_DEVICE_ID_INTEL_Q35_HB          0x29B0
-#define PCI_DEVICE_ID_INTEL_Q35_IG          0x29B2
-#define PCI_DEVICE_ID_INTEL_Q33_HB          0x29D0
-#define PCI_DEVICE_ID_INTEL_Q33_IG          0x29D2
-#define PCI_DEVICE_ID_INTEL_B43_HB          0x2E40
-#define PCI_DEVICE_ID_INTEL_B43_IG          0x2E42
-#define PCI_DEVICE_ID_INTEL_GM45_HB         0x2A40
-#define PCI_DEVICE_ID_INTEL_GM45_IG         0x2A42
-#define PCI_DEVICE_ID_INTEL_EAGLELAKE_HB        0x2E00
-#define PCI_DEVICE_ID_INTEL_EAGLELAKE_IG        0x2E02
-#define PCI_DEVICE_ID_INTEL_Q45_HB          0x2E10
-#define PCI_DEVICE_ID_INTEL_Q45_IG          0x2E12
-#define PCI_DEVICE_ID_INTEL_G45_HB          0x2E20
-#define PCI_DEVICE_ID_INTEL_G45_IG          0x2E22
-#define PCI_DEVICE_ID_INTEL_G41_HB          0x2E30
-#define PCI_DEVICE_ID_INTEL_G41_IG          0x2E32
-#define PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB	    0x0040
-#define PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG	    0x0042
-#define PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB	    0x0044
-#define PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB	    0x0062
-#define PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB    0x006a
-#define PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG	    0x0046
-#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB  0x0100
-#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_IG  0x0102
-#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB  0x0104
-#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_IG  0x0106
-
-/* cover 915 and 945 variants */
-#define IS_I915 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_E7221_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82915G_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82915GM_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82945G_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82945GM_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82945GME_HB)
-
-#define IS_I965 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82946GZ_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82G35_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965Q_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965G_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965GM_HB || \
-		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965GME_HB)
-
-#define IS_G33 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G33_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q35_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q33_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_HB)
-
-#define IS_PINEVIEW (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_HB)
-
-#define IS_SNB (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB)
-
-#define IS_G4X (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_EAGLELAKE_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q45_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G45_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_GM45_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G41_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_B43_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB || \
-		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB || \
-		IS_SNB)
-
-extern int agp_memory_reserved;
-
-
-/* Intel 815 register */
-#define INTEL_815_APCONT	0x51
-#define INTEL_815_ATTBASE_MASK	~0x1FFFFFFF
-
-/* Intel i820 registers */
-#define INTEL_I820_RDCR		0x51
-#define INTEL_I820_ERRSTS	0xc8
-
-/* Intel i840 registers */
-#define INTEL_I840_MCHCFG	0x50
-#define INTEL_I840_ERRSTS	0xc8
-
-/* Intel i850 registers */
-#define INTEL_I850_MCHCFG	0x50
-#define INTEL_I850_ERRSTS	0xc8
-
-/* intel 915G registers */
-#define I915_GMADDR	0x18
-#define I915_MMADDR	0x10
-#define I915_PTEADDR	0x1C
-#define I915_GMCH_GMS_STOLEN_48M	(0x6 << 4)
-#define I915_GMCH_GMS_STOLEN_64M	(0x7 << 4)
-#define G33_GMCH_GMS_STOLEN_128M	(0x8 << 4)
-#define G33_GMCH_GMS_STOLEN_256M	(0x9 << 4)
-#define INTEL_GMCH_GMS_STOLEN_96M	(0xa << 4)
-#define INTEL_GMCH_GMS_STOLEN_160M	(0xb << 4)
-#define INTEL_GMCH_GMS_STOLEN_224M	(0xc << 4)
-#define INTEL_GMCH_GMS_STOLEN_352M	(0xd << 4)
-
-#define I915_IFPADDR    0x60
-
-/* Intel 965G registers */
-#define I965_MSAC 0x62
-#define I965_IFPADDR    0x70
-
-/* Intel 7505 registers */
-#define INTEL_I7505_APSIZE	0x74
-#define INTEL_I7505_NCAPID	0x60
-#define INTEL_I7505_NISTAT	0x6c
-#define INTEL_I7505_ATTBASE	0x78
-#define INTEL_I7505_ERRSTS	0x42
-#define INTEL_I7505_AGPCTRL	0x70
-#define INTEL_I7505_MCHCFG	0x50
-
-#define SNB_GMCH_CTRL	0x50
-#define SNB_GMCH_GMS_STOLEN_MASK	0xF8
-#define SNB_GMCH_GMS_STOLEN_32M		(1 << 3)
-#define SNB_GMCH_GMS_STOLEN_64M		(2 << 3)
-#define SNB_GMCH_GMS_STOLEN_96M		(3 << 3)
-#define SNB_GMCH_GMS_STOLEN_128M	(4 << 3)
-#define SNB_GMCH_GMS_STOLEN_160M	(5 << 3)
-#define SNB_GMCH_GMS_STOLEN_192M	(6 << 3)
-#define SNB_GMCH_GMS_STOLEN_224M	(7 << 3)
-#define SNB_GMCH_GMS_STOLEN_256M	(8 << 3)
-#define SNB_GMCH_GMS_STOLEN_288M	(9 << 3)
-#define SNB_GMCH_GMS_STOLEN_320M	(0xa << 3)
-#define SNB_GMCH_GMS_STOLEN_352M	(0xb << 3)
-#define SNB_GMCH_GMS_STOLEN_384M	(0xc << 3)
-#define SNB_GMCH_GMS_STOLEN_416M	(0xd << 3)
-#define SNB_GMCH_GMS_STOLEN_448M	(0xe << 3)
-#define SNB_GMCH_GMS_STOLEN_480M	(0xf << 3)
-#define SNB_GMCH_GMS_STOLEN_512M	(0x10 << 3)
-#define SNB_GTT_SIZE_0M			(0 << 8)
-#define SNB_GTT_SIZE_1M			(1 << 8)
-#define SNB_GTT_SIZE_2M			(2 << 8)
-#define SNB_GTT_SIZE_MASK		(3 << 8)
-
-static const struct aper_size_info_fixed intel_i810_sizes[] =
-{
-	{64, 16384, 4},
-	/* The 32M mode still requires a 64k gatt */
-	{32, 8192, 4}
-};
-
-#define AGP_DCACHE_MEMORY	1
-#define AGP_PHYS_MEMORY		2
-#define INTEL_AGP_CACHED_MEMORY 3
-
-static struct gatt_mask intel_i810_masks[] =
-{
-	{.mask = I810_PTE_VALID, .type = 0},
-	{.mask = (I810_PTE_VALID | I810_PTE_LOCAL), .type = AGP_DCACHE_MEMORY},
-	{.mask = I810_PTE_VALID, .type = 0},
-	{.mask = I810_PTE_VALID | I830_PTE_SYSTEM_CACHED,
-	 .type = INTEL_AGP_CACHED_MEMORY}
-};
-
-static struct _intel_private {
-	struct pci_dev *pcidev;	/* device one */
-	u8 __iomem *registers;
-	u32 __iomem *gtt;		/* I915G */
-	int num_dcache_entries;
-	/* gtt_entries is the number of gtt entries that are already mapped
-	 * to stolen memory.  Stolen memory is larger than the memory mapped
-	 * through gtt_entries, as it includes some reserved space for the BIOS
-	 * popup and for the GTT.
-	 */
-	int gtt_entries;			/* i830+ */
-	int gtt_total_size;
-	union {
-		void __iomem *i9xx_flush_page;
-		void *i8xx_flush_page;
-	};
-	struct page *i8xx_page;
-	struct resource ifp_resource;
-	int resource_valid;
-} intel_private;
-
-#ifdef USE_PCI_DMA_API
-static int intel_agp_map_page(struct page *page, dma_addr_t *ret)
-{
-	*ret = pci_map_page(intel_private.pcidev, page, 0,
-			    PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-	if (pci_dma_mapping_error(intel_private.pcidev, *ret))
-		return -EINVAL;
-	return 0;
-}
-
-static void intel_agp_unmap_page(struct page *page, dma_addr_t dma)
-{
-	pci_unmap_page(intel_private.pcidev, dma,
-		       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-}
-
-static void intel_agp_free_sglist(struct agp_memory *mem)
-{
-	struct sg_table st;
-
-	st.sgl = mem->sg_list;
-	st.orig_nents = st.nents = mem->page_count;
-
-	sg_free_table(&st);
-
-	mem->sg_list = NULL;
-	mem->num_sg = 0;
-}
-
-static int intel_agp_map_memory(struct agp_memory *mem)
-{
-	struct sg_table st;
-	struct scatterlist *sg;
-	int i;
-
-	DBG("try mapping %lu pages\n", (unsigned long)mem->page_count);
-
-	if (sg_alloc_table(&st, mem->page_count, GFP_KERNEL))
-		return -ENOMEM;
-
-	mem->sg_list = sg = st.sgl;
-
-	for (i = 0 ; i < mem->page_count; i++, sg = sg_next(sg))
-		sg_set_page(sg, mem->pages[i], PAGE_SIZE, 0);
-
-	mem->num_sg = pci_map_sg(intel_private.pcidev, mem->sg_list,
-				 mem->page_count, PCI_DMA_BIDIRECTIONAL);
-	if (unlikely(!mem->num_sg)) {
-		intel_agp_free_sglist(mem);
-		return -ENOMEM;
-	}
-	return 0;
-}
-
-static void intel_agp_unmap_memory(struct agp_memory *mem)
-{
-	DBG("try unmapping %lu pages\n", (unsigned long)mem->page_count);
-
-	pci_unmap_sg(intel_private.pcidev, mem->sg_list,
-		     mem->page_count, PCI_DMA_BIDIRECTIONAL);
-	intel_agp_free_sglist(mem);
-}
-
-static void intel_agp_insert_sg_entries(struct agp_memory *mem,
-					off_t pg_start, int mask_type)
-{
-	struct scatterlist *sg;
-	int i, j;
-
-	j = pg_start;
-
-	WARN_ON(!mem->num_sg);
-
-	if (mem->num_sg == mem->page_count) {
-		for_each_sg(mem->sg_list, sg, mem->page_count, i) {
-			writel(agp_bridge->driver->mask_memory(agp_bridge,
-					sg_dma_address(sg), mask_type),
-					intel_private.gtt+j);
-			j++;
-		}
-	} else {
-		/* sg may merge pages, but we have to separate
-		 * per-page addr for GTT */
-		unsigned int len, m;
-
-		for_each_sg(mem->sg_list, sg, mem->num_sg, i) {
-			len = sg_dma_len(sg) / PAGE_SIZE;
-			for (m = 0; m < len; m++) {
-				writel(agp_bridge->driver->mask_memory(agp_bridge,
-								       sg_dma_address(sg) + m * PAGE_SIZE,
-								       mask_type),
-				       intel_private.gtt+j);
-				j++;
-			}
-		}
-	}
-	readl(intel_private.gtt+j-1);
-}
-
-#else
-
-static void intel_agp_insert_sg_entries(struct agp_memory *mem,
-					off_t pg_start, int mask_type)
-{
-	int i, j;
-	u32 cache_bits = 0;
-
-	if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB ||
-	    agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB)
-	{
-		cache_bits = I830_PTE_SYSTEM_CACHED;
-	}
-
-	for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
-		writel(agp_bridge->driver->mask_memory(agp_bridge,
-				page_to_phys(mem->pages[i]), mask_type),
-		       intel_private.gtt+j);
-	}
-
-	readl(intel_private.gtt+j-1);
-}
-
-#endif
-
-static int intel_i810_fetch_size(void)
-{
-	u32 smram_miscc;
-	struct aper_size_info_fixed *values;
-
-	pci_read_config_dword(agp_bridge->dev, I810_SMRAM_MISCC, &smram_miscc);
-	values = A_SIZE_FIX(agp_bridge->driver->aperture_sizes);
-
-	if ((smram_miscc & I810_GMS) == I810_GMS_DISABLE) {
-		dev_warn(&agp_bridge->dev->dev, "i810 is disabled\n");
-		return 0;
-	}
-	if ((smram_miscc & I810_GFX_MEM_WIN_SIZE) == I810_GFX_MEM_WIN_32M) {
-		agp_bridge->previous_size =
-			agp_bridge->current_size = (void *) (values + 1);
-		agp_bridge->aperture_size_idx = 1;
-		return values[1].size;
-	} else {
-		agp_bridge->previous_size =
-			agp_bridge->current_size = (void *) (values);
-		agp_bridge->aperture_size_idx = 0;
-		return values[0].size;
-	}
-
-	return 0;
-}
-
-static int intel_i810_configure(void)
-{
-	struct aper_size_info_fixed *current_size;
-	u32 temp;
-	int i;
-
-	current_size = A_SIZE_FIX(agp_bridge->current_size);
-
-	if (!intel_private.registers) {
-		pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &temp);
-		temp &= 0xfff80000;
-
-		intel_private.registers = ioremap(temp, 128 * 4096);
-		if (!intel_private.registers) {
-			dev_err(&intel_private.pcidev->dev,
-				"can't remap memory\n");
-			return -ENOMEM;
-		}
-	}
-
-	if ((readl(intel_private.registers+I810_DRAM_CTL)
-		& I810_DRAM_ROW_0) == I810_DRAM_ROW_0_SDRAM) {
-		/* This will need to be dynamically assigned */
-		dev_info(&intel_private.pcidev->dev,
-			 "detected 4MB dedicated video ram\n");
-		intel_private.num_dcache_entries = 1024;
-	}
-	pci_read_config_dword(intel_private.pcidev, I810_GMADDR, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
-	writel(agp_bridge->gatt_bus_addr | I810_PGETBL_ENABLED, intel_private.registers+I810_PGETBL_CTL);
-	readl(intel_private.registers+I810_PGETBL_CTL);	/* PCI Posting. */
-
-	if (agp_bridge->driver->needs_scratch_page) {
-		for (i = 0; i < current_size->num_entries; i++) {
-			writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
-		}
-		readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));	/* PCI posting. */
-	}
-	global_cache_flush();
-	return 0;
-}
-
-static void intel_i810_cleanup(void)
-{
-	writel(0, intel_private.registers+I810_PGETBL_CTL);
-	readl(intel_private.registers);	/* PCI Posting. */
-	iounmap(intel_private.registers);
-}
-
-static void intel_i810_tlbflush(struct agp_memory *mem)
-{
-	return;
-}
-
-static void intel_i810_agp_enable(struct agp_bridge_data *bridge, u32 mode)
-{
-	return;
-}
-
-/* Exists to support ARGB cursors */
-static struct page *i8xx_alloc_pages(void)
-{
-	struct page *page;
-
-	page = alloc_pages(GFP_KERNEL | GFP_DMA32, 2);
-	if (page == NULL)
-		return NULL;
-
-	if (set_pages_uc(page, 4) < 0) {
-		set_pages_wb(page, 4);
-		__free_pages(page, 2);
-		return NULL;
-	}
-	get_page(page);
-	atomic_inc(&agp_bridge->current_memory_agp);
-	return page;
-}
-
-static void i8xx_destroy_pages(struct page *page)
-{
-	if (page == NULL)
-		return;
-
-	set_pages_wb(page, 4);
-	put_page(page);
-	__free_pages(page, 2);
-	atomic_dec(&agp_bridge->current_memory_agp);
-}
-
-static int intel_i830_type_to_mask_type(struct agp_bridge_data *bridge,
-					int type)
-{
-	if (type < AGP_USER_TYPES)
-		return type;
-	else if (type == AGP_USER_CACHED_MEMORY)
-		return INTEL_AGP_CACHED_MEMORY;
-	else
-		return 0;
-}
-
-static int intel_i810_insert_entries(struct agp_memory *mem, off_t pg_start,
-				int type)
-{
-	int i, j, num_entries;
-	void *temp;
-	int ret = -EINVAL;
-	int mask_type;
-
-	if (mem->page_count == 0)
-		goto out;
-
-	temp = agp_bridge->current_size;
-	num_entries = A_SIZE_FIX(temp)->num_entries;
-
-	if ((pg_start + mem->page_count) > num_entries)
-		goto out_err;
-
-
-	for (j = pg_start; j < (pg_start + mem->page_count); j++) {
-		if (!PGE_EMPTY(agp_bridge, readl(agp_bridge->gatt_table+j))) {
-			ret = -EBUSY;
-			goto out_err;
-		}
-	}
-
-	if (type != mem->type)
-		goto out_err;
-
-	mask_type = agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type);
-
-	switch (mask_type) {
-	case AGP_DCACHE_MEMORY:
-		if (!mem->is_flushed)
-			global_cache_flush();
-		for (i = pg_start; i < (pg_start + mem->page_count); i++) {
-			writel((i*4096)|I810_PTE_LOCAL|I810_PTE_VALID,
-			       intel_private.registers+I810_PTE_BASE+(i*4));
-		}
-		readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));
-		break;
-	case AGP_PHYS_MEMORY:
-	case AGP_NORMAL_MEMORY:
-		if (!mem->is_flushed)
-			global_cache_flush();
-		for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
-			writel(agp_bridge->driver->mask_memory(agp_bridge,
-					page_to_phys(mem->pages[i]), mask_type),
-			       intel_private.registers+I810_PTE_BASE+(j*4));
-		}
-		readl(intel_private.registers+I810_PTE_BASE+((j-1)*4));
-		break;
-	default:
-		goto out_err;
-	}
-
-	agp_bridge->driver->tlb_flush(mem);
-out:
-	ret = 0;
-out_err:
-	mem->is_flushed = true;
-	return ret;
-}
-
-static int intel_i810_remove_entries(struct agp_memory *mem, off_t pg_start,
-				int type)
-{
-	int i;
-
-	if (mem->page_count == 0)
-		return 0;
-
-	for (i = pg_start; i < (mem->page_count + pg_start); i++) {
-		writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
-	}
-	readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));
-
-	agp_bridge->driver->tlb_flush(mem);
-	return 0;
-}
-
-/*
- * The i810/i830 requires a physical address to program its mouse
- * pointer into hardware.
- * However the Xserver still writes to it through the agp aperture.
- */
-static struct agp_memory *alloc_agpphysmem_i8xx(size_t pg_count, int type)
-{
-	struct agp_memory *new;
-	struct page *page;
-
-	switch (pg_count) {
-	case 1: page = agp_bridge->driver->agp_alloc_page(agp_bridge);
-		break;
-	case 4:
-		/* kludge to get 4 physical pages for ARGB cursor */
-		page = i8xx_alloc_pages();
-		break;
-	default:
-		return NULL;
-	}
-
-	if (page == NULL)
-		return NULL;
-
-	new = agp_create_memory(pg_count);
-	if (new == NULL)
-		return NULL;
-
-	new->pages[0] = page;
-	if (pg_count == 4) {
-		/* kludge to get 4 physical pages for ARGB cursor */
-		new->pages[1] = new->pages[0] + 1;
-		new->pages[2] = new->pages[1] + 1;
-		new->pages[3] = new->pages[2] + 1;
-	}
-	new->page_count = pg_count;
-	new->num_scratch_pages = pg_count;
-	new->type = AGP_PHYS_MEMORY;
-	new->physical = page_to_phys(new->pages[0]);
-	return new;
-}
-
-static struct agp_memory *intel_i810_alloc_by_type(size_t pg_count, int type)
-{
-	struct agp_memory *new;
-
-	if (type == AGP_DCACHE_MEMORY) {
-		if (pg_count != intel_private.num_dcache_entries)
-			return NULL;
-
-		new = agp_create_memory(1);
-		if (new == NULL)
-			return NULL;
-
-		new->type = AGP_DCACHE_MEMORY;
-		new->page_count = pg_count;
-		new->num_scratch_pages = 0;
-		agp_free_page_array(new);
-		return new;
-	}
-	if (type == AGP_PHYS_MEMORY)
-		return alloc_agpphysmem_i8xx(pg_count, type);
-	return NULL;
-}
-
-static void intel_i810_free_by_type(struct agp_memory *curr)
-{
-	agp_free_key(curr->key);
-	if (curr->type == AGP_PHYS_MEMORY) {
-		if (curr->page_count == 4)
-			i8xx_destroy_pages(curr->pages[0]);
-		else {
-			agp_bridge->driver->agp_destroy_page(curr->pages[0],
-							     AGP_PAGE_DESTROY_UNMAP);
-			agp_bridge->driver->agp_destroy_page(curr->pages[0],
-							     AGP_PAGE_DESTROY_FREE);
-		}
-		agp_free_page_array(curr);
-	}
-	kfree(curr);
-}
-
-static unsigned long intel_i810_mask_memory(struct agp_bridge_data *bridge,
-					    dma_addr_t addr, int type)
-{
-	/* Type checking must be done elsewhere */
-	return addr | bridge->driver->masks[type].mask;
-}
-
-static struct aper_size_info_fixed intel_i830_sizes[] =
-{
-	{128, 32768, 5},
-	/* The 64M mode still requires a 128k gatt */
-	{64, 16384, 5},
-	{256, 65536, 6},
-	{512, 131072, 7},
-};
-
-static void intel_i830_init_gtt_entries(void)
-{
-	u16 gmch_ctrl;
-	int gtt_entries = 0;
-	u8 rdct;
-	int local = 0;
-	static const int ddt[4] = { 0, 16, 32, 64 };
-	int size; /* reserved space (in kb) at the top of stolen memory */
-
-	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
-
-	if (IS_I965) {
-		u32 pgetbl_ctl;
-		pgetbl_ctl = readl(intel_private.registers+I810_PGETBL_CTL);
-
-		/* The 965 has a field telling us the size of the GTT,
-		 * which may be larger than what is necessary to map the
-		 * aperture.
-		 */
-		switch (pgetbl_ctl & I965_PGETBL_SIZE_MASK) {
-		case I965_PGETBL_SIZE_128KB:
-			size = 128;
-			break;
-		case I965_PGETBL_SIZE_256KB:
-			size = 256;
-			break;
-		case I965_PGETBL_SIZE_512KB:
-			size = 512;
-			break;
-		case I965_PGETBL_SIZE_1MB:
-			size = 1024;
-			break;
-		case I965_PGETBL_SIZE_2MB:
-			size = 2048;
-			break;
-		case I965_PGETBL_SIZE_1_5MB:
-			size = 1024 + 512;
-			break;
-		default:
-			dev_info(&intel_private.pcidev->dev,
-				 "unknown page table size, assuming 512KB\n");
-			size = 512;
-		}
-		size += 4; /* add in BIOS popup space */
-	} else if (IS_G33 && !IS_PINEVIEW) {
-	/* G33's GTT size defined in gmch_ctrl */
-		switch (gmch_ctrl & G33_PGETBL_SIZE_MASK) {
-		case G33_PGETBL_SIZE_1M:
-			size = 1024;
-			break;
-		case G33_PGETBL_SIZE_2M:
-			size = 2048;
-			break;
-		default:
-			dev_info(&agp_bridge->dev->dev,
-				 "unknown page table size 0x%x, assuming 512KB\n",
-				(gmch_ctrl & G33_PGETBL_SIZE_MASK));
-			size = 512;
-		}
-		size += 4;
-	} else if (IS_G4X || IS_PINEVIEW) {
-		/* On 4 series hardware, GTT stolen is separate from graphics
-		 * stolen, ignore it in stolen gtt entries counting.  However,
-		 * 4KB of the stolen memory doesn't get mapped to the GTT.
-		 */
-		size = 4;
-	} else {
-		/* On previous hardware, the GTT size was just what was
-		 * required to map the aperture.
-		 */
-		size = agp_bridge->driver->fetch_size() + 4;
-	}
-
-	if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82830_HB ||
-	    agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82845G_HB) {
-		switch (gmch_ctrl & I830_GMCH_GMS_MASK) {
-		case I830_GMCH_GMS_STOLEN_512:
-			gtt_entries = KB(512) - KB(size);
-			break;
-		case I830_GMCH_GMS_STOLEN_1024:
-			gtt_entries = MB(1) - KB(size);
-			break;
-		case I830_GMCH_GMS_STOLEN_8192:
-			gtt_entries = MB(8) - KB(size);
-			break;
-		case I830_GMCH_GMS_LOCAL:
-			rdct = readb(intel_private.registers+I830_RDRAM_CHANNEL_TYPE);
-			gtt_entries = (I830_RDRAM_ND(rdct) + 1) *
-					MB(ddt[I830_RDRAM_DDT(rdct)]);
-			local = 1;
-			break;
-		default:
-			gtt_entries = 0;
-			break;
-		}
-	} else if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB ||
-		   agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB) {
-		/*
-		 * SandyBridge has new memory control reg at 0x50.w
-		 */
-		u16 snb_gmch_ctl;
-		pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
-		switch (snb_gmch_ctl & SNB_GMCH_GMS_STOLEN_MASK) {
-		case SNB_GMCH_GMS_STOLEN_32M:
-			gtt_entries = MB(32) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_64M:
-			gtt_entries = MB(64) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_96M:
-			gtt_entries = MB(96) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_128M:
-			gtt_entries = MB(128) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_160M:
-			gtt_entries = MB(160) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_192M:
-			gtt_entries = MB(192) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_224M:
-			gtt_entries = MB(224) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_256M:
-			gtt_entries = MB(256) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_288M:
-			gtt_entries = MB(288) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_320M:
-			gtt_entries = MB(320) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_352M:
-			gtt_entries = MB(352) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_384M:
-			gtt_entries = MB(384) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_416M:
-			gtt_entries = MB(416) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_448M:
-			gtt_entries = MB(448) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_480M:
-			gtt_entries = MB(480) - KB(size);
-			break;
-		case SNB_GMCH_GMS_STOLEN_512M:
-			gtt_entries = MB(512) - KB(size);
-			break;
-		}
-	} else {
-		switch (gmch_ctrl & I855_GMCH_GMS_MASK) {
-		case I855_GMCH_GMS_STOLEN_1M:
-			gtt_entries = MB(1) - KB(size);
-			break;
-		case I855_GMCH_GMS_STOLEN_4M:
-			gtt_entries = MB(4) - KB(size);
-			break;
-		case I855_GMCH_GMS_STOLEN_8M:
-			gtt_entries = MB(8) - KB(size);
-			break;
-		case I855_GMCH_GMS_STOLEN_16M:
-			gtt_entries = MB(16) - KB(size);
-			break;
-		case I855_GMCH_GMS_STOLEN_32M:
-			gtt_entries = MB(32) - KB(size);
-			break;
-		case I915_GMCH_GMS_STOLEN_48M:
-			/* Check it's really I915G */
-			if (IS_I915 || IS_I965 || IS_G33 || IS_G4X)
-				gtt_entries = MB(48) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case I915_GMCH_GMS_STOLEN_64M:
-			/* Check it's really I915G */
-			if (IS_I915 || IS_I965 || IS_G33 || IS_G4X)
-				gtt_entries = MB(64) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case G33_GMCH_GMS_STOLEN_128M:
-			if (IS_G33 || IS_I965 || IS_G4X)
-				gtt_entries = MB(128) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case G33_GMCH_GMS_STOLEN_256M:
-			if (IS_G33 || IS_I965 || IS_G4X)
-				gtt_entries = MB(256) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case INTEL_GMCH_GMS_STOLEN_96M:
-			if (IS_I965 || IS_G4X)
-				gtt_entries = MB(96) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case INTEL_GMCH_GMS_STOLEN_160M:
-			if (IS_I965 || IS_G4X)
-				gtt_entries = MB(160) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case INTEL_GMCH_GMS_STOLEN_224M:
-			if (IS_I965 || IS_G4X)
-				gtt_entries = MB(224) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		case INTEL_GMCH_GMS_STOLEN_352M:
-			if (IS_I965 || IS_G4X)
-				gtt_entries = MB(352) - KB(size);
-			else
-				gtt_entries = 0;
-			break;
-		default:
-			gtt_entries = 0;
-			break;
-		}
-	}
-	if (gtt_entries > 0) {
-		dev_info(&agp_bridge->dev->dev, "detected %dK %s memory\n",
-		       gtt_entries / KB(1), local ? "local" : "stolen");
-		gtt_entries /= KB(4);
-	} else {
-		dev_info(&agp_bridge->dev->dev,
-		       "no pre-allocated video memory detected\n");
-		gtt_entries = 0;
-	}
-
-	intel_private.gtt_entries = gtt_entries;
-}
-
-static void intel_i830_fini_flush(void)
-{
-	kunmap(intel_private.i8xx_page);
-	intel_private.i8xx_flush_page = NULL;
-	unmap_page_from_agp(intel_private.i8xx_page);
-
-	__free_page(intel_private.i8xx_page);
-	intel_private.i8xx_page = NULL;
-}
-
-static void intel_i830_setup_flush(void)
-{
-	/* return if we've already set the flush mechanism up */
-	if (intel_private.i8xx_page)
-		return;
-
-	intel_private.i8xx_page = alloc_page(GFP_KERNEL | __GFP_ZERO | GFP_DMA32);
-	if (!intel_private.i8xx_page)
-		return;
-
-	intel_private.i8xx_flush_page = kmap(intel_private.i8xx_page);
-	if (!intel_private.i8xx_flush_page)
-		intel_i830_fini_flush();
-}
-
-/* The chipset_flush interface needs to get data that has already been
- * flushed out of the CPU all the way out to main memory, because the GPU
- * doesn't snoop those buffers.
- *
- * The 8xx series doesn't have the same lovely interface for flushing the
- * chipset write buffers that the later chips do. According to the 865
- * specs, it's 64 octwords, or 1KB.  So, to get those previous things in
- * that buffer out, we just fill 1KB and clflush it out, on the assumption
- * that it'll push whatever was in there out.  It appears to work.
- */
-static void intel_i830_chipset_flush(struct agp_bridge_data *bridge)
-{
-	unsigned int *pg = intel_private.i8xx_flush_page;
-
-	memset(pg, 0, 1024);
-
-	if (cpu_has_clflush)
-		clflush_cache_range(pg, 1024);
-	else if (wbinvd_on_all_cpus() != 0)
-		printk(KERN_ERR "Timed out waiting for cache flush.\n");
-}
-
-/* The intel i830 automatically initializes the agp aperture during POST.
- * Use the memory already set aside for in the GTT.
- */
-static int intel_i830_create_gatt_table(struct agp_bridge_data *bridge)
-{
-	int page_order;
-	struct aper_size_info_fixed *size;
-	int num_entries;
-	u32 temp;
-
-	size = agp_bridge->current_size;
-	page_order = size->page_order;
-	num_entries = size->num_entries;
-	agp_bridge->gatt_table_real = NULL;
-
-	pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &temp);
-	temp &= 0xfff80000;
-
-	intel_private.registers = ioremap(temp, 128 * 4096);
-	if (!intel_private.registers)
-		return -ENOMEM;
-
-	temp = readl(intel_private.registers+I810_PGETBL_CTL) & 0xfffff000;
-	global_cache_flush();	/* FIXME: ?? */
-
-	/* we have to call this as early as possible after the MMIO base address is known */
-	intel_i830_init_gtt_entries();
-
-	agp_bridge->gatt_table = NULL;
-
-	agp_bridge->gatt_bus_addr = temp;
-
-	return 0;
-}
-
-/* Return the gatt table to a sane state. Use the top of stolen
- * memory for the GTT.
- */
-static int intel_i830_free_gatt_table(struct agp_bridge_data *bridge)
-{
-	return 0;
-}
-
-static int intel_i830_fetch_size(void)
-{
-	u16 gmch_ctrl;
-	struct aper_size_info_fixed *values;
-
-	values = A_SIZE_FIX(agp_bridge->driver->aperture_sizes);
-
-	if (agp_bridge->dev->device != PCI_DEVICE_ID_INTEL_82830_HB &&
-	    agp_bridge->dev->device != PCI_DEVICE_ID_INTEL_82845G_HB) {
-		/* 855GM/852GM/865G has 128MB aperture size */
-		agp_bridge->previous_size = agp_bridge->current_size = (void *) values;
-		agp_bridge->aperture_size_idx = 0;
-		return values[0].size;
-	}
-
-	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
-
-	if ((gmch_ctrl & I830_GMCH_MEM_MASK) == I830_GMCH_MEM_128M) {
-		agp_bridge->previous_size = agp_bridge->current_size = (void *) values;
-		agp_bridge->aperture_size_idx = 0;
-		return values[0].size;
-	} else {
-		agp_bridge->previous_size = agp_bridge->current_size = (void *) (values + 1);
-		agp_bridge->aperture_size_idx = 1;
-		return values[1].size;
-	}
-
-	return 0;
-}
-
-static int intel_i830_configure(void)
-{
-	struct aper_size_info_fixed *current_size;
-	u32 temp;
-	u16 gmch_ctrl;
-	int i;
-
-	current_size = A_SIZE_FIX(agp_bridge->current_size);
-
-	pci_read_config_dword(intel_private.pcidev, I810_GMADDR, &temp);
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
-
-	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
-	gmch_ctrl |= I830_GMCH_ENABLED;
-	pci_write_config_word(agp_bridge->dev, I830_GMCH_CTRL, gmch_ctrl);
-
-	writel(agp_bridge->gatt_bus_addr|I810_PGETBL_ENABLED, intel_private.registers+I810_PGETBL_CTL);
-	readl(intel_private.registers+I810_PGETBL_CTL);	/* PCI Posting. */
-
-	if (agp_bridge->driver->needs_scratch_page) {
-		for (i = intel_private.gtt_entries; i < current_size->num_entries; i++) {
-			writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
-		}
-		readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));	/* PCI Posting. */
-	}
-
-	global_cache_flush();
-
-	intel_i830_setup_flush();
-	return 0;
-}
-
-static void intel_i830_cleanup(void)
-{
-	iounmap(intel_private.registers);
-}
-
-static int intel_i830_insert_entries(struct agp_memory *mem, off_t pg_start,
-				     int type)
-{
-	int i, j, num_entries;
-	void *temp;
-	int ret = -EINVAL;
-	int mask_type;
-
-	if (mem->page_count == 0)
-		goto out;
-
-	temp = agp_bridge->current_size;
-	num_entries = A_SIZE_FIX(temp)->num_entries;
-
-	if (pg_start < intel_private.gtt_entries) {
-		dev_printk(KERN_DEBUG, &intel_private.pcidev->dev,
-			   "pg_start == 0x%.8lx, intel_private.gtt_entries == 0x%.8x\n",
-			   pg_start, intel_private.gtt_entries);
-
-		dev_info(&intel_private.pcidev->dev,
-			 "trying to insert into local/stolen memory\n");
-		goto out_err;
-	}
-
-	if ((pg_start + mem->page_count) > num_entries)
-		goto out_err;
-
-	/* The i830 can't check the GTT for entries since it's read-only;
-	 * depend on the caller to make the correct offset decisions.
-	 */
-
-	if (type != mem->type)
-		goto out_err;
-
-	mask_type = agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type);
-
-	if (mask_type != 0 && mask_type != AGP_PHYS_MEMORY &&
-	    mask_type != INTEL_AGP_CACHED_MEMORY)
-		goto out_err;
-
-	if (!mem->is_flushed)
-		global_cache_flush();
-
-	for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
-		writel(agp_bridge->driver->mask_memory(agp_bridge,
-				page_to_phys(mem->pages[i]), mask_type),
-		       intel_private.registers+I810_PTE_BASE+(j*4));
-	}
-	readl(intel_private.registers+I810_PTE_BASE+((j-1)*4));
-	agp_bridge->driver->tlb_flush(mem);
-
-out:
-	ret = 0;
-out_err:
-	mem->is_flushed = true;
-	return ret;
-}
-
-static int intel_i830_remove_entries(struct agp_memory *mem, off_t pg_start,
-				     int type)
-{
-	int i;
-
-	if (mem->page_count == 0)
-		return 0;
-
-	if (pg_start < intel_private.gtt_entries) {
-		dev_info(&intel_private.pcidev->dev,
-			 "trying to disable local/stolen memory\n");
-		return -EINVAL;
-	}
-
-	for (i = pg_start; i < (mem->page_count + pg_start); i++) {
-		writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
-	}
-	readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));
-
-	agp_bridge->driver->tlb_flush(mem);
-	return 0;
-}
-
-static struct agp_memory *intel_i830_alloc_by_type(size_t pg_count, int type)
-{
-	if (type == AGP_PHYS_MEMORY)
-		return alloc_agpphysmem_i8xx(pg_count, type);
-	/* always return NULL for other allocation types for now */
-	return NULL;
-}
-
-static int intel_alloc_chipset_flush_resource(void)
-{
-	int ret;
-	ret = pci_bus_alloc_resource(agp_bridge->dev->bus, &intel_private.ifp_resource, PAGE_SIZE,
-				     PAGE_SIZE, PCIBIOS_MIN_MEM, 0,
-				     pcibios_align_resource, agp_bridge->dev);
-
-	return ret;
-}
-
-static void intel_i915_setup_chipset_flush(void)
-{
-	int ret;
-	u32 temp;
-
-	pci_read_config_dword(agp_bridge->dev, I915_IFPADDR, &temp);
-	if (!(temp & 0x1)) {
-		intel_alloc_chipset_flush_resource();
-		intel_private.resource_valid = 1;
-		pci_write_config_dword(agp_bridge->dev, I915_IFPADDR, (intel_private.ifp_resource.start & 0xffffffff) | 0x1);
-	} else {
-		temp &= ~1;
-
-		intel_private.resource_valid = 1;
-		intel_private.ifp_resource.start = temp;
-		intel_private.ifp_resource.end = temp + PAGE_SIZE;
-		ret = request_resource(&iomem_resource, &intel_private.ifp_resource);
-		/* some BIOSes reserve this area in a PnP resource, some don't */
-		if (ret)
-			intel_private.resource_valid = 0;
-	}
-}
-
-static void intel_i965_g33_setup_chipset_flush(void)
-{
-	u32 temp_hi, temp_lo;
-	int ret;
-
-	pci_read_config_dword(agp_bridge->dev, I965_IFPADDR + 4, &temp_hi);
-	pci_read_config_dword(agp_bridge->dev, I965_IFPADDR, &temp_lo);
-
-	if (!(temp_lo & 0x1)) {
-
-		intel_alloc_chipset_flush_resource();
-
-		intel_private.resource_valid = 1;
-		pci_write_config_dword(agp_bridge->dev, I965_IFPADDR + 4,
-			upper_32_bits(intel_private.ifp_resource.start));
-		pci_write_config_dword(agp_bridge->dev, I965_IFPADDR, (intel_private.ifp_resource.start & 0xffffffff) | 0x1);
-	} else {
-		u64 l64;
-
-		temp_lo &= ~0x1;
-		l64 = ((u64)temp_hi << 32) | temp_lo;
-
-		intel_private.resource_valid = 1;
-		intel_private.ifp_resource.start = l64;
-		intel_private.ifp_resource.end = l64 + PAGE_SIZE;
-		ret = request_resource(&iomem_resource, &intel_private.ifp_resource);
-		/* some BIOSes reserve this area in a PnP resource, some don't */
-		if (ret)
-			intel_private.resource_valid = 0;
-	}
-}
-
-static void intel_i9xx_setup_flush(void)
-{
-	/* return if already configured */
-	if (intel_private.ifp_resource.start)
-		return;
-
-	if (IS_SNB)
-		return;
-
-	/* setup a resource for this object */
-	intel_private.ifp_resource.name = "Intel Flush Page";
-	intel_private.ifp_resource.flags = IORESOURCE_MEM;
-
-	/* Setup chipset flush for 915 */
-	if (IS_I965 || IS_G33 || IS_G4X) {
-		intel_i965_g33_setup_chipset_flush();
-	} else {
-		intel_i915_setup_chipset_flush();
-	}
-
-	if (intel_private.ifp_resource.start) {
-		intel_private.i9xx_flush_page = ioremap_nocache(intel_private.ifp_resource.start, PAGE_SIZE);
-		if (!intel_private.i9xx_flush_page)
-			dev_info(&intel_private.pcidev->dev, "can't ioremap flush page - no chipset flushing\n");
-	}
-}
-
-static int intel_i915_configure(void)
-{
-	struct aper_size_info_fixed *current_size;
-	u32 temp;
-	u16 gmch_ctrl;
-	int i;
-
-	current_size = A_SIZE_FIX(agp_bridge->current_size);
-
-	pci_read_config_dword(intel_private.pcidev, I915_GMADDR, &temp);
-
-	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
-
-	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
-	gmch_ctrl |= I830_GMCH_ENABLED;
-	pci_write_config_word(agp_bridge->dev, I830_GMCH_CTRL, gmch_ctrl);
-
-	writel(agp_bridge->gatt_bus_addr|I810_PGETBL_ENABLED, intel_private.registers+I810_PGETBL_CTL);
-	readl(intel_private.registers+I810_PGETBL_CTL);	/* PCI Posting. */
-
-	if (agp_bridge->driver->needs_scratch_page) {
-		for (i = intel_private.gtt_entries; i < intel_private.gtt_total_size; i++) {
-			writel(agp_bridge->scratch_page, intel_private.gtt+i);
-		}
-		readl(intel_private.gtt+i-1);	/* PCI Posting. */
-	}
-
-	global_cache_flush();
-
-	intel_i9xx_setup_flush();
-
-	return 0;
-}
-
-static void intel_i915_cleanup(void)
-{
-	if (intel_private.i9xx_flush_page)
-		iounmap(intel_private.i9xx_flush_page);
-	if (intel_private.resource_valid)
-		release_resource(&intel_private.ifp_resource);
-	intel_private.ifp_resource.start = 0;
-	intel_private.resource_valid = 0;
-	iounmap(intel_private.gtt);
-	iounmap(intel_private.registers);
-}
-
-static void intel_i915_chipset_flush(struct agp_bridge_data *bridge)
-{
-	if (intel_private.i9xx_flush_page)
-		writel(1, intel_private.i9xx_flush_page);
-}
-
-static int intel_i915_insert_entries(struct agp_memory *mem, off_t pg_start,
-				     int type)
-{
-	int num_entries;
-	void *temp;
-	int ret = -EINVAL;
-	int mask_type;
-
-	if (mem->page_count == 0)
-		goto out;
-
-	temp = agp_bridge->current_size;
-	num_entries = A_SIZE_FIX(temp)->num_entries;
-
-	if (pg_start < intel_private.gtt_entries) {
-		dev_printk(KERN_DEBUG, &intel_private.pcidev->dev,
-			   "pg_start == 0x%.8lx, intel_private.gtt_entries == 0x%.8x\n",
-			   pg_start, intel_private.gtt_entries);
-
-		dev_info(&intel_private.pcidev->dev,
-			 "trying to insert into local/stolen memory\n");
-		goto out_err;
-	}
-
-	if ((pg_start + mem->page_count) > num_entries)
-		goto out_err;
-
-	/* The i915 can't check the GTT for entries since it's read-only;
-	 * depend on the caller to make the correct offset decisions.
-	 */
-
-	if (type != mem->type)
-		goto out_err;
-
-	mask_type = agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type);
-
-	if (mask_type != 0 && mask_type != AGP_PHYS_MEMORY &&
-	    mask_type != INTEL_AGP_CACHED_MEMORY)
-		goto out_err;
-
-	if (!mem->is_flushed)
-		global_cache_flush();
-
-	intel_agp_insert_sg_entries(mem, pg_start, mask_type);
-	agp_bridge->driver->tlb_flush(mem);
-
- out:
-	ret = 0;
- out_err:
-	mem->is_flushed = true;
-	return ret;
-}
-
-static int intel_i915_remove_entries(struct agp_memory *mem, off_t pg_start,
-				     int type)
-{
-	int i;
-
-	if (mem->page_count == 0)
-		return 0;
-
-	if (pg_start < intel_private.gtt_entries) {
-		dev_info(&intel_private.pcidev->dev,
-			 "trying to disable local/stolen memory\n");
-		return -EINVAL;
-	}
-
-	for (i = pg_start; i < (mem->page_count + pg_start); i++)
-		writel(agp_bridge->scratch_page, intel_private.gtt+i);
-
-	readl(intel_private.gtt+i-1);
-
-	agp_bridge->driver->tlb_flush(mem);
-	return 0;
-}
-
-/* Return the aperture size by just checking the resource length.  The effect
- * of the MSAC registers described in the spec is simply a change in the
- * resource size.
- */
-static int intel_i9xx_fetch_size(void)
-{
-	int num_sizes = ARRAY_SIZE(intel_i830_sizes);
-	int aper_size; /* size in megabytes */
-	int i;
-
-	aper_size = pci_resource_len(intel_private.pcidev, 2) / MB(1);
-
-	for (i = 0; i < num_sizes; i++) {
-		if (aper_size == intel_i830_sizes[i].size) {
-			agp_bridge->current_size = intel_i830_sizes + i;
-			agp_bridge->previous_size = agp_bridge->current_size;
-			return aper_size;
-		}
-	}
-
-	return 0;
-}
-
-/* The intel i915 automatically initializes the agp aperture during POST.
- * Use the memory already set aside in the GTT.
- */
-static int intel_i915_create_gatt_table(struct agp_bridge_data *bridge)
-{
-	int page_order;
-	struct aper_size_info_fixed *size;
-	int num_entries;
-	u32 temp, temp2;
-	int gtt_map_size = 256 * 1024;
-
-	size = agp_bridge->current_size;
-	page_order = size->page_order;
-	num_entries = size->num_entries;
-	agp_bridge->gatt_table_real = NULL;
-
-	pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &temp);
-	pci_read_config_dword(intel_private.pcidev, I915_PTEADDR, &temp2);
-
-	if (IS_G33)
-	    gtt_map_size = 1024 * 1024; /* 1M on G33 */
-	intel_private.gtt = ioremap(temp2, gtt_map_size);
-	if (!intel_private.gtt)
-		return -ENOMEM;
-
-	intel_private.gtt_total_size = gtt_map_size / 4;
-
-	temp &= 0xfff80000;
-
-	intel_private.registers = ioremap(temp, 128 * 4096);
-	if (!intel_private.registers) {
-		iounmap(intel_private.gtt);
-		return -ENOMEM;
-	}
-
-	temp = readl(intel_private.registers+I810_PGETBL_CTL) & 0xfffff000;
-	global_cache_flush();	/* FIXME: ? */
-
-	/* we have to call this as early as possible after the MMIO base address is known */
-	intel_i830_init_gtt_entries();
-
-	agp_bridge->gatt_table = NULL;
-
-	agp_bridge->gatt_bus_addr = temp;
-
-	return 0;
-}
-
-/*
- * The i965 supports 36-bit physical addresses, but to keep
- * the format of the GTT the same, the bits that don't fit
- * in a 32-bit word are shifted down to bits 4..7.
- *
- * Gcc is smart enough to notice that "(addr >> 28) & 0xf0"
- * is always zero on 32-bit architectures, so no need to make
- * this conditional.
- */
-static unsigned long intel_i965_mask_memory(struct agp_bridge_data *bridge,
-					    dma_addr_t addr, int type)
-{
-	/* Shift high bits down */
-	addr |= (addr >> 28) & 0xf0;
-
-	/* Type checking must be done elsewhere */
-	return addr | bridge->driver->masks[type].mask;
-}
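/*
 * Editorial sketch (not part of the original patch; the address is
 * hypothetical): how the folding above packs a 36-bit physical address.
 *
 *   addr                = 0x3_4000_0000  (bits 33, 32 and 30 set)
 *   (addr >> 28) & 0xf0 = 0x30           high bits moved to PTE bits 4..7
 *   addr | 0x30         = 0x3_4000_0030  the GTT keeps the low 32 bits,
 *                                        0x4000_0030, plus the type mask
 */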
-
-static void intel_i965_get_gtt_range(int *gtt_offset, int *gtt_size)
-{
-	u16 snb_gmch_ctl;
-
-	switch (agp_bridge->dev->device) {
-	case PCI_DEVICE_ID_INTEL_GM45_HB:
-	case PCI_DEVICE_ID_INTEL_EAGLELAKE_HB:
-	case PCI_DEVICE_ID_INTEL_Q45_HB:
-	case PCI_DEVICE_ID_INTEL_G45_HB:
-	case PCI_DEVICE_ID_INTEL_G41_HB:
-	case PCI_DEVICE_ID_INTEL_B43_HB:
-	case PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB:
-	case PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB:
-	case PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB:
-	case PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB:
-		*gtt_offset = *gtt_size = MB(2);
-		break;
-	case PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB:
-	case PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB:
-		*gtt_offset = MB(2);
-
-		pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
-		switch (snb_gmch_ctl & SNB_GTT_SIZE_MASK) {
-		default:
-		case SNB_GTT_SIZE_0M:
-			printk(KERN_ERR "Bad GTT size mask: 0x%04x.\n", snb_gmch_ctl);
-			*gtt_size = MB(0);
-			break;
-		case SNB_GTT_SIZE_1M:
-			*gtt_size = MB(1);
-			break;
-		case SNB_GTT_SIZE_2M:
-			*gtt_size = MB(2);
-			break;
-		}
-		break;
-	default:
-		*gtt_offset = *gtt_size = KB(512);
-	}
-}
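/*
 * Editorial example (not part of the original patch; the register value
 * is hypothetical): decoding the Sandybridge GTT size read above. With
 * snb_gmch_ctl == 0x0149:
 *   0x0149 & SNB_GTT_SIZE_MASK (3 << 8) == 0x0100 == SNB_GTT_SIZE_1M,
 * so *gtt_size is set to MB(1).
 */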
-
-/* The intel i965 automatically initializes the agp aperture during POST.
- * Use the memory already set aside in the GTT.
- */
-static int intel_i965_create_gatt_table(struct agp_bridge_data *bridge)
-{
-	int page_order;
-	struct aper_size_info_fixed *size;
-	int num_entries;
-	u32 temp;
-	int gtt_offset, gtt_size;
-
-	size = agp_bridge->current_size;
-	page_order = size->page_order;
-	num_entries = size->num_entries;
-	agp_bridge->gatt_table_real = NULL;
-
-	pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &temp);
-
-	temp &= 0xfff00000;
-
-	intel_i965_get_gtt_range(&gtt_offset, &gtt_size);
-
-	intel_private.gtt = ioremap((temp + gtt_offset) , gtt_size);
-
-	if (!intel_private.gtt)
-		return -ENOMEM;
-
-	intel_private.gtt_total_size = gtt_size / 4;
-
-	intel_private.registers = ioremap(temp, 128 * 4096);
-	if (!intel_private.registers) {
-		iounmap(intel_private.gtt);
-		return -ENOMEM;
-	}
-
-	temp = readl(intel_private.registers+I810_PGETBL_CTL) & 0xfffff000;
-	global_cache_flush();   /* FIXME: ? */
-
-	/* we have to call this as early as possible after the MMIO base address is known */
-	intel_i830_init_gtt_entries();
-
-	agp_bridge->gatt_table = NULL;
-
-	agp_bridge->gatt_bus_addr = temp;
-
-	return 0;
-}
-
-
 static int intel_fetch_size(void)
 {
 	int i;
@@ -1982,6 +464,7 @@ static const struct agp_bridge_driver intel_generic_driver = {
 	.aperture_sizes		= intel_generic_sizes,
 	.size_type		= U16_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_configure,
 	.fetch_size		= intel_fetch_size,
 	.cleanup		= intel_cleanup,
@@ -2003,38 +486,12 @@ static const struct agp_bridge_driver intel_generic_driver = {
 	.agp_type_to_mask_type  = agp_generic_type_to_mask_type,
 };
 
-static const struct agp_bridge_driver intel_810_driver = {
-	.owner			= THIS_MODULE,
-	.aperture_sizes		= intel_i810_sizes,
-	.size_type		= FIXED_APER_SIZE,
-	.num_aperture_sizes	= 2,
-	.needs_scratch_page	= true,
-	.configure		= intel_i810_configure,
-	.fetch_size		= intel_i810_fetch_size,
-	.cleanup		= intel_i810_cleanup,
-	.tlb_flush		= intel_i810_tlbflush,
-	.mask_memory		= intel_i810_mask_memory,
-	.masks			= intel_i810_masks,
-	.agp_enable		= intel_i810_agp_enable,
-	.cache_flush		= global_cache_flush,
-	.create_gatt_table	= agp_generic_create_gatt_table,
-	.free_gatt_table	= agp_generic_free_gatt_table,
-	.insert_memory		= intel_i810_insert_entries,
-	.remove_memory		= intel_i810_remove_entries,
-	.alloc_by_type		= intel_i810_alloc_by_type,
-	.free_by_type		= intel_i810_free_by_type,
-	.agp_alloc_page		= agp_generic_alloc_page,
-	.agp_alloc_pages        = agp_generic_alloc_pages,
-	.agp_destroy_page	= agp_generic_destroy_page,
-	.agp_destroy_pages      = agp_generic_destroy_pages,
-	.agp_type_to_mask_type  = agp_generic_type_to_mask_type,
-};
-
 static const struct agp_bridge_driver intel_815_driver = {
 	.owner			= THIS_MODULE,
 	.aperture_sizes		= intel_815_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 2,
+	.needs_scratch_page	= true,
 	.configure		= intel_815_configure,
 	.fetch_size		= intel_815_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2056,39 +513,12 @@ static const struct agp_bridge_driver intel_815_driver = {
 	.agp_type_to_mask_type	= agp_generic_type_to_mask_type,
 };
 
-static const struct agp_bridge_driver intel_830_driver = {
-	.owner			= THIS_MODULE,
-	.aperture_sizes		= intel_i830_sizes,
-	.size_type		= FIXED_APER_SIZE,
-	.num_aperture_sizes	= 4,
-	.needs_scratch_page	= true,
-	.configure		= intel_i830_configure,
-	.fetch_size		= intel_i830_fetch_size,
-	.cleanup		= intel_i830_cleanup,
-	.tlb_flush		= intel_i810_tlbflush,
-	.mask_memory		= intel_i810_mask_memory,
-	.masks			= intel_i810_masks,
-	.agp_enable		= intel_i810_agp_enable,
-	.cache_flush		= global_cache_flush,
-	.create_gatt_table	= intel_i830_create_gatt_table,
-	.free_gatt_table	= intel_i830_free_gatt_table,
-	.insert_memory		= intel_i830_insert_entries,
-	.remove_memory		= intel_i830_remove_entries,
-	.alloc_by_type		= intel_i830_alloc_by_type,
-	.free_by_type		= intel_i810_free_by_type,
-	.agp_alloc_page		= agp_generic_alloc_page,
-	.agp_alloc_pages        = agp_generic_alloc_pages,
-	.agp_destroy_page	= agp_generic_destroy_page,
-	.agp_destroy_pages      = agp_generic_destroy_pages,
-	.agp_type_to_mask_type  = intel_i830_type_to_mask_type,
-	.chipset_flush		= intel_i830_chipset_flush,
-};
-
 static const struct agp_bridge_driver intel_820_driver = {
 	.owner			= THIS_MODULE,
 	.aperture_sizes		= intel_8xx_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_820_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_820_cleanup,
@@ -2115,6 +545,7 @@ static const struct agp_bridge_driver intel_830mp_driver = {
 	.aperture_sizes		= intel_830mp_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 4,
+	.needs_scratch_page	= true,
 	.configure		= intel_830mp_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2141,6 +572,7 @@ static const struct agp_bridge_driver intel_840_driver = {
 	.aperture_sizes		= intel_8xx_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_840_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2167,6 +599,7 @@ static const struct agp_bridge_driver intel_845_driver = {
 	.aperture_sizes		= intel_8xx_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_845_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2193,6 +626,7 @@ static const struct agp_bridge_driver intel_850_driver = {
 	.aperture_sizes		= intel_8xx_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_850_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2219,6 +653,7 @@ static const struct agp_bridge_driver intel_860_driver = {
 	.aperture_sizes		= intel_8xx_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_860_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2240,79 +675,12 @@ static const struct agp_bridge_driver intel_860_driver = {
 	.agp_type_to_mask_type  = agp_generic_type_to_mask_type,
 };
 
-static const struct agp_bridge_driver intel_915_driver = {
-	.owner			= THIS_MODULE,
-	.aperture_sizes		= intel_i830_sizes,
-	.size_type		= FIXED_APER_SIZE,
-	.num_aperture_sizes	= 4,
-	.needs_scratch_page	= true,
-	.configure		= intel_i915_configure,
-	.fetch_size		= intel_i9xx_fetch_size,
-	.cleanup		= intel_i915_cleanup,
-	.tlb_flush		= intel_i810_tlbflush,
-	.mask_memory		= intel_i810_mask_memory,
-	.masks			= intel_i810_masks,
-	.agp_enable		= intel_i810_agp_enable,
-	.cache_flush		= global_cache_flush,
-	.create_gatt_table	= intel_i915_create_gatt_table,
-	.free_gatt_table	= intel_i830_free_gatt_table,
-	.insert_memory		= intel_i915_insert_entries,
-	.remove_memory		= intel_i915_remove_entries,
-	.alloc_by_type		= intel_i830_alloc_by_type,
-	.free_by_type		= intel_i810_free_by_type,
-	.agp_alloc_page		= agp_generic_alloc_page,
-	.agp_alloc_pages        = agp_generic_alloc_pages,
-	.agp_destroy_page	= agp_generic_destroy_page,
-	.agp_destroy_pages      = agp_generic_destroy_pages,
-	.agp_type_to_mask_type  = intel_i830_type_to_mask_type,
-	.chipset_flush		= intel_i915_chipset_flush,
-#ifdef USE_PCI_DMA_API
-	.agp_map_page		= intel_agp_map_page,
-	.agp_unmap_page		= intel_agp_unmap_page,
-	.agp_map_memory		= intel_agp_map_memory,
-	.agp_unmap_memory	= intel_agp_unmap_memory,
-#endif
-};
-
-static const struct agp_bridge_driver intel_i965_driver = {
-	.owner			= THIS_MODULE,
-	.aperture_sizes		= intel_i830_sizes,
-	.size_type		= FIXED_APER_SIZE,
-	.num_aperture_sizes	= 4,
-	.needs_scratch_page	= true,
-	.configure		= intel_i915_configure,
-	.fetch_size		= intel_i9xx_fetch_size,
-	.cleanup		= intel_i915_cleanup,
-	.tlb_flush		= intel_i810_tlbflush,
-	.mask_memory		= intel_i965_mask_memory,
-	.masks			= intel_i810_masks,
-	.agp_enable		= intel_i810_agp_enable,
-	.cache_flush		= global_cache_flush,
-	.create_gatt_table	= intel_i965_create_gatt_table,
-	.free_gatt_table	= intel_i830_free_gatt_table,
-	.insert_memory		= intel_i915_insert_entries,
-	.remove_memory		= intel_i915_remove_entries,
-	.alloc_by_type		= intel_i830_alloc_by_type,
-	.free_by_type		= intel_i810_free_by_type,
-	.agp_alloc_page		= agp_generic_alloc_page,
-	.agp_alloc_pages        = agp_generic_alloc_pages,
-	.agp_destroy_page	= agp_generic_destroy_page,
-	.agp_destroy_pages      = agp_generic_destroy_pages,
-	.agp_type_to_mask_type	= intel_i830_type_to_mask_type,
-	.chipset_flush		= intel_i915_chipset_flush,
-#ifdef USE_PCI_DMA_API
-	.agp_map_page		= intel_agp_map_page,
-	.agp_unmap_page		= intel_agp_unmap_page,
-	.agp_map_memory		= intel_agp_map_memory,
-	.agp_unmap_memory	= intel_agp_unmap_memory,
-#endif
-};
-
 static const struct agp_bridge_driver intel_7505_driver = {
 	.owner			= THIS_MODULE,
 	.aperture_sizes		= intel_8xx_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= intel_7505_configure,
 	.fetch_size		= intel_8xx_fetch_size,
 	.cleanup		= intel_8xx_cleanup,
@@ -2334,40 +702,6 @@ static const struct agp_bridge_driver intel_7505_driver = {
 	.agp_type_to_mask_type  = agp_generic_type_to_mask_type,
 };
 
-static const struct agp_bridge_driver intel_g33_driver = {
-	.owner			= THIS_MODULE,
-	.aperture_sizes		= intel_i830_sizes,
-	.size_type		= FIXED_APER_SIZE,
-	.num_aperture_sizes	= 4,
-	.needs_scratch_page	= true,
-	.configure		= intel_i915_configure,
-	.fetch_size		= intel_i9xx_fetch_size,
-	.cleanup		= intel_i915_cleanup,
-	.tlb_flush		= intel_i810_tlbflush,
-	.mask_memory		= intel_i965_mask_memory,
-	.masks			= intel_i810_masks,
-	.agp_enable		= intel_i810_agp_enable,
-	.cache_flush		= global_cache_flush,
-	.create_gatt_table	= intel_i915_create_gatt_table,
-	.free_gatt_table	= intel_i830_free_gatt_table,
-	.insert_memory		= intel_i915_insert_entries,
-	.remove_memory		= intel_i915_remove_entries,
-	.alloc_by_type		= intel_i830_alloc_by_type,
-	.free_by_type		= intel_i810_free_by_type,
-	.agp_alloc_page		= agp_generic_alloc_page,
-	.agp_alloc_pages        = agp_generic_alloc_pages,
-	.agp_destroy_page	= agp_generic_destroy_page,
-	.agp_destroy_pages      = agp_generic_destroy_pages,
-	.agp_type_to_mask_type	= intel_i830_type_to_mask_type,
-	.chipset_flush		= intel_i915_chipset_flush,
-#ifdef USE_PCI_DMA_API
-	.agp_map_page		= intel_agp_map_page,
-	.agp_unmap_page		= intel_agp_unmap_page,
-	.agp_map_memory		= intel_agp_map_memory,
-	.agp_unmap_memory	= intel_agp_unmap_memory,
-#endif
-};
-
 static int find_gmch(u16 device)
 {
 	struct pci_dev *gmch_device;
@@ -2392,103 +726,137 @@ static int find_gmch(u16 device)
 static const struct intel_driver_description {
 	unsigned int chip_id;
 	unsigned int gmch_chip_id;
-	unsigned int multi_gmch_chip; /* if we have more gfx chip type on this HB. */
 	char *name;
 	const struct agp_bridge_driver *driver;
 	const struct agp_bridge_driver *gmch_driver;
 } intel_agp_chipsets[] = {
-	{ PCI_DEVICE_ID_INTEL_82443LX_0, 0, 0, "440LX", &intel_generic_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82443BX_0, 0, 0, "440BX", &intel_generic_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82443GX_0, 0, 0, "440GX", &intel_generic_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82810_MC1, PCI_DEVICE_ID_INTEL_82810_IG1, 0, "i810",
+	{ PCI_DEVICE_ID_INTEL_82443LX_0, 0, "440LX", &intel_generic_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82443BX_0, 0, "440BX", &intel_generic_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82443GX_0, 0, "440GX", &intel_generic_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82810_MC1, PCI_DEVICE_ID_INTEL_82810_IG1, "i810",
 		NULL, &intel_810_driver },
-	{ PCI_DEVICE_ID_INTEL_82810_MC3, PCI_DEVICE_ID_INTEL_82810_IG3, 0, "i810",
+	{ PCI_DEVICE_ID_INTEL_82810_MC3, PCI_DEVICE_ID_INTEL_82810_IG3, "i810",
 		NULL, &intel_810_driver },
-	{ PCI_DEVICE_ID_INTEL_82810E_MC, PCI_DEVICE_ID_INTEL_82810E_IG, 0, "i810",
+	{ PCI_DEVICE_ID_INTEL_82810E_MC, PCI_DEVICE_ID_INTEL_82810E_IG, "i810",
 		NULL, &intel_810_driver },
-	{ PCI_DEVICE_ID_INTEL_82815_MC, PCI_DEVICE_ID_INTEL_82815_CGC, 0, "i815",
+	{ PCI_DEVICE_ID_INTEL_82815_MC, PCI_DEVICE_ID_INTEL_82815_CGC, "i815",
 		&intel_815_driver, &intel_810_driver },
-	{ PCI_DEVICE_ID_INTEL_82820_HB, 0, 0, "i820", &intel_820_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82820_UP_HB, 0, 0, "i820", &intel_820_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82830_HB, PCI_DEVICE_ID_INTEL_82830_CGC, 0, "830M",
+	{ PCI_DEVICE_ID_INTEL_82820_HB, 0, "i820", &intel_820_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82820_UP_HB, 0, "i820", &intel_820_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82830_HB, PCI_DEVICE_ID_INTEL_82830_CGC, "830M",
 		&intel_830mp_driver, &intel_830_driver },
-	{ PCI_DEVICE_ID_INTEL_82840_HB, 0, 0, "i840", &intel_840_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82845_HB, 0, 0, "845G", &intel_845_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82845G_HB, PCI_DEVICE_ID_INTEL_82845G_IG, 0, "830M",
+	{ PCI_DEVICE_ID_INTEL_82840_HB, 0, "i840", &intel_840_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82845_HB, 0, "845G", &intel_845_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82845G_HB, PCI_DEVICE_ID_INTEL_82845G_IG, "830M",
 		&intel_845_driver, &intel_830_driver },
-	{ PCI_DEVICE_ID_INTEL_82850_HB, 0, 0, "i850", &intel_850_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82854_HB, PCI_DEVICE_ID_INTEL_82854_IG, 0, "854",
+	{ PCI_DEVICE_ID_INTEL_82850_HB, 0, "i850", &intel_850_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82854_HB, PCI_DEVICE_ID_INTEL_82854_IG, "854",
 		&intel_845_driver, &intel_830_driver },
-	{ PCI_DEVICE_ID_INTEL_82855PM_HB, 0, 0, "855PM", &intel_845_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82855GM_HB, PCI_DEVICE_ID_INTEL_82855GM_IG, 0, "855GM",
+	{ PCI_DEVICE_ID_INTEL_82855PM_HB, 0, "855PM", &intel_845_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82855GM_HB, PCI_DEVICE_ID_INTEL_82855GM_IG, "855GM",
 		&intel_845_driver, &intel_830_driver },
-	{ PCI_DEVICE_ID_INTEL_82860_HB, 0, 0, "i860", &intel_860_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_82865_HB, PCI_DEVICE_ID_INTEL_82865_IG, 0, "865",
+	{ PCI_DEVICE_ID_INTEL_82860_HB, 0, "i860", &intel_860_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_82865_HB, PCI_DEVICE_ID_INTEL_82865_IG, "865",
 		&intel_845_driver, &intel_830_driver },
-	{ PCI_DEVICE_ID_INTEL_82875_HB, 0, 0, "i875", &intel_845_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_E7221_HB, PCI_DEVICE_ID_INTEL_E7221_IG, 0, "E7221 (i915)",
+	{ PCI_DEVICE_ID_INTEL_82875_HB, 0, "i875", &intel_845_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_E7221_HB, PCI_DEVICE_ID_INTEL_E7221_IG, "E7221 (i915)",
 		NULL, &intel_915_driver },
-	{ PCI_DEVICE_ID_INTEL_82915G_HB, PCI_DEVICE_ID_INTEL_82915G_IG, 0, "915G",
+	{ PCI_DEVICE_ID_INTEL_82915G_HB, PCI_DEVICE_ID_INTEL_82915G_IG, "915G",
 		NULL, &intel_915_driver },
-	{ PCI_DEVICE_ID_INTEL_82915GM_HB, PCI_DEVICE_ID_INTEL_82915GM_IG, 0, "915GM",
+	{ PCI_DEVICE_ID_INTEL_82915GM_HB, PCI_DEVICE_ID_INTEL_82915GM_IG, "915GM",
 		NULL, &intel_915_driver },
-	{ PCI_DEVICE_ID_INTEL_82945G_HB, PCI_DEVICE_ID_INTEL_82945G_IG, 0, "945G",
+	{ PCI_DEVICE_ID_INTEL_82945G_HB, PCI_DEVICE_ID_INTEL_82945G_IG, "945G",
 		NULL, &intel_915_driver },
-	{ PCI_DEVICE_ID_INTEL_82945GM_HB, PCI_DEVICE_ID_INTEL_82945GM_IG, 0, "945GM",
+	{ PCI_DEVICE_ID_INTEL_82945GM_HB, PCI_DEVICE_ID_INTEL_82945GM_IG, "945GM",
 		NULL, &intel_915_driver },
-	{ PCI_DEVICE_ID_INTEL_82945GME_HB, PCI_DEVICE_ID_INTEL_82945GME_IG, 0, "945GME",
+	{ PCI_DEVICE_ID_INTEL_82945GME_HB, PCI_DEVICE_ID_INTEL_82945GME_IG, "945GME",
 		NULL, &intel_915_driver },
-	{ PCI_DEVICE_ID_INTEL_82946GZ_HB, PCI_DEVICE_ID_INTEL_82946GZ_IG, 0, "946GZ",
+	{ PCI_DEVICE_ID_INTEL_82946GZ_HB, PCI_DEVICE_ID_INTEL_82946GZ_IG, "946GZ",
 		NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_82G35_HB, PCI_DEVICE_ID_INTEL_82G35_IG, 0, "G35",
+	{ PCI_DEVICE_ID_INTEL_82G35_HB, PCI_DEVICE_ID_INTEL_82G35_IG, "G35",
 		NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_82965Q_HB, PCI_DEVICE_ID_INTEL_82965Q_IG, 0, "965Q",
+	{ PCI_DEVICE_ID_INTEL_82965Q_HB, PCI_DEVICE_ID_INTEL_82965Q_IG, "965Q",
 		NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_82965G_HB, PCI_DEVICE_ID_INTEL_82965G_IG, 0, "965G",
+	{ PCI_DEVICE_ID_INTEL_82965G_HB, PCI_DEVICE_ID_INTEL_82965G_IG, "965G",
 		NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_82965GM_HB, PCI_DEVICE_ID_INTEL_82965GM_IG, 0, "965GM",
+	{ PCI_DEVICE_ID_INTEL_82965GM_HB, PCI_DEVICE_ID_INTEL_82965GM_IG, "965GM",
 		NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_82965GME_HB, PCI_DEVICE_ID_INTEL_82965GME_IG, 0, "965GME/GLE",
+	{ PCI_DEVICE_ID_INTEL_82965GME_HB, PCI_DEVICE_ID_INTEL_82965GME_IG, "965GME/GLE",
 		NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_7505_0, 0, 0, "E7505", &intel_7505_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_7205_0, 0, 0, "E7205", &intel_7505_driver, NULL },
-	{ PCI_DEVICE_ID_INTEL_G33_HB, PCI_DEVICE_ID_INTEL_G33_IG, 0, "G33",
+	{ PCI_DEVICE_ID_INTEL_7505_0, 0, "E7505", &intel_7505_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_7205_0, 0, "E7205", &intel_7505_driver, NULL },
+	{ PCI_DEVICE_ID_INTEL_G33_HB, PCI_DEVICE_ID_INTEL_G33_IG, "G33",
 		NULL, &intel_g33_driver },
-	{ PCI_DEVICE_ID_INTEL_Q35_HB, PCI_DEVICE_ID_INTEL_Q35_IG, 0, "Q35",
+	{ PCI_DEVICE_ID_INTEL_Q35_HB, PCI_DEVICE_ID_INTEL_Q35_IG, "Q35",
 		NULL, &intel_g33_driver },
-	{ PCI_DEVICE_ID_INTEL_Q33_HB, PCI_DEVICE_ID_INTEL_Q33_IG, 0, "Q33",
+	{ PCI_DEVICE_ID_INTEL_Q33_HB, PCI_DEVICE_ID_INTEL_Q33_IG, "Q33",
 		NULL, &intel_g33_driver },
-	{ PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB, PCI_DEVICE_ID_INTEL_PINEVIEW_M_IG, 0, "GMA3150",
+	{ PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB, PCI_DEVICE_ID_INTEL_PINEVIEW_M_IG, "GMA3150",
 		NULL, &intel_g33_driver },
-	{ PCI_DEVICE_ID_INTEL_PINEVIEW_HB, PCI_DEVICE_ID_INTEL_PINEVIEW_IG, 0, "GMA3150",
+	{ PCI_DEVICE_ID_INTEL_PINEVIEW_HB, PCI_DEVICE_ID_INTEL_PINEVIEW_IG, "GMA3150",
 		NULL, &intel_g33_driver },
-	{ PCI_DEVICE_ID_INTEL_GM45_HB, PCI_DEVICE_ID_INTEL_GM45_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_GM45_HB, PCI_DEVICE_ID_INTEL_GM45_IG,
 	    "GM45", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_EAGLELAKE_HB, PCI_DEVICE_ID_INTEL_EAGLELAKE_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_EAGLELAKE_HB, PCI_DEVICE_ID_INTEL_EAGLELAKE_IG,
 	    "Eaglelake", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_Q45_HB, PCI_DEVICE_ID_INTEL_Q45_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_Q45_HB, PCI_DEVICE_ID_INTEL_Q45_IG,
 	    "Q45/Q43", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_G45_HB, PCI_DEVICE_ID_INTEL_G45_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_G45_HB, PCI_DEVICE_ID_INTEL_G45_IG,
 	    "G45/G43", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_B43_HB, PCI_DEVICE_ID_INTEL_B43_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_B43_HB, PCI_DEVICE_ID_INTEL_B43_IG,
 	    "B43", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_G41_HB, PCI_DEVICE_ID_INTEL_G41_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_G41_HB, PCI_DEVICE_ID_INTEL_G41_IG,
 	    "G41", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG,
 	    "HD Graphics", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG,
 	    "HD Graphics", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG,
 	    "HD Graphics", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB, PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG,
 	    "HD Graphics", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB, PCI_DEVICE_ID_INTEL_SANDYBRIDGE_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB, PCI_DEVICE_ID_INTEL_SANDYBRIDGE_IG,
 	    "Sandybridge", NULL, &intel_i965_driver },
-	{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB, PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_IG, 0,
+	{ PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB, PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_IG,
 	    "Sandybridge", NULL, &intel_i965_driver },
-	{ 0, 0, 0, NULL, NULL, NULL }
+	{ 0, 0, NULL, NULL, NULL }
 };
 
+static int __devinit intel_gmch_probe(struct pci_dev *pdev,
+				      struct agp_bridge_data *bridge)
+{
+	int i;
+	bridge->driver = NULL;
+
+	for (i = 0; intel_agp_chipsets[i].name != NULL; i++) {
+		if ((intel_agp_chipsets[i].gmch_chip_id != 0) &&
+			find_gmch(intel_agp_chipsets[i].gmch_chip_id)) {
+			bridge->driver =
+				intel_agp_chipsets[i].gmch_driver;
+			break;
+		}
+	}
+
+	if (!bridge->driver)
+		return 0;
+
+	bridge->dev_private_data = &intel_private;
+	bridge->dev = pdev;
+
+	dev_info(&pdev->dev, "Intel %s Chipset\n", intel_agp_chipsets[i].name);
+
+	if (bridge->driver->mask_memory == intel_i965_mask_memory) {
+		if (pci_set_dma_mask(intel_private.pcidev, DMA_BIT_MASK(36)))
+			dev_err(&intel_private.pcidev->dev,
+				"set gfx device dma mask 36bit failed!\n");
+		else
+			pci_set_consistent_dma_mask(intel_private.pcidev,
+						    DMA_BIT_MASK(36));
+	}
+
+	return 1;
+}
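/*
 * Editorial note (not part of the original patch): DMA_BIT_MASK(36)
 * expands to (1ULL << 36) - 1 == 0xFFFFFFFFF, i.e. the 36-bit physical
 * address range that intel_i965_mask_memory() folds into 32-bit GTT
 * entries.
 */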
+
 static int __devinit agp_intel_probe(struct pci_dev *pdev,
 				     const struct pci_device_id *ent)
 {
@@ -2503,22 +871,18 @@ static int __devinit agp_intel_probe(struct pci_dev *pdev,
 	if (!bridge)
 		return -ENOMEM;
 
+	bridge->capndx = cap_ptr;
+
+	if (intel_gmch_probe(pdev, bridge))
+		goto found_gmch;
+
 	for (i = 0; intel_agp_chipsets[i].name != NULL; i++) {
 		/* Multiple models of gfx chip may share the same
 		   host bridge type; this makes sure we detect the
 		   right IGD. */
 		if (pdev->device == intel_agp_chipsets[i].chip_id) {
-			if ((intel_agp_chipsets[i].gmch_chip_id != 0) &&
-				find_gmch(intel_agp_chipsets[i].gmch_chip_id)) {
-				bridge->driver =
-					intel_agp_chipsets[i].gmch_driver;
-				break;
-			} else if (intel_agp_chipsets[i].multi_gmch_chip) {
-				continue;
-			} else {
-				bridge->driver = intel_agp_chipsets[i].driver;
-				break;
-			}
+			bridge->driver = intel_agp_chipsets[i].driver;
+			break;
 		}
 	}
 
@@ -2530,18 +894,16 @@ static int __devinit agp_intel_probe(struct pci_dev *pdev,
 		return -ENODEV;
 	}
 
-	if (bridge->driver == NULL) {
-		/* bridge has no AGP and no IGD detected */
+	if (!bridge->driver) {
 		if (cap_ptr)
 			dev_warn(&pdev->dev, "can't find bridge device (chip_id: %04x)\n",
-				 intel_agp_chipsets[i].gmch_chip_id);
+			    	 intel_agp_chipsets[i].gmch_chip_id);
 		agp_put_bridge(bridge);
 		return -ENODEV;
 	}
 
 	bridge->dev = pdev;
-	bridge->capndx = cap_ptr;
-	bridge->dev_private_data = &intel_private;
+	bridge->dev_private_data = NULL;
 
 	dev_info(&pdev->dev, "Intel %s Chipset\n", intel_agp_chipsets[i].name);
 
@@ -2577,15 +939,7 @@ static int __devinit agp_intel_probe(struct pci_dev *pdev,
 				&bridge->mode);
 	}
 
-	if (bridge->driver->mask_memory == intel_i965_mask_memory) {
-		if (pci_set_dma_mask(intel_private.pcidev, DMA_BIT_MASK(36)))
-			dev_err(&intel_private.pcidev->dev,
-				"set gfx device dma mask 36bit failed!\n");
-		else
-			pci_set_consistent_dma_mask(intel_private.pcidev,
-						    DMA_BIT_MASK(36));
-	}
-
+found_gmch:
 	pci_set_drvdata(pdev, bridge);
 	err = agp_add_bridge(bridge);
 	if (!err)
@@ -2611,22 +965,7 @@ static int agp_intel_resume(struct pci_dev *pdev)
 	struct agp_bridge_data *bridge = pci_get_drvdata(pdev);
 	int ret_val;
 
-	if (bridge->driver == &intel_generic_driver)
-		intel_configure();
-	else if (bridge->driver == &intel_850_driver)
-		intel_850_configure();
-	else if (bridge->driver == &intel_845_driver)
-		intel_845_configure();
-	else if (bridge->driver == &intel_830mp_driver)
-		intel_830mp_configure();
-	else if (bridge->driver == &intel_915_driver)
-		intel_i915_configure();
-	else if (bridge->driver == &intel_830_driver)
-		intel_i830_configure();
-	else if (bridge->driver == &intel_810_driver)
-		intel_i810_configure();
-	else if (bridge->driver == &intel_i965_driver)
-		intel_i915_configure();
+	bridge->driver->configure();
 
 	ret_val = agp_rebind_memory();
 	if (ret_val != 0)
diff --git a/drivers/char/agp/intel-agp.h b/drivers/char/agp/intel-agp.h
new file mode 100644
index 0000000..2547465
--- /dev/null
+++ b/drivers/char/agp/intel-agp.h
@@ -0,0 +1,239 @@
+/*
+ * Common Intel AGPGART and GTT definitions.
+ */
+
+/* Intel registers */
+#define INTEL_APSIZE	0xb4
+#define INTEL_ATTBASE	0xb8
+#define INTEL_AGPCTRL	0xb0
+#define INTEL_NBXCFG	0x50
+#define INTEL_ERRSTS	0x91
+
+/* Intel i830 registers */
+#define I830_GMCH_CTRL			0x52
+#define I830_GMCH_ENABLED		0x4
+#define I830_GMCH_MEM_MASK		0x1
+#define I830_GMCH_MEM_64M		0x1
+#define I830_GMCH_MEM_128M		0
+#define I830_GMCH_GMS_MASK		0x70
+#define I830_GMCH_GMS_DISABLED		0x00
+#define I830_GMCH_GMS_LOCAL		0x10
+#define I830_GMCH_GMS_STOLEN_512	0x20
+#define I830_GMCH_GMS_STOLEN_1024	0x30
+#define I830_GMCH_GMS_STOLEN_8192	0x40
+#define I830_RDRAM_CHANNEL_TYPE		0x03010
+#define I830_RDRAM_ND(x)		(((x) & 0x20) >> 5)
+#define I830_RDRAM_DDT(x)		(((x) & 0x18) >> 3)
+
+/* This one is for I830MP w. an external graphic card */
+#define INTEL_I830_ERRSTS	0x92
+
+/* Intel 855GM/852GM registers */
+#define I855_GMCH_GMS_MASK		0xF0
+#define I855_GMCH_GMS_STOLEN_0M		0x0
+#define I855_GMCH_GMS_STOLEN_1M		(0x1 << 4)
+#define I855_GMCH_GMS_STOLEN_4M		(0x2 << 4)
+#define I855_GMCH_GMS_STOLEN_8M		(0x3 << 4)
+#define I855_GMCH_GMS_STOLEN_16M	(0x4 << 4)
+#define I855_GMCH_GMS_STOLEN_32M	(0x5 << 4)
+#define I85X_CAPID			0x44
+#define I85X_VARIANT_MASK		0x7
+#define I85X_VARIANT_SHIFT		5
+#define I855_GME			0x0
+#define I855_GM				0x4
+#define I852_GME			0x2
+#define I852_GM				0x5
+
+/* Intel i845 registers */
+#define INTEL_I845_AGPM		0x51
+#define INTEL_I845_ERRSTS	0xc8
+
+/* Intel i860 registers */
+#define INTEL_I860_MCHCFG	0x50
+#define INTEL_I860_ERRSTS	0xc8
+
+/* Intel i810 registers */
+#define I810_GMADDR		0x10
+#define I810_MMADDR		0x14
+#define I810_PTE_BASE		0x10000
+#define I810_PTE_MAIN_UNCACHED	0x00000000
+#define I810_PTE_LOCAL		0x00000002
+#define I810_PTE_VALID		0x00000001
+#define I830_PTE_SYSTEM_CACHED  0x00000006
+#define I810_SMRAM_MISCC	0x70
+#define I810_GFX_MEM_WIN_SIZE	0x00010000
+#define I810_GFX_MEM_WIN_32M	0x00010000
+#define I810_GMS		0x000000c0
+#define I810_GMS_DISABLE	0x00000000
+#define I810_PGETBL_CTL		0x2020
+#define I810_PGETBL_ENABLED	0x00000001
+#define I965_PGETBL_SIZE_MASK	0x0000000e
+#define I965_PGETBL_SIZE_512KB	(0 << 1)
+#define I965_PGETBL_SIZE_256KB	(1 << 1)
+#define I965_PGETBL_SIZE_128KB	(2 << 1)
+#define I965_PGETBL_SIZE_1MB	(3 << 1)
+#define I965_PGETBL_SIZE_2MB	(4 << 1)
+#define I965_PGETBL_SIZE_1_5MB	(5 << 1)
+#define G33_PGETBL_SIZE_MASK    (3 << 8)
+#define G33_PGETBL_SIZE_1M      (1 << 8)
+#define G33_PGETBL_SIZE_2M      (2 << 8)
+
+#define I810_DRAM_CTL		0x3000
+#define I810_DRAM_ROW_0		0x00000001
+#define I810_DRAM_ROW_0_SDRAM	0x00000001
+
+/* Intel 815 register */
+#define INTEL_815_APCONT	0x51
+#define INTEL_815_ATTBASE_MASK	~0x1FFFFFFF
+
+/* Intel i820 registers */
+#define INTEL_I820_RDCR		0x51
+#define INTEL_I820_ERRSTS	0xc8
+
+/* Intel i840 registers */
+#define INTEL_I840_MCHCFG	0x50
+#define INTEL_I840_ERRSTS	0xc8
+
+/* Intel i850 registers */
+#define INTEL_I850_MCHCFG	0x50
+#define INTEL_I850_ERRSTS	0xc8
+
+/* intel 915G registers */
+#define I915_GMADDR	0x18
+#define I915_MMADDR	0x10
+#define I915_PTEADDR	0x1C
+#define I915_GMCH_GMS_STOLEN_48M	(0x6 << 4)
+#define I915_GMCH_GMS_STOLEN_64M	(0x7 << 4)
+#define G33_GMCH_GMS_STOLEN_128M	(0x8 << 4)
+#define G33_GMCH_GMS_STOLEN_256M	(0x9 << 4)
+#define INTEL_GMCH_GMS_STOLEN_96M	(0xa << 4)
+#define INTEL_GMCH_GMS_STOLEN_160M	(0xb << 4)
+#define INTEL_GMCH_GMS_STOLEN_224M	(0xc << 4)
+#define INTEL_GMCH_GMS_STOLEN_352M	(0xd << 4)
+
+#define I915_IFPADDR    0x60
+
+/* Intel 965G registers */
+#define I965_MSAC 0x62
+#define I965_IFPADDR    0x70
+
+/* Intel 7505 registers */
+#define INTEL_I7505_APSIZE	0x74
+#define INTEL_I7505_NCAPID	0x60
+#define INTEL_I7505_NISTAT	0x6c
+#define INTEL_I7505_ATTBASE	0x78
+#define INTEL_I7505_ERRSTS	0x42
+#define INTEL_I7505_AGPCTRL	0x70
+#define INTEL_I7505_MCHCFG	0x50
+
+#define SNB_GMCH_CTRL	0x50
+#define SNB_GMCH_GMS_STOLEN_MASK	0xF8
+#define SNB_GMCH_GMS_STOLEN_32M		(1 << 3)
+#define SNB_GMCH_GMS_STOLEN_64M		(2 << 3)
+#define SNB_GMCH_GMS_STOLEN_96M		(3 << 3)
+#define SNB_GMCH_GMS_STOLEN_128M	(4 << 3)
+#define SNB_GMCH_GMS_STOLEN_160M	(5 << 3)
+#define SNB_GMCH_GMS_STOLEN_192M	(6 << 3)
+#define SNB_GMCH_GMS_STOLEN_224M	(7 << 3)
+#define SNB_GMCH_GMS_STOLEN_256M	(8 << 3)
+#define SNB_GMCH_GMS_STOLEN_288M	(9 << 3)
+#define SNB_GMCH_GMS_STOLEN_320M	(0xa << 3)
+#define SNB_GMCH_GMS_STOLEN_352M	(0xb << 3)
+#define SNB_GMCH_GMS_STOLEN_384M	(0xc << 3)
+#define SNB_GMCH_GMS_STOLEN_416M	(0xd << 3)
+#define SNB_GMCH_GMS_STOLEN_448M	(0xe << 3)
+#define SNB_GMCH_GMS_STOLEN_480M	(0xf << 3)
+#define SNB_GMCH_GMS_STOLEN_512M	(0x10 << 3)
+#define SNB_GTT_SIZE_0M			(0 << 8)
+#define SNB_GTT_SIZE_1M			(1 << 8)
+#define SNB_GTT_SIZE_2M			(2 << 8)
+#define SNB_GTT_SIZE_MASK		(3 << 8)
+
+/* pci devices ids */
+#define PCI_DEVICE_ID_INTEL_E7221_HB	0x2588
+#define PCI_DEVICE_ID_INTEL_E7221_IG	0x258a
+#define PCI_DEVICE_ID_INTEL_82946GZ_HB      0x2970
+#define PCI_DEVICE_ID_INTEL_82946GZ_IG      0x2972
+#define PCI_DEVICE_ID_INTEL_82G35_HB     0x2980
+#define PCI_DEVICE_ID_INTEL_82G35_IG     0x2982
+#define PCI_DEVICE_ID_INTEL_82965Q_HB       0x2990
+#define PCI_DEVICE_ID_INTEL_82965Q_IG       0x2992
+#define PCI_DEVICE_ID_INTEL_82965G_HB       0x29A0
+#define PCI_DEVICE_ID_INTEL_82965G_IG       0x29A2
+#define PCI_DEVICE_ID_INTEL_82965GM_HB      0x2A00
+#define PCI_DEVICE_ID_INTEL_82965GM_IG      0x2A02
+#define PCI_DEVICE_ID_INTEL_82965GME_HB     0x2A10
+#define PCI_DEVICE_ID_INTEL_82965GME_IG     0x2A12
+#define PCI_DEVICE_ID_INTEL_82945GME_HB     0x27AC
+#define PCI_DEVICE_ID_INTEL_82945GME_IG     0x27AE
+#define PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB        0xA010
+#define PCI_DEVICE_ID_INTEL_PINEVIEW_M_IG        0xA011
+#define PCI_DEVICE_ID_INTEL_PINEVIEW_HB         0xA000
+#define PCI_DEVICE_ID_INTEL_PINEVIEW_IG         0xA001
+#define PCI_DEVICE_ID_INTEL_G33_HB          0x29C0
+#define PCI_DEVICE_ID_INTEL_G33_IG          0x29C2
+#define PCI_DEVICE_ID_INTEL_Q35_HB          0x29B0
+#define PCI_DEVICE_ID_INTEL_Q35_IG          0x29B2
+#define PCI_DEVICE_ID_INTEL_Q33_HB          0x29D0
+#define PCI_DEVICE_ID_INTEL_Q33_IG          0x29D2
+#define PCI_DEVICE_ID_INTEL_B43_HB          0x2E40
+#define PCI_DEVICE_ID_INTEL_B43_IG          0x2E42
+#define PCI_DEVICE_ID_INTEL_GM45_HB         0x2A40
+#define PCI_DEVICE_ID_INTEL_GM45_IG         0x2A42
+#define PCI_DEVICE_ID_INTEL_EAGLELAKE_HB        0x2E00
+#define PCI_DEVICE_ID_INTEL_EAGLELAKE_IG        0x2E02
+#define PCI_DEVICE_ID_INTEL_Q45_HB          0x2E10
+#define PCI_DEVICE_ID_INTEL_Q45_IG          0x2E12
+#define PCI_DEVICE_ID_INTEL_G45_HB          0x2E20
+#define PCI_DEVICE_ID_INTEL_G45_IG          0x2E22
+#define PCI_DEVICE_ID_INTEL_G41_HB          0x2E30
+#define PCI_DEVICE_ID_INTEL_G41_IG          0x2E32
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB	    0x0040
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG	    0x0042
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB	    0x0044
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB	    0x0062
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB    0x006a
+#define PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG	    0x0046
+#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB  0x0100
+#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_IG  0x0102
+#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB  0x0104
+#define PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_IG  0x0106
+
+/* cover 915 and 945 variants */
+#define IS_I915 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_E7221_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82915G_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82915GM_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82945G_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82945GM_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82945GME_HB)
+
+#define IS_I965 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82946GZ_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82G35_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965Q_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965G_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965GM_HB || \
+		 agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82965GME_HB)
+
+#define IS_G33 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G33_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q35_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q33_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_HB)
+
+#define IS_PINEVIEW (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_M_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_PINEVIEW_HB)
+
+#define IS_SNB (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB)
+
+#define IS_G4X (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_EAGLELAKE_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q45_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G45_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_GM45_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G41_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_B43_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB || \
+		agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB || \
+		IS_SNB)
diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
new file mode 100644
index 0000000..9344216
--- /dev/null
+++ b/drivers/char/agp/intel-gtt.c
@@ -0,0 +1,1548 @@
+/*
+ * Intel GTT (Graphics Translation Table) routines
+ *
+ * Caveat: This driver implements the linux agp interface, but this is far from
+ * an agp driver! GTT support ended up here for purely historical reasons: the
+ * old userspace intel graphics drivers needed an interface to map memory into
+ * the GTT. And the drm provides a default interface for graphics devices sitting
+ * on an agp port. So it made sense to fake the GTT support as an agp port to
+ * avoid having to create a new api.
+ *
+ * With gem this does not make much sense anymore; it just needlessly complicates
+ * the code. But as long as the old graphics stack is still supported, it's stuck
+ * here.
+ *
+ * /fairy-tale-mode off
+ */
+
+/*
+ * If we have Intel graphics, we're not going to have anything other than
+ * an Intel IOMMU. So make the correct use of the PCI DMA API contingent
+ * on the Intel IOMMU support (CONFIG_DMAR).
+ * Only newer chipsets need to bother with this, of course.
+ */
+#ifdef CONFIG_DMAR
+#define USE_PCI_DMA_API 1
+#endif
+
+static const struct aper_size_info_fixed intel_i810_sizes[] =
+{
+	{64, 16384, 4},
+	/* The 32M mode still requires a 64k gatt */
+	{32, 8192, 4}
+};
+
+#define AGP_DCACHE_MEMORY	1
+#define AGP_PHYS_MEMORY		2
+#define INTEL_AGP_CACHED_MEMORY 3
+
+static struct gatt_mask intel_i810_masks[] =
+{
+	{.mask = I810_PTE_VALID, .type = 0},
+	{.mask = (I810_PTE_VALID | I810_PTE_LOCAL), .type = AGP_DCACHE_MEMORY},
+	{.mask = I810_PTE_VALID, .type = 0},
+	{.mask = I810_PTE_VALID | I830_PTE_SYSTEM_CACHED,
+	 .type = INTEL_AGP_CACHED_MEMORY}
+};
+
+static struct _intel_private {
+	struct pci_dev *pcidev;	/* device one */
+	u8 __iomem *registers;
+	u32 __iomem *gtt;		/* I915G */
+	int num_dcache_entries;
+	/* gtt_entries is the number of gtt entries that are already mapped
+	 * to stolen memory.  Stolen memory is larger than the memory mapped
+	 * through gtt_entries, as it includes some reserved space for the BIOS
+	 * popup and for the GTT.
+	 */
+	int gtt_entries;			/* i830+ */
+	int gtt_total_size;
+	union {
+		void __iomem *i9xx_flush_page;
+		void *i8xx_flush_page;
+	};
+	struct page *i8xx_page;
+	struct resource ifp_resource;
+	int resource_valid;
+} intel_private;
+
+#ifdef USE_PCI_DMA_API
+static int intel_agp_map_page(struct page *page, dma_addr_t *ret)
+{
+	*ret = pci_map_page(intel_private.pcidev, page, 0,
+			    PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+	if (pci_dma_mapping_error(intel_private.pcidev, *ret))
+		return -EINVAL;
+	return 0;
+}
+
+static void intel_agp_unmap_page(struct page *page, dma_addr_t dma)
+{
+	pci_unmap_page(intel_private.pcidev, dma,
+		       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+}
+
+static void intel_agp_free_sglist(struct agp_memory *mem)
+{
+	struct sg_table st;
+
+	st.sgl = mem->sg_list;
+	st.orig_nents = st.nents = mem->page_count;
+
+	sg_free_table(&st);
+
+	mem->sg_list = NULL;
+	mem->num_sg = 0;
+}
+
+static int intel_agp_map_memory(struct agp_memory *mem)
+{
+	struct sg_table st;
+	struct scatterlist *sg;
+	int i;
+
+	DBG("try mapping %lu pages\n", (unsigned long)mem->page_count);
+
+	if (sg_alloc_table(&st, mem->page_count, GFP_KERNEL))
+		return -ENOMEM;
+
+	mem->sg_list = sg = st.sgl;
+
+	for (i = 0 ; i < mem->page_count; i++, sg = sg_next(sg))
+		sg_set_page(sg, mem->pages[i], PAGE_SIZE, 0);
+
+	mem->num_sg = pci_map_sg(intel_private.pcidev, mem->sg_list,
+				 mem->page_count, PCI_DMA_BIDIRECTIONAL);
+	if (unlikely(!mem->num_sg)) {
+		intel_agp_free_sglist(mem);
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static void intel_agp_unmap_memory(struct agp_memory *mem)
+{
+	DBG("try unmapping %lu pages\n", (unsigned long)mem->page_count);
+
+	pci_unmap_sg(intel_private.pcidev, mem->sg_list,
+		     mem->page_count, PCI_DMA_BIDIRECTIONAL);
+	intel_agp_free_sglist(mem);
+}
+
+static void intel_agp_insert_sg_entries(struct agp_memory *mem,
+					off_t pg_start, int mask_type)
+{
+	struct scatterlist *sg;
+	int i, j;
+
+	j = pg_start;
+
+	WARN_ON(!mem->num_sg);
+
+	if (mem->num_sg == mem->page_count) {
+		for_each_sg(mem->sg_list, sg, mem->page_count, i) {
+			writel(agp_bridge->driver->mask_memory(agp_bridge,
+					sg_dma_address(sg), mask_type),
+					intel_private.gtt+j);
+			j++;
+		}
+	} else {
+		/* sg may merge pages, but we have to separate
+		 * per-page addresses for the GTT */
+		unsigned int len, m;
+
+		for_each_sg(mem->sg_list, sg, mem->num_sg, i) {
+			len = sg_dma_len(sg) / PAGE_SIZE;
+			for (m = 0; m < len; m++) {
+				writel(agp_bridge->driver->mask_memory(agp_bridge,
+								       sg_dma_address(sg) + m * PAGE_SIZE,
+								       mask_type),
+				       intel_private.gtt+j);
+				j++;
+			}
+		}
+	}
+	readl(intel_private.gtt+j-1);
+}
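/*
 * Editorial example (not part of the original patch): why the else branch
 * above walks each segment page by page. If the DMA layer merged three
 * contiguous pages into one segment (num_sg == 1, page_count == 3,
 * sg_dma_len(sg) == 3 * PAGE_SIZE), the GTT still needs three separate
 * entries: dma_addr, dma_addr + PAGE_SIZE and dma_addr + 2 * PAGE_SIZE.
 */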
+
+#else
+
+static void intel_agp_insert_sg_entries(struct agp_memory *mem,
+					off_t pg_start, int mask_type)
+{
+	int i, j;
+	u32 cache_bits = 0;
+
+	if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB ||
+	    agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB)
+	{
+		cache_bits = I830_PTE_SYSTEM_CACHED;
+	}
+
+	for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
+		writel(agp_bridge->driver->mask_memory(agp_bridge,
+				page_to_phys(mem->pages[i]), mask_type),
+		       intel_private.gtt+j);
+	}
+
+	readl(intel_private.gtt+j-1);
+}
+
+#endif
+
+static int intel_i810_fetch_size(void)
+{
+	u32 smram_miscc;
+	struct aper_size_info_fixed *values;
+
+	pci_read_config_dword(agp_bridge->dev, I810_SMRAM_MISCC, &smram_miscc);
+	values = A_SIZE_FIX(agp_bridge->driver->aperture_sizes);
+
+	if ((smram_miscc & I810_GMS) == I810_GMS_DISABLE) {
+		dev_warn(&agp_bridge->dev->dev, "i810 is disabled\n");
+		return 0;
+	}
+	if ((smram_miscc & I810_GFX_MEM_WIN_SIZE) == I810_GFX_MEM_WIN_32M) {
+		agp_bridge->current_size = (void *) (values + 1);
+		agp_bridge->aperture_size_idx = 1;
+		return values[1].size;
+	} else {
+		agp_bridge->current_size = (void *) (values);
+		agp_bridge->aperture_size_idx = 0;
+		return values[0].size;
+	}
+
+	return 0;
+}
+
+static int intel_i810_configure(void)
+{
+	struct aper_size_info_fixed *current_size;
+	u32 temp;
+	int i;
+
+	current_size = A_SIZE_FIX(agp_bridge->current_size);
+
+	if (!intel_private.registers) {
+		pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &temp);
+		temp &= 0xfff80000;
+
+		intel_private.registers = ioremap(temp, 128 * 4096);
+		if (!intel_private.registers) {
+			dev_err(&intel_private.pcidev->dev,
+				"can't remap memory\n");
+			return -ENOMEM;
+		}
+	}
+
+	if ((readl(intel_private.registers+I810_DRAM_CTL)
+		& I810_DRAM_ROW_0) == I810_DRAM_ROW_0_SDRAM) {
+		/* This will need to be dynamically assigned */
+		dev_info(&intel_private.pcidev->dev,
+			 "detected 4MB dedicated video ram\n");
+		intel_private.num_dcache_entries = 1024;
+	}
+	pci_read_config_dword(intel_private.pcidev, I810_GMADDR, &temp);
+	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+	writel(agp_bridge->gatt_bus_addr | I810_PGETBL_ENABLED, intel_private.registers+I810_PGETBL_CTL);
+	readl(intel_private.registers+I810_PGETBL_CTL);	/* PCI Posting. */
+
+	if (agp_bridge->driver->needs_scratch_page) {
+		for (i = 0; i < current_size->num_entries; i++) {
+			writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
+		}
+		readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));	/* PCI posting. */
+	}
+	global_cache_flush();
+	return 0;
+}
+
+static void intel_i810_cleanup(void)
+{
+	writel(0, intel_private.registers+I810_PGETBL_CTL);
+	readl(intel_private.registers);	/* PCI Posting. */
+	iounmap(intel_private.registers);
+}
+
+static void intel_i810_agp_enable(struct agp_bridge_data *bridge, u32 mode)
+{
+	return;
+}
+
+/* Exists to support ARGB cursors */
+static struct page *i8xx_alloc_pages(void)
+{
+	struct page *page;
+
+	page = alloc_pages(GFP_KERNEL | GFP_DMA32, 2);
+	if (page == NULL)
+		return NULL;
+
+	if (set_pages_uc(page, 4) < 0) {
+		set_pages_wb(page, 4);
+		__free_pages(page, 2);
+		return NULL;
+	}
+	get_page(page);
+	atomic_inc(&agp_bridge->current_memory_agp);
+	return page;
+}
+
+static void i8xx_destroy_pages(struct page *page)
+{
+	if (page == NULL)
+		return;
+
+	set_pages_wb(page, 4);
+	put_page(page);
+	__free_pages(page, 2);
+	atomic_dec(&agp_bridge->current_memory_agp);
+}
+
+static int intel_i830_type_to_mask_type(struct agp_bridge_data *bridge,
+					int type)
+{
+	if (type < AGP_USER_TYPES)
+		return type;
+	else if (type == AGP_USER_CACHED_MEMORY)
+		return INTEL_AGP_CACHED_MEMORY;
+	else
+		return 0;
+}
+
+static int intel_i810_insert_entries(struct agp_memory *mem, off_t pg_start,
+				int type)
+{
+	int i, j, num_entries;
+	void *temp;
+	int ret = -EINVAL;
+	int mask_type;
+
+	if (mem->page_count == 0)
+		goto out;
+
+	temp = agp_bridge->current_size;
+	num_entries = A_SIZE_FIX(temp)->num_entries;
+
+	if ((pg_start + mem->page_count) > num_entries)
+		goto out_err;
+
+
+	for (j = pg_start; j < (pg_start + mem->page_count); j++) {
+		if (!PGE_EMPTY(agp_bridge, readl(agp_bridge->gatt_table+j))) {
+			ret = -EBUSY;
+			goto out_err;
+		}
+	}
+
+	if (type != mem->type)
+		goto out_err;
+
+	mask_type = agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type);
+
+	switch (mask_type) {
+	case AGP_DCACHE_MEMORY:
+		if (!mem->is_flushed)
+			global_cache_flush();
+		for (i = pg_start; i < (pg_start + mem->page_count); i++) {
+			writel((i*4096)|I810_PTE_LOCAL|I810_PTE_VALID,
+			       intel_private.registers+I810_PTE_BASE+(i*4));
+		}
+		readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));
+		break;
+	case AGP_PHYS_MEMORY:
+	case AGP_NORMAL_MEMORY:
+		if (!mem->is_flushed)
+			global_cache_flush();
+		for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
+			writel(agp_bridge->driver->mask_memory(agp_bridge,
+					page_to_phys(mem->pages[i]), mask_type),
+			       intel_private.registers+I810_PTE_BASE+(j*4));
+		}
+		readl(intel_private.registers+I810_PTE_BASE+((j-1)*4));
+		break;
+	default:
+		goto out_err;
+	}
+
+out:
+	ret = 0;
+out_err:
+	mem->is_flushed = true;
+	return ret;
+}
+
+static int intel_i810_remove_entries(struct agp_memory *mem, off_t pg_start,
+				int type)
+{
+	int i;
+
+	if (mem->page_count == 0)
+		return 0;
+
+	for (i = pg_start; i < (mem->page_count + pg_start); i++) {
+		writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
+	}
+	readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));
+
+	return 0;
+}
+
+/*
+ * The i810/i830 requires a physical address to program its mouse
+ * pointer into hardware.
+ * However, the X server still writes to it through the AGP aperture.
+ */
+static struct agp_memory *alloc_agpphysmem_i8xx(size_t pg_count, int type)
+{
+	struct agp_memory *new;
+	struct page *page;
+
+	switch (pg_count) {
+	case 1: page = agp_bridge->driver->agp_alloc_page(agp_bridge);
+		break;
+	case 4:
+		/* kludge to get 4 physical pages for ARGB cursor */
+		page = i8xx_alloc_pages();
+		break;
+	default:
+		return NULL;
+	}
+
+	if (page == NULL)
+		return NULL;
+
+	new = agp_create_memory(pg_count);
+	if (new == NULL)
+		return NULL;
+
+	new->pages[0] = page;
+	if (pg_count == 4) {
+		/* kludge to get 4 physical pages for ARGB cursor */
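+		/* consecutive struct pages are safe here because the order-2
+		 * allocation in i8xx_alloc_pages() is physically contiguous */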
+		new->pages[1] = new->pages[0] + 1;
+		new->pages[2] = new->pages[1] + 1;
+		new->pages[3] = new->pages[2] + 1;
+	}
+	new->page_count = pg_count;
+	new->num_scratch_pages = pg_count;
+	new->type = AGP_PHYS_MEMORY;
+	new->physical = page_to_phys(new->pages[0]);
+	return new;
+}
+
+static struct agp_memory *intel_i810_alloc_by_type(size_t pg_count, int type)
+{
+	struct agp_memory *new;
+
+	if (type == AGP_DCACHE_MEMORY) {
+		if (pg_count != intel_private.num_dcache_entries)
+			return NULL;
+
+		new = agp_create_memory(1);
+		if (new == NULL)
+			return NULL;
+
+		new->type = AGP_DCACHE_MEMORY;
+		new->page_count = pg_count;
+		new->num_scratch_pages = 0;
+		agp_free_page_array(new);
+		return new;
+	}
+	if (type == AGP_PHYS_MEMORY)
+		return alloc_agpphysmem_i8xx(pg_count, type);
+	return NULL;
+}
+
+static void intel_i810_free_by_type(struct agp_memory *curr)
+{
+	agp_free_key(curr->key);
+	if (curr->type == AGP_PHYS_MEMORY) {
+		if (curr->page_count == 4)
+			i8xx_destroy_pages(curr->pages[0]);
+		else {
+			agp_bridge->driver->agp_destroy_page(curr->pages[0],
+							     AGP_PAGE_DESTROY_UNMAP);
+			agp_bridge->driver->agp_destroy_page(curr->pages[0],
+							     AGP_PAGE_DESTROY_FREE);
+		}
+		agp_free_page_array(curr);
+	}
+	kfree(curr);
+}
+
+static unsigned long intel_i810_mask_memory(struct agp_bridge_data *bridge,
+					    dma_addr_t addr, int type)
+{
+	/* Type checking must be done elsewhere */
+	return addr | bridge->driver->masks[type].mask;
+}
+
+static struct aper_size_info_fixed intel_i830_sizes[] =
+{
+	{128, 32768, 5},
+	/* The 64M mode still requires a 128k gatt */
+	{64, 16384, 5},
+	{256, 65536, 6},
+	{512, 131072, 7},
+};
+
+static void intel_i830_init_gtt_entries(void)
+{
+	u16 gmch_ctrl;
+	int gtt_entries = 0;
+	u8 rdct;
+	int local = 0;
+	static const int ddt[4] = { 0, 16, 32, 64 };
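+	/* local memory sizes in MB, indexed by I830_RDRAM_DDT(rdct) */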
+	int size; /* reserved space (in kb) at the top of stolen memory */
+
+	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
+
+	if (IS_I965) {
+		u32 pgetbl_ctl;
+		pgetbl_ctl = readl(intel_private.registers+I810_PGETBL_CTL);
+
+		/* The 965 has a field telling us the size of the GTT,
+		 * which may be larger than what is necessary to map the
+		 * aperture.
+		 */
+		switch (pgetbl_ctl & I965_PGETBL_SIZE_MASK) {
+		case I965_PGETBL_SIZE_128KB:
+			size = 128;
+			break;
+		case I965_PGETBL_SIZE_256KB:
+			size = 256;
+			break;
+		case I965_PGETBL_SIZE_512KB:
+			size = 512;
+			break;
+		case I965_PGETBL_SIZE_1MB:
+			size = 1024;
+			break;
+		case I965_PGETBL_SIZE_2MB:
+			size = 2048;
+			break;
+		case I965_PGETBL_SIZE_1_5MB:
+			size = 1024 + 512;
+			break;
+		default:
+			dev_info(&intel_private.pcidev->dev,
+				 "unknown page table size, assuming 512KB\n");
+			size = 512;
+		}
+		size += 4; /* add in BIOS popup space */
+	} else if (IS_G33 && !IS_PINEVIEW) {
+		/* G33's GTT size defined in gmch_ctrl */
+		switch (gmch_ctrl & G33_PGETBL_SIZE_MASK) {
+		case G33_PGETBL_SIZE_1M:
+			size = 1024;
+			break;
+		case G33_PGETBL_SIZE_2M:
+			size = 2048;
+			break;
+		default:
+			dev_info(&agp_bridge->dev->dev,
+				 "unknown page table size 0x%x, assuming 512KB\n",
+				(gmch_ctrl & G33_PGETBL_SIZE_MASK));
+			size = 512;
+		}
+		size += 4;
+	} else if (IS_G4X || IS_PINEVIEW) {
+		/* On 4 series hardware, GTT stolen memory is separate from
+		 * graphics stolen memory, so ignore it when counting stolen
+		 * GTT entries.  However, 4KB of it never gets mapped to the GTT.
+		 */
+		size = 4;
+	} else {
+		/* On previous hardware, the GTT size was just what was
+		 * required to map the aperture.
+		 */
+		size = agp_bridge->driver->fetch_size() + 4;
+	}
+
+	if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82830_HB ||
+	    agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_82845G_HB) {
+		switch (gmch_ctrl & I830_GMCH_GMS_MASK) {
+		case I830_GMCH_GMS_STOLEN_512:
+			gtt_entries = KB(512) - KB(size);
+			break;
+		case I830_GMCH_GMS_STOLEN_1024:
+			gtt_entries = MB(1) - KB(size);
+			break;
+		case I830_GMCH_GMS_STOLEN_8192:
+			gtt_entries = MB(8) - KB(size);
+			break;
+		case I830_GMCH_GMS_LOCAL:
+			rdct = readb(intel_private.registers+I830_RDRAM_CHANNEL_TYPE);
+			gtt_entries = (I830_RDRAM_ND(rdct) + 1) *
+					MB(ddt[I830_RDRAM_DDT(rdct)]);
+			local = 1;
+			break;
+		default:
+			gtt_entries = 0;
+			break;
+		}
+	} else if (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB ||
+		   agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB) {
+		/*
+		 * SandyBridge has new memory control reg at 0x50.w
+		 */
+		u16 snb_gmch_ctl;
+		pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
+		switch (snb_gmch_ctl & SNB_GMCH_GMS_STOLEN_MASK) {
+		case SNB_GMCH_GMS_STOLEN_32M:
+			gtt_entries = MB(32) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_64M:
+			gtt_entries = MB(64) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_96M:
+			gtt_entries = MB(96) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_128M:
+			gtt_entries = MB(128) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_160M:
+			gtt_entries = MB(160) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_192M:
+			gtt_entries = MB(192) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_224M:
+			gtt_entries = MB(224) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_256M:
+			gtt_entries = MB(256) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_288M:
+			gtt_entries = MB(288) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_320M:
+			gtt_entries = MB(320) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_352M:
+			gtt_entries = MB(352) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_384M:
+			gtt_entries = MB(384) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_416M:
+			gtt_entries = MB(416) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_448M:
+			gtt_entries = MB(448) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_480M:
+			gtt_entries = MB(480) - KB(size);
+			break;
+		case SNB_GMCH_GMS_STOLEN_512M:
+			gtt_entries = MB(512) - KB(size);
+			break;
+		}
+	} else {
+		switch (gmch_ctrl & I855_GMCH_GMS_MASK) {
+		case I855_GMCH_GMS_STOLEN_1M:
+			gtt_entries = MB(1) - KB(size);
+			break;
+		case I855_GMCH_GMS_STOLEN_4M:
+			gtt_entries = MB(4) - KB(size);
+			break;
+		case I855_GMCH_GMS_STOLEN_8M:
+			gtt_entries = MB(8) - KB(size);
+			break;
+		case I855_GMCH_GMS_STOLEN_16M:
+			gtt_entries = MB(16) - KB(size);
+			break;
+		case I855_GMCH_GMS_STOLEN_32M:
+			gtt_entries = MB(32) - KB(size);
+			break;
+		case I915_GMCH_GMS_STOLEN_48M:
+			/* Check it's really I915G */
+			if (IS_I915 || IS_I965 || IS_G33 || IS_G4X)
+				gtt_entries = MB(48) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case I915_GMCH_GMS_STOLEN_64M:
+			/* Check it's really I915G */
+			if (IS_I915 || IS_I965 || IS_G33 || IS_G4X)
+				gtt_entries = MB(64) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case G33_GMCH_GMS_STOLEN_128M:
+			if (IS_G33 || IS_I965 || IS_G4X)
+				gtt_entries = MB(128) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case G33_GMCH_GMS_STOLEN_256M:
+			if (IS_G33 || IS_I965 || IS_G4X)
+				gtt_entries = MB(256) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case INTEL_GMCH_GMS_STOLEN_96M:
+			if (IS_I965 || IS_G4X)
+				gtt_entries = MB(96) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case INTEL_GMCH_GMS_STOLEN_160M:
+			if (IS_I965 || IS_G4X)
+				gtt_entries = MB(160) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case INTEL_GMCH_GMS_STOLEN_224M:
+			if (IS_I965 || IS_G4X)
+				gtt_entries = MB(224) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		case INTEL_GMCH_GMS_STOLEN_352M:
+			if (IS_I965 || IS_G4X)
+				gtt_entries = MB(352) - KB(size);
+			else
+				gtt_entries = 0;
+			break;
+		default:
+			gtt_entries = 0;
+			break;
+		}
+	}
+	if (gtt_entries > 0) {
+		dev_info(&agp_bridge->dev->dev, "detected %dK %s memory\n",
+		       gtt_entries / KB(1), local ? "local" : "stolen");
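+		/* convert bytes to GTT entries: each entry maps one 4KB page */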
+		gtt_entries /= KB(4);
+	} else {
+		dev_info(&agp_bridge->dev->dev,
+		       "no pre-allocated video memory detected\n");
+		gtt_entries = 0;
+	}
+
+	intel_private.gtt_entries = gtt_entries;
+}
+
+static void intel_i830_fini_flush(void)
+{
+	kunmap(intel_private.i8xx_page);
+	intel_private.i8xx_flush_page = NULL;
+	unmap_page_from_agp(intel_private.i8xx_page);
+
+	__free_page(intel_private.i8xx_page);
+	intel_private.i8xx_page = NULL;
+}
+
+static void intel_i830_setup_flush(void)
+{
+	/* return if we've already set the flush mechanism up */
+	if (intel_private.i8xx_page)
+		return;
+
+	intel_private.i8xx_page = alloc_page(GFP_KERNEL | __GFP_ZERO | GFP_DMA32);
+	if (!intel_private.i8xx_page)
+		return;
+
+	intel_private.i8xx_flush_page = kmap(intel_private.i8xx_page);
+	if (!intel_private.i8xx_flush_page)
+		intel_i830_fini_flush();
+}
+
+/* The chipset_flush interface needs to get data that has already been
+ * flushed out of the CPU all the way out to main memory, because the GPU
+ * doesn't snoop those buffers.
+ *
+ * The 8xx series doesn't have the same lovely interface for flushing the
+ * chipset write buffers that the later chips do. According to the 865
+ * specs, that buffer is 64 octwords (64 * 16 bytes = 1KB).  So, to force
+ * its previous contents out, we fill 1KB and clflush it, assuming that
+ * pushes out whatever was in there before.  It appears to work.
+ */
+static void intel_i830_chipset_flush(struct agp_bridge_data *bridge)
+{
+	unsigned int *pg = intel_private.i8xx_flush_page;
+
+	memset(pg, 0, 1024);
+
+	if (cpu_has_clflush)
+		clflush_cache_range(pg, 1024);
+	else if (wbinvd_on_all_cpus() != 0)
+		printk(KERN_ERR "Timed out waiting for cache flush.\n");
+}
+
+/* The intel i830 automatically initializes the agp aperture during POST.
+ * Use the memory already set aside for it in the GTT.
+ */
+static int intel_i830_create_gatt_table(struct agp_bridge_data *bridge)
+{
+	int page_order;
+	struct aper_size_info_fixed *size;
+	int num_entries;
+	u32 temp;
+
+	size = agp_bridge->current_size;
+	page_order = size->page_order;
+	num_entries = size->num_entries;
+	agp_bridge->gatt_table_real = NULL;
+
+	pci_read_config_dword(intel_private.pcidev, I810_MMADDR, &temp);
+	temp &= 0xfff80000;
+
+	intel_private.registers = ioremap(temp, 128 * 4096);
+	if (!intel_private.registers)
+		return -ENOMEM;
+
+	temp = readl(intel_private.registers+I810_PGETBL_CTL) & 0xfffff000;
+	global_cache_flush();	/* FIXME: ?? */
+
+	/* we have to call this as early as possible after the MMIO base address is known */
+	intel_i830_init_gtt_entries();
+
+	agp_bridge->gatt_table = NULL;
+
+	agp_bridge->gatt_bus_addr = temp;
+
+	return 0;
+}
+
+/* Return the gatt table to a sane state. Use the top of stolen
+ * memory for the GTT.
+ */
+static int intel_i830_free_gatt_table(struct agp_bridge_data *bridge)
+{
+	return 0;
+}
+
+static int intel_i830_fetch_size(void)
+{
+	u16 gmch_ctrl;
+	struct aper_size_info_fixed *values;
+
+	values = A_SIZE_FIX(agp_bridge->driver->aperture_sizes);
+
+	if (agp_bridge->dev->device != PCI_DEVICE_ID_INTEL_82830_HB &&
+	    agp_bridge->dev->device != PCI_DEVICE_ID_INTEL_82845G_HB) {
+		/* 855GM/852GM/865G have a 128MB aperture */
+		agp_bridge->current_size = (void *) values;
+		agp_bridge->aperture_size_idx = 0;
+		return values[0].size;
+	}
+
+	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
+
+	if ((gmch_ctrl & I830_GMCH_MEM_MASK) == I830_GMCH_MEM_128M) {
+		agp_bridge->current_size = (void *) values;
+		agp_bridge->aperture_size_idx = 0;
+		return values[0].size;
+	} else {
+		agp_bridge->current_size = (void *) (values + 1);
+		agp_bridge->aperture_size_idx = 1;
+		return values[1].size;
+	}
+
+	return 0;
+}
+
+static int intel_i830_configure(void)
+{
+	struct aper_size_info_fixed *current_size;
+	u32 temp;
+	u16 gmch_ctrl;
+	int i;
+
+	current_size = A_SIZE_FIX(agp_bridge->current_size);
+
+	pci_read_config_dword(intel_private.pcidev, I810_GMADDR, &temp);
+	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+
+	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
+	gmch_ctrl |= I830_GMCH_ENABLED;
+	pci_write_config_word(agp_bridge->dev, I830_GMCH_CTRL, gmch_ctrl);
+
+	writel(agp_bridge->gatt_bus_addr|I810_PGETBL_ENABLED, intel_private.registers+I810_PGETBL_CTL);
+	readl(intel_private.registers+I810_PGETBL_CTL);	/* PCI Posting. */
+
+	if (agp_bridge->driver->needs_scratch_page) {
+		for (i = intel_private.gtt_entries; i < current_size->num_entries; i++) {
+			writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
+		}
+		readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));	/* PCI Posting. */
+	}
+
+	global_cache_flush();
+
+	intel_i830_setup_flush();
+	return 0;
+}
+
+static void intel_i830_cleanup(void)
+{
+	iounmap(intel_private.registers);
+}
+
+static int intel_i830_insert_entries(struct agp_memory *mem, off_t pg_start,
+				     int type)
+{
+	int i, j, num_entries;
+	void *temp;
+	int ret = -EINVAL;
+	int mask_type;
+
+	if (mem->page_count == 0)
+		goto out;
+
+	temp = agp_bridge->current_size;
+	num_entries = A_SIZE_FIX(temp)->num_entries;
+
+	if (pg_start < intel_private.gtt_entries) {
+		dev_printk(KERN_DEBUG, &intel_private.pcidev->dev,
+			   "pg_start == 0x%.8lx, intel_private.gtt_entries == 0x%.8x\n",
+			   pg_start, intel_private.gtt_entries);
+
+		dev_info(&intel_private.pcidev->dev,
+			 "trying to insert into local/stolen memory\n");
+		goto out_err;
+	}
+
+	if ((pg_start + mem->page_count) > num_entries)
+		goto out_err;
+
+	/* The i830 can't check the GTT for entries since it's read only;
+	 * depend on the caller to make the correct offset decisions.
+	 */
+
+	if (type != mem->type)
+		goto out_err;
+
+	mask_type = agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type);
+
+	if (mask_type != 0 && mask_type != AGP_PHYS_MEMORY &&
+	    mask_type != INTEL_AGP_CACHED_MEMORY)
+		goto out_err;
+
+	if (!mem->is_flushed)
+		global_cache_flush();
+
+	for (i = 0, j = pg_start; i < mem->page_count; i++, j++) {
+		writel(agp_bridge->driver->mask_memory(agp_bridge,
+				page_to_phys(mem->pages[i]), mask_type),
+		       intel_private.registers+I810_PTE_BASE+(j*4));
+	}
+	readl(intel_private.registers+I810_PTE_BASE+((j-1)*4));
+
+out:
+	ret = 0;
+out_err:
+	mem->is_flushed = true;
+	return ret;
+}
+
+static int intel_i830_remove_entries(struct agp_memory *mem, off_t pg_start,
+				     int type)
+{
+	int i;
+
+	if (mem->page_count == 0)
+		return 0;
+
+	if (pg_start < intel_private.gtt_entries) {
+		dev_info(&intel_private.pcidev->dev,
+			 "trying to disable local/stolen memory\n");
+		return -EINVAL;
+	}
+
+	for (i = pg_start; i < (mem->page_count + pg_start); i++) {
+		writel(agp_bridge->scratch_page, intel_private.registers+I810_PTE_BASE+(i*4));
+	}
+	readl(intel_private.registers+I810_PTE_BASE+((i-1)*4));
+
+	return 0;
+}
+
+static struct agp_memory *intel_i830_alloc_by_type(size_t pg_count, int type)
+{
+	if (type == AGP_PHYS_MEMORY)
+		return alloc_agpphysmem_i8xx(pg_count, type);
+	/* always return NULL for other allocation types for now */
+	return NULL;
+}
+
+static int intel_alloc_chipset_flush_resource(void)
+{
+	int ret;
+	ret = pci_bus_alloc_resource(agp_bridge->dev->bus, &intel_private.ifp_resource, PAGE_SIZE,
+				     PAGE_SIZE, PCIBIOS_MIN_MEM, 0,
+				     pcibios_align_resource, agp_bridge->dev);
+
+	return ret;
+}
+
+static void intel_i915_setup_chipset_flush(void)
+{
+	int ret;
+	u32 temp;
+
+	pci_read_config_dword(agp_bridge->dev, I915_IFPADDR, &temp);
+	if (!(temp & 0x1)) {
+		intel_alloc_chipset_flush_resource();
+		intel_private.resource_valid = 1;
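+		/* the low bit of IFPADDR marks the flush page address valid */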
+		pci_write_config_dword(agp_bridge->dev, I915_IFPADDR, (intel_private.ifp_resource.start & 0xffffffff) | 0x1);
+	} else {
+		temp &= ~1;
+
+		intel_private.resource_valid = 1;
+		intel_private.ifp_resource.start = temp;
+		intel_private.ifp_resource.end = temp + PAGE_SIZE;
+		ret = request_resource(&iomem_resource, &intel_private.ifp_resource);
+		/* some BIOSes reserve this area in a PnP resource, some don't */
+		if (ret)
+			intel_private.resource_valid = 0;
+	}
+}
+
+static void intel_i965_g33_setup_chipset_flush(void)
+{
+	u32 temp_hi, temp_lo;
+	int ret;
+
+	pci_read_config_dword(agp_bridge->dev, I965_IFPADDR + 4, &temp_hi);
+	pci_read_config_dword(agp_bridge->dev, I965_IFPADDR, &temp_lo);
+
+	if (!(temp_lo & 0x1)) {
+
+		intel_alloc_chipset_flush_resource();
+
+		intel_private.resource_valid = 1;
+		pci_write_config_dword(agp_bridge->dev, I965_IFPADDR + 4,
+			upper_32_bits(intel_private.ifp_resource.start));
+		pci_write_config_dword(agp_bridge->dev, I965_IFPADDR, (intel_private.ifp_resource.start & 0xffffffff) | 0x1);
+	} else {
+		u64 l64;
+
+		temp_lo &= ~0x1;
+		l64 = ((u64)temp_hi << 32) | temp_lo;
+
+		intel_private.resource_valid = 1;
+		intel_private.ifp_resource.start = l64;
+		intel_private.ifp_resource.end = l64 + PAGE_SIZE;
+		ret = request_resource(&iomem_resource, &intel_private.ifp_resource);
+		/* some BIOSes reserve this area in a PnP resource, some don't */
+		if (ret)
+			intel_private.resource_valid = 0;
+	}
+}
+
+static void intel_i9xx_setup_flush(void)
+{
+	/* return if already configured */
+	if (intel_private.ifp_resource.start)
+		return;
+
+	if (IS_SNB)
+		return;
+
+	/* setup a resource for this object */
+	intel_private.ifp_resource.name = "Intel Flush Page";
+	intel_private.ifp_resource.flags = IORESOURCE_MEM;
+
+	/* Set up the chipset flush page; 965/G33/G4X differ from 915 */
+	if (IS_I965 || IS_G33 || IS_G4X) {
+		intel_i965_g33_setup_chipset_flush();
+	} else {
+		intel_i915_setup_chipset_flush();
+	}
+
+	if (intel_private.ifp_resource.start) {
+		intel_private.i9xx_flush_page = ioremap_nocache(intel_private.ifp_resource.start, PAGE_SIZE);
+		if (!intel_private.i9xx_flush_page)
+			dev_info(&intel_private.pcidev->dev, "can't ioremap flush page - no chipset flushing\n");
+	}
+}
+
+static int intel_i9xx_configure(void)
+{
+	struct aper_size_info_fixed *current_size;
+	u32 temp;
+	u16 gmch_ctrl;
+	int i;
+
+	current_size = A_SIZE_FIX(agp_bridge->current_size);
+
+	pci_read_config_dword(intel_private.pcidev, I915_GMADDR, &temp);
+
+	agp_bridge->gart_bus_addr = (temp & PCI_BASE_ADDRESS_MEM_MASK);
+
+	pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
+	gmch_ctrl |= I830_GMCH_ENABLED;
+	pci_write_config_word(agp_bridge->dev, I830_GMCH_CTRL, gmch_ctrl);
+
+	writel(agp_bridge->gatt_bus_addr|I810_PGETBL_ENABLED, intel_private.registers+I810_PGETBL_CTL);
+	readl(intel_private.registers+I810_PGETBL_CTL);	/* PCI Posting. */
+
+	if (agp_bridge->driver->needs_scratch_page) {
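+		/* intel_private.gtt points at u32 PTEs, so gtt+i steps in 4-byte units */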
+		for (i = intel_private.gtt_entries; i < intel_private.gtt_total_size; i++) {
+			writel(agp_bridge->scratch_page, intel_private.gtt+i);
+		}
+		readl(intel_private.gtt+i-1);	/* PCI Posting. */
+	}
+
+	global_cache_flush();
+
+	intel_i9xx_setup_flush();
+
+	return 0;
+}
+
+static void intel_i915_cleanup(void)
+{
+	if (intel_private.i9xx_flush_page)
+		iounmap(intel_private.i9xx_flush_page);
+	if (intel_private.resource_valid)
+		release_resource(&intel_private.ifp_resource);
+	intel_private.ifp_resource.start = 0;
+	intel_private.resource_valid = 0;
+	iounmap(intel_private.gtt);
+	iounmap(intel_private.registers);
+}
+
+static void intel_i915_chipset_flush(struct agp_bridge_data *bridge)
+{
+	if (intel_private.i9xx_flush_page)
+		writel(1, intel_private.i9xx_flush_page);
+}
+
+static int intel_i915_insert_entries(struct agp_memory *mem, off_t pg_start,
+				     int type)
+{
+	int num_entries;
+	void *temp;
+	int ret = -EINVAL;
+	int mask_type;
+
+	if (mem->page_count == 0)
+		goto out;
+
+	temp = agp_bridge->current_size;
+	num_entries = A_SIZE_FIX(temp)->num_entries;
+
+	if (pg_start < intel_private.gtt_entries) {
+		dev_printk(KERN_DEBUG, &intel_private.pcidev->dev,
+			   "pg_start == 0x%.8lx, intel_private.gtt_entries == 0x%.8x\n",
+			   pg_start, intel_private.gtt_entries);
+
+		dev_info(&intel_private.pcidev->dev,
+			 "trying to insert into local/stolen memory\n");
+		goto out_err;
+	}
+
+	if ((pg_start + mem->page_count) > num_entries)
+		goto out_err;
+
+	/* The i915 can't check the GTT for entries since it's read only;
+	 * depend on the caller to make the correct offset decisions.
+	 */
+
+	if (type != mem->type)
+		goto out_err;
+
+	mask_type = agp_bridge->driver->agp_type_to_mask_type(agp_bridge, type);
+
+	if (mask_type != 0 && mask_type != AGP_PHYS_MEMORY &&
+	    mask_type != INTEL_AGP_CACHED_MEMORY)
+		goto out_err;
+
+	if (!mem->is_flushed)
+		global_cache_flush();
+
+	intel_agp_insert_sg_entries(mem, pg_start, mask_type);
+
+ out:
+	ret = 0;
+ out_err:
+	mem->is_flushed = true;
+	return ret;
+}
+
+static int intel_i915_remove_entries(struct agp_memory *mem, off_t pg_start,
+				     int type)
+{
+	int i;
+
+	if (mem->page_count == 0)
+		return 0;
+
+	if (pg_start < intel_private.gtt_entries) {
+		dev_info(&intel_private.pcidev->dev,
+			 "trying to disable local/stolen memory\n");
+		return -EINVAL;
+	}
+
+	for (i = pg_start; i < (mem->page_count + pg_start); i++)
+		writel(agp_bridge->scratch_page, intel_private.gtt+i);
+
+	readl(intel_private.gtt+i-1);
+
+	return 0;
+}
+
+/* Return the aperture size by just checking the resource length.  The effect
+ * of the MSAC registers described in the spec is just a change of the
+ * resource size.
+ */
+static int intel_i9xx_fetch_size(void)
+{
+	int num_sizes = ARRAY_SIZE(intel_i830_sizes);
+	int aper_size; /* size in megabytes */
+	int i;
+
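+	/* the graphics aperture is BAR 2 of the IGD; its length gives the size */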
+	aper_size = pci_resource_len(intel_private.pcidev, 2) / MB(1);
+
+	for (i = 0; i < num_sizes; i++) {
+		if (aper_size == intel_i830_sizes[i].size) {
+			agp_bridge->current_size = intel_i830_sizes + i;
+			return aper_size;
+		}
+	}
+
+	return 0;
+}
+
+static int intel_i915_get_gtt_size(void)
+{
+	int size;
+
+	if (IS_G33) {
+		u16 gmch_ctrl;
+
+		/* G33's GTT size defined in gmch_ctrl */
+		pci_read_config_word(agp_bridge->dev, I830_GMCH_CTRL, &gmch_ctrl);
+		switch (gmch_ctrl & G33_PGETBL_SIZE_MASK) {
+		case G33_PGETBL_SIZE_1M:
+			size = 1024;
+			break;
+		case G33_PGETBL_SIZE_2M:
+			size = 2048;
+			break;
+		default:
+			dev_info(&agp_bridge->dev->dev,
+				 "unknown page table size 0x%x, assuming 512KB\n",
+				(gmch_ctrl & G33_PGETBL_SIZE_MASK));
+			size = 512;
+		}
+	} else {
+		/* On previous hardware, the GTT size was just what was
+		 * required to map the aperture.
+		 */
+		size = agp_bridge->driver->fetch_size();
+	}
+
+	return KB(size);
+}
+
+/* The intel i915 automatically initializes the agp aperture during POST.
+ * Use the memory already set aside for it in the GTT.
+ */
+static int intel_i915_create_gatt_table(struct agp_bridge_data *bridge)
+{
+	int page_order;
+	struct aper_size_info_fixed *size;
+	int num_entries;
+	u32 temp, temp2;
+	int gtt_map_size;
+
+	size = agp_bridge->current_size;
+	page_order = size->page_order;
+	num_entries = size->num_entries;
+	agp_bridge->gatt_table_real = NULL;
+
+	pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &temp);
+	pci_read_config_dword(intel_private.pcidev, I915_PTEADDR, &temp2);
+
+	gtt_map_size = intel_i915_get_gtt_size();
+
+	intel_private.gtt = ioremap(temp2, gtt_map_size);
+	if (!intel_private.gtt)
+		return -ENOMEM;
+
+	intel_private.gtt_total_size = gtt_map_size / 4;
+
+	temp &= 0xfff80000;
+
+	intel_private.registers = ioremap(temp, 128 * 4096);
+	if (!intel_private.registers) {
+		iounmap(intel_private.gtt);
+		return -ENOMEM;
+	}
+
+	temp = readl(intel_private.registers+I810_PGETBL_CTL) & 0xfffff000;
+	global_cache_flush();	/* FIXME: ? */
+
+	/* we have to call this as early as possible after the MMIO base address is known */
+	intel_i830_init_gtt_entries();
+
+	agp_bridge->gatt_table = NULL;
+
+	agp_bridge->gatt_bus_addr = temp;
+
+	return 0;
+}
+
+/*
+ * The i965 supports 36-bit physical addresses, but to keep
+ * the format of the GTT the same, the bits that don't fit
+ * in a 32-bit word are shifted down to bits 4..7.
+ *
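+ * For example, bit 32 of a 36-bit physical address ends up in bit 4
+ * of the PTE, and bit 35 in bit 7.
+ *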
+ * Gcc is smart enough to notice that "(addr >> 28) & 0xf0"
+ * is always zero on 32-bit architectures, so no need to make
+ * this conditional.
+ */
+static unsigned long intel_i965_mask_memory(struct agp_bridge_data *bridge,
+					    dma_addr_t addr, int type)
+{
+	/* Shift high bits down */
+	addr |= (addr >> 28) & 0xf0;
+
+	/* Type checking must be done elsewhere */
+	return addr | bridge->driver->masks[type].mask;
+}
+
+static void intel_i965_get_gtt_range(int *gtt_offset, int *gtt_size)
+{
+	u16 snb_gmch_ctl;
+
+	switch (agp_bridge->dev->device) {
+	case PCI_DEVICE_ID_INTEL_GM45_HB:
+	case PCI_DEVICE_ID_INTEL_EAGLELAKE_HB:
+	case PCI_DEVICE_ID_INTEL_Q45_HB:
+	case PCI_DEVICE_ID_INTEL_G45_HB:
+	case PCI_DEVICE_ID_INTEL_G41_HB:
+	case PCI_DEVICE_ID_INTEL_B43_HB:
+	case PCI_DEVICE_ID_INTEL_IRONLAKE_D_HB:
+	case PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB:
+	case PCI_DEVICE_ID_INTEL_IRONLAKE_MA_HB:
+	case PCI_DEVICE_ID_INTEL_IRONLAKE_MC2_HB:
+		*gtt_offset = *gtt_size = MB(2);
+		break;
+	case PCI_DEVICE_ID_INTEL_SANDYBRIDGE_HB:
+	case PCI_DEVICE_ID_INTEL_SANDYBRIDGE_M_HB:
+		*gtt_offset = MB(2);
+
+		pci_read_config_word(intel_private.pcidev, SNB_GMCH_CTRL, &snb_gmch_ctl);
+		switch (snb_gmch_ctl & SNB_GTT_SIZE_MASK) {
+		default:
+		case SNB_GTT_SIZE_0M:
+			printk(KERN_ERR "Bad GTT size mask: 0x%04x.\n", snb_gmch_ctl);
+			*gtt_size = MB(0);
+			break;
+		case SNB_GTT_SIZE_1M:
+			*gtt_size = MB(1);
+			break;
+		case SNB_GTT_SIZE_2M:
+			*gtt_size = MB(2);
+			break;
+		}
+		break;
+	default:
+		*gtt_offset = *gtt_size = KB(512);
+	}
+}
+
+/* The intel i965 automatically initializes the agp aperture during POST.
+ * Use the memory already set aside for it in the GTT.
+ */
+static int intel_i965_create_gatt_table(struct agp_bridge_data *bridge)
+{
+	int page_order;
+	struct aper_size_info_fixed *size;
+	int num_entries;
+	u32 temp;
+	int gtt_offset, gtt_size;
+
+	size = agp_bridge->current_size;
+	page_order = size->page_order;
+	num_entries = size->num_entries;
+	agp_bridge->gatt_table_real = NULL;
+
+	pci_read_config_dword(intel_private.pcidev, I915_MMADDR, &temp);
+
+	temp &= 0xfff00000;
+
+	intel_i965_get_gtt_range(&gtt_offset, &gtt_size);
+
+	intel_private.gtt = ioremap(temp + gtt_offset, gtt_size);
+
+	if (!intel_private.gtt)
+		return -ENOMEM;
+
+	intel_private.gtt_total_size = gtt_size / 4;
+
+	intel_private.registers = ioremap(temp, 128 * 4096);
+	if (!intel_private.registers) {
+		iounmap(intel_private.gtt);
+		return -ENOMEM;
+	}
+
+	temp = readl(intel_private.registers+I810_PGETBL_CTL) & 0xfffff000;
+	global_cache_flush();   /* FIXME: ? */
+
+	/* we have to call this as early as possible after the MMIO base address is known */
+	intel_i830_init_gtt_entries();
+
+	agp_bridge->gatt_table = NULL;
+
+	agp_bridge->gatt_bus_addr = temp;
+
+	return 0;
+}
+
+static const struct agp_bridge_driver intel_810_driver = {
+	.owner			= THIS_MODULE,
+	.aperture_sizes		= intel_i810_sizes,
+	.size_type		= FIXED_APER_SIZE,
+	.num_aperture_sizes	= 2,
+	.needs_scratch_page	= true,
+	.configure		= intel_i810_configure,
+	.fetch_size		= intel_i810_fetch_size,
+	.cleanup		= intel_i810_cleanup,
+	.mask_memory		= intel_i810_mask_memory,
+	.masks			= intel_i810_masks,
+	.agp_enable		= intel_i810_agp_enable,
+	.cache_flush		= global_cache_flush,
+	.create_gatt_table	= agp_generic_create_gatt_table,
+	.free_gatt_table	= agp_generic_free_gatt_table,
+	.insert_memory		= intel_i810_insert_entries,
+	.remove_memory		= intel_i810_remove_entries,
+	.alloc_by_type		= intel_i810_alloc_by_type,
+	.free_by_type		= intel_i810_free_by_type,
+	.agp_alloc_page		= agp_generic_alloc_page,
+	.agp_alloc_pages        = agp_generic_alloc_pages,
+	.agp_destroy_page	= agp_generic_destroy_page,
+	.agp_destroy_pages      = agp_generic_destroy_pages,
+	.agp_type_to_mask_type  = agp_generic_type_to_mask_type,
+};
+
+static const struct agp_bridge_driver intel_830_driver = {
+	.owner			= THIS_MODULE,
+	.aperture_sizes		= intel_i830_sizes,
+	.size_type		= FIXED_APER_SIZE,
+	.num_aperture_sizes	= 4,
+	.needs_scratch_page	= true,
+	.configure		= intel_i830_configure,
+	.fetch_size		= intel_i830_fetch_size,
+	.cleanup		= intel_i830_cleanup,
+	.mask_memory		= intel_i810_mask_memory,
+	.masks			= intel_i810_masks,
+	.agp_enable		= intel_i810_agp_enable,
+	.cache_flush		= global_cache_flush,
+	.create_gatt_table	= intel_i830_create_gatt_table,
+	.free_gatt_table	= intel_i830_free_gatt_table,
+	.insert_memory		= intel_i830_insert_entries,
+	.remove_memory		= intel_i830_remove_entries,
+	.alloc_by_type		= intel_i830_alloc_by_type,
+	.free_by_type		= intel_i810_free_by_type,
+	.agp_alloc_page		= agp_generic_alloc_page,
+	.agp_alloc_pages        = agp_generic_alloc_pages,
+	.agp_destroy_page	= agp_generic_destroy_page,
+	.agp_destroy_pages      = agp_generic_destroy_pages,
+	.agp_type_to_mask_type  = intel_i830_type_to_mask_type,
+	.chipset_flush		= intel_i830_chipset_flush,
+};
+
+static const struct agp_bridge_driver intel_915_driver = {
+	.owner			= THIS_MODULE,
+	.aperture_sizes		= intel_i830_sizes,
+	.size_type		= FIXED_APER_SIZE,
+	.num_aperture_sizes	= 4,
+	.needs_scratch_page	= true,
+	.configure		= intel_i9xx_configure,
+	.fetch_size		= intel_i9xx_fetch_size,
+	.cleanup		= intel_i915_cleanup,
+	.mask_memory		= intel_i810_mask_memory,
+	.masks			= intel_i810_masks,
+	.agp_enable		= intel_i810_agp_enable,
+	.cache_flush		= global_cache_flush,
+	.create_gatt_table	= intel_i915_create_gatt_table,
+	.free_gatt_table	= intel_i830_free_gatt_table,
+	.insert_memory		= intel_i915_insert_entries,
+	.remove_memory		= intel_i915_remove_entries,
+	.alloc_by_type		= intel_i830_alloc_by_type,
+	.free_by_type		= intel_i810_free_by_type,
+	.agp_alloc_page		= agp_generic_alloc_page,
+	.agp_alloc_pages        = agp_generic_alloc_pages,
+	.agp_destroy_page	= agp_generic_destroy_page,
+	.agp_destroy_pages      = agp_generic_destroy_pages,
+	.agp_type_to_mask_type  = intel_i830_type_to_mask_type,
+	.chipset_flush		= intel_i915_chipset_flush,
+#ifdef USE_PCI_DMA_API
+	.agp_map_page		= intel_agp_map_page,
+	.agp_unmap_page		= intel_agp_unmap_page,
+	.agp_map_memory		= intel_agp_map_memory,
+	.agp_unmap_memory	= intel_agp_unmap_memory,
+#endif
+};
+
+static const struct agp_bridge_driver intel_i965_driver = {
+	.owner			= THIS_MODULE,
+	.aperture_sizes		= intel_i830_sizes,
+	.size_type		= FIXED_APER_SIZE,
+	.num_aperture_sizes	= 4,
+	.needs_scratch_page	= true,
+	.configure		= intel_i9xx_configure,
+	.fetch_size		= intel_i9xx_fetch_size,
+	.cleanup		= intel_i915_cleanup,
+	.mask_memory		= intel_i965_mask_memory,
+	.masks			= intel_i810_masks,
+	.agp_enable		= intel_i810_agp_enable,
+	.cache_flush		= global_cache_flush,
+	.create_gatt_table	= intel_i965_create_gatt_table,
+	.free_gatt_table	= intel_i830_free_gatt_table,
+	.insert_memory		= intel_i915_insert_entries,
+	.remove_memory		= intel_i915_remove_entries,
+	.alloc_by_type		= intel_i830_alloc_by_type,
+	.free_by_type		= intel_i810_free_by_type,
+	.agp_alloc_page		= agp_generic_alloc_page,
+	.agp_alloc_pages        = agp_generic_alloc_pages,
+	.agp_destroy_page	= agp_generic_destroy_page,
+	.agp_destroy_pages      = agp_generic_destroy_pages,
+	.agp_type_to_mask_type	= intel_i830_type_to_mask_type,
+	.chipset_flush		= intel_i915_chipset_flush,
+#ifdef USE_PCI_DMA_API
+	.agp_map_page		= intel_agp_map_page,
+	.agp_unmap_page		= intel_agp_unmap_page,
+	.agp_map_memory		= intel_agp_map_memory,
+	.agp_unmap_memory	= intel_agp_unmap_memory,
+#endif
+};
+
+static const struct agp_bridge_driver intel_g33_driver = {
+	.owner			= THIS_MODULE,
+	.aperture_sizes		= intel_i830_sizes,
+	.size_type		= FIXED_APER_SIZE,
+	.num_aperture_sizes	= 4,
+	.needs_scratch_page	= true,
+	.configure		= intel_i9xx_configure,
+	.fetch_size		= intel_i9xx_fetch_size,
+	.cleanup		= intel_i915_cleanup,
+	.mask_memory		= intel_i965_mask_memory,
+	.masks			= intel_i810_masks,
+	.agp_enable		= intel_i810_agp_enable,
+	.cache_flush		= global_cache_flush,
+	.create_gatt_table	= intel_i915_create_gatt_table,
+	.free_gatt_table	= intel_i830_free_gatt_table,
+	.insert_memory		= intel_i915_insert_entries,
+	.remove_memory		= intel_i915_remove_entries,
+	.alloc_by_type		= intel_i830_alloc_by_type,
+	.free_by_type		= intel_i810_free_by_type,
+	.agp_alloc_page		= agp_generic_alloc_page,
+	.agp_alloc_pages        = agp_generic_alloc_pages,
+	.agp_destroy_page	= agp_generic_destroy_page,
+	.agp_destroy_pages      = agp_generic_destroy_pages,
+	.agp_type_to_mask_type	= intel_i830_type_to_mask_type,
+	.chipset_flush		= intel_i915_chipset_flush,
+#ifdef USE_PCI_DMA_API
+	.agp_map_page		= intel_agp_map_page,
+	.agp_unmap_page		= intel_agp_unmap_page,
+	.agp_map_memory		= intel_agp_map_memory,
+	.agp_unmap_memory	= intel_agp_unmap_memory,
+#endif
+};
diff --git a/drivers/char/agp/nvidia-agp.c b/drivers/char/agp/nvidia-agp.c
index 10f24e3..b9734a9 100644
--- a/drivers/char/agp/nvidia-agp.c
+++ b/drivers/char/agp/nvidia-agp.c
@@ -310,6 +310,7 @@ static const struct agp_bridge_driver nvidia_driver = {
 	.aperture_sizes		= nvidia_generic_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 5,
+	.needs_scratch_page	= true,
 	.configure		= nvidia_configure,
 	.fetch_size		= nvidia_fetch_size,
 	.cleanup		= nvidia_cleanup,
diff --git a/drivers/char/agp/sis-agp.c b/drivers/char/agp/sis-agp.c
index 6c3837a..29aacd8 100644
--- a/drivers/char/agp/sis-agp.c
+++ b/drivers/char/agp/sis-agp.c
@@ -125,6 +125,7 @@ static struct agp_bridge_driver sis_driver = {
 	.aperture_sizes		= sis_generic_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 7,
+	.needs_scratch_page	= true,
 	.configure		= sis_configure,
 	.fetch_size		= sis_fetch_size,
 	.cleanup		= sis_cleanup,
diff --git a/drivers/char/agp/uninorth-agp.c b/drivers/char/agp/uninorth-agp.c
index 6f48931..95db713 100644
--- a/drivers/char/agp/uninorth-agp.c
+++ b/drivers/char/agp/uninorth-agp.c
@@ -28,6 +28,7 @@
  */
 static int uninorth_rev;
 static int is_u3;
+static u32 scratch_value;
 
 #define DEFAULT_APERTURE_SIZE 256
 #define DEFAULT_APERTURE_STRING "256"
@@ -172,7 +173,7 @@ static int uninorth_insert_memory(struct agp_memory *mem, off_t pg_start, int ty
 
 	gp = (u32 *) &agp_bridge->gatt_table[pg_start];
 	for (i = 0; i < mem->page_count; ++i) {
-		if (gp[i]) {
+		if (gp[i] != scratch_value) {
 			dev_info(&agp_bridge->dev->dev,
 				 "uninorth_insert_memory: entry 0x%x occupied (%x)\n",
 				 i, gp[i]);
@@ -214,8 +215,9 @@ int uninorth_remove_memory(struct agp_memory *mem, off_t pg_start, int type)
 		return 0;
 
 	gp = (u32 *) &agp_bridge->gatt_table[pg_start];
-	for (i = 0; i < mem->page_count; ++i)
-		gp[i] = 0;
+	for (i = 0; i < mem->page_count; ++i) {
+		gp[i] = scratch_value;
+	}
 	mb();
 	uninorth_tlbflush(mem);
 
@@ -421,8 +423,13 @@ static int uninorth_create_gatt_table(struct agp_bridge_data *bridge)
 
 	bridge->gatt_bus_addr = virt_to_phys(table);
 
+	if (is_u3)
+		scratch_value = (page_to_phys(agp_bridge->scratch_page_page) >> PAGE_SHIFT) | 0x80000000UL;
+	else
+		scratch_value = cpu_to_le32((page_to_phys(agp_bridge->scratch_page_page) & 0xFFFFF000UL) |
+				0x1UL);
 	for (i = 0; i < num_entries; i++)
-		bridge->gatt_table[i] = 0;
+		bridge->gatt_table[i] = scratch_value;
 
 	return 0;
 
@@ -519,6 +526,7 @@ const struct agp_bridge_driver uninorth_agp_driver = {
 	.agp_destroy_pages	= agp_generic_destroy_pages,
 	.agp_type_to_mask_type  = agp_generic_type_to_mask_type,
 	.cant_use_aperture	= true,
+	.needs_scratch_page	= true,
 };
 
 const struct agp_bridge_driver u3_agp_driver = {
diff --git a/drivers/char/agp/via-agp.c b/drivers/char/agp/via-agp.c
index d3bd243..df67e80 100644
--- a/drivers/char/agp/via-agp.c
+++ b/drivers/char/agp/via-agp.c
@@ -175,6 +175,7 @@ static const struct agp_bridge_driver via_agp3_driver = {
 	.aperture_sizes		= agp3_generic_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 10,
+	.needs_scratch_page	= true,
 	.configure		= via_configure_agp3,
 	.fetch_size		= via_fetch_size_agp3,
 	.cleanup		= via_cleanup_agp3,
@@ -201,6 +202,7 @@ static const struct agp_bridge_driver via_driver = {
 	.aperture_sizes		= via_generic_sizes,
 	.size_type		= U8_APER_SIZE,
 	.num_aperture_sizes	= 9,
+	.needs_scratch_page	= true,
 	.configure		= via_configure,
 	.fetch_size		= via_fetch_size,
 	.cleanup		= via_cleanup,
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 305c590..c2711c6 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -9,6 +9,7 @@ menuconfig DRM
 	depends on (AGP || AGP=n) && PCI && !EMULATED_CMPXCHG && MMU
 	select I2C
 	select I2C_ALGOBIT
+	select SLOW_WORK
 	help
 	  Kernel-level support for the Direct Rendering Infrastructure (DRI)
 	  introduced in XFree86 4.0. If you say Y here, you need to select
@@ -59,6 +60,7 @@ config DRM_RADEON
 	select FW_LOADER
         select DRM_KMS_HELPER
         select DRM_TTM
+	select POWER_SUPPLY
 	help
 	  Choose this option if you have an ATI Radeon graphics card.  There
 	  are both PCI and AGP versions.  You don't need to choose this to
@@ -157,3 +159,5 @@ config DRM_SAVAGE
 	help
 	  Choose this option if you have a Savage3D/4/SuperSavage/Pro/Twister
 	  chipset. If M is selected the module will be called savage.
+
+source "drivers/gpu/drm/nouveau/Kconfig"
diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
index 932b5aa..3f46772 100644
--- a/drivers/gpu/drm/drm_auth.c
+++ b/drivers/gpu/drm/drm_auth.c
@@ -79,10 +79,9 @@ static int drm_add_magic(struct drm_master *master, struct drm_file *priv,
 	struct drm_device *dev = master->minor->dev;
 	DRM_DEBUG("%d\n", magic);
 
-	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	entry = kzalloc(sizeof(*entry), GFP_KERNEL);
 	if (!entry)
 		return -ENOMEM;
-	memset(entry, 0, sizeof(*entry));
 	entry->priv = priv;
 	entry->hash_item.key = (unsigned long)magic;
 	mutex_lock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 61b9bcf..57cea01 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -34,6 +34,7 @@
 #include "drm.h"
 #include "drmP.h"
 #include "drm_crtc.h"
+#include "drm_edid.h"
 
 struct drm_prop_enum_list {
 	int type;
@@ -494,7 +495,6 @@ void drm_connector_cleanup(struct drm_connector *connector)
 	list_for_each_entry_safe(mode, t, &connector->user_modes, head)
 		drm_mode_remove(connector, mode);
 
-	kfree(connector->fb_helper_private);
 	mutex_lock(&dev->mode_config.mutex);
 	drm_mode_object_put(dev, &connector->base);
 	list_del(&connector->head);
@@ -858,7 +858,6 @@ void drm_mode_config_init(struct drm_device *dev)
 	mutex_init(&dev->mode_config.mutex);
 	mutex_init(&dev->mode_config.idr_mutex);
 	INIT_LIST_HEAD(&dev->mode_config.fb_list);
-	INIT_LIST_HEAD(&dev->mode_config.fb_kernel_list);
 	INIT_LIST_HEAD(&dev->mode_config.crtc_list);
 	INIT_LIST_HEAD(&dev->mode_config.connector_list);
 	INIT_LIST_HEAD(&dev->mode_config.encoder_list);
@@ -1841,8 +1840,10 @@ int drm_mode_dirtyfb_ioctl(struct drm_device *dev,
 
 		ret = copy_from_user(clips, clips_ptr,
 				     num_clips * sizeof(*clips));
-		if (ret)
+		if (ret) {
+			ret = -EFAULT;
 			goto out_err2;
+		}
 	}
 
 	if (fb->funcs->dirty) {
@@ -2350,7 +2351,7 @@ int drm_mode_connector_update_edid_property(struct drm_connector *connector,
 					    struct edid *edid)
 {
 	struct drm_device *dev = connector->dev;
-	int ret = 0;
+	int ret = 0, size;
 
 	if (connector->edid_blob_ptr)
 		drm_property_destroy_blob(dev, connector->edid_blob_ptr);
@@ -2362,7 +2363,9 @@ int drm_mode_connector_update_edid_property(struct drm_connector *connector,
 		return ret;
 	}
 
-	connector->edid_blob_ptr = drm_property_create_blob(connector->dev, 128, edid);
+	size = EDID_LENGTH * (1 + edid->extensions);
+	connector->edid_blob_ptr = drm_property_create_blob(connector->dev,
+							    size, edid);
 
 	ret = drm_connector_property_set_value(connector,
 					       dev->mode_config.edid_property,
diff --git a/drivers/gpu/drm/drm_crtc_helper.c b/drivers/gpu/drm/drm_crtc_helper.c
index 51103aa..9b2a541 100644
--- a/drivers/gpu/drm/drm_crtc_helper.c
+++ b/drivers/gpu/drm/drm_crtc_helper.c
@@ -55,7 +55,7 @@ static void drm_mode_validate_flag(struct drm_connector *connector,
 }
 
 /**
- * drm_helper_probe_connector_modes - get complete set of display modes
+ * drm_helper_probe_single_connector_modes - get complete set of display modes
  * @dev: DRM device
  * @maxX: max width for modes
  * @maxY: max height for modes
@@ -154,21 +154,6 @@ prune:
 }
 EXPORT_SYMBOL(drm_helper_probe_single_connector_modes);
 
-int drm_helper_probe_connector_modes(struct drm_device *dev, uint32_t maxX,
-				      uint32_t maxY)
-{
-	struct drm_connector *connector;
-	int count = 0;
-
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		count += drm_helper_probe_single_connector_modes(connector,
-								 maxX, maxY);
-	}
-
-	return count;
-}
-EXPORT_SYMBOL(drm_helper_probe_connector_modes);
-
 /**
  * drm_helper_encoder_in_use - check if a given encoder is in use
  * @encoder: encoder to check
@@ -263,302 +248,6 @@ void drm_helper_disable_unused_functions(struct drm_device *dev)
 }
 EXPORT_SYMBOL(drm_helper_disable_unused_functions);
 
-static struct drm_display_mode *drm_has_preferred_mode(struct drm_connector *connector, int width, int height)
-{
-	struct drm_display_mode *mode;
-
-	list_for_each_entry(mode, &connector->modes, head) {
-		if (drm_mode_width(mode) > width ||
-		    drm_mode_height(mode) > height)
-			continue;
-		if (mode->type & DRM_MODE_TYPE_PREFERRED)
-			return mode;
-	}
-	return NULL;
-}
-
-static bool drm_has_cmdline_mode(struct drm_connector *connector)
-{
-	struct drm_fb_helper_connector *fb_help_conn = connector->fb_helper_private;
-	struct drm_fb_helper_cmdline_mode *cmdline_mode;
-
-	if (!fb_help_conn)
-		return false;
-
-	cmdline_mode = &fb_help_conn->cmdline_mode;
-	return cmdline_mode->specified;
-}
-
-static struct drm_display_mode *drm_pick_cmdline_mode(struct drm_connector *connector, int width, int height)
-{
-	struct drm_fb_helper_connector *fb_help_conn = connector->fb_helper_private;
-	struct drm_fb_helper_cmdline_mode *cmdline_mode;
-	struct drm_display_mode *mode = NULL;
-
-	if (!fb_help_conn)
-		return mode;
-
-	cmdline_mode = &fb_help_conn->cmdline_mode;
-	if (cmdline_mode->specified == false)
-		return mode;
-
-	/* attempt to find a matching mode in the list of modes
-	 *  we have gotten so far, if not add a CVT mode that conforms
-	 */
-	if (cmdline_mode->rb || cmdline_mode->margins)
-		goto create_mode;
-
-	list_for_each_entry(mode, &connector->modes, head) {
-		/* check width/height */
-		if (mode->hdisplay != cmdline_mode->xres ||
-		    mode->vdisplay != cmdline_mode->yres)
-			continue;
-
-		if (cmdline_mode->refresh_specified) {
-			if (mode->vrefresh != cmdline_mode->refresh)
-				continue;
-		}
-
-		if (cmdline_mode->interlace) {
-			if (!(mode->flags & DRM_MODE_FLAG_INTERLACE))
-				continue;
-		}
-		return mode;
-	}
-
-create_mode:
-	mode = drm_cvt_mode(connector->dev, cmdline_mode->xres,
-			    cmdline_mode->yres,
-			    cmdline_mode->refresh_specified ? cmdline_mode->refresh : 60,
-			    cmdline_mode->rb, cmdline_mode->interlace,
-			    cmdline_mode->margins);
-	drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
-	list_add(&mode->head, &connector->modes);
-	return mode;
-}
-
-static bool drm_connector_enabled(struct drm_connector *connector, bool strict)
-{
-	bool enable;
-
-	if (strict) {
-		enable = connector->status == connector_status_connected;
-	} else {
-		enable = connector->status != connector_status_disconnected;
-	}
-	return enable;
-}
-
-static void drm_enable_connectors(struct drm_device *dev, bool *enabled)
-{
-	bool any_enabled = false;
-	struct drm_connector *connector;
-	int i = 0;
-
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		enabled[i] = drm_connector_enabled(connector, true);
-		DRM_DEBUG_KMS("connector %d enabled? %s\n", connector->base.id,
-			  enabled[i] ? "yes" : "no");
-		any_enabled |= enabled[i];
-		i++;
-	}
-
-	if (any_enabled)
-		return;
-
-	i = 0;
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		enabled[i] = drm_connector_enabled(connector, false);
-		i++;
-	}
-}
-
-static bool drm_target_preferred(struct drm_device *dev,
-				 struct drm_display_mode **modes,
-				 bool *enabled, int width, int height)
-{
-	struct drm_connector *connector;
-	int i = 0;
-
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-
-		if (enabled[i] == false) {
-			i++;
-			continue;
-		}
-
-		DRM_DEBUG_KMS("looking for cmdline mode on connector %d\n",
-			      connector->base.id);
-
-		/* got for command line mode first */
-		modes[i] = drm_pick_cmdline_mode(connector, width, height);
-		if (!modes[i]) {
-			DRM_DEBUG_KMS("looking for preferred mode on connector %d\n",
-				      connector->base.id);
-			modes[i] = drm_has_preferred_mode(connector, width, height);
-		}
-		/* No preferred modes, pick one off the list */
-		if (!modes[i] && !list_empty(&connector->modes)) {
-			list_for_each_entry(modes[i], &connector->modes, head)
-				break;
-		}
-		DRM_DEBUG_KMS("found mode %s\n", modes[i] ? modes[i]->name :
-			  "none");
-		i++;
-	}
-	return true;
-}
-
-static int drm_pick_crtcs(struct drm_device *dev,
-			  struct drm_crtc **best_crtcs,
-			  struct drm_display_mode **modes,
-			  int n, int width, int height)
-{
-	int c, o;
-	struct drm_connector *connector;
-	struct drm_connector_helper_funcs *connector_funcs;
-	struct drm_encoder *encoder;
-	struct drm_crtc *best_crtc;
-	int my_score, best_score, score;
-	struct drm_crtc **crtcs, *crtc;
-
-	if (n == dev->mode_config.num_connector)
-		return 0;
-	c = 0;
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		if (c == n)
-			break;
-		c++;
-	}
-
-	best_crtcs[n] = NULL;
-	best_crtc = NULL;
-	best_score = drm_pick_crtcs(dev, best_crtcs, modes, n+1, width, height);
-	if (modes[n] == NULL)
-		return best_score;
-
-	crtcs = kmalloc(dev->mode_config.num_connector *
-			sizeof(struct drm_crtc *), GFP_KERNEL);
-	if (!crtcs)
-		return best_score;
-
-	my_score = 1;
-	if (connector->status == connector_status_connected)
-		my_score++;
-	if (drm_has_cmdline_mode(connector))
-		my_score++;
-	if (drm_has_preferred_mode(connector, width, height))
-		my_score++;
-
-	connector_funcs = connector->helper_private;
-	encoder = connector_funcs->best_encoder(connector);
-	if (!encoder)
-		goto out;
-
-	connector->encoder = encoder;
-
-	/* select a crtc for this connector and then attempt to configure
-	   remaining connectors */
-	c = 0;
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-
-		if ((encoder->possible_crtcs & (1 << c)) == 0) {
-			c++;
-			continue;
-		}
-
-		for (o = 0; o < n; o++)
-			if (best_crtcs[o] == crtc)
-				break;
-
-		if (o < n) {
-			/* ignore cloning for now */
-			c++;
-			continue;
-		}
-
-		crtcs[n] = crtc;
-		memcpy(crtcs, best_crtcs, n * sizeof(struct drm_crtc *));
-		score = my_score + drm_pick_crtcs(dev, crtcs, modes, n + 1,
-						  width, height);
-		if (score > best_score) {
-			best_crtc = crtc;
-			best_score = score;
-			memcpy(best_crtcs, crtcs,
-			       dev->mode_config.num_connector *
-			       sizeof(struct drm_crtc *));
-		}
-		c++;
-	}
-out:
-	kfree(crtcs);
-	return best_score;
-}
-
-static void drm_setup_crtcs(struct drm_device *dev)
-{
-	struct drm_crtc **crtcs;
-	struct drm_display_mode **modes;
-	struct drm_encoder *encoder;
-	struct drm_connector *connector;
-	bool *enabled;
-	int width, height;
-	int i, ret;
-
-	DRM_DEBUG_KMS("\n");
-
-	width = dev->mode_config.max_width;
-	height = dev->mode_config.max_height;
-
-	/* clean out all the encoder/crtc combos */
-	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
-		encoder->crtc = NULL;
-	}
-
-	crtcs = kcalloc(dev->mode_config.num_connector,
-			sizeof(struct drm_crtc *), GFP_KERNEL);
-	modes = kcalloc(dev->mode_config.num_connector,
-			sizeof(struct drm_display_mode *), GFP_KERNEL);
-	enabled = kcalloc(dev->mode_config.num_connector,
-			  sizeof(bool), GFP_KERNEL);
-
-	drm_enable_connectors(dev, enabled);
-
-	ret = drm_target_preferred(dev, modes, enabled, width, height);
-	if (!ret)
-		DRM_ERROR("Unable to find initial modes\n");
-
-	DRM_DEBUG_KMS("picking CRTCs for %dx%d config\n", width, height);
-
-	drm_pick_crtcs(dev, crtcs, modes, 0, width, height);
-
-	i = 0;
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		struct drm_display_mode *mode = modes[i];
-		struct drm_crtc *crtc = crtcs[i];
-
-		if (connector->encoder == NULL) {
-			i++;
-			continue;
-		}
-
-		if (mode && crtc) {
-			DRM_DEBUG_KMS("desired mode %s set on crtc %d\n",
-				  mode->name, crtc->base.id);
-			crtc->desired_mode = mode;
-			connector->encoder->crtc = crtc;
-		} else {
-			connector->encoder->crtc = NULL;
-			connector->encoder = NULL;
-		}
-		i++;
-	}
-
-	kfree(crtcs);
-	kfree(modes);
-	kfree(enabled);
-}
-
 /**
  * drm_encoder_crtc_ok - can a given crtc drive a given encoder?
  * @encoder: encoder to test
@@ -936,10 +625,6 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
 				ret = -EINVAL;
 				goto fail;
 			}
-			/* TODO are these needed? */
-			set->crtc->desired_x = set->x;
-			set->crtc->desired_y = set->y;
-			set->crtc->desired_mode = set->mode;
 		}
 		drm_helper_disable_unused_functions(dev);
 	} else if (fb_changed) {
@@ -984,63 +669,6 @@ fail:
 }
 EXPORT_SYMBOL(drm_crtc_helper_set_config);
 
-bool drm_helper_plugged_event(struct drm_device *dev)
-{
-	DRM_DEBUG_KMS("\n");
-
-	drm_helper_probe_connector_modes(dev, dev->mode_config.max_width,
-					 dev->mode_config.max_height);
-
-	drm_setup_crtcs(dev);
-
-	/* alert the driver fb layer */
-	dev->mode_config.funcs->fb_changed(dev);
-
-	/* FIXME: send hotplug event */
-	return true;
-}
-/**
- * drm_initial_config - setup a sane initial connector configuration
- * @dev: DRM device
- *
- * LOCKING:
- * Called at init time, must take mode config lock.
- *
- * Scan the CRTCs and connectors and try to put together an initial setup.
- * At the moment, this is a cloned configuration across all heads with
- * a new framebuffer object as the backing store.
- *
- * RETURNS:
- * Zero if everything went ok, nonzero otherwise.
- */
-bool drm_helper_initial_config(struct drm_device *dev)
-{
-	int count = 0;
-
-	/* disable all the possible outputs/crtcs before entering KMS mode */
-	drm_helper_disable_unused_functions(dev);
-
-	drm_fb_helper_parse_command_line(dev);
-
-	count = drm_helper_probe_connector_modes(dev,
-						 dev->mode_config.max_width,
-						 dev->mode_config.max_height);
-
-	/*
-	 * we shouldn't end up with no modes here.
-	 */
-	if (count == 0)
-		printk(KERN_INFO "No connectors reported connected with modes\n");
-
-	drm_setup_crtcs(dev);
-
-	/* alert the driver fb layer */
-	dev->mode_config.funcs->fb_changed(dev);
-
-	return 0;
-}
-EXPORT_SYMBOL(drm_helper_initial_config);
-
 static int drm_helper_choose_encoder_dpms(struct drm_encoder *encoder)
 {
 	int dpms = DRM_MODE_DPMS_OFF;
@@ -1123,27 +751,6 @@ void drm_helper_connector_dpms(struct drm_connector *connector, int mode)
 }
 EXPORT_SYMBOL(drm_helper_connector_dpms);
 
-/**
- * drm_hotplug_stage_two
- * @dev DRM device
- * @connector hotpluged connector
- *
- * LOCKING.
- * Caller must hold mode config lock, function might grab struct lock.
- *
- * Stage two of a hotplug.
- *
- * RETURNS:
- * Zero on success, errno on failure.
- */
-int drm_helper_hotplug_stage_two(struct drm_device *dev)
-{
-	drm_helper_plugged_event(dev);
-
-	return 0;
-}
-EXPORT_SYMBOL(drm_helper_hotplug_stage_two);
-
 int drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb,
 				   struct drm_mode_fb_cmd *mode_cmd)
 {
@@ -1200,3 +807,114 @@ int drm_helper_resume_force_mode(struct drm_device *dev)
 	return 0;
 }
 EXPORT_SYMBOL(drm_helper_resume_force_mode);
+
+static struct slow_work_ops output_poll_ops;
+
+#define DRM_OUTPUT_POLL_PERIOD (10*HZ)
+static void output_poll_execute(struct slow_work *work)
+{
+	struct delayed_slow_work *delayed_work = container_of(work, struct delayed_slow_work, work);
+	struct drm_device *dev = container_of(delayed_work, struct drm_device, mode_config.output_poll_slow_work);
+	struct drm_connector *connector;
+	enum drm_connector_status old_status, status;
+	bool repoll = false, changed = false;
+	int ret;
+
+	mutex_lock(&dev->mode_config.mutex);
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+
+		/* if this connector isn't polled at all, don't check it -
+		   TV out, for instance */
+		if (!connector->polled)
+			continue;
+
+		else if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT))
+			repoll = true;
+
+		old_status = connector->status;
+		/* if we are connected and don't want to poll for disconnect
+		   skip it */
+		if (old_status == connector_status_connected &&
+		    !(connector->polled & DRM_CONNECTOR_POLL_DISCONNECT) &&
+		    !(connector->polled & DRM_CONNECTOR_POLL_HPD))
+			continue;
+
+		status = connector->funcs->detect(connector);
+		if (old_status != status)
+			changed = true;
+	}
+
+	mutex_unlock(&dev->mode_config.mutex);
+
+	if (changed) {
+		/* send a uevent + call fbdev */
+		drm_sysfs_hotplug_event(dev);
+		if (dev->mode_config.funcs->output_poll_changed)
+			dev->mode_config.funcs->output_poll_changed(dev);
+	}
+
+	if (repoll) {
+		ret = delayed_slow_work_enqueue(delayed_work, DRM_OUTPUT_POLL_PERIOD);
+		if (ret)
+			DRM_ERROR("delayed enqueue failed %d\n", ret);
+	}
+}
+
+void drm_kms_helper_poll_disable(struct drm_device *dev)
+{
+	if (!dev->mode_config.poll_enabled)
+		return;
+	delayed_slow_work_cancel(&dev->mode_config.output_poll_slow_work);
+}
+EXPORT_SYMBOL(drm_kms_helper_poll_disable);
+
+void drm_kms_helper_poll_enable(struct drm_device *dev)
+{
+	bool poll = false;
+	struct drm_connector *connector;
+	int ret;
+
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+		if (connector->polled)
+			poll = true;
+	}
+
+	if (poll) {
+		ret = delayed_slow_work_enqueue(&dev->mode_config.output_poll_slow_work, DRM_OUTPUT_POLL_PERIOD);
+		if (ret)
+			DRM_ERROR("delayed enqueue failed %d\n", ret);
+	}
+}
+EXPORT_SYMBOL(drm_kms_helper_poll_enable);
+
+void drm_kms_helper_poll_init(struct drm_device *dev)
+{
+	slow_work_register_user(THIS_MODULE);
+	delayed_slow_work_init(&dev->mode_config.output_poll_slow_work,
+			       &output_poll_ops);
+	dev->mode_config.poll_enabled = true;
+
+	drm_kms_helper_poll_enable(dev);
+}
+EXPORT_SYMBOL(drm_kms_helper_poll_init);
+
+void drm_kms_helper_poll_fini(struct drm_device *dev)
+{
+	drm_kms_helper_poll_disable(dev);
+	slow_work_unregister_user(THIS_MODULE);
+}
+EXPORT_SYMBOL(drm_kms_helper_poll_fini);
+
+void drm_helper_hpd_irq_event(struct drm_device *dev)
+{
+	if (!dev->mode_config.poll_enabled)
+		return;
+	delayed_slow_work_cancel(&dev->mode_config.output_poll_slow_work);
+	/* schedule a slow work asap */
+	delayed_slow_work_enqueue(&dev->mode_config.output_poll_slow_work, 0);
+}
+EXPORT_SYMBOL(drm_helper_hpd_irq_event);
+
+static struct slow_work_ops output_poll_ops = {
+	.execute = output_poll_execute,
+};
diff --git a/drivers/gpu/drm/drm_dma.c b/drivers/gpu/drm/drm_dma.c
index 13f1537..252cbd7 100644
--- a/drivers/gpu/drm/drm_dma.c
+++ b/drivers/gpu/drm/drm_dma.c
@@ -47,12 +47,10 @@ int drm_dma_setup(struct drm_device *dev)
 {
 	int i;
 
-	dev->dma = kmalloc(sizeof(*dev->dma), GFP_KERNEL);
+	dev->dma = kzalloc(sizeof(*dev->dma), GFP_KERNEL);
 	if (!dev->dma)
 		return -ENOMEM;
 
-	memset(dev->dma, 0, sizeof(*dev->dma));
-
 	for (i = 0; i <= DRM_MAX_ORDER; i++)
 		memset(&dev->dma->bufs[i], 0, sizeof(dev->dma->bufs[0]));
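
The drm_dma.c hunk is a pure cleanup: kzalloc() is kmalloc() plus a full
memset() to zero, so the hand-rolled zeroing of dev->dma can go. In
outline:

    /* before: allocate, then zero by hand */
    dev->dma = kmalloc(sizeof(*dev->dma), GFP_KERNEL);
    if (!dev->dma)
    	return -ENOMEM;
    memset(dev->dma, 0, sizeof(*dev->dma));

    /* after: kzalloc() does both in one call */
    dev->dma = kzalloc(sizeof(*dev->dma), GFP_KERNEL);
    if (!dev->dma)
    	return -ENOMEM;
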
 
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 18f41d7..c198186 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -2,6 +2,7 @@
  * Copyright (c) 2006 Luc Verhaegen (quirks list)
  * Copyright (c) 2007-2008 Intel Corporation
  *   Jesse Barnes <jesse.barnes@intel.com>
+ * Copyright 2010 Red Hat, Inc.
  *
  * DDC probing routines (drm_ddc_read & drm_do_probe_ddc_edid) originally from
  * FB layer.
@@ -33,10 +34,9 @@
 #include "drmP.h"
 #include "drm_edid.h"
 
-/*
- * TODO:
- *   - support EDID 1.4 (incl. CE blocks)
- */
+#define EDID_EST_TIMINGS 16
+#define EDID_STD_TIMINGS 8
+#define EDID_DETAILED_TIMINGS 4
 
 /*
  * EDID blocks out in the wild have a variety of bugs, try to collect
@@ -65,7 +65,8 @@
 
 #define LEVEL_DMT	0
 #define LEVEL_GTF	1
-#define LEVEL_CVT	2
+#define LEVEL_GTF2	2
+#define LEVEL_CVT	3
 
 static struct edid_quirk {
 	char *vendor;
@@ -109,51 +110,64 @@ static struct edid_quirk {
 	{ "SAM", 638, EDID_QUIRK_PREFER_LARGE_60 },
 };
 
+/*** DDC fetch and block validation ***/
 
-/* Valid EDID header has these bytes */
 static const u8 edid_header[] = {
 	0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00
 };
 
-/**
- * drm_edid_is_valid - sanity check EDID data
- * @edid: EDID data
- *
- * Sanity check the EDID block by looking at the header, the version number
- * and the checksum.  Return 0 if the EDID doesn't check out, or 1 if it's
- * valid.
+/*
+ * Sanity check the EDID block (base or extension).  Return 0 if the block
+ * doesn't check out, or 1 if it's valid.
  */
-bool drm_edid_is_valid(struct edid *edid)
+static bool
+drm_edid_block_valid(u8 *raw_edid)
 {
-	int i, score = 0;
+	int i;
 	u8 csum = 0;
-	u8 *raw_edid = (u8 *)edid;
+	struct edid *edid = (struct edid *)raw_edid;
+
+	if (raw_edid[0] == 0x00) {
+		int score = 0;
 
-	for (i = 0; i < sizeof(edid_header); i++)
-		if (raw_edid[i] == edid_header[i])
-			score++;
+		for (i = 0; i < sizeof(edid_header); i++)
+			if (raw_edid[i] == edid_header[i])
+				score++;
 
-	if (score == 8) ;
-	else if (score >= 6) {
-		DRM_DEBUG("Fixing EDID header, your hardware may be failing\n");
-		memcpy(raw_edid, edid_header, sizeof(edid_header));
-	} else
-		goto bad;
+		if (score == 8) ;
+		else if (score >= 6) {
+			DRM_DEBUG("Fixing EDID header, your hardware may be failing\n");
+			memcpy(raw_edid, edid_header, sizeof(edid_header));
+		} else {
+			goto bad;
+		}
+	}
 
 	for (i = 0; i < EDID_LENGTH; i++)
 		csum += raw_edid[i];
 	if (csum) {
 		DRM_ERROR("EDID checksum is invalid, remainder is %d\n", csum);
-		goto bad;
-	}
 
-	if (edid->version != 1) {
-		DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version);
-		goto bad;
+		/* allow CEA to slide through, switches mangle this */
+		if (raw_edid[0] != 0x02)
+			goto bad;
 	}
 
-	if (edid->revision > 4)
-		DRM_DEBUG("EDID minor > 4, assuming backward compatibility\n");
+	/* per-block-type checks */
+	switch (raw_edid[0]) {
+	case 0: /* base */
+		if (edid->version != 1) {
+			DRM_ERROR("EDID has major version %d, instead of 1\n", edid->version);
+			goto bad;
+		}
+
+		if (edid->revision > 4)
+			DRM_DEBUG("EDID minor > 4, assuming backward compatibility\n");
+		break;
+
+	default:
+		break;
+	}
 
 	return 1;
 
@@ -165,8 +179,158 @@ bad:
 	}
 	return 0;
 }
+
+/**
+ * drm_edid_is_valid - sanity check EDID data
+ * @edid: EDID data
+ *
+ * Sanity-check an entire EDID record (including extensions)
+ */
+bool drm_edid_is_valid(struct edid *edid)
+{
+	int i;
+	u8 *raw = (u8 *)edid;
+
+	if (!edid)
+		return false;
+
+	for (i = 0; i <= edid->extensions; i++)
+		if (!drm_edid_block_valid(raw + i * EDID_LENGTH))
+			return false;
+
+	return true;
+}
 EXPORT_SYMBOL(drm_edid_is_valid);
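
drm_edid_is_valid() now validates block-by-block: byte 0x7e of the base
block holds the extension count, so a well-formed EDID buffer is
(extensions + 1) * EDID_LENGTH bytes, and every 128-byte block must sum
to zero modulo 256 on its own. A standalone restatement of that checksum
rule (plain C, for illustration only):

    #include <stdint.h>
    #include <stddef.h>

    #define EDID_LENGTH 128

    /* returns 1 when all bytes of one EDID block sum to 0 modulo 256 */
    static int edid_block_checksum_ok(const uint8_t *block)
    {
    	uint8_t csum = 0;
    	size_t i;

    	for (i = 0; i < EDID_LENGTH; i++)
    		csum += block[i];
    	return csum == 0;
    }
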
 
+#define DDC_ADDR 0x50
+#define DDC_SEGMENT_ADDR 0x30
+/**
+ * Get EDID information via I2C.
+ *
+ * \param adapter : i2c device adaptor
+ * \param buf     : EDID data buffer to be filled
+ * \param block   : 128 byte EDID block to start fetching from
+ * \param len     : EDID data buffer length
+ * \return 0 on success or -1 on failure.
+ *
+ * Try to fetch EDID information by calling i2c driver function.
+ */
+static int
+drm_do_probe_ddc_edid(struct i2c_adapter *adapter, unsigned char *buf,
+		      int block, int len)
+{
+	unsigned char start = block * EDID_LENGTH;
+	struct i2c_msg msgs[] = {
+		{
+			.addr	= DDC_ADDR,
+			.flags	= 0,
+			.len	= 1,
+			.buf	= &start,
+		}, {
+			.addr	= DDC_ADDR,
+			.flags	= I2C_M_RD,
+			.len	= len,
+			.buf	= buf + start,
+		}
+	};
+
+	if (i2c_transfer(adapter, msgs, 2) == 2)
+		return 0;
+
+	return -1;
+}
+
+static u8 *
+drm_do_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
+{
+	int i, j = 0;
+	u8 *block, *new;
+
+	if ((block = kmalloc(EDID_LENGTH, GFP_KERNEL)) == NULL)
+		return NULL;
+
+	/* base block fetch */
+	for (i = 0; i < 4; i++) {
+		if (drm_do_probe_ddc_edid(adapter, block, 0, EDID_LENGTH))
+			goto out;
+		if (drm_edid_block_valid(block))
+			break;
+	}
+	if (i == 4)
+		goto carp;
+
+	/* if there are no extensions, we're done */
+	if (block[0x7e] == 0)
+		return block;
+
+	new = krealloc(block, (block[0x7e] + 1) * EDID_LENGTH, GFP_KERNEL);
+	if (!new)
+		goto out;
+	block = new;
+
+	for (j = 1; j <= block[0x7e]; j++) {
+		for (i = 0; i < 4; i++) {
+			if (drm_do_probe_ddc_edid(adapter, block, j,
+						  EDID_LENGTH))
+				goto out;
+			if (drm_edid_block_valid(block + j * EDID_LENGTH))
+				break;
+		}
+		if (i == 4)
+			goto carp;
+	}
+
+	return block;
+
+carp:
+	dev_warn(&connector->dev->pdev->dev, "%s: EDID block %d invalid.\n",
+		 drm_get_connector_name(connector), j);
+
+out:
+	kfree(block);
+	return NULL;
+}
+
+/**
+ * Probe DDC presence.
+ *
+ * \param adapter : i2c device adaptor
+ * \return 1 on success
+ */
+static bool
+drm_probe_ddc(struct i2c_adapter *adapter)
+{
+	unsigned char out;
+
+	return (drm_do_probe_ddc_edid(adapter, &out, 0, 1) == 0);
+}
+
+/**
+ * drm_get_edid - get EDID data, if available
+ * @connector: connector we're probing
+ * @adapter: i2c adapter to use for DDC
+ *
+ * Poke the given i2c channel to grab EDID data if possible.  If found,
+ * attach it to the connector.
+ *
+ * Return edid data or NULL if we couldn't find any.
+ */
+struct edid *drm_get_edid(struct drm_connector *connector,
+			  struct i2c_adapter *adapter)
+{
+	struct edid *edid = NULL;
+
+	if (drm_probe_ddc(adapter))
+		edid = (struct edid *)drm_do_get_edid(connector, adapter);
+
+	connector->display_info.raw_edid = (char *)edid;
+
+	return edid;
+}
+EXPORT_SYMBOL(drm_get_edid);
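
The expected consumer of drm_get_edid() is a connector's ->get_modes()
hook, which pairs it with drm_add_edid_modes() further down in this file.
A hedged sketch -- foo_connector, to_foo_connector() and the stored i2c
adapter are hypothetical:

    static int foo_connector_get_modes(struct drm_connector *connector)
    {
    	struct foo_connector *foo = to_foo_connector(connector);
    	struct edid *edid;
    	int count = 0;

    	edid = drm_get_edid(connector, foo->ddc_adapter);
    	if (edid) {
    		drm_mode_connector_update_edid_property(connector, edid);
    		count = drm_add_edid_modes(connector, edid);
    		kfree(edid);
    	}
    	return count;
    }
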
+
+/*** EDID parsing ***/
+
 /**
  * edid_vendor - match a string against EDID's obfuscated vendor field
  * @edid: EDID to match
@@ -335,7 +499,7 @@ static struct drm_display_mode drm_dmt_modes[] = {
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
 	/* 1024x768@85Hz */
 	{ DRM_MODE("1024x768", DRM_MODE_TYPE_DRIVER, 94500, 1024, 1072,
-		   1072, 1376, 0, 768, 769, 772, 808, 0,
+		   1168, 1376, 0, 768, 769, 772, 808, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
 	/* 1152x864@75Hz */
 	{ DRM_MODE("1152x864", DRM_MODE_TYPE_DRIVER, 108000, 1152, 1216,
@@ -426,7 +590,7 @@ static struct drm_display_mode drm_dmt_modes[] = {
 		   1856, 2160, 0, 1200, 1201, 1204, 1250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
 	/* 1600x1200@75Hz */
-	{ DRM_MODE("1600x1200", DRM_MODE_TYPE_DRIVER, 2025000, 1600, 1664,
+	{ DRM_MODE("1600x1200", DRM_MODE_TYPE_DRIVER, 202500, 1600, 1664,
 		   1856, 2160, 0, 1200, 1201, 1204, 1250, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
 	/* 1600x1200@85Hz */
@@ -497,8 +661,8 @@ static struct drm_display_mode drm_dmt_modes[] = {
 static const int drm_num_dmt_modes =
 	sizeof(drm_dmt_modes) / sizeof(struct drm_display_mode);
 
-static struct drm_display_mode *drm_find_dmt(struct drm_device *dev,
-			int hsize, int vsize, int fresh)
+struct drm_display_mode *drm_mode_find_dmt(struct drm_device *dev,
+					   int hsize, int vsize, int fresh)
 {
 	int i;
 	struct drm_display_mode *ptr, *mode;
@@ -516,6 +680,111 @@ static struct drm_display_mode *drm_find_dmt(struct drm_device *dev,
 	}
 	return mode;
 }
+EXPORT_SYMBOL(drm_mode_find_dmt);
+
+typedef void detailed_cb(struct detailed_timing *timing, void *closure);
+
+static void
+drm_for_each_detailed_block(u8 *raw_edid, detailed_cb *cb, void *closure)
+{
+	int i;
+	struct edid *edid = (struct edid *)raw_edid;
+
+	if (edid == NULL)
+		return;
+
+	for (i = 0; i < EDID_DETAILED_TIMINGS; i++)
+		cb(&(edid->detailed_timings[i]), closure);
+
+	/* XXX extension block walk */
+}
+
+static void
+is_rb(struct detailed_timing *t, void *data)
+{
+	u8 *r = (u8 *)t;
+	if (r[3] == EDID_DETAIL_MONITOR_RANGE)
+		if (r[15] & 0x10)
+			*(bool *)data = true;
+}
+
+/* EDID 1.4 defines this explicitly.  For EDID 1.3, we guess, badly. */
+static bool
+drm_monitor_supports_rb(struct edid *edid)
+{
+	if (edid->revision >= 4) {
+		bool ret = false;
+		drm_for_each_detailed_block((u8 *)edid, is_rb, &ret);
+		return ret;
+	}
+
+	return ((edid->input & DRM_EDID_INPUT_DIGITAL) != 0);
+}
+
+static void
+find_gtf2(struct detailed_timing *t, void *data)
+{
+	u8 *r = (u8 *)t;
+	if (r[3] == EDID_DETAIL_MONITOR_RANGE && r[10] == 0x02)
+		*(u8 **)data = r;
+}
+
+/* Secondary GTF curve kicks in above some break frequency */
+static int
+drm_gtf2_hbreak(struct edid *edid)
+{
+	u8 *r = NULL;
+	drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r);
+	return r ? (r[12] * 2) : 0;
+}
+
+static int
+drm_gtf2_2c(struct edid *edid)
+{
+	u8 *r = NULL;
+	drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r);
+	return r ? r[13] : 0;
+}
+
+static int
+drm_gtf2_m(struct edid *edid)
+{
+	u8 *r = NULL;
+	drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r);
+	return r ? (r[15] << 8) + r[14] : 0;
+}
+
+static int
+drm_gtf2_k(struct edid *edid)
+{
+	u8 *r = NULL;
+	drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r);
+	return r ? r[16] : 0;
+}
+
+static int
+drm_gtf2_2j(struct edid *edid)
+{
+	u8 *r = NULL;
+	drm_for_each_detailed_block((u8 *)edid, find_gtf2, &r);
+	return r ? r[17] : 0;
+}
+
+/**
+ * standard_timing_level - get std. timing level (CVT/GTF/DMT)
+ * @edid: EDID block to scan
+ */
+static int standard_timing_level(struct edid *edid)
+{
+	if (edid->revision >= 2) {
+		if (edid->revision >= 4 && (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF))
+			return LEVEL_CVT;
+		if (drm_gtf2_hbreak(edid))
+			return LEVEL_GTF2;
+		return LEVEL_GTF;
+	}
+	return LEVEL_DMT;
+}
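
standard_timing_level() now distinguishes a fourth case: pre-CVT monitors
whose range descriptor advertises a secondary GTF curve (r[10] == 0x02).
The drm_gtf2_*() accessors above just pull the raw curve coefficients out
of that descriptor. A worked decode with invented byte values (standalone
C, runnable in userspace):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	uint8_t r[18] = { 0 };

    	/* invented bytes, laid out the way find_gtf2() expects */
    	r[3]  = 0xfd;	/* EDID_DETAIL_MONITOR_RANGE tag */
    	r[10] = 0x02;	/* secondary GTF curve supported */
    	r[12] = 50;	/* break frequency / 2  -> 100 kHz */
    	r[13] = 0;	/* C * 2 */
    	r[14] = 0x28;	/* M, low byte */
    	r[15] = 0x00;	/* M, high byte -> M = 40 */
    	r[16] = 128;	/* K */
    	r[17] = 40;	/* J * 2 */

    	printf("hbreak=%d kHz C2=%d M=%d K=%d J2=%d\n",
    	       r[12] * 2, r[13], (r[15] << 8) + r[14], r[16], r[17]);
    	return 0;
    }
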
 
 /*
  * 0 is reserved.  The spec says 0x01 fill for unused timings.  Some old
@@ -536,22 +805,20 @@ bad_std_timing(u8 a, u8 b)
  *
  * Take the standard timing params (in this case width, aspect, and refresh)
  * and convert them into a real mode using CVT/GTF/DMT.
- *
- * Punts for now, but should eventually use the FB layer's CVT based mode
- * generation code.
  */
-struct drm_display_mode *drm_mode_std(struct drm_device *dev,
-				      struct std_timing *t,
-				      int revision,
-				      int timing_level)
+static struct drm_display_mode *
+drm_mode_std(struct drm_connector *connector, struct edid *edid,
+	     struct std_timing *t, int revision)
 {
-	struct drm_display_mode *mode;
+	struct drm_device *dev = connector->dev;
+	struct drm_display_mode *m, *mode = NULL;
 	int hsize, vsize;
 	int vrefresh_rate;
 	unsigned aspect_ratio = (t->vfreq_aspect & EDID_TIMING_ASPECT_MASK)
 		>> EDID_TIMING_ASPECT_SHIFT;
 	unsigned vfreq = (t->vfreq_aspect & EDID_TIMING_VFREQ_MASK)
 		>> EDID_TIMING_VFREQ_SHIFT;
+	int timing_level = standard_timing_level(edid);
 
 	if (bad_std_timing(t->hsize, t->vfreq_aspect))
 		return NULL;
@@ -572,18 +839,38 @@ struct drm_display_mode *drm_mode_std(struct drm_device *dev,
 		vsize = (hsize * 4) / 5;
 	else
 		vsize = (hsize * 9) / 16;
-	/* HDTV hack */
-	if (hsize == 1360 && vsize == 765 && vrefresh_rate == 60) {
-		mode = drm_cvt_mode(dev, hsize, vsize, vrefresh_rate, 0, 0,
+
+	/* HDTV hack, part 1 */
+	if (vrefresh_rate == 60 &&
+	    ((hsize == 1360 && vsize == 765) ||
+	     (hsize == 1368 && vsize == 769))) {
+		hsize = 1366;
+		vsize = 768;
+	}
+
+	/*
+	 * If this connector already has a mode for this size and refresh
+	 * rate (because it came from detailed or CVT info), use that
+	 * instead.  This way we don't have to guess at interlace or
+	 * reduced blanking.
+	 */
+	list_for_each_entry(m, &connector->probed_modes, head)
+		if (m->hdisplay == hsize && m->vdisplay == vsize &&
+		    drm_mode_vrefresh(m) == vrefresh_rate)
+			return NULL;
+
+	/* HDTV hack, part 2 */
+	if (hsize == 1366 && vsize == 768 && vrefresh_rate == 60) {
+		mode = drm_cvt_mode(dev, 1366, 768, vrefresh_rate, 0, 0,
 				    false);
 		mode->hdisplay = 1366;
 		mode->hsync_start = mode->hsync_start - 1;
 		mode->hsync_end = mode->hsync_end - 1;
 		return mode;
 	}
-	mode = NULL;
+
 	/* check whether it can be found in default mode table */
-	mode = drm_find_dmt(dev, hsize, vsize, vrefresh_rate);
+	mode = drm_mode_find_dmt(dev, hsize, vsize, vrefresh_rate);
 	if (mode)
 		return mode;
 
@@ -593,6 +880,23 @@ struct drm_display_mode *drm_mode_std(struct drm_device *dev,
 	case LEVEL_GTF:
 		mode = drm_gtf_mode(dev, hsize, vsize, vrefresh_rate, 0, 0);
 		break;
+	case LEVEL_GTF2:
+		/*
+		 * This is potentially wrong if there's ever a monitor with
+		 * more than one ranges section, each claiming a different
+		 * secondary GTF curve.  Please don't do that.
+		 */
+		mode = drm_gtf_mode(dev, hsize, vsize, vrefresh_rate, 0, 0);
+		if (drm_mode_hsync(mode) > drm_gtf2_hbreak(edid)) {
+			kfree(mode);
+			mode = drm_gtf_mode_complex(dev, hsize, vsize,
+						    vrefresh_rate, 0, 0,
+						    drm_gtf2_m(edid),
+						    drm_gtf2_2c(edid),
+						    drm_gtf2_k(edid),
+						    drm_gtf2_2j(edid));
+		}
+		break;
 	case LEVEL_CVT:
 		mode = drm_cvt_mode(dev, hsize, vsize, vrefresh_rate, 0, 0,
 				    false);
@@ -716,10 +1020,10 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_device *dev,
 	if (mode->vsync_end > mode->vtotal)
 		mode->vtotal = mode->vsync_end + 1;
 
-	drm_mode_set_name(mode);
-
 	drm_mode_do_interlace_quirk(mode, pt);
 
+	drm_mode_set_name(mode);
+
 	if (quirks & EDID_QUIRK_DETAILED_SYNC_PP) {
 		pt->misc |= DRM_EDID_PT_HSYNC_POSITIVE | DRM_EDID_PT_VSYNC_POSITIVE;
 	}
@@ -802,10 +1106,6 @@ static struct drm_display_mode edid_est_modes[] = {
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) }, /* 1152x864@75Hz */
 };
 
-#define EDID_EST_TIMINGS 16
-#define EDID_STD_TIMINGS 8
-#define EDID_DETAILED_TIMINGS 4
-
 /**
  * add_established_modes - get est. modes from EDID and add them
  * @edid: EDID block to scan
@@ -833,19 +1133,6 @@ static int add_established_modes(struct drm_connector *connector, struct edid *e
 
 	return modes;
 }
-/**
- * stanard_timing_level - get std. timing level(CVT/GTF/DMT)
- * @edid: EDID block to scan
- */
-static int standard_timing_level(struct edid *edid)
-{
-	if (edid->revision >= 2) {
-		if (edid->revision >= 4 && (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF))
-			return LEVEL_CVT;
-		return LEVEL_GTF;
-	}
-	return LEVEL_DMT;
-}
 
 /**
  * add_standard_modes - get std. modes from EDID and add them
@@ -856,22 +1143,14 @@ static int standard_timing_level(struct edid *edid)
  */
 static int add_standard_modes(struct drm_connector *connector, struct edid *edid)
 {
-	struct drm_device *dev = connector->dev;
 	int i, modes = 0;
-	int timing_level;
-
-	timing_level = standard_timing_level(edid);
 
 	for (i = 0; i < EDID_STD_TIMINGS; i++) {
-		struct std_timing *t = &edid->standard_timings[i];
 		struct drm_display_mode *newmode;
 
-		/* If std timings bytes are 1, 1 it's empty */
-		if (t->hsize == 1 && t->vfreq_aspect == 1)
-			continue;
-
-		newmode = drm_mode_std(dev, &edid->standard_timings[i],
-				       edid->revision, timing_level);
+		newmode = drm_mode_std(connector, edid,
+				       &edid->standard_timings[i],
+				       edid->revision);
 		if (newmode) {
 			drm_mode_probed_add(connector, newmode);
 			modes++;
@@ -881,36 +1160,86 @@ static int add_standard_modes(struct drm_connector *connector, struct edid *edid
 	return modes;
 }
 
-/*
- * XXX fix this for:
- * - GTF secondary curve formula
- * - EDID 1.4 range offsets
- * - CVT extended bits
- */
 static bool
-mode_in_range(struct drm_display_mode *mode, struct detailed_timing *timing)
+mode_is_rb(struct drm_display_mode *mode)
 {
-	struct detailed_data_monitor_range *range;
-	int hsync, vrefresh;
-
-	range = &timing->data.other_data.data.range;
+	return (mode->htotal - mode->hdisplay == 160) &&
+	       (mode->hsync_end - mode->hdisplay == 80) &&
+	       (mode->hsync_end - mode->hsync_start == 32) &&
+	       (mode->vsync_start - mode->vdisplay == 3);
+}
 
+static bool
+mode_in_hsync_range(struct drm_display_mode *mode, struct edid *edid, u8 *t)
+{
+	int hsync, hmin, hmax;
+
+	hmin = t[7];
+	if (edid->revision >= 4)
+	    hmin += ((t[4] & 0x04) ? 255 : 0);
+	hmax = t[8];
+	if (edid->revision >= 4)
+	    hmax += ((t[4] & 0x08) ? 255 : 0);
 	hsync = drm_mode_hsync(mode);
-	vrefresh = drm_mode_vrefresh(mode);
 
-	if (hsync < range->min_hfreq_khz || hsync > range->max_hfreq_khz)
+	return (hsync <= hmax && hsync >= hmin);
+}
+
+static bool
+mode_in_vsync_range(struct drm_display_mode *mode, struct edid *edid, u8 *t)
+{
+	int vsync, vmin, vmax;
+
+	vmin = t[5];
+	if (edid->revision >= 4)
+	    vmin += ((t[4] & 0x01) ? 255 : 0);
+	vmax = t[6];
+	if (edid->revision >= 4)
+	    vmax += ((t[4] & 0x02) ? 255 : 0);
+	vsync = drm_mode_vrefresh(mode);
+
+	return (vsync <= vmax && vsync >= vmin);
+}
+
+static u32
+range_pixel_clock(struct edid *edid, u8 *t)
+{
+	/* unspecified */
+	if (t[9] == 0 || t[9] == 255)
+		return 0;
+
+	/* 1.4 with CVT support gives us real precision, yay */
+	if (edid->revision >= 4 && t[10] == 0x04)
+		return (t[9] * 10000) - ((t[12] >> 2) * 250);
+
+	/* 1.3 is pathetic, so fuzz up a bit */
+	return t[9] * 10000 + 5001;
+}
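
range_pixel_clock() turns byte 9 (max pixel clock, 10 MHz units) into kHz.
An EDID 1.4 CVT descriptor refines it downward in 250 kHz steps via the
top bits of byte 12; bare 1.3 data only has 10 MHz granularity, so the
code pads it up by half a step. Worked with invented values:

    /* t[9] = 17 -> nominal 170 MHz cap:
     *   EDID 1.4 + CVT (t[10] == 0x04), t[12] >> 2 == 3:
     *     17 * 10000 - 3 * 250  = 169250 kHz
     *   EDID 1.3 fallback:
     *     17 * 10000 + 5001     = 175001 kHz  (fuzzed up half a step)
     */
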
+
+static bool
+mode_in_range(struct drm_display_mode *mode, struct edid *edid,
+	      struct detailed_timing *timing)
+{
+	u32 max_clock;
+	u8 *t = (u8 *)timing;
+
+	if (!mode_in_hsync_range(mode, edid, t))
 		return false;
 
-	if (vrefresh < range->min_vfreq || vrefresh > range->max_vfreq)
+	if (!mode_in_vsync_range(mode, edid, t))
 		return false;
 
-	if (range->pixel_clock_mhz && range->pixel_clock_mhz != 0xff) {
-		/* be forgiving since it's in units of 10MHz */
-		int max_clock = range->pixel_clock_mhz * 10 + 9;
-		max_clock *= 1000;
+	if ((max_clock = range_pixel_clock(edid, t)))
 		if (mode->clock > max_clock)
 			return false;
-	}
+
+	/* 1.4 max horizontal check */
+	if (edid->revision >= 4 && t[10] == 0x04)
+		if (t[13] && mode->hdisplay > 8 * (t[13] + (256 * (t[12]&0x3))))
+			return false;
+
+	if (mode_is_rb(mode) && !drm_monitor_supports_rb(edid))
+		return false;
 
 	return true;
 }
@@ -919,15 +1248,16 @@ mode_in_range(struct drm_display_mode *mode, struct detailed_timing *timing)
  * XXX If drm_dmt_modes ever regrows the CVT-R modes (and it will) this will
  * need to account for them.
  */
-static int drm_gtf_modes_for_range(struct drm_connector *connector,
-				   struct detailed_timing *timing)
+static int
+drm_gtf_modes_for_range(struct drm_connector *connector, struct edid *edid,
+			struct detailed_timing *timing)
 {
 	int i, modes = 0;
 	struct drm_display_mode *newmode;
 	struct drm_device *dev = connector->dev;
 
 	for (i = 0; i < drm_num_dmt_modes; i++) {
-		if (mode_in_range(drm_dmt_modes + i, timing)) {
+		if (mode_in_range(drm_dmt_modes + i, edid, timing)) {
 			newmode = drm_mode_duplicate(dev, &drm_dmt_modes[i]);
 			if (newmode) {
 				drm_mode_probed_add(connector, newmode);
@@ -988,13 +1318,100 @@ static int drm_cvt_modes(struct drm_connector *connector,
 	return modes;
 }
 
+static const struct {
+	short w;
+	short h;
+	short r;
+	short rb;
+} est3_modes[] = {
+	/* byte 6 */
+	{ 640, 350, 85, 0 },
+	{ 640, 400, 85, 0 },
+	{ 720, 400, 85, 0 },
+	{ 640, 480, 85, 0 },
+	{ 848, 480, 60, 0 },
+	{ 800, 600, 85, 0 },
+	{ 1024, 768, 85, 0 },
+	{ 1152, 864, 75, 0 },
+	/* byte 7 */
+	{ 1280, 768, 60, 1 },
+	{ 1280, 768, 60, 0 },
+	{ 1280, 768, 75, 0 },
+	{ 1280, 768, 85, 0 },
+	{ 1280, 960, 60, 0 },
+	{ 1280, 960, 85, 0 },
+	{ 1280, 1024, 60, 0 },
+	{ 1280, 1024, 85, 0 },
+	/* byte 8 */
+	{ 1360, 768, 60, 0 },
+	{ 1440, 900, 60, 1 },
+	{ 1440, 900, 60, 0 },
+	{ 1440, 900, 75, 0 },
+	{ 1440, 900, 85, 0 },
+	{ 1400, 1050, 60, 1 },
+	{ 1400, 1050, 60, 0 },
+	{ 1400, 1050, 75, 0 },
+	/* byte 9 */
+	{ 1400, 1050, 85, 0 },
+	{ 1680, 1050, 60, 1 },
+	{ 1680, 1050, 60, 0 },
+	{ 1680, 1050, 75, 0 },
+	{ 1680, 1050, 85, 0 },
+	{ 1600, 1200, 60, 0 },
+	{ 1600, 1200, 65, 0 },
+	{ 1600, 1200, 70, 0 },
+	/* byte 10 */
+	{ 1600, 1200, 75, 0 },
+	{ 1600, 1200, 85, 0 },
+	{ 1792, 1344, 60, 0 },
+	{ 1792, 1344, 85, 0 },
+	{ 1856, 1392, 60, 0 },
+	{ 1856, 1392, 75, 0 },
+	{ 1920, 1200, 60, 1 },
+	{ 1920, 1200, 60, 0 },
+	/* byte 11 */
+	{ 1920, 1200, 75, 0 },
+	{ 1920, 1200, 85, 0 },
+	{ 1920, 1440, 60, 0 },
+	{ 1920, 1440, 75, 0 },
+};
+static const int num_est3_modes = sizeof(est3_modes) / sizeof(est3_modes[0]);
+
+static int
+drm_est3_modes(struct drm_connector *connector, struct detailed_timing *timing)
+{
+	int i, j, m, modes = 0;
+	struct drm_display_mode *mode;
+	u8 *est = ((u8 *)timing) + 5;
+
+	for (i = 0; i < 6; i++) {
+		for (j = 7; j >= 0; j--) {
+			m = (i * 8) + (7 - j);
+			if (m >= num_est3_modes)
+				break;
+			if (est[i] & (1 << j)) {
+				mode = drm_mode_find_dmt(connector->dev,
+							 est3_modes[m].w,
+							 est3_modes[m].h,
+							 est3_modes[m].r
+							 /*, est3_modes[m].rb */);
+				if (mode) {
+					drm_mode_probed_add(connector, mode);
+					modes++;
+				}
+			}
+		}
+	}
+
+	return modes;
+}
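
The established-timings-III bitmap runs MSB-first through bytes 6..11 of
the descriptor, so the index arithmetic above maps (byte, bit) pairs onto
est3_modes[] like this (worked by hand; note the inner loop must reach
j == 0, or bit 0 of each byte is never tested):

    /*   i = 0 (byte 6), j = 7 (MSB) -> m = 0*8 + (7-7) = 0  -> 640x350@85
     *   i = 0 (byte 6), j = 0 (LSB) -> m = 0*8 + (7-0) = 7  -> 1152x864@75
     *   i = 2 (byte 8), j = 6       -> m = 2*8 + (7-6) = 17 -> 1440x900@60 RB
     */
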
+
 static int add_detailed_modes(struct drm_connector *connector,
 			      struct detailed_timing *timing,
 			      struct edid *edid, u32 quirks, int preferred)
 {
 	int i, modes = 0;
 	struct detailed_non_pixel *data = &timing->data.other_data;
-	int timing_level = standard_timing_level(edid);
 	int gtf = (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF);
 	struct drm_display_mode *newmode;
 	struct drm_device *dev = connector->dev;
@@ -1015,7 +1432,8 @@ static int add_detailed_modes(struct drm_connector *connector,
 	switch (data->type) {
 	case EDID_DETAIL_MONITOR_RANGE:
 		if (gtf)
-			modes += drm_gtf_modes_for_range(connector, timing);
+			modes += drm_gtf_modes_for_range(connector, edid,
+							 timing);
 		break;
 	case EDID_DETAIL_STD_MODES:
 		/* Six modes per detailed section */
@@ -1024,8 +1442,8 @@ static int add_detailed_modes(struct drm_connector *connector,
 			struct drm_display_mode *newmode;
 
 			std = &data->data.timings[i];
-			newmode = drm_mode_std(dev, std, edid->revision,
-					       timing_level);
+			newmode = drm_mode_std(connector, edid, std,
+					       edid->revision);
 			if (newmode) {
 				drm_mode_probed_add(connector, newmode);
 				modes++;
@@ -1035,6 +1453,9 @@ static int add_detailed_modes(struct drm_connector *connector,
 	case EDID_DETAIL_CVT_3BYTE:
 		modes += drm_cvt_modes(connector, timing);
 		break;
+	case EDID_DETAIL_EST_TIMINGS:
+		modes += drm_est3_modes(connector, timing);
+		break;
 	default:
 		break;
 	}
@@ -1058,7 +1479,10 @@ static int add_detailed_info(struct drm_connector *connector,
 
 	for (i = 0; i < EDID_DETAILED_TIMINGS; i++) {
 		struct detailed_timing *timing = &edid->detailed_timings[i];
-		int preferred = (i == 0) && (edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING);
+		int preferred = (i == 0);
+
+		if (preferred && edid->version == 1 && edid->revision < 4)
+			preferred = (edid->features & DRM_EDID_FEATURE_PREFERRED_TIMING);
 
 		/* In 1.0, only timings are allowed */
 		if (!timing->pixel_clock && edid->version == 1 &&
@@ -1088,39 +1512,22 @@ static int add_detailed_info_eedid(struct drm_connector *connector,
 	int i, modes = 0;
 	char *edid_ext = NULL;
 	struct detailed_timing *timing;
-	int edid_ext_num;
 	int start_offset, end_offset;
-	int timing_level;
 
-	if (edid->version == 1 && edid->revision < 3) {
-		/* If the EDID version is less than 1.3, there is no
-		 * extension EDID.
-		 */
+	if (edid->version == 1 && edid->revision < 3)
 		return 0;
-	}
-	if (!edid->extensions) {
-		/* if there is no extension EDID, it is unnecessary to
-		 * parse the E-EDID to get detailed info
-		 */
+	if (!edid->extensions)
 		return 0;
-	}
-
-	/* Chose real EDID extension number */
-	edid_ext_num = edid->extensions > DRM_MAX_EDID_EXT_NUM ?
-		DRM_MAX_EDID_EXT_NUM : edid->extensions;
 
 	/* Find CEA extension */
-	for (i = 0; i < edid_ext_num; i++) {
+	for (i = 0; i < edid->extensions; i++) {
 		edid_ext = (char *)edid + EDID_LENGTH * (i + 1);
-		/* This block is CEA extension */
 		if (edid_ext[0] == 0x02)
 			break;
 	}
 
-	if (i == edid_ext_num) {
-		/* if there is no additional timing EDID block, return */
+	if (i == edid->extensions)
 		return 0;
-	}
 
 	/* Get the start offset of detailed timing block */
 	start_offset = edid_ext[2];
@@ -1132,7 +1539,6 @@ static int add_detailed_info_eedid(struct drm_connector *connector,
 		return 0;
 	}
 
-	timing_level = standard_timing_level(edid);
 	end_offset = EDID_LENGTH;
 	end_offset -= sizeof(struct detailed_timing);
 	for (i = start_offset; i < end_offset;
@@ -1144,123 +1550,6 @@ static int add_detailed_info_eedid(struct drm_connector *connector,
 	return modes;
 }
 
-#define DDC_ADDR 0x50
-/**
- * Get EDID information via I2C.
- *
- * \param adapter : i2c device adaptor
- * \param buf     : EDID data buffer to be filled
- * \param len     : EDID data buffer length
- * \return 0 on success or -1 on failure.
- *
- * Try to fetch EDID information by calling i2c driver function.
- */
-int drm_do_probe_ddc_edid(struct i2c_adapter *adapter,
-			  unsigned char *buf, int len)
-{
-	unsigned char start = 0x0;
-	struct i2c_msg msgs[] = {
-		{
-			.addr	= DDC_ADDR,
-			.flags	= 0,
-			.len	= 1,
-			.buf	= &start,
-		}, {
-			.addr	= DDC_ADDR,
-			.flags	= I2C_M_RD,
-			.len	= len,
-			.buf	= buf,
-		}
-	};
-
-	if (i2c_transfer(adapter, msgs, 2) == 2)
-		return 0;
-
-	return -1;
-}
-EXPORT_SYMBOL(drm_do_probe_ddc_edid);
-
-static int drm_ddc_read_edid(struct drm_connector *connector,
-			     struct i2c_adapter *adapter,
-			     char *buf, int len)
-{
-	int i;
-
-	for (i = 0; i < 4; i++) {
-		if (drm_do_probe_ddc_edid(adapter, buf, len))
-			return -1;
-		if (drm_edid_is_valid((struct edid *)buf))
-			return 0;
-	}
-
-	/* repeated checksum failures; warn, but carry on */
-	dev_warn(&connector->dev->pdev->dev, "%s: EDID invalid.\n",
-		 drm_get_connector_name(connector));
-	return -1;
-}
-
-/**
- * drm_get_edid - get EDID data, if available
- * @connector: connector we're probing
- * @adapter: i2c adapter to use for DDC
- *
- * Poke the given connector's i2c channel to grab EDID data if possible.
- *
- * Return edid data or NULL if we couldn't find any.
- */
-struct edid *drm_get_edid(struct drm_connector *connector,
-			  struct i2c_adapter *adapter)
-{
-	int ret;
-	struct edid *edid;
-
-	edid = kmalloc(EDID_LENGTH * (DRM_MAX_EDID_EXT_NUM + 1),
-		       GFP_KERNEL);
-	if (edid == NULL) {
-		dev_warn(&connector->dev->pdev->dev,
-			 "Failed to allocate EDID\n");
-		goto end;
-	}
-
-	/* Read first EDID block */
-	ret = drm_ddc_read_edid(connector, adapter,
-				(unsigned char *)edid, EDID_LENGTH);
-	if (ret != 0)
-		goto clean_up;
-
-	/* There are EDID extensions to be read */
-	if (edid->extensions != 0) {
-		int edid_ext_num = edid->extensions;
-
-		if (edid_ext_num > DRM_MAX_EDID_EXT_NUM) {
-			dev_warn(&connector->dev->pdev->dev,
-				 "The number of extension(%d) is "
-				 "over max (%d), actually read number (%d)\n",
-				 edid_ext_num, DRM_MAX_EDID_EXT_NUM,
-				 DRM_MAX_EDID_EXT_NUM);
-			/* Reset EDID extension number to be read */
-			edid_ext_num = DRM_MAX_EDID_EXT_NUM;
-		}
-		/* Read EDID including extensions too */
-		ret = drm_ddc_read_edid(connector, adapter, (char *)edid,
-					EDID_LENGTH * (edid_ext_num + 1));
-		if (ret != 0)
-			goto clean_up;
-
-	}
-
-	connector->display_info.raw_edid = (char *)edid;
-	goto end;
-
-clean_up:
-	kfree(edid);
-	edid = NULL;
-end:
-	return edid;
-
-}
-EXPORT_SYMBOL(drm_get_edid);
-
 #define HDMI_IDENTIFIER 0x000C03
 #define VENDOR_BLOCK    0x03
 /**
@@ -1273,7 +1562,7 @@ EXPORT_SYMBOL(drm_get_edid);
 bool drm_detect_hdmi_monitor(struct edid *edid)
 {
 	char *edid_ext = NULL;
-	int i, hdmi_id, edid_ext_num;
+	int i, hdmi_id;
 	int start_offset, end_offset;
 	bool is_hdmi = false;
 
@@ -1281,19 +1570,15 @@ bool drm_detect_hdmi_monitor(struct edid *edid)
 	if (edid == NULL || edid->extensions == 0)
 		goto end;
 
-	/* Chose real EDID extension number */
-	edid_ext_num = edid->extensions > DRM_MAX_EDID_EXT_NUM ?
-		       DRM_MAX_EDID_EXT_NUM : edid->extensions;
-
 	/* Find CEA extension */
-	for (i = 0; i < edid_ext_num; i++) {
+	for (i = 0; i < edid->extensions; i++) {
 		edid_ext = (char *)edid + EDID_LENGTH * (i + 1);
 		/* This block is CEA extension */
 		if (edid_ext[0] == 0x02)
 			break;
 	}
 
-	if (i == edid_ext_num)
+	if (i == edid->extensions)
 		goto end;
 
 	/* Data block offset in CEA extension block */
@@ -1348,10 +1633,24 @@ int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid)
 
 	quirks = edid_get_quirks(edid);
 
-	num_modes += add_established_modes(connector, edid);
-	num_modes += add_standard_modes(connector, edid);
+	/*
+	 * EDID spec says modes should be preferred in this order:
+	 * - preferred detailed mode
+	 * - other detailed modes from base block
+	 * - detailed modes from extension blocks
+	 * - CVT 3-byte code modes
+	 * - standard timing codes
+	 * - established timing codes
+	 * - modes inferred from GTF or CVT range information
+	 *
+	 * We don't quite implement this yet, but we're close.
+	 *
+	 * XXX order for additional mode types in extension blocks?
+	 */
 	num_modes += add_detailed_info(connector, edid, quirks);
 	num_modes += add_detailed_info_eedid(connector, edid, quirks);
+	num_modes += add_standard_modes(connector, edid);
+	num_modes += add_established_modes(connector, edid);
 
 	if (quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75))
 		edid_fixup_preferred(connector, quirks);
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 288ea2f..08c4c92 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -42,15 +42,33 @@ MODULE_LICENSE("GPL and additional rights");
 
 static LIST_HEAD(kernel_fb_helper_list);
 
-int drm_fb_helper_add_connector(struct drm_connector *connector)
+/* simple single crtc case helper function */
+int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
 {
-	connector->fb_helper_private = kzalloc(sizeof(struct drm_fb_helper_connector), GFP_KERNEL);
-	if (!connector->fb_helper_private)
-		return -ENOMEM;
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_connector *connector;
+	int i;
+
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+		struct drm_fb_helper_connector *fb_helper_connector;
+
+		fb_helper_connector = kzalloc(sizeof(struct drm_fb_helper_connector), GFP_KERNEL);
+		if (!fb_helper_connector)
+			goto fail;
 
+		fb_helper_connector->connector = connector;
+		fb_helper->connector_info[fb_helper->connector_count++] = fb_helper_connector;
+	}
 	return 0;
+fail:
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		kfree(fb_helper->connector_info[i]);
+		fb_helper->connector_info[i] = NULL;
+	}
+	fb_helper->connector_count = 0;
+	return -ENOMEM;
 }
-EXPORT_SYMBOL(drm_fb_helper_add_connector);
+EXPORT_SYMBOL(drm_fb_helper_single_add_all_connectors);
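
Together with drm_fb_helper_init()/drm_fb_helper_fini() below, this
replaces the old per-connector add with a one-shot snapshot of every
connector. A hedged sketch of the resulting bring-up sequence for a
hypothetical driver -- the foo_* names are invented, and 32 bpp / 4
connectors per crtc are arbitrary example values:

    static struct drm_fb_helper_funcs foo_fb_funcs = {
    	.fb_probe = foo_fb_probe,	/* allocates the fb, fills fix/var */
    };

    static int foo_fbdev_init(struct drm_device *dev,
    			  struct drm_fb_helper *helper)
    {
    	int ret;

    	helper->funcs = &foo_fb_funcs;

    	ret = drm_fb_helper_init(dev, helper,
    				 dev->mode_config.num_crtc,
    				 4 /* max connectors per crtc */);
    	if (ret)
    		return ret;

    	drm_fb_helper_single_add_all_connectors(helper);
    	return drm_fb_helper_single_fb_probe(helper, 32 /* preferred bpp */);
    }
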
 
 /**
  * drm_fb_helper_connector_parse_command_line - parse command line for connector
@@ -65,7 +83,7 @@ EXPORT_SYMBOL(drm_fb_helper_add_connector);
  *
  * enable/enable Digital/disable bit at the end
  */
-static bool drm_fb_helper_connector_parse_command_line(struct drm_connector *connector,
+static bool drm_fb_helper_connector_parse_command_line(struct drm_fb_helper_connector *fb_helper_conn,
 						       const char *mode_option)
 {
 	const char *name;
@@ -75,13 +93,13 @@ static bool drm_fb_helper_connector_parse_command_line(struct drm_connector *con
 	int yres_specified = 0, cvt = 0, rb = 0, interlace = 0, margins = 0;
 	int i;
 	enum drm_connector_force force = DRM_FORCE_UNSPECIFIED;
-	struct drm_fb_helper_connector *fb_help_conn = connector->fb_helper_private;
 	struct drm_fb_helper_cmdline_mode *cmdline_mode;
+	struct drm_connector *connector = fb_helper_conn->connector;
 
-	if (!fb_help_conn)
+	if (!fb_helper_conn)
 		return false;
 
-	cmdline_mode = &fb_help_conn->cmdline_mode;
+	cmdline_mode = &fb_helper_conn->cmdline_mode;
 	if (!mode_option)
 		mode_option = fb_mode_option;
 
@@ -204,18 +222,21 @@ done:
 	return true;
 }
 
-int drm_fb_helper_parse_command_line(struct drm_device *dev)
+static int drm_fb_helper_parse_command_line(struct drm_fb_helper *fb_helper)
 {
-	struct drm_connector *connector;
+	struct drm_fb_helper_connector *fb_helper_conn;
+	int i;
 
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+	for (i = 0; i < fb_helper->connector_count; i++) {
 		char *option = NULL;
 
+		fb_helper_conn = fb_helper->connector_info[i];
+
 		/* do something on return - turn off connector maybe */
-		if (fb_get_options(drm_get_connector_name(connector), &option))
+		if (fb_get_options(drm_get_connector_name(fb_helper_conn->connector), &option))
 			continue;
 
-		drm_fb_helper_connector_parse_command_line(connector, option);
+		drm_fb_helper_connector_parse_command_line(fb_helper_conn, option);
 	}
 	return 0;
 }
@@ -243,7 +264,7 @@ bool drm_fb_helper_force_kernel_mode(void)
 int drm_fb_helper_panic(struct notifier_block *n, unsigned long ununsed,
 			void *panic_str)
 {
-	DRM_ERROR("panic occurred, switching back to text console\n");
+	printk(KERN_ERR "panic occurred, switching back to text console\n");
 	return drm_fb_helper_force_kernel_mode();
 	return 0;
 }
@@ -293,6 +314,7 @@ static void drm_fb_helper_on(struct fb_info *info)
 	struct drm_fb_helper *fb_helper = info->par;
 	struct drm_device *dev = fb_helper->dev;
 	struct drm_crtc *crtc;
+	struct drm_crtc_helper_funcs *crtc_funcs;
 	struct drm_encoder *encoder;
 	int i;
 
@@ -300,33 +322,28 @@ static void drm_fb_helper_on(struct fb_info *info)
 	 * For each CRTC in this fb, turn the crtc on then,
 	 * find all associated encoders and turn them on.
 	 */
+	mutex_lock(&dev->mode_config.mutex);
 	for (i = 0; i < fb_helper->crtc_count; i++) {
-		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-			struct drm_crtc_helper_funcs *crtc_funcs =
-				crtc->helper_private;
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+		crtc_funcs = crtc->helper_private;
 
-			/* Only mess with CRTCs in this fb */
-			if (crtc->base.id != fb_helper->crtc_info[i].crtc_id ||
-			    !crtc->enabled)
-				continue;
+		if (!crtc->enabled)
+			continue;
 
-			mutex_lock(&dev->mode_config.mutex);
-			crtc_funcs->dpms(crtc, DRM_MODE_DPMS_ON);
-			mutex_unlock(&dev->mode_config.mutex);
+		crtc_funcs->dpms(crtc, DRM_MODE_DPMS_ON);
 
-			/* Found a CRTC on this fb, now find encoders */
-			list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
-				if (encoder->crtc == crtc) {
-					struct drm_encoder_helper_funcs *encoder_funcs;
 
-					encoder_funcs = encoder->helper_private;
-					mutex_lock(&dev->mode_config.mutex);
-					encoder_funcs->dpms(encoder, DRM_MODE_DPMS_ON);
-					mutex_unlock(&dev->mode_config.mutex);
-				}
+		/* Found a CRTC on this fb, now find encoders */
+		list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+			if (encoder->crtc == crtc) {
+				struct drm_encoder_helper_funcs *encoder_funcs;
+
+				encoder_funcs = encoder->helper_private;
+				encoder_funcs->dpms(encoder, DRM_MODE_DPMS_ON);
 			}
 		}
 	}
+	mutex_unlock(&dev->mode_config.mutex);
 }
 
 static void drm_fb_helper_off(struct fb_info *info, int dpms_mode)
@@ -334,6 +351,7 @@ static void drm_fb_helper_off(struct fb_info *info, int dpms_mode)
 	struct drm_fb_helper *fb_helper = info->par;
 	struct drm_device *dev = fb_helper->dev;
 	struct drm_crtc *crtc;
+	struct drm_crtc_helper_funcs *crtc_funcs;
 	struct drm_encoder *encoder;
 	int i;
 
@@ -341,32 +359,26 @@ static void drm_fb_helper_off(struct fb_info *info, int dpms_mode)
 	 * For each CRTC in this fb, find all associated encoders
 	 * and turn them off, then turn off the CRTC.
 	 */
+	mutex_lock(&dev->mode_config.mutex);
 	for (i = 0; i < fb_helper->crtc_count; i++) {
-		list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-			struct drm_crtc_helper_funcs *crtc_funcs =
-				crtc->helper_private;
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+		crtc_funcs = crtc->helper_private;
 
-			/* Only mess with CRTCs in this fb */
-			if (crtc->base.id != fb_helper->crtc_info[i].crtc_id ||
-			    !crtc->enabled)
-				continue;
+		if (!crtc->enabled)
+			continue;
 
-			/* Found a CRTC on this fb, now find encoders */
-			list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
-				if (encoder->crtc == crtc) {
-					struct drm_encoder_helper_funcs *encoder_funcs;
+		/* Found a CRTC on this fb, now find encoders */
+		list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+			if (encoder->crtc == crtc) {
+				struct drm_encoder_helper_funcs *encoder_funcs;
 
-					encoder_funcs = encoder->helper_private;
-					mutex_lock(&dev->mode_config.mutex);
-					encoder_funcs->dpms(encoder, dpms_mode);
-					mutex_unlock(&dev->mode_config.mutex);
-				}
+				encoder_funcs = encoder->helper_private;
+				encoder_funcs->dpms(encoder, dpms_mode);
 			}
-			mutex_lock(&dev->mode_config.mutex);
-			crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF);
-			mutex_unlock(&dev->mode_config.mutex);
 		}
+		crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF);
 	}
+	mutex_unlock(&dev->mode_config.mutex);
 }
 
 int drm_fb_helper_blank(int blank, struct fb_info *info)
@@ -401,50 +413,81 @@ static void drm_fb_helper_crtc_free(struct drm_fb_helper *helper)
 {
 	int i;
 
+	for (i = 0; i < helper->connector_count; i++)
+		kfree(helper->connector_info[i]);
+	kfree(helper->connector_info);
 	for (i = 0; i < helper->crtc_count; i++)
 		kfree(helper->crtc_info[i].mode_set.connectors);
 	kfree(helper->crtc_info);
 }
 
-int drm_fb_helper_init_crtc_count(struct drm_fb_helper *helper, int crtc_count, int max_conn_count)
+int drm_fb_helper_init(struct drm_device *dev,
+		       struct drm_fb_helper *fb_helper,
+		       int crtc_count, int max_conn_count)
 {
-	struct drm_device *dev = helper->dev;
 	struct drm_crtc *crtc;
 	int ret = 0;
 	int i;
 
-	helper->crtc_info = kcalloc(crtc_count, sizeof(struct drm_fb_helper_crtc), GFP_KERNEL);
-	if (!helper->crtc_info)
+	fb_helper->dev = dev;
+
+	INIT_LIST_HEAD(&fb_helper->kernel_fb_list);
+
+	fb_helper->crtc_info = kcalloc(crtc_count, sizeof(struct drm_fb_helper_crtc), GFP_KERNEL);
+	if (!fb_helper->crtc_info)
 		return -ENOMEM;
 
-	helper->crtc_count = crtc_count;
+	fb_helper->crtc_count = crtc_count;
+	fb_helper->connector_info = kcalloc(dev->mode_config.num_connector, sizeof(struct drm_fb_helper_connector *), GFP_KERNEL);
+	if (!fb_helper->connector_info) {
+		kfree(fb_helper->crtc_info);
+		return -ENOMEM;
+	}
+	fb_helper->connector_count = 0;
 
 	for (i = 0; i < crtc_count; i++) {
-		helper->crtc_info[i].mode_set.connectors =
+		fb_helper->crtc_info[i].mode_set.connectors =
 			kcalloc(max_conn_count,
 				sizeof(struct drm_connector *),
 				GFP_KERNEL);
 
-		if (!helper->crtc_info[i].mode_set.connectors) {
+		if (!fb_helper->crtc_info[i].mode_set.connectors) {
 			ret = -ENOMEM;
 			goto out_free;
 		}
-		helper->crtc_info[i].mode_set.num_connectors = 0;
+		fb_helper->crtc_info[i].mode_set.num_connectors = 0;
 	}
 
 	i = 0;
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		helper->crtc_info[i].crtc_id = crtc->base.id;
-		helper->crtc_info[i].mode_set.crtc = crtc;
+		fb_helper->crtc_info[i].crtc_id = crtc->base.id;
+		fb_helper->crtc_info[i].mode_set.crtc = crtc;
 		i++;
 	}
-	helper->conn_limit = max_conn_count;
+	fb_helper->conn_limit = max_conn_count;
 	return 0;
 out_free:
-	drm_fb_helper_crtc_free(helper);
+	drm_fb_helper_crtc_free(fb_helper);
 	return -ENOMEM;
 }
-EXPORT_SYMBOL(drm_fb_helper_init_crtc_count);
+EXPORT_SYMBOL(drm_fb_helper_init);
+
+void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
+{
+	if (!list_empty(&fb_helper->kernel_fb_list)) {
+		list_del(&fb_helper->kernel_fb_list);
+		if (list_empty(&kernel_fb_helper_list)) {
+			printk(KERN_INFO "drm: unregistered panic notifier\n");
+			atomic_notifier_chain_unregister(&panic_notifier_list,
+							 &paniced);
+			unregister_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
+		}
+	}
+
+	drm_fb_helper_crtc_free(fb_helper);
+}
+EXPORT_SYMBOL(drm_fb_helper_fini);
 
 static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
 		     u16 blue, u16 regno, struct fb_info *info)
@@ -508,20 +551,15 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
 int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
 {
 	struct drm_fb_helper *fb_helper = info->par;
-	struct drm_device *dev = fb_helper->dev;
+	struct drm_crtc_helper_funcs *crtc_funcs;
 	u16 *red, *green, *blue, *transp;
 	struct drm_crtc *crtc;
 	int i, rc = 0;
 	int start;
 
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
-		for (i = 0; i < fb_helper->crtc_count; i++) {
-			if (crtc->base.id == fb_helper->crtc_info[i].crtc_id)
-				break;
-		}
-		if (i == fb_helper->crtc_count)
-			continue;
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+		crtc_funcs = crtc->helper_private;
 
 		red = cmap->red;
 		green = cmap->green;
@@ -549,41 +587,6 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
 }
 EXPORT_SYMBOL(drm_fb_helper_setcmap);
 
-int drm_fb_helper_setcolreg(unsigned regno,
-			    unsigned red,
-			    unsigned green,
-			    unsigned blue,
-			    unsigned transp,
-			    struct fb_info *info)
-{
-	struct drm_fb_helper *fb_helper = info->par;
-	struct drm_device *dev = fb_helper->dev;
-	struct drm_crtc *crtc;
-	int i;
-	int ret;
-
-	if (regno > 255)
-		return 1;
-
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
-		for (i = 0; i < fb_helper->crtc_count; i++) {
-			if (crtc->base.id == fb_helper->crtc_info[i].crtc_id)
-				break;
-		}
-		if (i == fb_helper->crtc_count)
-			continue;
-
-		ret = setcolreg(crtc, red, green, blue, regno, info);
-		if (ret)
-			return ret;
-
-		crtc_funcs->load_lut(crtc);
-	}
-	return 0;
-}
-EXPORT_SYMBOL(drm_fb_helper_setcolreg);
-
 int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
 			    struct fb_info *info)
 {
@@ -687,23 +690,21 @@ int drm_fb_helper_set_par(struct fb_info *info)
 		return -EINVAL;
 	}
 
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-
-		for (i = 0; i < fb_helper->crtc_count; i++) {
-			if (crtc->base.id == fb_helper->crtc_info[i].crtc_id)
-				break;
-		}
-		if (i == fb_helper->crtc_count)
-			continue;
-
-		if (crtc->fb == fb_helper->crtc_info[i].mode_set.fb) {
-			mutex_lock(&dev->mode_config.mutex);
-			ret = crtc->funcs->set_config(&fb_helper->crtc_info[i].mode_set);
+	mutex_lock(&dev->mode_config.mutex);
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+		ret = crtc->funcs->set_config(&fb_helper->crtc_info[i].mode_set);
+		if (ret) {
 			mutex_unlock(&dev->mode_config.mutex);
-			if (ret)
-				return ret;
+			return ret;
 		}
 	}
+	mutex_unlock(&dev->mode_config.mutex);
+
+	if (fb_helper->delayed_hotplug) {
+		fb_helper->delayed_hotplug = false;
+		drm_fb_helper_hotplug_event(fb_helper);
+	}
 	return 0;
 }
 EXPORT_SYMBOL(drm_fb_helper_set_par);
@@ -718,14 +719,9 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
 	int ret = 0;
 	int i;
 
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		for (i = 0; i < fb_helper->crtc_count; i++) {
-			if (crtc->base.id == fb_helper->crtc_info[i].crtc_id)
-				break;
-		}
-
-		if (i == fb_helper->crtc_count)
-			continue;
+	mutex_lock(&dev->mode_config.mutex);
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
 
 		modeset = &fb_helper->crtc_info[i].mode_set;
 
@@ -733,209 +729,138 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
 		modeset->y = var->yoffset;
 
 		if (modeset->num_connectors) {
-			mutex_lock(&dev->mode_config.mutex);
 			ret = crtc->funcs->set_config(modeset);
-			mutex_unlock(&dev->mode_config.mutex);
 			if (!ret) {
 				info->var.xoffset = var->xoffset;
 				info->var.yoffset = var->yoffset;
 			}
 		}
 	}
+	mutex_unlock(&dev->mode_config.mutex);
 	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_pan_display);
 
-int drm_fb_helper_single_fb_probe(struct drm_device *dev,
-				  int preferred_bpp,
-				  int (*fb_create)(struct drm_device *dev,
-						   uint32_t fb_width,
-						   uint32_t fb_height,
-						   uint32_t surface_width,
-						   uint32_t surface_height,
-						   uint32_t surface_depth,
-						   uint32_t surface_bpp,
-						   struct drm_framebuffer **fb_ptr))
+int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
+				  int preferred_bpp)
 {
-	struct drm_crtc *crtc;
-	struct drm_connector *connector;
-	unsigned int fb_width = (unsigned)-1, fb_height = (unsigned)-1;
-	unsigned int surface_width = 0, surface_height = 0;
 	int new_fb = 0;
 	int crtc_count = 0;
-	int ret, i, conn_count = 0;
+	int i;
 	struct fb_info *info;
-	struct drm_framebuffer *fb;
-	struct drm_mode_set *modeset = NULL;
-	struct drm_fb_helper *fb_helper;
-	uint32_t surface_depth = 24, surface_bpp = 32;
+	struct drm_fb_helper_surface_size sizes;
+	int gamma_size = 0;
+
+	memset(&sizes, 0, sizeof(struct drm_fb_helper_surface_size));
+	sizes.surface_depth = 24;
+	sizes.surface_bpp = 32;
+	sizes.fb_width = (unsigned)-1;
+	sizes.fb_height = (unsigned)-1;
 
 	/* if driver picks 8 or 16 by default use that
 	   for both depth/bpp */
-	if (preferred_bpp != surface_bpp) {
-		surface_depth = surface_bpp = preferred_bpp;
+	if (preferred_bpp != sizes.surface_bpp) {
+		sizes.surface_depth = sizes.surface_bpp = preferred_bpp;
 	}
 	/* first up get a count of crtcs now in use and new min/maxes width/heights */
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		struct drm_fb_helper_connector *fb_help_conn = connector->fb_helper_private;
-
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		struct drm_fb_helper_connector *fb_helper_conn = fb_helper->connector_info[i];
 		struct drm_fb_helper_cmdline_mode *cmdline_mode;
 
-		if (!fb_help_conn)
-			continue;
-		
-		cmdline_mode = &fb_help_conn->cmdline_mode;
+		cmdline_mode = &fb_helper_conn->cmdline_mode;
 
 		if (cmdline_mode->bpp_specified) {
 			switch (cmdline_mode->bpp) {
 			case 8:
-				surface_depth = surface_bpp = 8;
+				sizes.surface_depth = sizes.surface_bpp = 8;
 				break;
 			case 15:
-				surface_depth = 15;
-				surface_bpp = 16;
+				sizes.surface_depth = 15;
+				sizes.surface_bpp = 16;
 				break;
 			case 16:
-				surface_depth = surface_bpp = 16;
+				sizes.surface_depth = sizes.surface_bpp = 16;
 				break;
 			case 24:
-				surface_depth = surface_bpp = 24;
+				sizes.surface_depth = sizes.surface_bpp = 24;
 				break;
 			case 32:
-				surface_depth = 24;
-				surface_bpp = 32;
+				sizes.surface_depth = 24;
+				sizes.surface_bpp = 32;
 				break;
 			}
 			break;
 		}
 	}
 
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		if (drm_helper_crtc_in_use(crtc)) {
-			if (crtc->desired_mode) {
-				if (crtc->desired_mode->hdisplay < fb_width)
-					fb_width = crtc->desired_mode->hdisplay;
-
-				if (crtc->desired_mode->vdisplay < fb_height)
-					fb_height = crtc->desired_mode->vdisplay;
-
-				if (crtc->desired_mode->hdisplay > surface_width)
-					surface_width = crtc->desired_mode->hdisplay;
-
-				if (crtc->desired_mode->vdisplay > surface_height)
-					surface_height = crtc->desired_mode->vdisplay;
-			}
+	crtc_count = 0;
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		struct drm_display_mode *desired_mode;
+		desired_mode = fb_helper->crtc_info[i].desired_mode;
+
+		if (desired_mode) {
+			if (gamma_size == 0)
+				gamma_size = fb_helper->crtc_info[i].mode_set.crtc->gamma_size;
+			if (desired_mode->hdisplay < sizes.fb_width)
+				sizes.fb_width = desired_mode->hdisplay;
+			if (desired_mode->vdisplay < sizes.fb_height)
+				sizes.fb_height = desired_mode->vdisplay;
+			if (desired_mode->hdisplay > sizes.surface_width)
+				sizes.surface_width = desired_mode->hdisplay;
+			if (desired_mode->vdisplay > sizes.surface_height)
+				sizes.surface_height = desired_mode->vdisplay;
 			crtc_count++;
 		}
 	}
 
-	if (crtc_count == 0 || fb_width == -1 || fb_height == -1) {
+	if (crtc_count == 0 || sizes.fb_width == -1 || sizes.fb_height == -1) {
 		/* hmm everyone went away - assume VGA cable just fell out
 		   and will come back later. */
-		return 0;
+		DRM_INFO("Cannot find any crtc or sizes - going 1024x768\n");
+		sizes.fb_width = sizes.surface_width = 1024;
+		sizes.fb_height = sizes.surface_height = 768;
 	}
 
-	/* do we have an fb already? */
-	if (list_empty(&dev->mode_config.fb_kernel_list)) {
-		ret = (*fb_create)(dev, fb_width, fb_height, surface_width,
-				   surface_height, surface_depth, surface_bpp,
-				   &fb);
-		if (ret)
-			return -EINVAL;
-		new_fb = 1;
-	} else {
-		fb = list_first_entry(&dev->mode_config.fb_kernel_list,
-				      struct drm_framebuffer, filp_head);
-
-		/* if someone hotplugs something bigger than we have already allocated, we are pwned.
-		   As really we can't resize an fbdev that is in the wild currently due to fbdev
-		   not really being designed for the lower layers moving stuff around under it.
-		   - so in the grand style of things - punt. */
-		if ((fb->width < surface_width) ||
-		    (fb->height < surface_height)) {
-			DRM_ERROR("Framebuffer not large enough to scale console onto.\n");
-			return -EINVAL;
-		}
-	}
-
-	info = fb->fbdev;
-	fb_helper = info->par;
-
-	crtc_count = 0;
-	/* okay we need to setup new connector sets in the crtcs */
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		modeset = &fb_helper->crtc_info[crtc_count].mode_set;
-		modeset->fb = fb;
-		conn_count = 0;
-		list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-			if (connector->encoder)
-				if (connector->encoder->crtc == modeset->crtc) {
-					modeset->connectors[conn_count] = connector;
-					conn_count++;
-					if (conn_count > fb_helper->conn_limit)
-						BUG();
-				}
-		}
-
-		for (i = conn_count; i < fb_helper->conn_limit; i++)
-			modeset->connectors[i] = NULL;
+	/* push down into drivers */
+	new_fb = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
+	if (new_fb < 0)
+		return new_fb;
 
-		modeset->crtc = crtc;
-		crtc_count++;
+	info = fb_helper->fbdev;
 
-		modeset->num_connectors = conn_count;
-		if (modeset->crtc->desired_mode) {
-			if (modeset->mode)
-				drm_mode_destroy(dev, modeset->mode);
-			modeset->mode = drm_mode_duplicate(dev,
-							   modeset->crtc->desired_mode);
-		}
+	/* set the fb pointer */
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		fb_helper->crtc_info[i].mode_set.fb = fb_helper->fb;
 	}
-	fb_helper->crtc_count = crtc_count;
-	fb_helper->fb = fb;
 
 	if (new_fb) {
 		info->var.pixclock = 0;
-		ret = fb_alloc_cmap(&info->cmap, modeset->crtc->gamma_size, 0);
-		if (ret)
-			return ret;
 		if (register_framebuffer(info) < 0) {
-			fb_dealloc_cmap(&info->cmap);
 			return -EINVAL;
 		}
+
+		printk(KERN_INFO "fb%d: %s frame buffer device\n", info->node,
+		       info->fix.id);
+
 	} else {
 		drm_fb_helper_set_par(info);
 	}
-	printk(KERN_INFO "fb%d: %s frame buffer device\n", info->node,
-	       info->fix.id);
 
 	/* Switch back to kernel console on panic */
 	/* multi card linked list maybe */
 	if (list_empty(&kernel_fb_helper_list)) {
-		printk(KERN_INFO "registered panic notifier\n");
+		printk(KERN_INFO "drm: registered panic notifier\n");
 		atomic_notifier_chain_register(&panic_notifier_list,
 					       &paniced);
 		register_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
 	}
-	list_add(&fb_helper->kernel_fb_list, &kernel_fb_helper_list);
+	if (new_fb)
+		list_add(&fb_helper->kernel_fb_list, &kernel_fb_helper_list);
+
 	return 0;
 }
 EXPORT_SYMBOL(drm_fb_helper_single_fb_probe);
 
-void drm_fb_helper_free(struct drm_fb_helper *helper)
-{
-	list_del(&helper->kernel_fb_list);
-	if (list_empty(&kernel_fb_helper_list)) {
-		printk(KERN_INFO "unregistered panic notifier\n");
-		atomic_notifier_chain_unregister(&panic_notifier_list,
-						 &paniced);
-		unregister_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
-	}
-	drm_fb_helper_crtc_free(helper);
-	fb_dealloc_cmap(&helper->fb->fbdev->cmap);
-}
-EXPORT_SYMBOL(drm_fb_helper_free);
-
 void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch,
 			    uint32_t depth)
 {
@@ -954,10 +879,11 @@ void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch,
 }
 EXPORT_SYMBOL(drm_fb_helper_fill_fix);
 
-void drm_fb_helper_fill_var(struct fb_info *info, struct drm_framebuffer *fb,
+void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helper,
 			    uint32_t fb_width, uint32_t fb_height)
 {
-	info->pseudo_palette = fb->pseudo_palette;
+	struct drm_framebuffer *fb = fb_helper->fb;
+	info->pseudo_palette = fb_helper->pseudo_palette;
 	info->var.xres_virtual = fb->width;
 	info->var.yres_virtual = fb->height;
 	info->var.bits_per_pixel = fb->bits_per_pixel;
@@ -1025,3 +951,457 @@ void drm_fb_helper_fill_var(struct fb_info *info, struct drm_framebuffer *fb,
 	info->var.yres = fb_height;
 }
 EXPORT_SYMBOL(drm_fb_helper_fill_var);
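
fill_fix()/fill_var() now take the helper rather than a bare framebuffer
(the pseudo palette moved into drm_fb_helper). Inside a driver's
->fb_probe() hook the tail end would look roughly like this -- fb and
info are assumed to have been allocated by hypothetical driver code, and
sizes is the drm_fb_helper_surface_size passed in:

    /* sketch: end of a hypothetical foo_fb_probe(fb_helper, sizes) */
    fb_helper->fb = fb;		/* driver-created drm_framebuffer */
    fb_helper->fbdev = info;	/* driver-allocated fb_info */

    drm_fb_helper_fill_fix(info, fb->pitch, fb->depth);
    drm_fb_helper_fill_var(info, fb_helper, sizes->fb_width, sizes->fb_height);

    return 1;	/* nonzero tells single_fb_probe a new fb was created */
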
+
+static int drm_fb_helper_probe_connector_modes(struct drm_fb_helper *fb_helper,
+					       uint32_t maxX,
+					       uint32_t maxY)
+{
+	struct drm_connector *connector;
+	int count = 0;
+	int i;
+
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		connector = fb_helper->connector_info[i]->connector;
+		count += connector->funcs->fill_modes(connector, maxX, maxY);
+	}
+
+	return count;
+}
+
+static struct drm_display_mode *drm_has_preferred_mode(struct drm_fb_helper_connector *fb_connector, int width, int height)
+{
+	struct drm_display_mode *mode;
+
+	list_for_each_entry(mode, &fb_connector->connector->modes, head) {
+		if (drm_mode_width(mode) > width ||
+		    drm_mode_height(mode) > height)
+			continue;
+		if (mode->type & DRM_MODE_TYPE_PREFERRED)
+			return mode;
+	}
+	return NULL;
+}
+
+static bool drm_has_cmdline_mode(struct drm_fb_helper_connector *fb_connector)
+{
+	struct drm_fb_helper_cmdline_mode *cmdline_mode;
+	cmdline_mode = &fb_connector->cmdline_mode;
+	return cmdline_mode->specified;
+}
+
+static struct drm_display_mode *drm_pick_cmdline_mode(struct drm_fb_helper_connector *fb_helper_conn,
+						      int width, int height)
+{
+	struct drm_fb_helper_cmdline_mode *cmdline_mode;
+	struct drm_display_mode *mode = NULL;
+
+	cmdline_mode = &fb_helper_conn->cmdline_mode;
+	if (cmdline_mode->specified == false)
+		return mode;
+
+	/* attempt to find a matching mode in the list of modes we have
+	 * gotten so far; if none matches, add a CVT mode that conforms
+	 */
+	if (cmdline_mode->rb || cmdline_mode->margins)
+		goto create_mode;
+
+	list_for_each_entry(mode, &fb_helper_conn->connector->modes, head) {
+		/* check width/height */
+		if (mode->hdisplay != cmdline_mode->xres ||
+		    mode->vdisplay != cmdline_mode->yres)
+			continue;
+
+		if (cmdline_mode->refresh_specified) {
+			if (mode->vrefresh != cmdline_mode->refresh)
+				continue;
+		}
+
+		if (cmdline_mode->interlace) {
+			if (!(mode->flags & DRM_MODE_FLAG_INTERLACE))
+				continue;
+		}
+		return mode;
+	}
+
+create_mode:
+	mode = drm_cvt_mode(fb_helper_conn->connector->dev, cmdline_mode->xres,
+			    cmdline_mode->yres,
+			    cmdline_mode->refresh_specified ? cmdline_mode->refresh : 60,
+			    cmdline_mode->rb, cmdline_mode->interlace,
+			    cmdline_mode->margins);
+	drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
+	list_add(&mode->head, &fb_helper_conn->connector->modes);
+	return mode;
+}
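
The cmdline modes matched here come from the kernel command line parsed
earlier in this file. An illustrative boot parameter (fb modedb syntax,
<connector>:<xres>x<yres>[M][R][-<bpp>][@<refresh>][i][m]):

    /*     video=DVI-I-1:1280x1024-24@60
     *
     * With no exact 1280x1024@60 match in the probed list, the code falls
     * through to create_mode and synthesizes a conforming CVT timing.
     */
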
+
+static bool drm_connector_enabled(struct drm_connector *connector, bool strict)
+{
+	bool enable;
+
+	if (strict) {
+		enable = connector->status == connector_status_connected;
+	} else {
+		enable = connector->status != connector_status_disconnected;
+	}
+	return enable;
+}
+
+static void drm_enable_connectors(struct drm_fb_helper *fb_helper,
+				  bool *enabled)
+{
+	bool any_enabled = false;
+	struct drm_connector *connector;
+	int i = 0;
+
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		connector = fb_helper->connector_info[i]->connector;
+		enabled[i] = drm_connector_enabled(connector, true);
+		DRM_DEBUG_KMS("connector %d enabled? %s\n", connector->base.id,
+			  enabled[i] ? "yes" : "no");
+		any_enabled |= enabled[i];
+	}
+
+	if (any_enabled)
+		return;
+
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		connector = fb_helper->connector_info[i]->connector;
+		enabled[i] = drm_connector_enabled(connector, false);
+	}
+}
+
+static bool drm_target_cloned(struct drm_fb_helper *fb_helper,
+			      struct drm_display_mode **modes,
+			      bool *enabled, int width, int height)
+{
+	int count, i, j;
+	bool can_clone = false;
+	struct drm_fb_helper_connector *fb_helper_conn;
+	struct drm_display_mode *dmt_mode, *mode;
+
+	/* only contemplate cloning in the single crtc case */
+	if (fb_helper->crtc_count > 1)
+		return false;
+
+	count = 0;
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		if (enabled[i])
+			count++;
+	}
+
+	/* only contemplate cloning if more than one connector is enabled */
+	if (count <= 1)
+		return false;
+
+	/* check the command line or if nothing common pick 1024x768 */
+	can_clone = true;
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		if (!enabled[i])
+			continue;
+		fb_helper_conn = fb_helper->connector_info[i];
+		modes[i] = drm_pick_cmdline_mode(fb_helper_conn, width, height);
+		if (!modes[i]) {
+			can_clone = false;
+			break;
+		}
+		for (j = 0; j < i; j++) {
+			if (!enabled[j])
+				continue;
+			if (!drm_mode_equal(modes[j], modes[i]))
+				can_clone = false;
+		}
+	}
+
+	if (can_clone) {
+		DRM_DEBUG_KMS("can clone using command line\n");
+		return true;
+	}
+
+	/* try and find a 1024x768 mode on each connector */
+	can_clone = true;
+	dmt_mode = drm_mode_find_dmt(fb_helper->dev, 1024, 768, 60);
+
+	for (i = 0; i < fb_helper->connector_count; i++) {
+
+		if (!enabled[i])
+			continue;
+
+		fb_helper_conn = fb_helper->connector_info[i];
+		list_for_each_entry(mode, &fb_helper_conn->connector->modes, head) {
+			if (drm_mode_equal(mode, dmt_mode))
+				modes[i] = mode;
+		}
+		if (!modes[i])
+			can_clone = false;
+	}
+
+	if (can_clone) {
+		DRM_DEBUG_KMS("can clone using 1024x768\n");
+		return true;
+	}
+	DRM_INFO("kms: can't enable cloning when we probably wanted to.\n");
+	return false;
+}
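
    A concrete reading of the two passes (connector setup hypothetical): with
    a laptop panel and a VGA monitor both enabled on a single-CRTC device,
    the first pass succeeds only if a video= mode was given for every enabled
    connector and the modes compare equal; otherwise the second pass lights
    both outputs with the 1024x768@60 DMT mode, provided each connector's
    probed list contains it.
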
+
+static bool drm_target_preferred(struct drm_fb_helper *fb_helper,
+				 struct drm_display_mode **modes,
+				 bool *enabled, int width, int height)
+{
+	struct drm_fb_helper_connector *fb_helper_conn;
+	int i;
+
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		fb_helper_conn = fb_helper->connector_info[i];
+
+		if (enabled[i] == false)
+			continue;
+
+		DRM_DEBUG_KMS("looking for cmdline mode on connector %d\n",
+			      fb_helper_conn->connector->base.id);
+
+		/* go for the command line mode first */
+		modes[i] = drm_pick_cmdline_mode(fb_helper_conn, width, height);
+		if (!modes[i]) {
+			DRM_DEBUG_KMS("looking for preferred mode on connector %d\n",
+				      fb_helper_conn->connector->base.id);
+			modes[i] = drm_has_preferred_mode(fb_helper_conn, width, height);
+		}
+		/* No preferred modes, pick one off the list */
+		if (!modes[i] && !list_empty(&fb_helper_conn->connector->modes)) {
+			list_for_each_entry(modes[i], &fb_helper_conn->connector->modes, head)
+				break;
+		}
+		DRM_DEBUG_KMS("found mode %s\n", modes[i] ? modes[i]->name :
+			  "none");
+	}
+	return true;
+}
+
+static int drm_pick_crtcs(struct drm_fb_helper *fb_helper,
+			  struct drm_fb_helper_crtc **best_crtcs,
+			  struct drm_display_mode **modes,
+			  int n, int width, int height)
+{
+	int c, o;
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_connector *connector;
+	struct drm_connector_helper_funcs *connector_funcs;
+	struct drm_encoder *encoder;
+	struct drm_fb_helper_crtc *best_crtc;
+	int my_score, best_score, score;
+	struct drm_fb_helper_crtc **crtcs, *crtc;
+	struct drm_fb_helper_connector *fb_helper_conn;
+
+	if (n == fb_helper->connector_count)
+		return 0;
+
+	fb_helper_conn = fb_helper->connector_info[n];
+	connector = fb_helper_conn->connector;
+
+	best_crtcs[n] = NULL;
+	best_crtc = NULL;
+	best_score = drm_pick_crtcs(fb_helper, best_crtcs, modes, n+1, width, height);
+	if (modes[n] == NULL)
+		return best_score;
+
+	crtcs = kzalloc(dev->mode_config.num_connector *
+			sizeof(struct drm_fb_helper_crtc *), GFP_KERNEL);
+	if (!crtcs)
+		return best_score;
+
+	my_score = 1;
+	if (connector->status == connector_status_connected)
+		my_score++;
+	if (drm_has_cmdline_mode(fb_helper_conn))
+		my_score++;
+	if (drm_has_preferred_mode(fb_helper_conn, width, height))
+		my_score++;
+
+	connector_funcs = connector->helper_private;
+	encoder = connector_funcs->best_encoder(connector);
+	if (!encoder)
+		goto out;
+
+	/* select a crtc for this connector and then attempt to configure
+	   remaining connectors */
+	for (c = 0; c < fb_helper->crtc_count; c++) {
+		crtc = &fb_helper->crtc_info[c];
+
+		if ((encoder->possible_crtcs & (1 << c)) == 0) {
+			continue;
+		}
+
+		for (o = 0; o < n; o++)
+			if (best_crtcs[o] == crtc)
+				break;
+
+		if (o < n) {
+			/* ignore cloning unless only a single crtc */
+			if (fb_helper->crtc_count > 1)
+				continue;
+
+			if (!drm_mode_equal(modes[o], modes[n]))
+				continue;
+		}
+
+		crtcs[n] = crtc;
+		memcpy(crtcs, best_crtcs, n * sizeof(struct drm_fb_helper_crtc *));
+		score = my_score + drm_pick_crtcs(fb_helper, crtcs, modes, n + 1,
+						  width, height);
+		if (score > best_score) {
+			best_crtc = crtc;
+			best_score = score;
+			memcpy(best_crtcs, crtcs,
+			       dev->mode_config.num_connector *
+			       sizeof(struct drm_fb_helper_crtc *));
+		}
+	}
+out:
+	kfree(crtcs);
+	return best_score;
+}
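
    The recursion above tries, for connector n, every CRTC its encoder can
    reach, weights each choice by how desirable the connector is (connected,
    has a command line mode, has a preferred mode), and keeps the assignment
    with the highest total score. A self-contained userspace sketch of the
    same search, with the scores and possible-CRTC data made up purely for
    illustration:

	#include <stdio.h>
	#include <string.h>

	#define N_CONN 3
	#define N_CRTC 2

	/* made-up per-connector bonuses (connected / cmdline / preferred) */
	static const int my_score[N_CONN] = { 2, 3, 1 };
	/* made-up encoder->possible_crtcs bitmaps, one flag per crtc */
	static const int possible[N_CONN][N_CRTC] = {
		{ 1, 0 },
		{ 1, 1 },
		{ 0, 1 },
	};

	/*
	 * Best total score for connectors n..N_CONN-1; on return "best"
	 * holds the winning crtc per connector (-1 = leave it unlit),
	 * mirroring how best_crtcs is filled in the patch.  Cloning is
	 * omitted in this sketch.
	 */
	static int pick(int n, int best[N_CONN])
	{
		int trial[N_CONN], best_score, c;

		if (n == N_CONN)
			return 0;

		/* baseline: leave connector n unlit */
		best[n] = -1;
		best_score = pick(n + 1, best);

		for (c = 0; c < N_CRTC; c++) {
			int o, s;

			if (!possible[n][c])
				continue;
			for (o = 0; o < n; o++)
				if (best[o] == c)
					break;
			if (o < n)	/* crtc already taken */
				continue;

			memcpy(trial, best, sizeof(trial));
			trial[n] = c;
			s = my_score[n] + pick(n + 1, trial);
			if (s > best_score) {
				best_score = s;
				memcpy(best, trial, sizeof(trial));
			}
		}
		return best_score;
	}

	int main(void)
	{
		int best[N_CONN];
		int s = pick(0, best);

		/* prints "score 5: 0 1 -1" with the sample data */
		printf("score %d: %d %d %d\n", s, best[0], best[1], best[2]);
		return 0;
	}

    With the sample data, connector 1 (score 3) wins CRTC 1 over connector 2
    (score 1), so the best assignment lights connectors 0 and 1 and leaves
    connector 2 off.
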
+
+static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
+{
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_fb_helper_crtc **crtcs;
+	struct drm_display_mode **modes;
+	struct drm_encoder *encoder;
+	struct drm_mode_set *modeset;
+	bool *enabled;
+	int width, height;
+	int i, ret;
+
+	DRM_DEBUG_KMS("\n");
+
+	width = dev->mode_config.max_width;
+	height = dev->mode_config.max_height;
+
+	/* clean out all the encoder/crtc combos */
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		encoder->crtc = NULL;
+	}
+
+	crtcs = kcalloc(dev->mode_config.num_connector,
+			sizeof(struct drm_fb_helper_crtc *), GFP_KERNEL);
+	modes = kcalloc(dev->mode_config.num_connector,
+			sizeof(struct drm_display_mode *), GFP_KERNEL);
+	enabled = kcalloc(dev->mode_config.num_connector,
+			  sizeof(bool), GFP_KERNEL);
+
+	drm_enable_connectors(fb_helper, enabled);
+
+	ret = drm_target_cloned(fb_helper, modes, enabled, width, height);
+	if (!ret) {
+		ret = drm_target_preferred(fb_helper, modes, enabled, width, height);
+		if (!ret)
+			DRM_ERROR("Unable to find initial modes\n");
+	}
+
+	DRM_DEBUG_KMS("picking CRTCs for %dx%d config\n", width, height);
+
+	drm_pick_crtcs(fb_helper, crtcs, modes, 0, width, height);
+
+	/* need to set the modesets up here for use later */
+	/* fill out the connector<->crtc mappings into the modesets */
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		modeset = &fb_helper->crtc_info[i].mode_set;
+		modeset->num_connectors = 0;
+	}
+
+	for (i = 0; i < fb_helper->connector_count; i++) {
+		struct drm_display_mode *mode = modes[i];
+		struct drm_fb_helper_crtc *fb_crtc = crtcs[i];
+
+		if (mode && fb_crtc) {
+			modeset = &fb_crtc->mode_set;
+			DRM_DEBUG_KMS("desired mode %s set on crtc %d\n",
+				      mode->name, fb_crtc->mode_set.crtc->base.id);
+			fb_crtc->desired_mode = mode;
+			if (modeset->mode)
+				drm_mode_destroy(dev, modeset->mode);
+			modeset->mode = drm_mode_duplicate(dev,
+							   fb_crtc->desired_mode);
+			modeset->connectors[modeset->num_connectors++] = fb_helper->connector_info[i]->connector;
+		}
+	}
+
+	kfree(crtcs);
+	kfree(modes);
+	kfree(enabled);
+}
+
+/**
+ * drm_fb_helper_initial_config - setup a sane initial connector configuration
+ * @fb_helper: fb_helper device struct
+ * @bpp_sel: bpp value to use for the framebuffer configuration
+ *
+ * LOCKING:
+ * Called at init time, must take mode config lock.
+ *
+ * Scan the CRTCs and connectors and try to put together an initial setup.
+ * At the moment, this is a cloned configuration across all heads with
+ * a new framebuffer object as the backing store.
+ *
+ * RETURNS:
+ * Zero if everything went ok, nonzero otherwise.
+ */
+bool drm_fb_helper_initial_config(struct drm_fb_helper *fb_helper, int bpp_sel)
+{
+	struct drm_device *dev = fb_helper->dev;
+	int count = 0;
+
+	/* disable all the possible outputs/crtcs before entering KMS mode */
+	drm_helper_disable_unused_functions(fb_helper->dev);
+
+	drm_fb_helper_parse_command_line(fb_helper);
+
+	count = drm_fb_helper_probe_connector_modes(fb_helper,
+						    dev->mode_config.max_width,
+						    dev->mode_config.max_height);
+	/*
+	 * we shouldn't end up with no modes here.
+	 */
+	if (count == 0) {
+		printk(KERN_INFO "No connectors reported connected with modes\n");
+	}
+	drm_setup_crtcs(fb_helper);
+
+	return drm_fb_helper_single_fb_probe(fb_helper, bpp_sel);
+}
+EXPORT_SYMBOL(drm_fb_helper_initial_config);
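
    A hedged usage sketch (the foo names are hypothetical; the helper calls
    are the ones this API provides, and the driver is assumed to have pointed
    helper.funcs at its own drm_fb_helper_funcs beforehand): a KMS driver
    registers its connectors with the helper and then asks for an initial
    configuration at load time, much as i915_load_modeset_init() now does via
    intel_fbdev_init() below.

	struct foo_fbdev {
		struct drm_fb_helper helper;
		/* driver-private fbdev state ... */
	};

	static int foo_fbdev_init(struct drm_device *dev,
				  struct foo_fbdev *fbdev)
	{
		int ret;

		/* 2 CRTCs, at most 4 connectors: made-up numbers */
		ret = drm_fb_helper_init(dev, &fbdev->helper, 2, 4);
		if (ret)
			return ret;

		drm_fb_helper_single_add_all_connectors(&fbdev->helper);
		drm_fb_helper_initial_config(&fbdev->helper, 32 /* bpp_sel */);
		return 0;
	}
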
+
+bool drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper)
+{
+	int count = 0;
+	u32 max_width, max_height, bpp_sel;
+	bool bound = false, crtcs_bound = false;
+	struct drm_crtc *crtc;
+
+	if (!fb_helper->fb)
+		return false;
+
+	list_for_each_entry(crtc, &fb_helper->dev->mode_config.crtc_list, head) {
+		if (crtc->fb)
+			crtcs_bound = true;
+		if (crtc->fb == fb_helper->fb)
+			bound = true;
+	}
+
+	if (!bound && crtcs_bound) {
+		fb_helper->delayed_hotplug = true;
+		return false;
+	}
+	DRM_DEBUG_KMS("\n");
+
+	max_width = fb_helper->fb->width;
+	max_height = fb_helper->fb->height;
+	bpp_sel = fb_helper->fb->bits_per_pixel;
+
+	count = drm_fb_helper_probe_connector_modes(fb_helper, max_width,
+						    max_height);
+	drm_setup_crtcs(fb_helper);
+
+	return drm_fb_helper_single_fb_probe(fb_helper, bpp_sel);
+}
+EXPORT_SYMBOL(drm_fb_helper_hotplug_event);
+
diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
index 9d532d7..e7aace2 100644
--- a/drivers/gpu/drm/drm_fops.c
+++ b/drivers/gpu/drm/drm_fops.c
@@ -243,11 +243,10 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
 
 	DRM_DEBUG("pid = %d, minor = %d\n", task_pid_nr(current), minor_id);
 
-	priv = kmalloc(sizeof(*priv), GFP_KERNEL);
+	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;
 
-	memset(priv, 0, sizeof(*priv));
 	filp->private_data = priv;
 	priv->filp = filp;
 	priv->uid = current_euid();
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index aa89d4b..33dad3f 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -124,6 +124,31 @@ drm_gem_destroy(struct drm_device *dev)
 }
 
 /**
+ * Initialize an already allocated GEM object of the specified size with
+ * shmfs backing store.
+ */
+int drm_gem_object_init(struct drm_device *dev,
+			struct drm_gem_object *obj, size_t size)
+{
+	BUG_ON((size & (PAGE_SIZE - 1)) != 0);
+
+	obj->dev = dev;
+	obj->filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
+	if (IS_ERR(obj->filp))
+		return -ENOMEM;
+
+	kref_init(&obj->refcount);
+	kref_init(&obj->handlecount);
+	obj->size = size;
+
+	atomic_inc(&dev->object_count);
+	atomic_add(obj->size, &dev->object_memory);
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_gem_object_init);
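
    Splitting initialization out of drm_gem_object_alloc() lets a driver
    embed the GEM object inside its own buffer-object struct and initialize
    it in place; the i915 hunks below rely on exactly that (obj_priv->base is
    the embedded drm_gem_object). A minimal sketch of the pattern, with
    hypothetical foo names:

	struct foo_bo {
		struct drm_gem_object base;	/* embedded, not pointed-to */
		/* driver-private state ... */
	};

	static struct foo_bo *foo_bo_create(struct drm_device *dev,
					    size_t size)
	{
		struct foo_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

		if (!bo)
			return NULL;
		if (drm_gem_object_init(dev, &bo->base, size) != 0) {
			kfree(bo);
			return NULL;
		}
		return bo;
	}

    The matching teardown calls the new drm_gem_object_release() on
    &bo->base and then frees the containing struct itself, since the core no
    longer kfree()s the object on the final unreference.
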
+
+/**
  * Allocate a GEM object of the specified size with shmfs backing store
  */
 struct drm_gem_object *
@@ -131,28 +156,22 @@ drm_gem_object_alloc(struct drm_device *dev, size_t size)
 {
 	struct drm_gem_object *obj;
 
-	BUG_ON((size & (PAGE_SIZE - 1)) != 0);
-
 	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
 	if (!obj)
 		goto free;
 
-	obj->dev = dev;
-	obj->filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
-	if (IS_ERR(obj->filp))
+	if (drm_gem_object_init(dev, obj, size) != 0)
 		goto free;
 
-	kref_init(&obj->refcount);
-	kref_init(&obj->handlecount);
-	obj->size = size;
 	if (dev->driver->gem_init_object != NULL &&
 	    dev->driver->gem_init_object(obj) != 0) {
 		goto fput;
 	}
-	atomic_inc(&dev->object_count);
-	atomic_add(obj->size, &dev->object_memory);
 	return obj;
 fput:
+	/* Object_init mangles the global counters - readjust them. */
+	atomic_dec(&dev->object_count);
+	atomic_sub(obj->size, &dev->object_memory);
 	fput(obj->filp);
 free:
 	kfree(obj);
@@ -403,15 +422,15 @@ drm_gem_release(struct drm_device *dev, struct drm_file *file_private)
 	idr_destroy(&file_private->object_idr);
 }
 
-static void
-drm_gem_object_free_common(struct drm_gem_object *obj)
+void
+drm_gem_object_release(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
 	fput(obj->filp);
 	atomic_dec(&dev->object_count);
 	atomic_sub(obj->size, &dev->object_memory);
-	kfree(obj);
 }
+EXPORT_SYMBOL(drm_gem_object_release);
 
 /**
  * Called after the last reference to the object has been lost.
@@ -429,8 +448,6 @@ drm_gem_object_free(struct kref *kref)
 
 	if (dev->driver->gem_free_object != NULL)
 		dev->driver->gem_free_object(obj);
-
-	drm_gem_object_free_common(obj);
 }
 EXPORT_SYMBOL(drm_gem_object_free);
 
@@ -453,8 +470,6 @@ drm_gem_object_free_unlocked(struct kref *kref)
 		dev->driver->gem_free_object(obj);
 		mutex_unlock(&dev->struct_mutex);
 	}
-
-	drm_gem_object_free_common(obj);
 }
 EXPORT_SYMBOL(drm_gem_object_free_unlocked);
 
diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
index 76d6339..f1f473e 100644
--- a/drivers/gpu/drm/drm_modes.c
+++ b/drivers/gpu/drm/drm_modes.c
@@ -258,8 +258,10 @@ struct drm_display_mode *drm_cvt_mode(struct drm_device *dev, int hdisplay,
 	drm_mode->clock -= drm_mode->clock % CVT_CLOCK_STEP;
 	/* 18/16. Find actual vertical frame frequency */
 	/* ignore - just set the mode flag for interlaced */
-	if (interlaced)
+	if (interlaced) {
 		drm_mode->vtotal *= 2;
+		drm_mode->flags |= DRM_MODE_FLAG_INTERLACE;
+	}
 	/* Fill the mode line name */
 	drm_mode_set_name(drm_mode);
 	if (reduced)
@@ -268,43 +270,35 @@ struct drm_display_mode *drm_cvt_mode(struct drm_device *dev, int hdisplay,
 	else
 		drm_mode->flags |= (DRM_MODE_FLAG_PVSYNC |
 					DRM_MODE_FLAG_NHSYNC);
-	if (interlaced)
-		drm_mode->flags |= DRM_MODE_FLAG_INTERLACE;
 
-    return drm_mode;
+	return drm_mode;
 }
 EXPORT_SYMBOL(drm_cvt_mode);
 
 /**
- * drm_gtf_mode - create the modeline based on GTF algorithm
+ * drm_gtf_mode_complex - create the modeline based on full GTF algorithm
  *
  * @dev		:drm device
  * @hdisplay	:hdisplay size
  * @vdisplay	:vdisplay size
  * @vrefresh	:vrefresh rate.
  * @interlaced	:whether the interlace is supported
- * @margins	:whether the margin is supported
+ * @margins	:desired margin size
+ * @GTF_[MCKJ]  :extended GTF formula parameters
  *
  * LOCKING.
  * none.
  *
- * return the modeline based on GTF algorithm
- *
- * This function is to create the modeline based on the GTF algorithm.
- * Generalized Timing Formula is derived from:
- *	GTF Spreadsheet by Andy Morrish (1/5/97)
- *	available at http://www.vesa.org
+ * return the modeline based on full GTF algorithm.
  *
- * And it is copied from the file of xserver/hw/xfree86/modes/xf86gtf.c.
- * What I have done is to translate it by using integer calculation.
- * I also refer to the function of fb_get_mode in the file of
- * drivers/video/fbmon.c
+ * GTF feature blocks specify C and J in multiples of 0.5, so we pass them
+ * in here multiplied by two.  For a C of 40, pass in 80.
  */
-struct drm_display_mode *drm_gtf_mode(struct drm_device *dev, int hdisplay,
-				      int vdisplay, int vrefresh,
-				      bool interlaced, int margins)
-{
-	/* 1) top/bottom margin size (% of height) - default: 1.8, */
+struct drm_display_mode *
+drm_gtf_mode_complex(struct drm_device *dev, int hdisplay, int vdisplay,
+		     int vrefresh, bool interlaced, int margins,
+		     int GTF_M, int GTF_2C, int GTF_K, int GTF_2J)
+{
+	/* 1) top/bottom margin size (% of height) - default: 1.8, */
 #define	GTF_MARGIN_PERCENTAGE		18
 	/* 2) character cell horizontal granularity (pixels) - default 8 */
 #define	GTF_CELL_GRAN			8
@@ -316,17 +310,9 @@ struct drm_display_mode *drm_gtf_mode(struct drm_device *dev, int hdisplay,
 #define H_SYNC_PERCENT			8
 	/* min time of vsync + back porch (microsec) */
 #define MIN_VSYNC_PLUS_BP		550
-	/* blanking formula gradient */
-#define GTF_M				600
-	/* blanking formula offset */
-#define GTF_C				40
-	/* blanking formula scaling factor */
-#define GTF_K				128
-	/* blanking formula scaling factor */
-#define GTF_J				20
 	/* C' and M' are part of the Blanking Duty Cycle computation */
-#define GTF_C_PRIME		(((GTF_C - GTF_J) * GTF_K / 256) + GTF_J)
-#define GTF_M_PRIME		(GTF_K * GTF_M / 256)
+#define GTF_C_PRIME	((((GTF_2C - GTF_2J) * GTF_K / 256) + GTF_2J) / 2)
+#define GTF_M_PRIME	(GTF_K * GTF_M / 256)
 	struct drm_display_mode *drm_mode;
 	unsigned int hdisplay_rnd, vdisplay_rnd, vfieldrate_rqd;
 	int top_margin, bottom_margin;
@@ -460,17 +446,61 @@ struct drm_display_mode *drm_gtf_mode(struct drm_device *dev, int hdisplay,
 
 	drm_mode->clock = pixel_freq;
 
-	drm_mode_set_name(drm_mode);
-	drm_mode->flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC;
-
 	if (interlaced) {
 		drm_mode->vtotal *= 2;
 		drm_mode->flags |= DRM_MODE_FLAG_INTERLACE;
 	}
 
+	drm_mode_set_name(drm_mode);
+	if (GTF_M == 600 && GTF_2C == 80 && GTF_K == 128 && GTF_2J == 40)
+		drm_mode->flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC;
+	else
+		drm_mode->flags = DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_NVSYNC;
+
 	return drm_mode;
 }
+EXPORT_SYMBOL(drm_gtf_mode_complex);
+
+/**
+ * drm_gtf_mode - create the modeline based on GTF algorithm
+ *
+ * @dev		:drm device
+ * @hdisplay	:hdisplay size
+ * @vdisplay	:vdisplay size
+ * @vrefresh	:vrefresh rate.
+ * @interlaced	:whether the interlace is supported
+ * @margins	:whether the margin is supported
+ *
+ * LOCKING.
+ * none.
+ *
+ * return the modeline based on GTF algorithm
+ *
+ * This function is to create the modeline based on the GTF algorithm.
+ * Generalized Timing Formula is derived from:
+ *	GTF Spreadsheet by Andy Morrish (1/5/97)
+ *	available at http://www.vesa.org
+ *
+ * It is a copy of xserver/hw/xfree86/modes/xf86gtf.c, translated to
+ * integer calculation; see also fb_get_mode() in drivers/video/fbmon.c.
+ *
+ * Standard GTF parameters:
+ * M = 600
+ * C = 40
+ * K = 128
+ * J = 20
+ */
+struct drm_display_mode *
+drm_gtf_mode(struct drm_device *dev, int hdisplay, int vdisplay, int vrefresh,
+	     bool lace, int margins)
+{
+	return drm_gtf_mode_complex(dev, hdisplay, vdisplay, vrefresh, lace,
+				    margins, 600, 40 * 2, 128, 20 * 2);
+}
 EXPORT_SYMBOL(drm_gtf_mode);
+
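
    The wrapper above reproduces the old behaviour by passing the standard
    parameters with C and J doubled (600, 80, 128, 40). A caller with a
    secondary GTF curve, say from an EDID range descriptor, doubles its own C
    and J the same way; the parameter values here are hypothetical:

	/* M=3600, C=20.5, K=128, J=10 from a (made-up) EDID descriptor */
	mode = drm_gtf_mode_complex(dev, 1280, 1024, 60,
				    false /* interlaced */, 0 /* margins */,
				    3600, 41 /* 2*C */, 128, 20 /* 2*J */);
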
 /**
  * drm_mode_set_name - set the name on a mode
  * @mode: name will be set in this mode
@@ -482,8 +512,11 @@ EXPORT_SYMBOL(drm_gtf_mode);
  */
 void drm_mode_set_name(struct drm_display_mode *mode)
 {
-	snprintf(mode->name, DRM_DISPLAY_MODE_LEN, "%dx%d", mode->hdisplay,
-		 mode->vdisplay);
+	bool interlaced = !!(mode->flags & DRM_MODE_FLAG_INTERLACE);
+
+	snprintf(mode->name, DRM_DISPLAY_MODE_LEN, "%dx%d%s",
+		 mode->hdisplay, mode->vdisplay,
+		 interlaced ? "i" : "");
 }
 EXPORT_SYMBOL(drm_mode_set_name);
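
    A quick illustration of the new naming:

	struct drm_display_mode m = {
		.hdisplay = 1920, .vdisplay = 1080,
		.flags = DRM_MODE_FLAG_INTERLACE,
	};
	drm_mode_set_name(&m);	/* m.name is "1920x1080i", was "1920x1080" */
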
 
diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c
index 25bbd30..3a3a451 100644
--- a/drivers/gpu/drm/drm_sysfs.c
+++ b/drivers/gpu/drm/drm_sysfs.c
@@ -333,7 +333,7 @@ static struct device_attribute connector_attrs_opt1[] = {
 static struct bin_attribute edid_attr = {
 	.attr.name = "edid",
 	.attr.mode = 0444,
-	.size = 128,
+	.size = 0,
 	.read = edid_show,
 };
 
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 9929f84..da78f2c 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -22,6 +22,7 @@ i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o \
 	  intel_fb.o \
 	  intel_tv.o \
 	  intel_dvo.o \
+	  intel_ringbuffer.o \
 	  intel_overlay.o \
 	  dvo_ch7xxx.o \
 	  dvo_ch7017.o \
@@ -33,3 +34,5 @@ i915-$(CONFIG_ACPI)	+= i915_opregion.o
 i915-$(CONFIG_COMPAT)   += i915_ioc32.o
 
 obj-$(CONFIG_DRM_I915)  += i915.o
+
+CFLAGS_i915_trace_points.o := -I$(src)
diff --git a/drivers/gpu/drm/i915/dvo.h b/drivers/gpu/drm/i915/dvo.h
index 288fc50..0d6ff64 100644
--- a/drivers/gpu/drm/i915/dvo.h
+++ b/drivers/gpu/drm/i915/dvo.h
@@ -70,16 +70,6 @@ struct intel_dvo_dev_ops {
 	void (*dpms)(struct intel_dvo_device *dvo, int mode);
 
 	/*
-	 * Saves the output's state for restoration on VT switch.
-	 */
-	void (*save)(struct intel_dvo_device *dvo);
-
-	/*
-	 * Restore's the output's state at VT switch.
-	 */
-	void (*restore)(struct intel_dvo_device *dvo);
-
-	/*
 	 * Callback for testing a video mode for a given output.
 	 *
 	 * This function should only check for cases where a mode can't
diff --git a/drivers/gpu/drm/i915/dvo_ch7017.c b/drivers/gpu/drm/i915/dvo_ch7017.c
index 1184c14..14d5980 100644
--- a/drivers/gpu/drm/i915/dvo_ch7017.c
+++ b/drivers/gpu/drm/i915/dvo_ch7017.c
@@ -159,16 +159,7 @@
 #define CH7017_BANG_LIMIT_CONTROL	0x7f
 
 struct ch7017_priv {
-	uint8_t save_hapi;
-	uint8_t save_vali;
-	uint8_t save_valo;
-	uint8_t save_ailo;
-	uint8_t save_lvds_pll_vco;
-	uint8_t save_feedback_div;
-	uint8_t save_lvds_control_2;
-	uint8_t save_outputs_enable;
-	uint8_t save_lvds_power_down;
-	uint8_t save_power_management;
+	uint8_t dummy;
 };
 
 static void ch7017_dump_regs(struct intel_dvo_device *dvo);
@@ -401,39 +392,6 @@ do {							\
 	DUMP(CH7017_LVDS_POWER_DOWN);
 }
 
-static void ch7017_save(struct intel_dvo_device *dvo)
-{
-	struct ch7017_priv *priv = dvo->dev_priv;
-
-	ch7017_read(dvo, CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT, &priv->save_hapi);
-	ch7017_read(dvo, CH7017_VERTICAL_ACTIVE_LINE_OUTPUT, &priv->save_valo);
-	ch7017_read(dvo, CH7017_ACTIVE_INPUT_LINE_OUTPUT, &priv->save_ailo);
-	ch7017_read(dvo, CH7017_LVDS_PLL_VCO_CONTROL, &priv->save_lvds_pll_vco);
-	ch7017_read(dvo, CH7017_LVDS_PLL_FEEDBACK_DIV, &priv->save_feedback_div);
-	ch7017_read(dvo, CH7017_LVDS_CONTROL_2, &priv->save_lvds_control_2);
-	ch7017_read(dvo, CH7017_OUTPUTS_ENABLE, &priv->save_outputs_enable);
-	ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &priv->save_lvds_power_down);
-	ch7017_read(dvo, CH7017_POWER_MANAGEMENT, &priv->save_power_management);
-}
-
-static void ch7017_restore(struct intel_dvo_device *dvo)
-{
-	struct ch7017_priv *priv = dvo->dev_priv;
-
-	/* Power down before changing mode */
-	ch7017_dpms(dvo, DRM_MODE_DPMS_OFF);
-
-	ch7017_write(dvo, CH7017_HORIZONTAL_ACTIVE_PIXEL_INPUT, priv->save_hapi);
-	ch7017_write(dvo, CH7017_VERTICAL_ACTIVE_LINE_OUTPUT, priv->save_valo);
-	ch7017_write(dvo, CH7017_ACTIVE_INPUT_LINE_OUTPUT, priv->save_ailo);
-	ch7017_write(dvo, CH7017_LVDS_PLL_VCO_CONTROL, priv->save_lvds_pll_vco);
-	ch7017_write(dvo, CH7017_LVDS_PLL_FEEDBACK_DIV, priv->save_feedback_div);
-	ch7017_write(dvo, CH7017_LVDS_CONTROL_2, priv->save_lvds_control_2);
-	ch7017_write(dvo, CH7017_OUTPUTS_ENABLE, priv->save_outputs_enable);
-	ch7017_write(dvo, CH7017_LVDS_POWER_DOWN, priv->save_lvds_power_down);
-	ch7017_write(dvo, CH7017_POWER_MANAGEMENT, priv->save_power_management);
-}
-
 static void ch7017_destroy(struct intel_dvo_device *dvo)
 {
 	struct ch7017_priv *priv = dvo->dev_priv;
@@ -451,7 +409,5 @@ struct intel_dvo_dev_ops ch7017_ops = {
 	.mode_set = ch7017_mode_set,
 	.dpms = ch7017_dpms,
 	.dump_regs = ch7017_dump_regs,
-	.save = ch7017_save,
-	.restore = ch7017_restore,
 	.destroy = ch7017_destroy,
 };
diff --git a/drivers/gpu/drm/i915/dvo_ch7xxx.c b/drivers/gpu/drm/i915/dvo_ch7xxx.c
index d56ff5c..6f1944b 100644
--- a/drivers/gpu/drm/i915/dvo_ch7xxx.c
+++ b/drivers/gpu/drm/i915/dvo_ch7xxx.c
@@ -92,21 +92,10 @@ static struct ch7xxx_id_struct {
 	{ CH7301_VID, "CH7301" },
 };
 
-struct ch7xxx_reg_state {
-    uint8_t regs[CH7xxx_NUM_REGS];
-};
-
 struct ch7xxx_priv {
 	bool quiet;
-
-	struct ch7xxx_reg_state save_reg;
-	struct ch7xxx_reg_state mode_reg;
-	uint8_t save_TCTL, save_TPCP, save_TPD, save_TPVT;
-	uint8_t save_TLPF, save_TCT, save_PM, save_IDF;
 };
 
-static void ch7xxx_save(struct intel_dvo_device *dvo);
-
 static char *ch7xxx_get_id(uint8_t vid)
 {
 	int i;
@@ -312,42 +301,17 @@ static void ch7xxx_dpms(struct intel_dvo_device *dvo, int mode)
 
 static void ch7xxx_dump_regs(struct intel_dvo_device *dvo)
 {
-	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
 	int i;
 
 	for (i = 0; i < CH7xxx_NUM_REGS; i++) {
+		uint8_t val;
 		if ((i % 8) == 0 )
 			DRM_LOG_KMS("\n %02X: ", i);
-		DRM_LOG_KMS("%02X ", ch7xxx->mode_reg.regs[i]);
+		ch7xxx_readb(dvo, i, &val);
+		DRM_LOG_KMS("%02X ", val);
 	}
 }
 
-static void ch7xxx_save(struct intel_dvo_device *dvo)
-{
-	struct ch7xxx_priv *ch7xxx= dvo->dev_priv;
-
-	ch7xxx_readb(dvo, CH7xxx_TCTL, &ch7xxx->save_TCTL);
-	ch7xxx_readb(dvo, CH7xxx_TPCP, &ch7xxx->save_TPCP);
-	ch7xxx_readb(dvo, CH7xxx_TPD, &ch7xxx->save_TPD);
-	ch7xxx_readb(dvo, CH7xxx_TPVT, &ch7xxx->save_TPVT);
-	ch7xxx_readb(dvo, CH7xxx_TLPF, &ch7xxx->save_TLPF);
-	ch7xxx_readb(dvo, CH7xxx_PM, &ch7xxx->save_PM);
-	ch7xxx_readb(dvo, CH7xxx_IDF, &ch7xxx->save_IDF);
-}
-
-static void ch7xxx_restore(struct intel_dvo_device *dvo)
-{
-	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
-
-	ch7xxx_writeb(dvo, CH7xxx_TCTL, ch7xxx->save_TCTL);
-	ch7xxx_writeb(dvo, CH7xxx_TPCP, ch7xxx->save_TPCP);
-	ch7xxx_writeb(dvo, CH7xxx_TPD, ch7xxx->save_TPD);
-	ch7xxx_writeb(dvo, CH7xxx_TPVT, ch7xxx->save_TPVT);
-	ch7xxx_writeb(dvo, CH7xxx_TLPF, ch7xxx->save_TLPF);
-	ch7xxx_writeb(dvo, CH7xxx_IDF, ch7xxx->save_IDF);
-	ch7xxx_writeb(dvo, CH7xxx_PM, ch7xxx->save_PM);
-}
-
 static void ch7xxx_destroy(struct intel_dvo_device *dvo)
 {
 	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
@@ -365,7 +329,5 @@ struct intel_dvo_dev_ops ch7xxx_ops = {
 	.mode_set = ch7xxx_mode_set,
 	.dpms = ch7xxx_dpms,
 	.dump_regs = ch7xxx_dump_regs,
-	.save = ch7xxx_save,
-	.restore = ch7xxx_restore,
 	.destroy = ch7xxx_destroy,
 };
diff --git a/drivers/gpu/drm/i915/dvo_ivch.c b/drivers/gpu/drm/i915/dvo_ivch.c
index 24169e5..a2ec3f4 100644
--- a/drivers/gpu/drm/i915/dvo_ivch.c
+++ b/drivers/gpu/drm/i915/dvo_ivch.c
@@ -153,9 +153,6 @@ struct ivch_priv {
 	bool quiet;
 
 	uint16_t width, height;
-
-	uint16_t save_VR01;
-	uint16_t save_VR40;
 };
 
 
@@ -405,22 +402,6 @@ static void ivch_dump_regs(struct intel_dvo_device *dvo)
 	DRM_LOG_KMS("VR8F: 0x%04x\n", val);
 }
 
-static void ivch_save(struct intel_dvo_device *dvo)
-{
-	struct ivch_priv *priv = dvo->dev_priv;
-
-	ivch_read(dvo, VR01, &priv->save_VR01);
-	ivch_read(dvo, VR40, &priv->save_VR40);
-}
-
-static void ivch_restore(struct intel_dvo_device *dvo)
-{
-	struct ivch_priv *priv = dvo->dev_priv;
-
-	ivch_write(dvo, VR01, priv->save_VR01);
-	ivch_write(dvo, VR40, priv->save_VR40);
-}
-
 static void ivch_destroy(struct intel_dvo_device *dvo)
 {
 	struct ivch_priv *priv = dvo->dev_priv;
@@ -434,8 +415,6 @@ static void ivch_destroy(struct intel_dvo_device *dvo)
 struct intel_dvo_dev_ops ivch_ops= {
 	.init = ivch_init,
 	.dpms = ivch_dpms,
-	.save = ivch_save,
-	.restore = ivch_restore,
 	.mode_valid = ivch_mode_valid,
 	.mode_set = ivch_mode_set,
 	.detect = ivch_detect,
diff --git a/drivers/gpu/drm/i915/dvo_sil164.c b/drivers/gpu/drm/i915/dvo_sil164.c
index 0001c13..9b8e676 100644
--- a/drivers/gpu/drm/i915/dvo_sil164.c
+++ b/drivers/gpu/drm/i915/dvo_sil164.c
@@ -58,17 +58,9 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
 
 #define SIL164_REGC 0x0c
 
-struct sil164_save_rec {
-	uint8_t reg8;
-	uint8_t reg9;
-	uint8_t regc;
-};
-
 struct sil164_priv {
 	//I2CDevRec d;
 	bool quiet;
-	struct sil164_save_rec save_regs;
-	struct sil164_save_rec mode_regs;
 };
 
 #define SILPTR(d) ((SIL164Ptr)(d->DriverPrivate.ptr))
@@ -252,34 +244,6 @@ static void sil164_dump_regs(struct intel_dvo_device *dvo)
 	DRM_LOG_KMS("SIL164_REGC: 0x%02x\n", val);
 }
 
-static void sil164_save(struct intel_dvo_device *dvo)
-{
-	struct sil164_priv *sil= dvo->dev_priv;
-
-	if (!sil164_readb(dvo, SIL164_REG8, &sil->save_regs.reg8))
-		return;
-
-	if (!sil164_readb(dvo, SIL164_REG9, &sil->save_regs.reg9))
-		return;
-
-	if (!sil164_readb(dvo, SIL164_REGC, &sil->save_regs.regc))
-		return;
-
-	return;
-}
-
-static void sil164_restore(struct intel_dvo_device *dvo)
-{
-	struct sil164_priv *sil = dvo->dev_priv;
-
-	/* Restore it powered down initially */
-	sil164_writeb(dvo, SIL164_REG8, sil->save_regs.reg8 & ~0x1);
-
-	sil164_writeb(dvo, SIL164_REG9, sil->save_regs.reg9);
-	sil164_writeb(dvo, SIL164_REGC, sil->save_regs.regc);
-	sil164_writeb(dvo, SIL164_REG8, sil->save_regs.reg8);
-}
-
 static void sil164_destroy(struct intel_dvo_device *dvo)
 {
 	struct sil164_priv *sil = dvo->dev_priv;
@@ -297,7 +261,5 @@ struct intel_dvo_dev_ops sil164_ops = {
 	.mode_set = sil164_mode_set,
 	.dpms = sil164_dpms,
 	.dump_regs = sil164_dump_regs,
-	.save = sil164_save,
-	.restore = sil164_restore,
 	.destroy = sil164_destroy,
 };
diff --git a/drivers/gpu/drm/i915/dvo_tfp410.c b/drivers/gpu/drm/i915/dvo_tfp410.c
index c7c391b..66c697b 100644
--- a/drivers/gpu/drm/i915/dvo_tfp410.c
+++ b/drivers/gpu/drm/i915/dvo_tfp410.c
@@ -86,16 +86,8 @@
 #define TFP410_V_RES_LO		0x3C
 #define TFP410_V_RES_HI		0x3D
 
-struct tfp410_save_rec {
-	uint8_t ctl1;
-	uint8_t ctl2;
-};
-
 struct tfp410_priv {
 	bool quiet;
-
-	struct tfp410_save_rec saved_reg;
-	struct tfp410_save_rec mode_reg;
 };
 
 static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
@@ -293,28 +285,6 @@ static void tfp410_dump_regs(struct intel_dvo_device *dvo)
 	DRM_LOG_KMS("TFP410_V_RES: 0x%02X%02X\n", val2, val);
 }
 
-static void tfp410_save(struct intel_dvo_device *dvo)
-{
-	struct tfp410_priv *tfp = dvo->dev_priv;
-
-	if (!tfp410_readb(dvo, TFP410_CTL_1, &tfp->saved_reg.ctl1))
-		return;
-
-	if (!tfp410_readb(dvo, TFP410_CTL_2, &tfp->saved_reg.ctl2))
-		return;
-}
-
-static void tfp410_restore(struct intel_dvo_device *dvo)
-{
-	struct tfp410_priv *tfp = dvo->dev_priv;
-
-	/* Restore it powered down initially */
-	tfp410_writeb(dvo, TFP410_CTL_1, tfp->saved_reg.ctl1 & ~0x1);
-
-	tfp410_writeb(dvo, TFP410_CTL_2, tfp->saved_reg.ctl2);
-	tfp410_writeb(dvo, TFP410_CTL_1, tfp->saved_reg.ctl1);
-}
-
 static void tfp410_destroy(struct intel_dvo_device *dvo)
 {
 	struct tfp410_priv *tfp = dvo->dev_priv;
@@ -332,7 +302,5 @@ struct intel_dvo_dev_ops tfp410_ops = {
 	.mode_set = tfp410_mode_set,
 	.dpms = tfp410_dpms,
 	.dump_regs = tfp410_dump_regs,
-	.save = tfp410_save,
-	.restore = tfp410_restore,
 	.destroy = tfp410_destroy,
 };
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index a0b8447..52510ad 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -77,7 +77,7 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
 	case ACTIVE_LIST:
 		seq_printf(m, "Active:\n");
 		lock = &dev_priv->mm.active_list_lock;
-		head = &dev_priv->mm.active_list;
+		head = &dev_priv->render_ring.active_list;
 		break;
 	case INACTIVE_LIST:
 		seq_printf(m, "Inactive:\n");
@@ -96,19 +96,18 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
 		spin_lock(lock);
 	list_for_each_entry(obj_priv, head, list)
 	{
-		struct drm_gem_object *obj = obj_priv->obj;
-
 		seq_printf(m, "    %p: %s %8zd %08x %08x %d%s%s",
-			   obj,
+			   &obj_priv->base,
 			   get_pin_flag(obj_priv),
-			   obj->size,
-			   obj->read_domains, obj->write_domain,
+			   obj_priv->base.size,
+			   obj_priv->base.read_domains,
+			   obj_priv->base.write_domain,
 			   obj_priv->last_rendering_seqno,
 			   obj_priv->dirty ? " dirty" : "",
 			   obj_priv->madv == I915_MADV_DONTNEED ? " purgeable" : "");
 
-		if (obj->name)
-			seq_printf(m, " (name: %d)", obj->name);
+		if (obj_priv->base.name)
+			seq_printf(m, " (name: %d)", obj_priv->base.name);
 		if (obj_priv->fence_reg != I915_FENCE_REG_NONE)
 			seq_printf(m, " (fence: %d)", obj_priv->fence_reg);
 		if (obj_priv->gtt_space != NULL)
@@ -130,7 +129,8 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
 	struct drm_i915_gem_request *gem_request;
 
 	seq_printf(m, "Request:\n");
-	list_for_each_entry(gem_request, &dev_priv->mm.request_list, list) {
+	list_for_each_entry(gem_request, &dev_priv->render_ring.request_list,
+			list) {
 		seq_printf(m, "    %d @ %d\n",
 			   gem_request->seqno,
 			   (int) (jiffies - gem_request->emitted_jiffies));
@@ -144,9 +144,9 @@ static int i915_gem_seqno_info(struct seq_file *m, void *data)
 	struct drm_device *dev = node->minor->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
 
-	if (dev_priv->hw_status_page != NULL) {
+	if (dev_priv->render_ring.status_page.page_addr != NULL) {
 		seq_printf(m, "Current sequence: %d\n",
-			   i915_get_gem_seqno(dev));
+			   i915_get_gem_seqno(dev,  &dev_priv->render_ring));
 	} else {
 		seq_printf(m, "Current sequence: hws uninitialized\n");
 	}
@@ -196,9 +196,9 @@ static int i915_interrupt_info(struct seq_file *m, void *data)
 	}
 	seq_printf(m, "Interrupts received: %d\n",
 		   atomic_read(&dev_priv->irq_received));
-	if (dev_priv->hw_status_page != NULL) {
+	if (dev_priv->render_ring.status_page.page_addr != NULL) {
 		seq_printf(m, "Current sequence:    %d\n",
-			   i915_get_gem_seqno(dev));
+			   i915_get_gem_seqno(dev,  &dev_priv->render_ring));
 	} else {
 		seq_printf(m, "Current sequence:    hws uninitialized\n");
 	}
@@ -252,7 +252,7 @@ static int i915_hws_info(struct seq_file *m, void *data)
 	int i;
 	volatile u32 *hws;
 
-	hws = (volatile u32 *)dev_priv->hw_status_page;
+	hws = (volatile u32 *)dev_priv->render_ring.status_page.page_addr;
 	if (hws == NULL)
 		return 0;
 
@@ -288,8 +288,9 @@ static int i915_batchbuffer_info(struct seq_file *m, void *data)
 
 	spin_lock(&dev_priv->mm.active_list_lock);
 
-	list_for_each_entry(obj_priv, &dev_priv->mm.active_list, list) {
-		obj = obj_priv->obj;
+	list_for_each_entry(obj_priv, &dev_priv->render_ring.active_list,
+			list) {
+		obj = &obj_priv->base;
 		if (obj->read_domains & I915_GEM_DOMAIN_COMMAND) {
 		    ret = i915_gem_object_get_pages(obj, 0);
 		    if (ret) {
@@ -318,14 +319,14 @@ static int i915_ringbuffer_data(struct seq_file *m, void *data)
 	u8 *virt;
 	uint32_t *ptr, off;
 
-	if (!dev_priv->ring.ring_obj) {
+	if (!dev_priv->render_ring.gem_object) {
 		seq_printf(m, "No ringbuffer setup\n");
 		return 0;
 	}
 
-	virt = dev_priv->ring.virtual_start;
+	virt = dev_priv->render_ring.virtual_start;
 
-	for (off = 0; off < dev_priv->ring.Size; off += 4) {
+	for (off = 0; off < dev_priv->render_ring.size; off += 4) {
 		ptr = (uint32_t *)(virt + off);
 		seq_printf(m, "%08x :  %08x\n", off, *ptr);
 	}
@@ -345,7 +346,7 @@ static int i915_ringbuffer_info(struct seq_file *m, void *data)
 
 	seq_printf(m, "RingHead :  %08x\n", head);
 	seq_printf(m, "RingTail :  %08x\n", tail);
-	seq_printf(m, "RingSize :  %08lx\n", dev_priv->ring.Size);
+	seq_printf(m, "RingSize :  %08lx\n", dev_priv->render_ring.size);
 	seq_printf(m, "Acthd :     %08x\n", I915_READ(IS_I965G(dev) ? ACTHD_I965 : ACTHD));
 
 	return 0;
@@ -490,11 +491,14 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
 	struct drm_device *dev = node->minor->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	u16 rgvswctl = I915_READ16(MEMSWCTL);
+	u16 rgvstat = I915_READ16(MEMSTAT_ILK);
 
-	seq_printf(m, "Last command: 0x%01x\n", (rgvswctl >> 13) & 0x3);
-	seq_printf(m, "Command status: %d\n", (rgvswctl >> 12) & 1);
-	seq_printf(m, "P%d DELAY 0x%02x\n", (rgvswctl >> 8) & 0xf,
-		   rgvswctl & 0x3f);
+	seq_printf(m, "Requested P-state: %d\n", (rgvswctl >> 8) & 0xf);
+	seq_printf(m, "Requested VID: %d\n", rgvswctl & 0x3f);
+	seq_printf(m, "Current VID: %d\n", (rgvstat & MEMSTAT_VID_MASK) >>
+		   MEMSTAT_VID_SHIFT);
+	seq_printf(m, "Current P-state: %d\n",
+		   (rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT);
 
 	return 0;
 }
@@ -509,7 +513,8 @@ static int i915_delayfreq_table(struct seq_file *m, void *unused)
 
 	for (i = 0; i < 16; i++) {
 		delayfreq = I915_READ(PXVFREQ_BASE + i * 4);
-		seq_printf(m, "P%02dVIDFREQ: 0x%08x\n", i, delayfreq);
+		seq_printf(m, "P%02dVIDFREQ: 0x%08x (VID: %d)\n", i, delayfreq,
+			   (delayfreq & PXVFREQ_PX_MASK) >> PXVFREQ_PX_SHIFT);
 	}
 
 	return 0;
@@ -542,6 +547,8 @@ static int i915_drpc_info(struct seq_file *m, void *unused)
 	struct drm_device *dev = node->minor->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 rgvmodectl = I915_READ(MEMMODECTL);
+	u32 rstdbyctl = I915_READ(MCHBAR_RENDER_STANDBY);
+	u16 crstandvid = I915_READ16(CRSTANDVID);
 
 	seq_printf(m, "HD boost: %s\n", (rgvmodectl & MEMMODE_BOOST_EN) ?
 		   "yes" : "no");
@@ -556,9 +563,13 @@ static int i915_drpc_info(struct seq_file *m, void *unused)
 		   rgvmodectl & MEMMODE_RCLK_GATE ? "yes" : "no");
 	seq_printf(m, "Starting frequency: P%d\n",
 		   (rgvmodectl & MEMMODE_FSTART_MASK) >> MEMMODE_FSTART_SHIFT);
-	seq_printf(m, "Max frequency: P%d\n",
+	seq_printf(m, "Max P-state: P%d\n",
 		   (rgvmodectl & MEMMODE_FMAX_MASK) >> MEMMODE_FMAX_SHIFT);
-	seq_printf(m, "Min frequency: P%d\n", (rgvmodectl & MEMMODE_FMIN_MASK));
+	seq_printf(m, "Min P-state: P%d\n", (rgvmodectl & MEMMODE_FMIN_MASK));
+	seq_printf(m, "RS1 VID: %d\n", (crstandvid & 0x3f));
+	seq_printf(m, "RS2 VID: %d\n", ((crstandvid >> 8) & 0x3f));
+	seq_printf(m, "Render standby enabled: %s\n",
+		   (rstdbyctl & RCX_SW_EXIT) ? "no" : "yes");
 
 	return 0;
 }
@@ -567,23 +578,14 @@ static int i915_fbc_status(struct seq_file *m, void *unused)
 {
 	struct drm_info_node *node = (struct drm_info_node *) m->private;
 	struct drm_device *dev = node->minor->dev;
-	struct drm_crtc *crtc;
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	bool fbc_enabled = false;
 
-	if (!dev_priv->display.fbc_enabled) {
+	if (!I915_HAS_FBC(dev)) {
 		seq_printf(m, "FBC unsupported on this chipset\n");
 		return 0;
 	}
 
-	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
-		if (!crtc->enabled)
-			continue;
-		if (dev_priv->display.fbc_enabled(crtc))
-			fbc_enabled = true;
-	}
-
-	if (fbc_enabled) {
+	if (intel_fbc_enabled(dev)) {
 		seq_printf(m, "FBC enabled\n");
 	} else {
 		seq_printf(m, "FBC disabled: ");
@@ -631,6 +633,36 @@ static int i915_sr_status(struct seq_file *m, void *unused)
 	return 0;
 }
 
+static int i915_emon_status(struct seq_file *m, void *unused)
+{
+	struct drm_info_node *node = (struct drm_info_node *) m->private;
+	struct drm_device *dev = node->minor->dev;
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	unsigned long temp, chipset, gfx;
+
+	temp = i915_mch_val(dev_priv);
+	chipset = i915_chipset_val(dev_priv);
+	gfx = i915_gfx_val(dev_priv);
+
+	seq_printf(m, "GMCH temp: %ld\n", temp);
+	seq_printf(m, "Chipset power: %ld\n", chipset);
+	seq_printf(m, "GFX power: %ld\n", gfx);
+	seq_printf(m, "Total power: %ld\n", chipset + gfx);
+
+	return 0;
+}
+
+static int i915_gfxec(struct seq_file *m, void *unused)
+{
+	struct drm_info_node *node = (struct drm_info_node *) m->private;
+	struct drm_device *dev = node->minor->dev;
+	drm_i915_private_t *dev_priv = dev->dev_private;
+
+	seq_printf(m, "GFXEC: %ld\n", (unsigned long)I915_READ(0x112f4));
+
+	return 0;
+}
+
 static int
 i915_wedged_open(struct inode *inode,
 		 struct file *filp)
@@ -753,6 +785,8 @@ static struct drm_info_list i915_debugfs_list[] = {
 	{"i915_delayfreq_table", i915_delayfreq_table, 0},
 	{"i915_inttoext_table", i915_inttoext_table, 0},
 	{"i915_drpc_info", i915_drpc_info, 0},
+	{"i915_emon_status", i915_emon_status, 0},
+	{"i915_gfxec", i915_gfxec, 0},
 	{"i915_fbc_status", i915_fbc_status, 0},
 	{"i915_sr_status", i915_sr_status, 0},
 };
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index c3cfafc..59a2bf8 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -40,84 +40,6 @@
 #include <linux/vga_switcheroo.h>
 #include <linux/slab.h>
 
-/* Really want an OS-independent resettable timer.  Would like to have
- * this loop run for (eg) 3 sec, but have the timer reset every time
- * the head pointer changes, so that EBUSY only happens if the ring
- * actually stalls for (eg) 3 seconds.
- */
-int i915_wait_ring(struct drm_device * dev, int n, const char *caller)
-{
-	drm_i915_private_t *dev_priv = dev->dev_private;
-	drm_i915_ring_buffer_t *ring = &(dev_priv->ring);
-	u32 acthd_reg = IS_I965G(dev) ? ACTHD_I965 : ACTHD;
-	u32 last_acthd = I915_READ(acthd_reg);
-	u32 acthd;
-	u32 last_head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-	int i;
-
-	trace_i915_ring_wait_begin (dev);
-
-	for (i = 0; i < 100000; i++) {
-		ring->head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-		acthd = I915_READ(acthd_reg);
-		ring->space = ring->head - (ring->tail + 8);
-		if (ring->space < 0)
-			ring->space += ring->Size;
-		if (ring->space >= n) {
-			trace_i915_ring_wait_end (dev);
-			return 0;
-		}
-
-		if (dev->primary->master) {
-			struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
-			if (master_priv->sarea_priv)
-				master_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT;
-		}
-
-
-		if (ring->head != last_head)
-			i = 0;
-		if (acthd != last_acthd)
-			i = 0;
-
-		last_head = ring->head;
-		last_acthd = acthd;
-		msleep_interruptible(10);
-
-	}
-
-	trace_i915_ring_wait_end (dev);
-	return -EBUSY;
-}
-
-/* As a ringbuffer is only allowed to wrap between instructions, fill
- * the tail with NOOPs.
- */
-int i915_wrap_ring(struct drm_device *dev)
-{
-	drm_i915_private_t *dev_priv = dev->dev_private;
-	volatile unsigned int *virt;
-	int rem;
-
-	rem = dev_priv->ring.Size - dev_priv->ring.tail;
-	if (dev_priv->ring.space < rem) {
-		int ret = i915_wait_ring(dev, rem, __func__);
-		if (ret)
-			return ret;
-	}
-	dev_priv->ring.space -= rem;
-
-	virt = (unsigned int *)
-		(dev_priv->ring.virtual_start + dev_priv->ring.tail);
-	rem /= 4;
-	while (rem--)
-		*virt++ = MI_NOOP;
-
-	dev_priv->ring.tail = 0;
-
-	return 0;
-}
-
 /**
  * Sets up the hardware status page for devices that need a physical address
  * in the register.
@@ -133,10 +55,11 @@ static int i915_init_phys_hws(struct drm_device *dev)
 		DRM_ERROR("Can not allocate hardware status page\n");
 		return -ENOMEM;
 	}
-	dev_priv->hw_status_page = dev_priv->status_page_dmah->vaddr;
+	dev_priv->render_ring.status_page.page_addr
+		= dev_priv->status_page_dmah->vaddr;
 	dev_priv->dma_status_page = dev_priv->status_page_dmah->busaddr;
 
-	memset(dev_priv->hw_status_page, 0, PAGE_SIZE);
+	memset(dev_priv->render_ring.status_page.page_addr, 0, PAGE_SIZE);
 
 	if (IS_I965G(dev))
 		dev_priv->dma_status_page |= (dev_priv->dma_status_page >> 28) &
@@ -159,8 +82,8 @@ static void i915_free_hws(struct drm_device *dev)
 		dev_priv->status_page_dmah = NULL;
 	}
 
-	if (dev_priv->status_gfx_addr) {
-		dev_priv->status_gfx_addr = 0;
+	if (dev_priv->render_ring.status_page.gfx_addr) {
+		dev_priv->render_ring.status_page.gfx_addr = 0;
 		drm_core_ioremapfree(&dev_priv->hws_map, dev);
 	}
 
@@ -172,7 +95,7 @@ void i915_kernel_lost_context(struct drm_device * dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_master_private *master_priv;
-	drm_i915_ring_buffer_t *ring = &(dev_priv->ring);
+	struct intel_ring_buffer *ring = &dev_priv->render_ring;
 
 	/*
 	 * We should never lose context on the ring with modesetting
@@ -185,7 +108,7 @@ void i915_kernel_lost_context(struct drm_device * dev)
 	ring->tail = I915_READ(PRB0_TAIL) & TAIL_ADDR;
 	ring->space = ring->head - (ring->tail + 8);
 	if (ring->space < 0)
-		ring->space += ring->Size;
+		ring->space += ring->size;
 
 	if (!dev->primary->master)
 		return;
@@ -205,12 +128,9 @@ static int i915_dma_cleanup(struct drm_device * dev)
 	if (dev->irq_enabled)
 		drm_irq_uninstall(dev);
 
-	if (dev_priv->ring.virtual_start) {
-		drm_core_ioremapfree(&dev_priv->ring.map, dev);
-		dev_priv->ring.virtual_start = NULL;
-		dev_priv->ring.map.handle = NULL;
-		dev_priv->ring.map.size = 0;
-	}
+	intel_cleanup_ring_buffer(dev, &dev_priv->render_ring);
+	if (HAS_BSD(dev))
+		intel_cleanup_ring_buffer(dev, &dev_priv->bsd_ring);
 
 	/* Clear the HWS virtual address at teardown */
 	if (I915_NEED_GFX_HWS(dev))
@@ -233,24 +153,24 @@ static int i915_initialize(struct drm_device * dev, drm_i915_init_t * init)
 	}
 
 	if (init->ring_size != 0) {
-		if (dev_priv->ring.ring_obj != NULL) {
+		if (dev_priv->render_ring.gem_object != NULL) {
 			i915_dma_cleanup(dev);
 			DRM_ERROR("Client tried to initialize ringbuffer in "
 				  "GEM mode\n");
 			return -EINVAL;
 		}
 
-		dev_priv->ring.Size = init->ring_size;
+		dev_priv->render_ring.size = init->ring_size;
 
-		dev_priv->ring.map.offset = init->ring_start;
-		dev_priv->ring.map.size = init->ring_size;
-		dev_priv->ring.map.type = 0;
-		dev_priv->ring.map.flags = 0;
-		dev_priv->ring.map.mtrr = 0;
+		dev_priv->render_ring.map.offset = init->ring_start;
+		dev_priv->render_ring.map.size = init->ring_size;
+		dev_priv->render_ring.map.type = 0;
+		dev_priv->render_ring.map.flags = 0;
+		dev_priv->render_ring.map.mtrr = 0;
 
-		drm_core_ioremap_wc(&dev_priv->ring.map, dev);
+		drm_core_ioremap_wc(&dev_priv->render_ring.map, dev);
 
-		if (dev_priv->ring.map.handle == NULL) {
+		if (dev_priv->render_ring.map.handle == NULL) {
 			i915_dma_cleanup(dev);
 			DRM_ERROR("can not ioremap virtual address for"
 				  " ring buffer\n");
@@ -258,7 +178,7 @@ static int i915_initialize(struct drm_device * dev, drm_i915_init_t * init)
 		}
 	}
 
-	dev_priv->ring.virtual_start = dev_priv->ring.map.handle;
+	dev_priv->render_ring.virtual_start = dev_priv->render_ring.map.handle;
 
 	dev_priv->cpp = init->cpp;
 	dev_priv->back_offset = init->back_offset;
@@ -278,26 +198,29 @@ static int i915_dma_resume(struct drm_device * dev)
 {
 	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
 
+	struct intel_ring_buffer *ring;
 	DRM_DEBUG_DRIVER("%s\n", __func__);
 
-	if (dev_priv->ring.map.handle == NULL) {
+	ring = &dev_priv->render_ring;
+
+	if (ring->map.handle == NULL) {
 		DRM_ERROR("can not ioremap virtual address for"
 			  " ring buffer\n");
 		return -ENOMEM;
 	}
 
 	/* Program Hardware Status Page */
-	if (!dev_priv->hw_status_page) {
+	if (!ring->status_page.page_addr) {
 		DRM_ERROR("Can not find hardware status page\n");
 		return -EINVAL;
 	}
 	DRM_DEBUG_DRIVER("hw status page @ %p\n",
-				dev_priv->hw_status_page);
-
-	if (dev_priv->status_gfx_addr != 0)
-		I915_WRITE(HWS_PGA, dev_priv->status_gfx_addr);
+				ring->status_page.page_addr);
+	if (ring->status_page.gfx_addr != 0)
+		ring->setup_status_page(dev, ring);
 	else
 		I915_WRITE(HWS_PGA, dev_priv->dma_status_page);
+
 	DRM_DEBUG_DRIVER("Enabled hardware status page\n");
 
 	return 0;
@@ -407,9 +330,8 @@ static int i915_emit_cmds(struct drm_device * dev, int *buffer, int dwords)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	int i;
-	RING_LOCALS;
 
-	if ((dwords+1) * sizeof(int) >= dev_priv->ring.Size - 8)
+	if ((dwords+1) * sizeof(int) >= dev_priv->render_ring.size - 8)
 		return -EINVAL;
 
 	BEGIN_LP_RING((dwords+1)&~1);
@@ -442,9 +364,7 @@ i915_emit_box(struct drm_device *dev,
 	      struct drm_clip_rect *boxes,
 	      int i, int DR1, int DR4)
 {
-	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_clip_rect box = boxes[i];
-	RING_LOCALS;
 
 	if (box.y2 <= box.y1 || box.x2 <= box.x1 || box.y2 <= 0 || box.x2 <= 0) {
 		DRM_ERROR("Bad box %d,%d..%d,%d\n",
@@ -481,7 +401,6 @@ static void i915_emit_breadcrumb(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
-	RING_LOCALS;
 
 	dev_priv->counter++;
 	if (dev_priv->counter > 0x7FFFFFFFUL)
@@ -535,10 +454,8 @@ static int i915_dispatch_batchbuffer(struct drm_device * dev,
 				     drm_i915_batchbuffer_t * batch,
 				     struct drm_clip_rect *cliprects)
 {
-	drm_i915_private_t *dev_priv = dev->dev_private;
 	int nbox = batch->num_cliprects;
 	int i = 0, count;
-	RING_LOCALS;
 
 	if ((batch->start | batch->used) & 0x7) {
 		DRM_ERROR("alignment");
@@ -587,7 +504,6 @@ static int i915_dispatch_flip(struct drm_device * dev)
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_master_private *master_priv =
 		dev->primary->master->driver_priv;
-	RING_LOCALS;
 
 	if (!master_priv->sarea_priv)
 		return -EINVAL;
@@ -640,7 +556,8 @@ static int i915_quiescent(struct drm_device * dev)
 	drm_i915_private_t *dev_priv = dev->dev_private;
 
 	i915_kernel_lost_context(dev);
-	return i915_wait_ring(dev, dev_priv->ring.Size - 8, __func__);
+	return intel_wait_ring_buffer(dev, &dev_priv->render_ring,
+				      dev_priv->render_ring.size - 8);
 }
 
 static int i915_flush_ioctl(struct drm_device *dev, void *data,
@@ -827,6 +744,9 @@ static int i915_getparam(struct drm_device *dev, void *data,
 		/* depends on GEM */
 		value = dev_priv->has_gem;
 		break;
+	case I915_PARAM_HAS_BSD:
+		value = HAS_BSD(dev);
+		break;
 	default:
 		DRM_DEBUG_DRIVER("Unknown parameter %d\n",
 				 param->param);
@@ -882,6 +802,7 @@ static int i915_set_status_page(struct drm_device *dev, void *data,
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	drm_i915_hws_addr_t *hws = data;
+	struct intel_ring_buffer *ring = &dev_priv->render_ring;
 
 	if (!I915_NEED_GFX_HWS(dev))
 		return -EINVAL;
@@ -898,7 +819,7 @@ static int i915_set_status_page(struct drm_device *dev, void *data,
 
 	DRM_DEBUG_DRIVER("set status page addr 0x%08x\n", (u32)hws->addr);
 
-	dev_priv->status_gfx_addr = hws->addr & (0x1ffff<<12);
+	ring->status_page.gfx_addr = hws->addr & (0x1ffff<<12);
 
 	dev_priv->hws_map.offset = dev->agp->base + hws->addr;
 	dev_priv->hws_map.size = 4*1024;
@@ -909,19 +830,19 @@ static int i915_set_status_page(struct drm_device *dev, void *data,
 	drm_core_ioremap_wc(&dev_priv->hws_map, dev);
 	if (dev_priv->hws_map.handle == NULL) {
 		i915_dma_cleanup(dev);
-		dev_priv->status_gfx_addr = 0;
+		ring->status_page.gfx_addr = 0;
 		DRM_ERROR("can not ioremap virtual address for"
 				" G33 hw status page\n");
 		return -ENOMEM;
 	}
-	dev_priv->hw_status_page = dev_priv->hws_map.handle;
+	ring->status_page.page_addr = dev_priv->hws_map.handle;
+	memset(ring->status_page.page_addr, 0, PAGE_SIZE);
+	I915_WRITE(HWS_PGA, ring->status_page.gfx_addr);
 
-	memset(dev_priv->hw_status_page, 0, PAGE_SIZE);
-	I915_WRITE(HWS_PGA, dev_priv->status_gfx_addr);
 	DRM_DEBUG_DRIVER("load hws HWS_PGA with gfx mem 0x%x\n",
-				dev_priv->status_gfx_addr);
+			 ring->status_page.gfx_addr);
 	DRM_DEBUG_DRIVER("load hws at %p\n",
-				dev_priv->hw_status_page);
+			 ring->status_page.page_addr);
 	return 0;
 }
 
@@ -1357,13 +1278,12 @@ static void i915_setup_compression(struct drm_device *dev, int size)
 
 	dev_priv->cfb_size = size;
 
+	intel_disable_fbc(dev);
 	dev_priv->compressed_fb = compressed_fb;
 
 	if (IS_GM45(dev)) {
-		g4x_disable_fbc(dev);
 		I915_WRITE(DPFC_CB_BASE, compressed_fb->start);
 	} else {
-		i8xx_disable_fbc(dev);
 		I915_WRITE(FBC_CFB_BASE, cfb_base);
 		I915_WRITE(FBC_LL_BASE, ll_base);
 		dev_priv->compressed_llb = compressed_llb;
@@ -1400,12 +1320,14 @@ static void i915_switcheroo_set_state(struct pci_dev *pdev, enum vga_switcheroo_
 	struct drm_device *dev = pci_get_drvdata(pdev);
 	pm_message_t pmm = { .event = PM_EVENT_SUSPEND };
 	if (state == VGA_SWITCHEROO_ON) {
-		printk(KERN_INFO "i915: switched off\n");
+		printk(KERN_INFO "i915: switched on\n");
 		/* i915 resume handler doesn't set to D0 */
 		pci_set_power_state(dev->pdev, PCI_D0);
 		i915_resume(dev);
+		drm_kms_helper_poll_enable(dev);
 	} else {
 		printk(KERN_ERR "i915: switched off\n");
+		drm_kms_helper_poll_disable(dev);
 		i915_suspend(dev, pmm);
 	}
 }
@@ -1480,23 +1402,23 @@ static int i915_load_modeset_init(struct drm_device *dev,
 	/* if we have > 1 VGA cards, then disable the radeon VGA resources */
 	ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode);
 	if (ret)
-		goto destroy_ringbuffer;
+		goto cleanup_ringbuffer;
 
 	ret = vga_switcheroo_register_client(dev->pdev,
 					     i915_switcheroo_set_state,
 					     i915_switcheroo_can_switch);
 	if (ret)
-		goto destroy_ringbuffer;
+		goto cleanup_vga_client;
 
 	/* IIR "flip pending" bit means done if this bit is set */
 	if (IS_GEN3(dev) && (I915_READ(ECOSKPD) & ECO_FLIP_DONE))
 		dev_priv->flip_pending_is_done = true;
 
 	intel_modeset_init(dev);
 
 	ret = drm_irq_install(dev);
 	if (ret)
-		goto destroy_ringbuffer;
+		goto cleanup_vga_switcheroo;
 
 	/* Always safe in the mode setting case. */
 	/* FIXME: do pre/post-mode set stuff in core KMS code */
@@ -1504,11 +1426,20 @@ static int i915_load_modeset_init(struct drm_device *dev,
 
 	I915_WRITE(INSTPM, (1 << 5) | (1 << 21));
 
-	drm_helper_initial_config(dev);
+	ret = intel_fbdev_init(dev);
+	if (ret)
+		goto cleanup_irq;
 
+	drm_kms_helper_poll_init(dev);
 	return 0;
 
-destroy_ringbuffer:
+cleanup_irq:
+	drm_irq_uninstall(dev);
+cleanup_vga_switcheroo:
+	vga_switcheroo_unregister_client(dev->pdev);
+cleanup_vga_client:
+	vga_client_register(dev->pdev, NULL, NULL, NULL);
+cleanup_ringbuffer:
 	mutex_lock(&dev->struct_mutex);
 	i915_gem_cleanup_ringbuffer(dev);
 	mutex_unlock(&dev->struct_mutex);
@@ -1540,14 +1471,11 @@ void i915_master_destroy(struct drm_device *dev, struct drm_master *master)
 	master->driver_priv = NULL;
 }
 
-static void i915_get_mem_freq(struct drm_device *dev)
+static void i915_pineview_get_mem_freq(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 tmp;
 
-	if (!IS_PINEVIEW(dev))
-		return;
-
 	tmp = I915_READ(CLKCFG);
 
 	switch (tmp & CLKCFG_FSB_MASK) {
@@ -1576,8 +1504,525 @@ static void i915_get_mem_freq(struct drm_device *dev)
 		dev_priv->mem_freq = 800;
 		break;
 	}
+
+	/* detect pineview DDR3 setting */
+	tmp = I915_READ(CSHRDDR3CTL);
+	dev_priv->is_ddr3 = (tmp & CSHRDDR3CTL_DDR3) ? 1 : 0;
+}
+
+static void i915_ironlake_get_mem_freq(struct drm_device *dev)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	u16 ddrpll, csipll;
+
+	ddrpll = I915_READ16(DDRMPLL1);
+	csipll = I915_READ16(CSIPLL0);
+
+	switch (ddrpll & 0xff) {
+	case 0xc:
+		dev_priv->mem_freq = 800;
+		break;
+	case 0x10:
+		dev_priv->mem_freq = 1066;
+		break;
+	case 0x14:
+		dev_priv->mem_freq = 1333;
+		break;
+	case 0x18:
+		dev_priv->mem_freq = 1600;
+		break;
+	default:
+		DRM_DEBUG_DRIVER("unknown memory frequency 0x%02x\n",
+				 ddrpll & 0xff);
+		dev_priv->mem_freq = 0;
+		break;
+	}
+
+	dev_priv->r_t = dev_priv->mem_freq;
+
+	switch (csipll & 0x3ff) {
+	case 0x00c:
+		dev_priv->fsb_freq = 3200;
+		break;
+	case 0x00e:
+		dev_priv->fsb_freq = 3733;
+		break;
+	case 0x010:
+		dev_priv->fsb_freq = 4266;
+		break;
+	case 0x012:
+		dev_priv->fsb_freq = 4800;
+		break;
+	case 0x014:
+		dev_priv->fsb_freq = 5333;
+		break;
+	case 0x016:
+		dev_priv->fsb_freq = 5866;
+		break;
+	case 0x018:
+		dev_priv->fsb_freq = 6400;
+		break;
+	default:
+		DRM_DEBUG_DRIVER("unknown fsb frequency 0x%04x\n",
+				 csipll & 0x3ff);
+		dev_priv->fsb_freq = 0;
+		break;
+	}
+
+	if (dev_priv->fsb_freq == 3200) {
+		dev_priv->c_m = 0;
+	} else if (dev_priv->fsb_freq > 3200 && dev_priv->fsb_freq <= 4800) {
+		dev_priv->c_m = 1;
+	} else {
+		dev_priv->c_m = 2;
+	}
+}
+
+struct v_table {
+	u8 vid;
+	unsigned long vd; /* in .1 mil */
+	unsigned long vm; /* in .1 mil */
+	u8 pvid;
+};
+
+static struct v_table v_table[] = {
+	{ 0, 16125, 15000, 0x7f, },
+	{ 1, 16000, 14875, 0x7e, },
+	{ 2, 15875, 14750, 0x7d, },
+	{ 3, 15750, 14625, 0x7c, },
+	{ 4, 15625, 14500, 0x7b, },
+	{ 5, 15500, 14375, 0x7a, },
+	{ 6, 15375, 14250, 0x79, },
+	{ 7, 15250, 14125, 0x78, },
+	{ 8, 15125, 14000, 0x77, },
+	{ 9, 15000, 13875, 0x76, },
+	{ 10, 14875, 13750, 0x75, },
+	{ 11, 14750, 13625, 0x74, },
+	{ 12, 14625, 13500, 0x73, },
+	{ 13, 14500, 13375, 0x72, },
+	{ 14, 14375, 13250, 0x71, },
+	{ 15, 14250, 13125, 0x70, },
+	{ 16, 14125, 13000, 0x6f, },
+	{ 17, 14000, 12875, 0x6e, },
+	{ 18, 13875, 12750, 0x6d, },
+	{ 19, 13750, 12625, 0x6c, },
+	{ 20, 13625, 12500, 0x6b, },
+	{ 21, 13500, 12375, 0x6a, },
+	{ 22, 13375, 12250, 0x69, },
+	{ 23, 13250, 12125, 0x68, },
+	{ 24, 13125, 12000, 0x67, },
+	{ 25, 13000, 11875, 0x66, },
+	{ 26, 12875, 11750, 0x65, },
+	{ 27, 12750, 11625, 0x64, },
+	{ 28, 12625, 11500, 0x63, },
+	{ 29, 12500, 11375, 0x62, },
+	{ 30, 12375, 11250, 0x61, },
+	{ 31, 12250, 11125, 0x60, },
+	{ 32, 12125, 11000, 0x5f, },
+	{ 33, 12000, 10875, 0x5e, },
+	{ 34, 11875, 10750, 0x5d, },
+	{ 35, 11750, 10625, 0x5c, },
+	{ 36, 11625, 10500, 0x5b, },
+	{ 37, 11500, 10375, 0x5a, },
+	{ 38, 11375, 10250, 0x59, },
+	{ 39, 11250, 10125, 0x58, },
+	{ 40, 11125, 10000, 0x57, },
+	{ 41, 11000, 9875, 0x56, },
+	{ 42, 10875, 9750, 0x55, },
+	{ 43, 10750, 9625, 0x54, },
+	{ 44, 10625, 9500, 0x53, },
+	{ 45, 10500, 9375, 0x52, },
+	{ 46, 10375, 9250, 0x51, },
+	{ 47, 10250, 9125, 0x50, },
+	{ 48, 10125, 9000, 0x4f, },
+	{ 49, 10000, 8875, 0x4e, },
+	{ 50, 9875, 8750, 0x4d, },
+	{ 51, 9750, 8625, 0x4c, },
+	{ 52, 9625, 8500, 0x4b, },
+	{ 53, 9500, 8375, 0x4a, },
+	{ 54, 9375, 8250, 0x49, },
+	{ 55, 9250, 8125, 0x48, },
+	{ 56, 9125, 8000, 0x47, },
+	{ 57, 9000, 7875, 0x46, },
+	{ 58, 8875, 7750, 0x45, },
+	{ 59, 8750, 7625, 0x44, },
+	{ 60, 8625, 7500, 0x43, },
+	{ 61, 8500, 7375, 0x42, },
+	{ 62, 8375, 7250, 0x41, },
+	{ 63, 8250, 7125, 0x40, },
+	{ 64, 8125, 7000, 0x3f, },
+	{ 65, 8000, 6875, 0x3e, },
+	{ 66, 7875, 6750, 0x3d, },
+	{ 67, 7750, 6625, 0x3c, },
+	{ 68, 7625, 6500, 0x3b, },
+	{ 69, 7500, 6375, 0x3a, },
+	{ 70, 7375, 6250, 0x39, },
+	{ 71, 7250, 6125, 0x38, },
+	{ 72, 7125, 6000, 0x37, },
+	{ 73, 7000, 5875, 0x36, },
+	{ 74, 6875, 5750, 0x35, },
+	{ 75, 6750, 5625, 0x34, },
+	{ 76, 6625, 5500, 0x33, },
+	{ 77, 6500, 5375, 0x32, },
+	{ 78, 6375, 5250, 0x31, },
+	{ 79, 6250, 5125, 0x30, },
+	{ 80, 6125, 5000, 0x2f, },
+	{ 81, 6000, 4875, 0x2e, },
+	{ 82, 5875, 4750, 0x2d, },
+	{ 83, 5750, 4625, 0x2c, },
+	{ 84, 5625, 4500, 0x2b, },
+	{ 85, 5500, 4375, 0x2a, },
+	{ 86, 5375, 4250, 0x29, },
+	{ 87, 5250, 4125, 0x28, },
+	{ 88, 5125, 4000, 0x27, },
+	{ 89, 5000, 3875, 0x26, },
+	{ 90, 4875, 3750, 0x25, },
+	{ 91, 4750, 3625, 0x24, },
+	{ 92, 4625, 3500, 0x23, },
+	{ 93, 4500, 3375, 0x22, },
+	{ 94, 4375, 3250, 0x21, },
+	{ 95, 4250, 3125, 0x20, },
+	{ 96, 4125, 3000, 0x1f, },
+	{ 97, 4125, 3000, 0x1e, },
+	{ 98, 4125, 3000, 0x1d, },
+	{ 99, 4125, 3000, 0x1c, },
+	{ 100, 4125, 3000, 0x1b, },
+	{ 101, 4125, 3000, 0x1a, },
+	{ 102, 4125, 3000, 0x19, },
+	{ 103, 4125, 3000, 0x18, },
+	{ 104, 4125, 3000, 0x17, },
+	{ 105, 4125, 3000, 0x16, },
+	{ 106, 4125, 3000, 0x15, },
+	{ 107, 4125, 3000, 0x14, },
+	{ 108, 4125, 3000, 0x13, },
+	{ 109, 4125, 3000, 0x12, },
+	{ 110, 4125, 3000, 0x11, },
+	{ 111, 4125, 3000, 0x10, },
+	{ 112, 4125, 3000, 0x0f, },
+	{ 113, 4125, 3000, 0x0e, },
+	{ 114, 4125, 3000, 0x0d, },
+	{ 115, 4125, 3000, 0x0c, },
+	{ 116, 4125, 3000, 0x0b, },
+	{ 117, 4125, 3000, 0x0a, },
+	{ 118, 4125, 3000, 0x09, },
+	{ 119, 4125, 3000, 0x08, },
+	{ 120, 1125, 0, 0x07, },
+	{ 121, 1000, 0, 0x06, },
+	{ 122, 875, 0, 0x05, },
+	{ 123, 750, 0, 0x04, },
+	{ 124, 625, 0, 0x03, },
+	{ 125, 500, 0, 0x02, },
+	{ 126, 375, 0, 0x01, },
+	{ 127, 0, 0, 0x00, },
+};
+
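+/*
+ * Empirically derived chipset power coefficients, selected by the
+ * (c_m, r_t) pair computed in i915_ironlake_get_mem_freq().
+ */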
+struct cparams {
+	int i;
+	int t;
+	int m;
+	int c;
+};
+
+static struct cparams cparams[] = {
+	{ 1, 1333, 301, 28664 },
+	{ 1, 1066, 294, 24460 },
+	{ 1, 800, 294, 25192 },
+	{ 0, 1333, 276, 27605 },
+	{ 0, 1066, 276, 27605 },
+	{ 0, 800, 231, 23784 },
+};
+
+unsigned long i915_chipset_val(struct drm_i915_private *dev_priv)
+{
+	u64 total_count, diff, ret;
+	u32 count1, count2, count3, m = 0, c = 0;
+	unsigned long now = jiffies_to_msecs(jiffies), diff1;
+	int i;
+
+	diff1 = now - dev_priv->last_time1;
+
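+	/* Gather the three interface event counters (DMI, DDR, CSI). */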
+	count1 = I915_READ(DMIEC);
+	count2 = I915_READ(DDREC);
+	count3 = I915_READ(CSIEC);
+
+	total_count = count1 + count2 + count3;
+
+	/* FIXME: handle per-counter overflow */
+	if (total_count < dev_priv->last_count1) {
+		diff = ~0UL - dev_priv->last_count1;
+		diff += total_count;
+	} else {
+		diff = total_count - dev_priv->last_count1;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(cparams); i++) {
+		if (cparams[i].i == dev_priv->c_m &&
+		    cparams[i].t == dev_priv->r_t) {
+			m = cparams[i].m;
+			c = cparams[i].c;
+			break;
+		}
+	}
+
+	diff = div_u64(diff, diff1);
+	ret = ((m * diff) + c);
+	ret = div_u64(ret, 10);
+
+	dev_priv->last_count1 = total_count;
+	dev_priv->last_time1 = now;
+
+	return ret;
+}
+
+unsigned long i915_mch_val(struct drm_i915_private *dev_priv)
+{
+	unsigned long m, x, b;
+	u32 tsfs;
+
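+	/*
+	 * TSFS packs the slope (m) and intercept (b) of a linear fit over
+	 * the TR1 thermal reading.
+	 */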
+	tsfs = I915_READ(TSFS);
+
+	m = ((tsfs & TSFS_SLOPE_MASK) >> TSFS_SLOPE_SHIFT);
+	x = I915_READ8(TR1);
+
+	b = tsfs & TSFS_INTR_MASK;
+
+	return ((m * x) / 127) - b;
 }
 
+static unsigned long pvid_to_extvid(struct drm_i915_private *dev_priv, u8 pxvid)
+{
+	unsigned long val = 0;
+	int i;
+
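+	/* Linear scan for the matching PVID; mobile parts use vm, others vd. */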
+	for (i = 0; i < ARRAY_SIZE(v_table); i++) {
+		if (v_table[i].pvid == pxvid) {
+			if (IS_MOBILE(dev_priv->dev))
+				val = v_table[i].vm;
+			else
+				val = v_table[i].vd;
+		}
+	}
+
+	return val;
+}
+
+void i915_update_gfx_val(struct drm_i915_private *dev_priv)
+{
+	struct timespec now, diff1;
+	u64 diff;
+	unsigned long diffms;
+	u32 count;
+
+	getrawmonotonic(&now);
+	diff1 = timespec_sub(now, dev_priv->last_time2);
+
+	/* Don't divide by 0 */
+	diffms = diff1.tv_sec * 1000 + diff1.tv_nsec / 1000000;
+	if (!diffms)
+		return;
+
+	count = I915_READ(GFXEC);
+
+	if (count < dev_priv->last_count2) {
+		diff = ~0UL - dev_priv->last_count2;
+		diff += count;
+	} else {
+		diff = count - dev_priv->last_count2;
+	}
+
+	dev_priv->last_count2 = count;
+	dev_priv->last_time2 = now;
+
+	/* More magic constants... */
+	diff = diff * 1181;
+	diff = div_u64(diff, diffms * 10);
+	dev_priv->gfx_power = diff;
+}
+
+unsigned long i915_gfx_val(struct drm_i915_private *dev_priv)
+{
+	unsigned long t, corr, state1, corr2, state2;
+	u32 pxvid, ext_v;
+
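+	/* Fetch the VID for the current P-state from the PXVFREQ registers. */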
+	pxvid = I915_READ(PXVFREQ_BASE + (dev_priv->cur_delay * 4));
+	pxvid = (pxvid >> 24) & 0x7f;
+	ext_v = pvid_to_extvid(dev_priv, pxvid);
+
+	state1 = ext_v;
+
+	t = i915_mch_val(dev_priv);
+
+	/* Revel in the empirically derived constants */
+
+	/* Correction factor in 1/100000 units */
+	if (t > 80)
+		corr = ((t * 2349) + 135940);
+	else if (t >= 50)
+		corr = ((t * 964) + 29317);
+	else /* < 50 */
+		corr = ((t * 301) + 1004);
+
+	corr = corr * ((150142 * state1) / 10000 - 78642);
+	corr /= 100000;
+	corr2 = (corr * dev_priv->corr);
+
+	state2 = (corr2 * state1) / 10000;
+	state2 /= 100; /* convert to mW */
+
+	i915_update_gfx_val(dev_priv);
+
+	return dev_priv->gfx_power + state2;
+}
+
+/* Global for IPS driver to get at the current i915 device */
+static struct drm_i915_private *i915_mch_dev;
+/*
+ * Lock protecting IPS related data structures
+ *   - i915_mch_dev
+ *   - dev_priv->max_delay
+ *   - dev_priv->min_delay
+ *   - dev_priv->fmax
+ *   - dev_priv->gpu_busy
+ */
+DEFINE_SPINLOCK(mchdev_lock);
+
+/**
+ * i915_read_mch_val - return value for IPS use
+ *
+ * Calculate and return a value for the IPS driver to use when deciding whether
+ * we have thermal and power headroom to increase CPU or GPU power budget.
+ */
+unsigned long i915_read_mch_val(void)
+{
+	struct drm_i915_private *dev_priv;
+	unsigned long chipset_val, graphics_val, ret = 0;
+
+	spin_lock(&mchdev_lock);
+	if (!i915_mch_dev)
+		goto out_unlock;
+	dev_priv = i915_mch_dev;
+
+	chipset_val = i915_chipset_val(dev_priv);
+	graphics_val = i915_gfx_val(dev_priv);
+
+	ret = chipset_val + graphics_val;
+
+out_unlock:
+	spin_unlock(&mchdev_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(i915_read_mch_val);
+
+/**
+ * i915_gpu_raise - raise GPU frequency limit
+ *
+ * Raise the limit; IPS indicates we have thermal headroom.
+ */
+bool i915_gpu_raise(void)
+{
+	struct drm_i915_private *dev_priv;
+	bool ret = true;
+
+	spin_lock(&mchdev_lock);
+	if (!i915_mch_dev) {
+		ret = false;
+		goto out_unlock;
+	}
+	dev_priv = i915_mch_dev;
+
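+	/*
+	 * Delay values are inverse to frequency: stepping max_delay down
+	 * toward fmax raises the frequency cap.
+	 */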
+	if (dev_priv->max_delay > dev_priv->fmax)
+		dev_priv->max_delay--;
+
+out_unlock:
+	spin_unlock(&mchdev_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(i915_gpu_raise);
+
+/**
+ * i915_gpu_lower - lower GPU frequency limit
+ *
+ * IPS indicates we're close to a thermal limit, so throttle back the GPU
+ * frequency maximum.
+ */
+bool i915_gpu_lower(void)
+{
+	struct drm_i915_private *dev_priv;
+	bool ret = true;
+
+	spin_lock(&mchdev_lock);
+	if (!i915_mch_dev) {
+		ret = false;
+		goto out_unlock;
+	}
+	dev_priv = i915_mch_dev;
+
+	if (dev_priv->max_delay < dev_priv->min_delay)
+		dev_priv->max_delay++;
+
+out_unlock:
+	spin_unlock(&mchdev_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(i915_gpu_lower);
+
+/**
+ * i915_gpu_busy - indicate GPU busyness to IPS
+ *
+ * Tell the IPS driver whether or not the GPU is busy.
+ */
+bool i915_gpu_busy(void)
+{
+	struct drm_i915_private *dev_priv;
+	bool ret = false;
+
+	spin_lock(&mchdev_lock);
+	if (!i915_mch_dev)
+		goto out_unlock;
+	dev_priv = i915_mch_dev;
+
+	ret = dev_priv->busy;
+
+out_unlock:
+	spin_unlock(&mchdev_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(i915_gpu_busy);
+
+/**
+ * i915_gpu_turbo_disable - disable graphics turbo
+ *
+ * Disable graphics turbo by resetting the max frequency and setting the
+ * current frequency to the default.
+ */
+bool i915_gpu_turbo_disable(void)
+{
+	struct drm_i915_private *dev_priv;
+	bool ret = true;
+
+	spin_lock(&mchdev_lock);
+	if (!i915_mch_dev) {
+		ret = false;
+		goto out_unlock;
+	}
+	dev_priv = i915_mch_dev;
+
+	dev_priv->max_delay = dev_priv->fstart;
+
+	if (!ironlake_set_drps(dev_priv->dev, dev_priv->fstart))
+		ret = false;
+
+out_unlock:
+	spin_unlock(&mchdev_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(i915_gpu_turbo_disable);
+
 /**
  * i915_driver_load - setup chip and create an initial config
  * @dev: DRM device
@@ -1591,11 +2036,10 @@ static void i915_get_mem_freq(struct drm_device *dev)
  */
 int i915_driver_load(struct drm_device *dev, unsigned long flags)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct drm_i915_private *dev_priv;
 	resource_size_t base, size;
 	int ret = 0, mmio_bar;
 	uint32_t agp_size, prealloc_size, prealloc_start;
-
 	/* i915 has 4 more counters */
 	dev->counters += 4;
 	dev->types[6] = _DRM_STAT_IRQ;
@@ -1673,6 +2117,13 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
 		dev_priv->has_gem = 0;
 	}
 
+	if (dev_priv->has_gem == 0 &&
+	    drm_core_check_feature(dev, DRIVER_MODESET)) {
+		DRM_ERROR("kernel modesetting requires GEM, disabling driver.\n");
+		ret = -ENODEV;
+		goto out_iomapfree;
+	}
+
 	dev->driver->get_vblank_counter = i915_get_vblank_counter;
 	dev->max_vblank_count = 0xffffff; /* only 24 bits of frame count */
 	if (IS_G4X(dev) || IS_IRONLAKE(dev) || IS_GEN6(dev)) {
@@ -1692,7 +2143,10 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
 			goto out_workqueue_free;
 	}
 
-	i915_get_mem_freq(dev);
+	if (IS_PINEVIEW(dev))
+		i915_pineview_get_mem_freq(dev);
+	else if (IS_IRONLAKE(dev))
+		i915_ironlake_get_mem_freq(dev);
 
 	/* On the 945G/GM, the chipset reports the MSI capability on the
 	 * integrated graphics even though the support isn't actually there
@@ -1710,7 +2164,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
 
 	spin_lock_init(&dev_priv->user_irq_lock);
 	spin_lock_init(&dev_priv->error_lock);
-	dev_priv->user_irq_refcount = 0;
 	dev_priv->trace_irq_seqno = 0;
 
 	ret = drm_vblank_init(dev, I915_NUM_PIPE);
@@ -1723,6 +2176,8 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
 	/* Start out suspended */
 	dev_priv->mm.suspended = 1;
 
+	intel_detect_pch(dev);
+
 	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
 		ret = i915_load_modeset_init(dev, prealloc_start,
 					     prealloc_size, agp_size);
@@ -1737,6 +2192,12 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
 
 	setup_timer(&dev_priv->hangcheck_timer, i915_hangcheck_elapsed,
 		    (unsigned long) dev);
+
+	spin_lock(&mchdev_lock);
+	i915_mch_dev = dev_priv;
+	dev_priv->mchdev_lock = &mchdev_lock;
+	spin_unlock(&mchdev_lock);
+
 	return 0;
 
 out_workqueue_free:
@@ -1758,6 +2219,10 @@ int i915_driver_unload(struct drm_device *dev)
 
 	i915_destroy_error_state(dev);
 
+	spin_lock(&mchdev_lock);
+	i915_mch_dev = NULL;
+	spin_unlock(&mchdev_lock);
+
 	destroy_workqueue(dev_priv->wq);
 	del_timer_sync(&dev_priv->hangcheck_timer);
 
@@ -1769,6 +2234,8 @@ int i915_driver_unload(struct drm_device *dev)
 	}
 
 	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		intel_modeset_cleanup(dev);
+
 		/*
 		 * free the memory space allocated for the child device
 		 * config parsed from VBT
@@ -1792,8 +2259,6 @@ int i915_driver_unload(struct drm_device *dev)
 	intel_opregion_free(dev, 0);
 
 	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
-		intel_modeset_cleanup(dev);
-
 		i915_gem_free_all_phys_object(dev);
 
 		mutex_lock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index cc03537..423dc90 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -60,95 +60,95 @@ extern int intel_agp_enabled;
 	.subdevice = PCI_ANY_ID,		\
 	.driver_data = (unsigned long) info }
 
-const static struct intel_device_info intel_i830_info = {
+static const struct intel_device_info intel_i830_info = {
 	.is_i8xx = 1, .is_mobile = 1, .cursor_needs_physical = 1,
 };
 
-const static struct intel_device_info intel_845g_info = {
+static const struct intel_device_info intel_845g_info = {
 	.is_i8xx = 1,
 };
 
-const static struct intel_device_info intel_i85x_info = {
+static const struct intel_device_info intel_i85x_info = {
 	.is_i8xx = 1, .is_i85x = 1, .is_mobile = 1,
 	.cursor_needs_physical = 1,
 };
 
-const static struct intel_device_info intel_i865g_info = {
+static const struct intel_device_info intel_i865g_info = {
 	.is_i8xx = 1,
 };
 
-const static struct intel_device_info intel_i915g_info = {
+static const struct intel_device_info intel_i915g_info = {
 	.is_i915g = 1, .is_i9xx = 1, .cursor_needs_physical = 1,
 };
-const static struct intel_device_info intel_i915gm_info = {
+static const struct intel_device_info intel_i915gm_info = {
 	.is_i9xx = 1,  .is_mobile = 1,
 	.cursor_needs_physical = 1,
 };
-const static struct intel_device_info intel_i945g_info = {
+static const struct intel_device_info intel_i945g_info = {
 	.is_i9xx = 1, .has_hotplug = 1, .cursor_needs_physical = 1,
 };
-const static struct intel_device_info intel_i945gm_info = {
+static const struct intel_device_info intel_i945gm_info = {
 	.is_i945gm = 1, .is_i9xx = 1, .is_mobile = 1,
 	.has_hotplug = 1, .cursor_needs_physical = 1,
 };
 
-const static struct intel_device_info intel_i965g_info = {
+static const struct intel_device_info intel_i965g_info = {
 	.is_i965g = 1, .is_i9xx = 1, .has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_i965gm_info = {
+static const struct intel_device_info intel_i965gm_info = {
 	.is_i965g = 1, .is_mobile = 1, .is_i965gm = 1, .is_i9xx = 1,
 	.is_mobile = 1, .has_fbc = 1, .has_rc6 = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_g33_info = {
+static const struct intel_device_info intel_g33_info = {
 	.is_g33 = 1, .is_i9xx = 1, .need_gfx_hws = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_g45_info = {
+static const struct intel_device_info intel_g45_info = {
 	.is_i965g = 1, .is_g4x = 1, .is_i9xx = 1, .need_gfx_hws = 1,
 	.has_pipe_cxsr = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_gm45_info = {
+static const struct intel_device_info intel_gm45_info = {
 	.is_i965g = 1, .is_mobile = 1, .is_g4x = 1, .is_i9xx = 1,
 	.is_mobile = 1, .need_gfx_hws = 1, .has_fbc = 1, .has_rc6 = 1,
 	.has_pipe_cxsr = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_pineview_info = {
+static const struct intel_device_info intel_pineview_info = {
 	.is_g33 = 1, .is_pineview = 1, .is_mobile = 1, .is_i9xx = 1,
 	.need_gfx_hws = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_ironlake_d_info = {
+static const struct intel_device_info intel_ironlake_d_info = {
 	.is_ironlake = 1, .is_i965g = 1, .is_i9xx = 1, .need_gfx_hws = 1,
 	.has_pipe_cxsr = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_ironlake_m_info = {
+static const struct intel_device_info intel_ironlake_m_info = {
 	.is_ironlake = 1, .is_mobile = 1, .is_i965g = 1, .is_i9xx = 1,
 	.need_gfx_hws = 1, .has_rc6 = 1,
 	.has_hotplug = 1,
 };
 
-const static struct intel_device_info intel_sandybridge_d_info = {
+static const struct intel_device_info intel_sandybridge_d_info = {
 	.is_i965g = 1, .is_i9xx = 1, .need_gfx_hws = 1,
 	.has_hotplug = 1, .is_gen6 = 1,
 };
 
-const static struct intel_device_info intel_sandybridge_m_info = {
+static const struct intel_device_info intel_sandybridge_m_info = {
 	.is_i965g = 1, .is_mobile = 1, .is_i9xx = 1, .need_gfx_hws = 1,
 	.has_hotplug = 1, .is_gen6 = 1,
 };
 
-const static struct pci_device_id pciidlist[] = {
+static const struct pci_device_id pciidlist[] = {
 	INTEL_VGA_DEVICE(0x3577, &intel_i830_info),
 	INTEL_VGA_DEVICE(0x2562, &intel_845g_info),
 	INTEL_VGA_DEVICE(0x3582, &intel_i85x_info),
@@ -188,6 +188,35 @@ const static struct pci_device_id pciidlist[] = {
 MODULE_DEVICE_TABLE(pci, pciidlist);
 #endif
 
+#define INTEL_PCH_DEVICE_ID_MASK	0xff00
+#define INTEL_PCH_CPT_DEVICE_ID_TYPE	0x1c00
+
+void intel_detect_pch(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct pci_dev *pch;
+
+	/*
+	 * We probe the ISA bridge instead of Dev31:Fun0 so that graphics
+	 * device passthrough works easily for VMMs: they only need to
+	 * expose the ISA bridge to let the driver know the real hardware
+	 * underneath. This is a requirement from the virtualization team.
+	 */
+	pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
+	if (pch) {
+		if (pch->vendor == PCI_VENDOR_ID_INTEL) {
+			int id;
+			id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
+
+			if (id == INTEL_PCH_CPT_DEVICE_ID_TYPE) {
+				dev_priv->pch_type = PCH_CPT;
+				DRM_DEBUG_KMS("Found CougarPoint PCH\n");
+			}
+		}
+		pci_dev_put(pch);
+	}
+}
+
 static int i915_drm_freeze(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
@@ -311,7 +340,7 @@ int i965_reset(struct drm_device *dev, u8 flags)
 	/*
 	 * Clear request list
 	 */
-	i915_gem_retire_requests(dev);
+	i915_gem_retire_requests(dev, &dev_priv->render_ring);
 
 	if (need_display)
 		i915_save_display(dev);
@@ -341,6 +370,7 @@ int i965_reset(struct drm_device *dev, u8 flags)
 		}
 	} else {
 		DRM_ERROR("Error occurred. Don't know how to reset this chip.\n");
+		mutex_unlock(&dev->struct_mutex);
 		return -ENODEV;
 	}
 
@@ -359,33 +389,10 @@ int i965_reset(struct drm_device *dev, u8 flags)
 	 * switched away).
 	 */
 	if (drm_core_check_feature(dev, DRIVER_MODESET) ||
-	    !dev_priv->mm.suspended) {
-		drm_i915_ring_buffer_t *ring = &dev_priv->ring;
-		struct drm_gem_object *obj = ring->ring_obj;
-		struct drm_i915_gem_object *obj_priv = to_intel_bo(obj);
+			!dev_priv->mm.suspended) {
+		struct intel_ring_buffer *ring = &dev_priv->render_ring;
 		dev_priv->mm.suspended = 0;
-
-		/* Stop the ring if it's running. */
-		I915_WRITE(PRB0_CTL, 0);
-		I915_WRITE(PRB0_TAIL, 0);
-		I915_WRITE(PRB0_HEAD, 0);
-
-		/* Initialize the ring. */
-		I915_WRITE(PRB0_START, obj_priv->gtt_offset);
-		I915_WRITE(PRB0_CTL,
-			   ((obj->size - 4096) & RING_NR_PAGES) |
-			   RING_NO_REPORT |
-			   RING_VALID);
-		if (!drm_core_check_feature(dev, DRIVER_MODESET))
-			i915_kernel_lost_context(dev);
-		else {
-			ring->head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-			ring->tail = I915_READ(PRB0_TAIL) & TAIL_ADDR;
-			ring->space = ring->head - (ring->tail + 8);
-			if (ring->space < 0)
-				ring->space += ring->Size;
-		}
-
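+		/* Reinitialize through the ring abstraction instead of poking PRB0 directly. */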
+		ring->init(dev, ring);
 		mutex_unlock(&dev->struct_mutex);
 		drm_irq_uninstall(dev);
 		drm_irq_install(dev);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 6e47900..2765831 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -32,6 +32,7 @@
 
 #include "i915_reg.h"
 #include "intel_bios.h"
+#include "intel_ringbuffer.h"
 #include <linux/io-mapping.h>
 
 /* General customization:
@@ -55,6 +56,8 @@ enum plane {
 
 #define I915_NUM_PIPE	2
 
+#define I915_GEM_GPU_DOMAINS	(~(I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT))
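+/* i.e. all GPU-side domains: everything except the CPU and GTT domains. */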
+
 /* Interface history:
  *
  * 1.1: Original.
@@ -89,16 +92,6 @@ struct drm_i915_gem_phys_object {
 	struct drm_gem_object *cur_obj;
 };
 
-typedef struct _drm_i915_ring_buffer {
-	unsigned long Size;
-	u8 *virtual_start;
-	int head;
-	int tail;
-	int space;
-	drm_local_map_t map;
-	struct drm_gem_object *ring_obj;
-} drm_i915_ring_buffer_t;
-
 struct mem_block {
 	struct mem_block *next;
 	struct mem_block *prev;
@@ -128,6 +121,7 @@ struct drm_i915_master_private {
 
 struct drm_i915_fence_reg {
 	struct drm_gem_object *obj;
+	struct list_head lru_list;
 };
 
 struct sdvo_device_mapping {
@@ -135,6 +129,7 @@ struct sdvo_device_mapping {
 	u8 slave_addr;
 	u8 dvo_wiring;
 	u8 initialized;
+	u8 ddc_pin;
 };
 
 struct drm_i915_error_state {
@@ -175,7 +170,7 @@ struct drm_i915_error_state {
 
 struct drm_i915_display_funcs {
 	void (*dpms)(struct drm_crtc *crtc, int mode);
-	bool (*fbc_enabled)(struct drm_crtc *crtc);
+	bool (*fbc_enabled)(struct drm_device *dev);
 	void (*enable_fbc)(struct drm_crtc *crtc, unsigned long interval);
 	void (*disable_fbc)(struct drm_device *dev);
 	int (*get_display_clock_speed)(struct drm_device *dev);
@@ -222,6 +217,13 @@ enum no_fbc_reason {
 	FBC_NOT_TILED, /* buffer not tiled */
 };
 
+enum intel_pch {
+	PCH_IBX,	/* Ibexpeak PCH */
+	PCH_CPT,	/* Cougarpoint PCH */
+};
+
+struct intel_fbdev;
+
 typedef struct drm_i915_private {
 	struct drm_device *dev;
 
@@ -232,17 +234,15 @@ typedef struct drm_i915_private {
 	void __iomem *regs;
 
 	struct pci_dev *bridge_dev;
-	drm_i915_ring_buffer_t ring;
+	struct intel_ring_buffer render_ring;
+	struct intel_ring_buffer bsd_ring;
 
 	drm_dma_handle_t *status_page_dmah;
-	void *hw_status_page;
 	void *seqno_page;
 	dma_addr_t dma_status_page;
 	uint32_t counter;
-	unsigned int status_gfx_addr;
 	unsigned int seqno_gfx_addr;
 	drm_local_map_t hws_map;
-	struct drm_gem_object *hws_obj;
 	struct drm_gem_object *seqno_obj;
 	struct drm_gem_object *pwrctx;
 
@@ -258,8 +258,6 @@ typedef struct drm_i915_private {
 	atomic_t irq_received;
 	/** Protects user_irq_refcount and irq_mask_reg */
 	spinlock_t user_irq_lock;
-	/** Refcount for i915_user_irq_get() versus i915_user_irq_put(). */
-	int user_irq_refcount;
 	u32 trace_irq_seqno;
 	/** Cached value of IMR to avoid reads in updating the bitfield */
 	u32 irq_mask_reg;
@@ -280,6 +278,7 @@ typedef struct drm_i915_private {
 	struct mem_block *agp_heap;
 	unsigned int sr01, adpa, ppcr, dvob, dvoc, lvds;
 	int vblank_pipe;
+	int num_pipe;
 
 	/* For hangcheck timer */
 #define DRM_I915_HANGCHECK_PERIOD 75 /* in jiffies */
@@ -325,7 +324,7 @@ typedef struct drm_i915_private {
 	int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
 	int num_fence_regs; /* 8 on pre-965, 16 otherwise */
 
-	unsigned int fsb_freq, mem_freq;
+	unsigned int fsb_freq, mem_freq, is_ddr3;
 
 	spinlock_t error_lock;
 	struct drm_i915_error_state *first_error;
@@ -335,6 +334,9 @@ typedef struct drm_i915_private {
 	/* Display functions */
 	struct drm_i915_display_funcs display;
 
+	/* PCH chipset type */
+	enum intel_pch pch_type;
+
 	/* Register state */
 	bool modeset_on_lid;
 	u8 saveLBB;
@@ -502,18 +504,7 @@ typedef struct drm_i915_private {
 		 */
 		struct list_head shrink_list;
 
-		/**
-		 * List of objects currently involved in rendering from the
-		 * ringbuffer.
-		 *
-		 * Includes buffers having the contents of their GPU caches
-		 * flushed, not necessarily primitives.  last_rendering_seqno
-		 * represents when the rendering involved will be completed.
-		 *
-		 * A reference is held on the buffer while on this list.
-		 */
 		spinlock_t active_list_lock;
-		struct list_head active_list;
 
 		/**
 		 * List of objects which are not in the ringbuffer but which
@@ -551,12 +542,6 @@ typedef struct drm_i915_private {
 		struct list_head fence_list;
 
 		/**
-		 * List of breadcrumbs associated with GPU requests currently
-		 * outstanding.
-		 */
-		struct list_head request_list;
-
-		/**
 		 * We leave the user IRQ off as much as possible,
 		 * but this means that requests will finish and never
 		 * be retired once the system goes idle. Set a timer to
@@ -632,16 +617,31 @@ typedef struct drm_i915_private {
 	u8 cur_delay;
 	u8 min_delay;
 	u8 max_delay;
+	u8 fmax;
+	u8 fstart;
+
+	u64 last_count1;
+	unsigned long last_time1;
+	u64 last_count2;
+	struct timespec last_time2;
+	unsigned long gfx_power;
+	int c_m;
+	int r_t;
+	u8 corr;
+	spinlock_t *mchdev_lock;
 
 	enum no_fbc_reason no_fbc_reason;
 
 	struct drm_mm_node *compressed_fb;
 	struct drm_mm_node *compressed_llb;
+
+	/* list of fbdev register on this device */
+	struct intel_fbdev *fbdev;
 } drm_i915_private_t;
 
 /** driver private structure attached to each drm_gem_object */
 struct drm_i915_gem_object {
-	struct drm_gem_object *obj;
+	struct drm_gem_object base;
 
 	/** Current space allocated to this object in the GTT, if any. */
 	struct drm_mm_node *gtt_space;
@@ -651,27 +651,69 @@ struct drm_i915_gem_object {
 	/** This object's place on GPU write list */
 	struct list_head gpu_write_list;
 
-	/** This object's place on the fenced object LRU */
-	struct list_head fence_list;
-
 	/**
 	 * This is set if the object is on the active or flushing lists
 	 * (has pending rendering), and is not set if it's on inactive (ready
 	 * to be unbound).
 	 */
-	int active;
+	unsigned int active : 1;
 
 	/**
 	 * This is set if the object has been written to since last bound
 	 * to the GTT
 	 */
-	int dirty;
+	unsigned int dirty : 1;
+
+	/**
+	 * Fence register bits (if any) for this object.  Will be set
+	 * as needed when mapped into the GTT.
+	 * Protected by dev->struct_mutex.
+	 *
+	 * Size: 4 bits for 16 fences + sign (for FENCE_REG_NONE)
+	 */
+	int fence_reg : 5;
+
+	/**
+	 * Used for checking the object doesn't appear more than once
+	 * in an execbuffer object list.
+	 */
+	unsigned int in_execbuffer : 1;
+
+	/**
+	 * Advice: are the backing pages purgeable?
+	 */
+	unsigned int madv : 2;
+
+	/**
+	 * Refcount for the pages array. With the current locking scheme, there
+	 * are at most two concurrent users: Binding a bo to the gtt and
+	 * pwrite/pread using physical addresses. So two bits for a maximum
+	 * of two users are enough.
+	 */
+	unsigned int pages_refcount : 2;
+#define DRM_I915_GEM_OBJECT_MAX_PAGES_REFCOUNT 0x3
+
+	/**
+	 * Current tiling mode for the object.
+	 */
+	unsigned int tiling_mode : 2;
+
+	/** How many users have pinned this object in GTT space. The following
+	 * users can each hold at most one reference: pwrite/pread, pin_ioctl
+	 * (via user_pin_count), execbuffer (objects are not allowed multiple
+	 * times for the same batchbuffer), and the framebuffer code. When
+	 * switching/pageflipping, the framebuffer code has at most two buffers
+	 * pinned per crtc.
+	 *
+	 * In the worst case this is 1 + 1 + 1 + 2*2 = 7. That would fit into 3
+	 * bits with absolutely no headroom. So use 4 bits. */
+	int pin_count : 4;
+#define DRM_I915_GEM_OBJECT_MAX_PIN_COUNT 0xf
 
 	/** AGP memory structure for our GTT binding. */
 	DRM_AGP_MEM *agp_mem;
 
 	struct page **pages;
-	int pages_refcount;
 
 	/**
 	 * Current offset of the object in GTT space.
@@ -680,26 +722,18 @@ struct drm_i915_gem_object {
 	 */
 	uint32_t gtt_offset;
 
+	/* Which ring this object is associated with */
+	struct intel_ring_buffer *ring;
+
 	/**
 	 * Fake offset for use by mmap(2)
 	 */
 	uint64_t mmap_offset;
 
-	/**
-	 * Fence register bits (if any) for this object.  Will be set
-	 * as needed when mapped into the GTT.
-	 * Protected by dev->struct_mutex.
-	 */
-	int fence_reg;
-
-	/** How many users have pinned this object in GTT space */
-	int pin_count;
-
 	/** Breadcrumb of last rendering to the buffer. */
 	uint32_t last_rendering_seqno;
 
-	/** Current tiling mode for the object. */
-	uint32_t tiling_mode;
+	/** Current tiling stride for the object, if it's tiled. */
 	uint32_t stride;
 
 	/** Record of address bit 17 of each page at last unbind. */
@@ -722,17 +756,6 @@ struct drm_i915_gem_object {
 	struct drm_i915_gem_phys_object *phys_obj;
 
 	/**
-	 * Used for checking the object doesn't appear more than once
-	 * in an execbuffer object list.
-	 */
-	int in_execbuffer;
-
-	/**
-	 * Advice: are the backing pages purgeable?
-	 */
-	int madv;
-
-	/**
 	 * Number of crtcs where this object is currently the fb, but
 	 * will be page flipped away on the next vblank.  When it
 	 * reaches 0, dev_priv->pending_flip_queue will be woken up.
@@ -740,7 +763,7 @@ struct drm_i915_gem_object {
 	atomic_t pending_flip;
 };
 
-#define to_intel_bo(x) ((struct drm_i915_gem_object *) (x)->driver_private)
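+/* The GEM object is now embedded as "base", so convert via container_of(). */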
+#define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)
 
 /**
  * Request queue structure.
@@ -753,6 +776,9 @@ struct drm_i915_gem_object {
  * an emission time with seqnos for tracking how far ahead of the GPU we are.
  */
 struct drm_i915_gem_request {
+	/** Ring on which this request was generated */
+	struct intel_ring_buffer *ring;
+
 	/** GEM sequence number associated with this request. */
 	uint32_t seqno;
 
@@ -809,6 +835,11 @@ extern int i915_emit_box(struct drm_device *dev,
 			 struct drm_clip_rect *boxes,
 			 int i, int DR1, int DR4);
 extern int i965_reset(struct drm_device *dev, u8 flags);
+extern unsigned long i915_chipset_val(struct drm_i915_private *dev_priv);
+extern unsigned long i915_mch_val(struct drm_i915_private *dev_priv);
+extern unsigned long i915_gfx_val(struct drm_i915_private *dev_priv);
+extern void i915_update_gfx_val(struct drm_i915_private *dev_priv);
+
 
 /* i915_irq.c */
 void i915_hangcheck_elapsed(unsigned long data);
@@ -817,9 +848,7 @@ extern int i915_irq_emit(struct drm_device *dev, void *data,
 			 struct drm_file *file_priv);
 extern int i915_irq_wait(struct drm_device *dev, void *data,
 			 struct drm_file *file_priv);
-void i915_user_irq_get(struct drm_device *dev);
 void i915_trace_irq_get(struct drm_device *dev, u32 seqno);
-void i915_user_irq_put(struct drm_device *dev);
 extern void i915_enable_interrupt (struct drm_device *dev);
 
 extern irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS);
@@ -837,6 +866,11 @@ extern u32 gm45_get_vblank_counter(struct drm_device *dev, int crtc);
 extern int i915_vblank_swap(struct drm_device *dev, void *data,
 			    struct drm_file *file_priv);
 extern void i915_enable_irq(drm_i915_private_t *dev_priv, u32 mask);
+extern void i915_disable_irq(drm_i915_private_t *dev_priv, u32 mask);
+extern void ironlake_enable_graphics_irq(drm_i915_private_t *dev_priv,
+		u32 mask);
+extern void ironlake_disable_graphics_irq(drm_i915_private_t *dev_priv,
+		u32 mask);
 
 void
 i915_enable_pipestat(drm_i915_private_t *dev_priv, int pipe, u32 mask);
@@ -902,17 +936,21 @@ int i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
 				struct drm_file *file_priv);
 void i915_gem_load(struct drm_device *dev);
 int i915_gem_init_object(struct drm_gem_object *obj);
+struct drm_gem_object *i915_gem_alloc_object(struct drm_device *dev,
+					     size_t size);
 void i915_gem_free_object(struct drm_gem_object *obj);
 int i915_gem_object_pin(struct drm_gem_object *obj, uint32_t alignment);
 void i915_gem_object_unpin(struct drm_gem_object *obj);
 int i915_gem_object_unbind(struct drm_gem_object *obj);
 void i915_gem_release_mmap(struct drm_gem_object *obj);
 void i915_gem_lastclose(struct drm_device *dev);
-uint32_t i915_get_gem_seqno(struct drm_device *dev);
+uint32_t i915_get_gem_seqno(struct drm_device *dev,
+		struct intel_ring_buffer *ring);
 bool i915_seqno_passed(uint32_t seq1, uint32_t seq2);
 int i915_gem_object_get_fence_reg(struct drm_gem_object *obj);
 int i915_gem_object_put_fence_reg(struct drm_gem_object *obj);
-void i915_gem_retire_requests(struct drm_device *dev);
+void i915_gem_retire_requests(struct drm_device *dev,
+		 struct intel_ring_buffer *ring);
 void i915_gem_retire_work_handler(struct work_struct *work);
 void i915_gem_clflush_object(struct drm_gem_object *obj);
 int i915_gem_object_set_domain(struct drm_gem_object *obj,
@@ -923,9 +961,13 @@ void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
 int i915_gem_do_init(struct drm_device *dev, unsigned long start,
 		     unsigned long end);
 int i915_gem_idle(struct drm_device *dev);
-uint32_t i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
-			  uint32_t flush_domains);
-int i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible);
+uint32_t i915_add_request(struct drm_device *dev,
+		struct drm_file *file_priv,
+		uint32_t flush_domains,
+		struct intel_ring_buffer *ring);
+int i915_do_wait_request(struct drm_device *dev,
+		uint32_t seqno, int interruptible,
+		struct intel_ring_buffer *ring);
 int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
 int i915_gem_object_set_to_gtt_domain(struct drm_gem_object *obj,
 				      int write);
@@ -998,6 +1040,12 @@ extern void intel_modeset_cleanup(struct drm_device *dev);
 extern int intel_modeset_vga_set_state(struct drm_device *dev, bool state);
 extern void i8xx_disable_fbc(struct drm_device *dev);
 extern void g4x_disable_fbc(struct drm_device *dev);
+extern void intel_disable_fbc(struct drm_device *dev);
+extern void intel_enable_fbc(struct drm_crtc *crtc, unsigned long interval);
+extern bool intel_fbc_enabled(struct drm_device *dev);
+extern bool ironlake_set_drps(struct drm_device *dev, u8 val);
+extern void intel_detect_pch(struct drm_device *dev);
+extern int intel_trans_dp_port_sel(struct drm_crtc *crtc);
 
 /**
  * Lock test for when it's just for synchronization of ring access.
@@ -1006,7 +1054,8 @@ extern void g4x_disable_fbc(struct drm_device *dev);
  * has access to the ring.
  */
 #define RING_LOCK_TEST_WITH_RETURN(dev, file_priv) do {			\
-	if (((drm_i915_private_t *)dev->dev_private)->ring.ring_obj == NULL) \
+	if (((drm_i915_private_t *)dev->dev_private)->render_ring.gem_object \
+			== NULL)					\
 		LOCK_TEST_WITH_RETURN(dev, file_priv);			\
 } while (0)
 
@@ -1019,35 +1068,31 @@ extern void g4x_disable_fbc(struct drm_device *dev);
 #define I915_WRITE64(reg, val)	writeq(val, dev_priv->regs + (reg))
 #define I915_READ64(reg)	readq(dev_priv->regs + (reg))
 #define POSTING_READ(reg)	(void)I915_READ(reg)
+#define POSTING_READ16(reg)	(void)I915_READ16(reg)
 
 #define I915_VERBOSE 0
 
-#define RING_LOCALS	volatile unsigned int *ring_virt__;
-
-#define BEGIN_LP_RING(n) do {						\
-	int bytes__ = 4*(n);						\
-	if (I915_VERBOSE) DRM_DEBUG("BEGIN_LP_RING(%d)\n", (n));	\
-	/* a wrap must occur between instructions so pad beforehand */	\
-	if (unlikely (dev_priv->ring.tail + bytes__ > dev_priv->ring.Size)) \
-		i915_wrap_ring(dev);					\
-	if (unlikely (dev_priv->ring.space < bytes__))			\
-		i915_wait_ring(dev, bytes__, __func__);			\
-	ring_virt__ = (unsigned int *)					\
-	        (dev_priv->ring.virtual_start + dev_priv->ring.tail);	\
-	dev_priv->ring.tail += bytes__;					\
-	dev_priv->ring.tail &= dev_priv->ring.Size - 1;			\
-	dev_priv->ring.space -= bytes__;				\
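+/* The legacy LP ring macros are now thin wrappers around intel_ring_buffer. */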
+#define BEGIN_LP_RING(n)  do { \
+	drm_i915_private_t *dev_priv = dev->dev_private;                \
+	if (I915_VERBOSE)						\
+		DRM_DEBUG("   BEGIN_LP_RING %x\n", (int)(n));		\
+	intel_ring_begin(dev, &dev_priv->render_ring, 4*(n));		\
 } while (0)
 
-#define OUT_RING(n) do {						\
-	if (I915_VERBOSE) DRM_DEBUG("   OUT_RING %x\n", (int)(n));	\
-	*ring_virt__++ = (n);						\
+
+#define OUT_RING(x) do {						\
+	drm_i915_private_t *dev_priv = dev->dev_private;		\
+	if (I915_VERBOSE)						\
+		DRM_DEBUG("   OUT_RING %x\n", (int)(x));		\
+	intel_ring_emit(dev, &dev_priv->render_ring, x);		\
 } while (0)
 
 #define ADVANCE_LP_RING() do {						\
+	drm_i915_private_t *dev_priv = dev->dev_private;                \
 	if (I915_VERBOSE)						\
-		DRM_DEBUG("ADVANCE_LP_RING %x\n", dev_priv->ring.tail);	\
-	I915_WRITE(PRB0_TAIL, dev_priv->ring.tail);			\
+		DRM_DEBUG("ADVANCE_LP_RING %x\n",			\
+				dev_priv->render_ring.tail);		\
+	intel_ring_advance(dev, &dev_priv->render_ring);		\
 } while(0)
 
 /**
@@ -1065,14 +1110,12 @@ extern void g4x_disable_fbc(struct drm_device *dev);
  *
  * The area from dword 0x20 to 0x3ff is available for driver usage.
  */
-#define READ_HWSP(dev_priv, reg)  (((volatile u32*)(dev_priv->hw_status_page))[reg])
+#define READ_HWSP(dev_priv, reg)  (((volatile u32 *)\
+			(dev_priv->render_ring.status_page.page_addr))[reg])
 #define READ_BREADCRUMB(dev_priv) READ_HWSP(dev_priv, I915_BREADCRUMB_INDEX)
 #define I915_GEM_HWS_INDEX		0x20
 #define I915_BREADCRUMB_INDEX		0x21
 
-extern int i915_wrap_ring(struct drm_device * dev);
-extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller);
-
 #define INTEL_INFO(dev)	(((struct drm_i915_private *) (dev)->dev_private)->info)
 
 #define IS_I830(dev)		((dev)->pci_device == 0x3577)
@@ -1118,6 +1161,7 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller);
 			 (dev)->pci_device == 0x2A42 ||		\
 			 (dev)->pci_device == 0x2E42)
 
+#define HAS_BSD(dev)            (IS_IRONLAKE(dev) || IS_G4X(dev))
 #define I915_NEED_GFX_HWS(dev)	(INTEL_INFO(dev)->need_gfx_hws)
 
 /* With the 945 and later, Y tiling got adjusted so that it was 32 128-byte
@@ -1130,7 +1174,8 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller);
 #define SUPPORTS_INTEGRATED_DP(dev)	(IS_G4X(dev) || IS_IRONLAKE(dev))
 #define SUPPORTS_EDP(dev)		(IS_IRONLAKE_M(dev))
 #define SUPPORTS_TV(dev)		(IS_I9XX(dev) && IS_MOBILE(dev) && \
-					!IS_IRONLAKE(dev) && !IS_PINEVIEW(dev))
+					!IS_IRONLAKE(dev) && !IS_PINEVIEW(dev) && \
+					!IS_GEN6(dev))
 #define I915_HAS_HOTPLUG(dev)		 (INTEL_INFO(dev)->has_hotplug)
 /* dsparb controlled by hw only */
 #define DSPARB_HWCONTROL(dev) (IS_G4X(dev) || IS_IRONLAKE(dev))
@@ -1144,6 +1189,9 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller);
 			    IS_GEN6(dev))
 #define HAS_PIPE_CONTROL(dev) (IS_IRONLAKE(dev) || IS_GEN6(dev))
 
+#define INTEL_PCH_TYPE(dev) (((struct drm_i915_private *)(dev)->dev_private)->pch_type)
+#define HAS_PCH_CPT(dev) (INTEL_PCH_TYPE(dev) == PCH_CPT)
+
 #define PRIMARY_RINGBUFFER_SIZE         (128*1024)
 
 #endif
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ef3d91d..9ded3da 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -35,8 +35,6 @@
 #include <linux/swap.h>
 #include <linux/pci.h>
 
-#define I915_GEM_GPU_DOMAINS	(~(I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT))
-
 static void i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj);
 static void i915_gem_object_flush_gtt_write_domain(struct drm_gem_object *obj);
 static void i915_gem_object_flush_cpu_write_domain(struct drm_gem_object *obj);
@@ -124,7 +122,7 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data,
 	args->size = roundup(args->size, PAGE_SIZE);
 
 	/* Allocate the new object */
-	obj = drm_gem_object_alloc(dev, args->size);
+	obj = i915_gem_alloc_object(dev, args->size);
 	if (obj == NULL)
 		return -ENOMEM;
 
@@ -169,7 +167,7 @@ static int i915_gem_object_needs_bit17_swizzle(struct drm_gem_object *obj)
 		obj_priv->tiling_mode != I915_TILING_NONE;
 }
 
-static inline int
+static inline void
 slow_shmem_copy(struct page *dst_page,
 		int dst_offset,
 		struct page *src_page,
@@ -178,25 +176,16 @@ slow_shmem_copy(struct page *dst_page,
 {
 	char *dst_vaddr, *src_vaddr;
 
-	dst_vaddr = kmap_atomic(dst_page, KM_USER0);
-	if (dst_vaddr == NULL)
-		return -ENOMEM;
-
-	src_vaddr = kmap_atomic(src_page, KM_USER1);
-	if (src_vaddr == NULL) {
-		kunmap_atomic(dst_vaddr, KM_USER0);
-		return -ENOMEM;
-	}
+	dst_vaddr = kmap(dst_page);
+	src_vaddr = kmap(src_page);
 
 	memcpy(dst_vaddr + dst_offset, src_vaddr + src_offset, length);
 
-	kunmap_atomic(src_vaddr, KM_USER1);
-	kunmap_atomic(dst_vaddr, KM_USER0);
-
-	return 0;
+	kunmap(src_page);
+	kunmap(dst_page);
 }
 
-static inline int
+static inline void
 slow_shmem_bit17_copy(struct page *gpu_page,
 		      int gpu_offset,
 		      struct page *cpu_page,
@@ -216,15 +205,8 @@ slow_shmem_bit17_copy(struct page *gpu_page,
 					       cpu_page, cpu_offset, length);
 	}
 
-	gpu_vaddr = kmap_atomic(gpu_page, KM_USER0);
-	if (gpu_vaddr == NULL)
-		return -ENOMEM;
-
-	cpu_vaddr = kmap_atomic(cpu_page, KM_USER1);
-	if (cpu_vaddr == NULL) {
-		kunmap_atomic(gpu_vaddr, KM_USER0);
-		return -ENOMEM;
-	}
+	gpu_vaddr = kmap(gpu_page);
+	cpu_vaddr = kmap(cpu_page);
 
 	/* Copy the data, XORing A6 with A17 (1). The user already knows he's
 	 * XORing with the other bits (A9 for Y, A9 and A10 for X)
@@ -248,10 +230,8 @@ slow_shmem_bit17_copy(struct page *gpu_page,
 		length -= this_length;
 	}
 
-	kunmap_atomic(cpu_vaddr, KM_USER1);
-	kunmap_atomic(gpu_vaddr, KM_USER0);
-
-	return 0;
+	kunmap(cpu_page);
+	kunmap(gpu_page);
 }
 
 /**
@@ -427,21 +407,19 @@ i915_gem_shmem_pread_slow(struct drm_device *dev, struct drm_gem_object *obj,
 			page_length = PAGE_SIZE - data_page_offset;
 
 		if (do_bit17_swizzling) {
-			ret = slow_shmem_bit17_copy(obj_priv->pages[shmem_page_index],
-						    shmem_page_offset,
-						    user_pages[data_page_index],
-						    data_page_offset,
-						    page_length,
-						    1);
-		} else {
-			ret = slow_shmem_copy(user_pages[data_page_index],
-					      data_page_offset,
-					      obj_priv->pages[shmem_page_index],
+			slow_shmem_bit17_copy(obj_priv->pages[shmem_page_index],
 					      shmem_page_offset,
-					      page_length);
+					      user_pages[data_page_index],
+					      data_page_offset,
+					      page_length,
+					      1);
+		} else {
+			slow_shmem_copy(user_pages[data_page_index],
+					data_page_offset,
+					obj_priv->pages[shmem_page_index],
+					shmem_page_offset,
+					page_length);
 		}
-		if (ret)
-			goto fail_put_pages;
 
 		remain -= page_length;
 		data_ptr += page_length;
@@ -531,25 +509,24 @@ fast_user_write(struct io_mapping *mapping,
  * page faults
  */
 
-static inline int
+static inline void
 slow_kernel_write(struct io_mapping *mapping,
 		  loff_t gtt_base, int gtt_offset,
 		  struct page *user_page, int user_offset,
 		  int length)
 {
-	char *src_vaddr, *dst_vaddr;
-	unsigned long unwritten;
+	char __iomem *dst_vaddr;
+	char *src_vaddr;
 
-	dst_vaddr = io_mapping_map_atomic_wc(mapping, gtt_base);
-	src_vaddr = kmap_atomic(user_page, KM_USER1);
-	unwritten = __copy_from_user_inatomic_nocache(dst_vaddr + gtt_offset,
-						      src_vaddr + user_offset,
-						      length);
-	kunmap_atomic(src_vaddr, KM_USER1);
-	io_mapping_unmap_atomic(dst_vaddr);
-	if (unwritten)
-		return -EFAULT;
-	return 0;
+	dst_vaddr = io_mapping_map_wc(mapping, gtt_base);
+	src_vaddr = kmap(user_page);
+
+	memcpy_toio(dst_vaddr + gtt_offset,
+		    src_vaddr + user_offset,
+		    length);
+
+	kunmap(user_page);
+	io_mapping_unmap(dst_vaddr);
 }
 
 static inline int
@@ -722,18 +699,11 @@ i915_gem_gtt_pwrite_slow(struct drm_device *dev, struct drm_gem_object *obj,
 		if ((data_page_offset + page_length) > PAGE_SIZE)
 			page_length = PAGE_SIZE - data_page_offset;
 
-		ret = slow_kernel_write(dev_priv->mm.gtt_mapping,
-					gtt_page_base, gtt_page_offset,
-					user_pages[data_page_index],
-					data_page_offset,
-					page_length);
-
-		/* If we get a fault while copying data, then (presumably) our
-		 * source page isn't available.  Return the error and we'll
-		 * retry in the slow path.
-		 */
-		if (ret)
-			goto out_unpin_object;
+		slow_kernel_write(dev_priv->mm.gtt_mapping,
+				  gtt_page_base, gtt_page_offset,
+				  user_pages[data_page_index],
+				  data_page_offset,
+				  page_length);
 
 		remain -= page_length;
 		offset += page_length;
@@ -902,21 +872,19 @@ i915_gem_shmem_pwrite_slow(struct drm_device *dev, struct drm_gem_object *obj,
 			page_length = PAGE_SIZE - data_page_offset;
 
 		if (do_bit17_swizzling) {
-			ret = slow_shmem_bit17_copy(obj_priv->pages[shmem_page_index],
-						    shmem_page_offset,
-						    user_pages[data_page_index],
-						    data_page_offset,
-						    page_length,
-						    0);
-		} else {
-			ret = slow_shmem_copy(obj_priv->pages[shmem_page_index],
+			slow_shmem_bit17_copy(obj_priv->pages[shmem_page_index],
 					      shmem_page_offset,
 					      user_pages[data_page_index],
 					      data_page_offset,
-					      page_length);
+					      page_length,
+					      0);
+		} else {
+			slow_shmem_copy(obj_priv->pages[shmem_page_index],
+					shmem_page_offset,
+					user_pages[data_page_index],
+					data_page_offset,
+					page_length);
 		}
-		if (ret)
-			goto fail_put_pages;
 
 		remain -= page_length;
 		data_ptr += page_length;
@@ -973,7 +941,8 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 	if (obj_priv->phys_obj)
 		ret = i915_gem_phys_pwrite(dev, obj, args, file_priv);
 	else if (obj_priv->tiling_mode == I915_TILING_NONE &&
-		 dev->gtt_total != 0) {
+		 dev->gtt_total != 0 &&
+		 obj->write_domain != I915_GEM_DOMAIN_CPU) {
 		ret = i915_gem_gtt_pwrite_fast(dev, obj, args, file_priv);
 		if (ret == -EFAULT) {
 			ret = i915_gem_gtt_pwrite_slow(dev, obj, args,
@@ -1051,7 +1020,9 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 		 * about to occur.
 		 */
 		if (obj_priv->fence_reg != I915_FENCE_REG_NONE) {
-			list_move_tail(&obj_priv->fence_list,
+			struct drm_i915_fence_reg *reg =
+				&dev_priv->fence_regs[obj_priv->fence_reg];
+			list_move_tail(&reg->lru_list,
 				       &dev_priv->mm.fence_list);
 		}
 
@@ -1482,11 +1453,14 @@ i915_gem_object_put_pages(struct drm_gem_object *obj)
 }
 
 static void
-i915_gem_object_move_to_active(struct drm_gem_object *obj, uint32_t seqno)
+i915_gem_object_move_to_active(struct drm_gem_object *obj, uint32_t seqno,
+			       struct intel_ring_buffer *ring)
 {
 	struct drm_device *dev = obj->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj_priv = to_intel_bo(obj);
+	BUG_ON(ring == NULL);
+	obj_priv->ring = ring;
 
 	/* Add a reference if we're newly entering the active list. */
 	if (!obj_priv->active) {
@@ -1495,8 +1469,7 @@ i915_gem_object_move_to_active(struct drm_gem_object *obj, uint32_t seqno)
 	}
 	/* Move from whatever list we were on to the tail of execution. */
 	spin_lock(&dev_priv->mm.active_list_lock);
-	list_move_tail(&obj_priv->list,
-		       &dev_priv->mm.active_list);
+	list_move_tail(&obj_priv->list, &ring->active_list);
 	spin_unlock(&dev_priv->mm.active_list_lock);
 	obj_priv->last_rendering_seqno = seqno;
 }
@@ -1549,6 +1522,7 @@ i915_gem_object_move_to_inactive(struct drm_gem_object *obj)
 	BUG_ON(!list_empty(&obj_priv->gpu_write_list));
 
 	obj_priv->last_rendering_seqno = 0;
+	obj_priv->ring = NULL;
 	if (obj_priv->active) {
 		obj_priv->active = 0;
 		drm_gem_object_unreference(obj);
@@ -1558,7 +1532,8 @@ i915_gem_object_move_to_inactive(struct drm_gem_object *obj)
 
 static void
 i915_gem_process_flushing_list(struct drm_device *dev,
-			       uint32_t flush_domains, uint32_t seqno)
+			       uint32_t flush_domains, uint32_t seqno,
+			       struct intel_ring_buffer *ring)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj_priv, *next;
@@ -1566,20 +1541,24 @@ i915_gem_process_flushing_list(struct drm_device *dev,
 	list_for_each_entry_safe(obj_priv, next,
 				 &dev_priv->mm.gpu_write_list,
 				 gpu_write_list) {
-		struct drm_gem_object *obj = obj_priv->obj;
+		struct drm_gem_object *obj = &obj_priv->base;
 
 		if ((obj->write_domain & flush_domains) ==
-		    obj->write_domain) {
+		    obj->write_domain &&
+		    obj_priv->ring->ring_flag == ring->ring_flag) {
 			uint32_t old_write_domain = obj->write_domain;
 
 			obj->write_domain = 0;
 			list_del_init(&obj_priv->gpu_write_list);
-			i915_gem_object_move_to_active(obj, seqno);
+			i915_gem_object_move_to_active(obj, seqno, ring);
 
 			/* update the fence lru list */
-			if (obj_priv->fence_reg != I915_FENCE_REG_NONE)
-				list_move_tail(&obj_priv->fence_list,
+			if (obj_priv->fence_reg != I915_FENCE_REG_NONE) {
+				struct drm_i915_fence_reg *reg =
+					&dev_priv->fence_regs[obj_priv->fence_reg];
+				list_move_tail(&reg->lru_list,
 						&dev_priv->mm.fence_list);
+			}
 
 			trace_i915_gem_object_change_domain(obj,
 							    obj->read_domains,
@@ -1588,31 +1567,15 @@ i915_gem_process_flushing_list(struct drm_device *dev,
 	}
 }
 
-#define PIPE_CONTROL_FLUSH(addr)					\
-	OUT_RING(GFX_OP_PIPE_CONTROL | PIPE_CONTROL_QW_WRITE |		\
-		 PIPE_CONTROL_DEPTH_STALL);				\
-	OUT_RING(addr | PIPE_CONTROL_GLOBAL_GTT);			\
-	OUT_RING(0);							\
-	OUT_RING(0);							\
-
-/**
- * Creates a new sequence number, emitting a write of it to the status page
- * plus an interrupt, which will trigger i915_user_interrupt_handler.
- *
- * Must be called with struct_lock held.
- *
- * Returned sequence numbers are nonzero on success.
- */
 uint32_t
 i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
-		 uint32_t flush_domains)
+		 uint32_t flush_domains, struct intel_ring_buffer *ring)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_file_private *i915_file_priv = NULL;
 	struct drm_i915_gem_request *request;
 	uint32_t seqno;
 	int was_empty;
-	RING_LOCALS;
 
 	if (file_priv != NULL)
 		i915_file_priv = file_priv->driver_priv;
@@ -1621,62 +1584,14 @@ i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
 	if (request == NULL)
 		return 0;
 
-	/* Grab the seqno we're going to make this request be, and bump the
-	 * next (skipping 0 so it can be the reserved no-seqno value).
-	 */
-	seqno = dev_priv->mm.next_gem_seqno;
-	dev_priv->mm.next_gem_seqno++;
-	if (dev_priv->mm.next_gem_seqno == 0)
-		dev_priv->mm.next_gem_seqno++;
-
-	if (HAS_PIPE_CONTROL(dev)) {
-		u32 scratch_addr = dev_priv->seqno_gfx_addr + 128;
-
-		/*
-		 * Workaround qword write incoherence by flushing the
-		 * PIPE_NOTIFY buffers out to memory before requesting
-		 * an interrupt.
-		 */
-		BEGIN_LP_RING(32);
-		OUT_RING(GFX_OP_PIPE_CONTROL | PIPE_CONTROL_QW_WRITE |
-			 PIPE_CONTROL_WC_FLUSH | PIPE_CONTROL_TC_FLUSH);
-		OUT_RING(dev_priv->seqno_gfx_addr | PIPE_CONTROL_GLOBAL_GTT);
-		OUT_RING(seqno);
-		OUT_RING(0);
-		PIPE_CONTROL_FLUSH(scratch_addr);
-		scratch_addr += 128; /* write to separate cachelines */
-		PIPE_CONTROL_FLUSH(scratch_addr);
-		scratch_addr += 128;
-		PIPE_CONTROL_FLUSH(scratch_addr);
-		scratch_addr += 128;
-		PIPE_CONTROL_FLUSH(scratch_addr);
-		scratch_addr += 128;
-		PIPE_CONTROL_FLUSH(scratch_addr);
-		scratch_addr += 128;
-		PIPE_CONTROL_FLUSH(scratch_addr);
-		OUT_RING(GFX_OP_PIPE_CONTROL | PIPE_CONTROL_QW_WRITE |
-			 PIPE_CONTROL_WC_FLUSH | PIPE_CONTROL_TC_FLUSH |
-			 PIPE_CONTROL_NOTIFY);
-		OUT_RING(dev_priv->seqno_gfx_addr | PIPE_CONTROL_GLOBAL_GTT);
-		OUT_RING(seqno);
-		OUT_RING(0);
-		ADVANCE_LP_RING();
-	} else {
-		BEGIN_LP_RING(4);
-		OUT_RING(MI_STORE_DWORD_INDEX);
-		OUT_RING(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
-		OUT_RING(seqno);
-
-		OUT_RING(MI_USER_INTERRUPT);
-		ADVANCE_LP_RING();
-	}
-
-	DRM_DEBUG_DRIVER("%d\n", seqno);
+	seqno = ring->add_request(dev, ring, file_priv, flush_domains);
 
 	request->seqno = seqno;
+	request->ring = ring;
 	request->emitted_jiffies = jiffies;
-	was_empty = list_empty(&dev_priv->mm.request_list);
-	list_add_tail(&request->list, &dev_priv->mm.request_list);
+	was_empty = list_empty(&ring->request_list);
+	list_add_tail(&request->list, &ring->request_list);
+
 	if (i915_file_priv) {
 		list_add_tail(&request->client_list,
 			      &i915_file_priv->mm.request_list);
@@ -1688,7 +1603,7 @@ i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
 	 * domain we're flushing with our flush.
 	 */
 	if (flush_domains != 0) 
-		i915_gem_process_flushing_list(dev, flush_domains, seqno);
+		i915_gem_process_flushing_list(dev, flush_domains, seqno, ring);
 
 	if (!dev_priv->mm.suspended) {
 		mod_timer(&dev_priv->hangcheck_timer, jiffies + DRM_I915_HANGCHECK_PERIOD);
@@ -1705,20 +1620,16 @@ i915_add_request(struct drm_device *dev, struct drm_file *file_priv,
  * before signalling the CPU
  */
 static uint32_t
-i915_retire_commands(struct drm_device *dev)
+i915_retire_commands(struct drm_device *dev, struct intel_ring_buffer *ring)
 {
-	drm_i915_private_t *dev_priv = dev->dev_private;
-	uint32_t cmd = MI_FLUSH | MI_NO_WRITE_FLUSH;
 	uint32_t flush_domains = 0;
-	RING_LOCALS;
 
 	/* The sampler always gets flushed on i965 (sigh) */
 	if (IS_I965G(dev))
 		flush_domains |= I915_GEM_DOMAIN_SAMPLER;
-	BEGIN_LP_RING(2);
-	OUT_RING(cmd);
-	OUT_RING(0); /* noop */
-	ADVANCE_LP_RING();
+
+	ring->flush(dev, ring,
+			I915_GEM_DOMAIN_COMMAND, flush_domains);
 	return flush_domains;
 }
 
@@ -1738,14 +1649,14 @@ i915_gem_retire_request(struct drm_device *dev,
 	 * by the ringbuffer to the flushing/inactive lists as appropriate.
 	 */
 	spin_lock(&dev_priv->mm.active_list_lock);
-	while (!list_empty(&dev_priv->mm.active_list)) {
+	while (!list_empty(&request->ring->active_list)) {
 		struct drm_gem_object *obj;
 		struct drm_i915_gem_object *obj_priv;
 
-		obj_priv = list_first_entry(&dev_priv->mm.active_list,
+		obj_priv = list_first_entry(&request->ring->active_list,
 					    struct drm_i915_gem_object,
 					    list);
-		obj = obj_priv->obj;
+		obj = &obj_priv->base;
 
 		/* If the seqno being retired doesn't match the oldest in the
 		 * list, then the oldest in the list must still be newer than
@@ -1789,35 +1700,33 @@ i915_seqno_passed(uint32_t seq1, uint32_t seq2)
 }
 
 uint32_t
-i915_get_gem_seqno(struct drm_device *dev)
+i915_get_gem_seqno(struct drm_device *dev,
+		   struct intel_ring_buffer *ring)
 {
-	drm_i915_private_t *dev_priv = dev->dev_private;
-
-	if (HAS_PIPE_CONTROL(dev))
-		return ((volatile u32 *)(dev_priv->seqno_page))[0];
-	else
-		return READ_HWSP(dev_priv, I915_GEM_HWS_INDEX);
+	return ring->get_gem_seqno(dev, ring);
 }
 
 /**
  * This function clears the request list as sequence numbers are passed.
  */
 void
-i915_gem_retire_requests(struct drm_device *dev)
+i915_gem_retire_requests(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	uint32_t seqno;
 
-	if (!dev_priv->hw_status_page || list_empty(&dev_priv->mm.request_list))
+	if (!ring->status_page.page_addr
+			|| list_empty(&ring->request_list))
 		return;
 
-	seqno = i915_get_gem_seqno(dev);
+	seqno = i915_get_gem_seqno(dev, ring);
 
-	while (!list_empty(&dev_priv->mm.request_list)) {
+	while (!list_empty(&ring->request_list)) {
 		struct drm_i915_gem_request *request;
 		uint32_t retiring_seqno;
 
-		request = list_first_entry(&dev_priv->mm.request_list,
+		request = list_first_entry(&ring->request_list,
 					   struct drm_i915_gem_request,
 					   list);
 		retiring_seqno = request->seqno;
@@ -1835,7 +1744,8 @@ i915_gem_retire_requests(struct drm_device *dev)
 
 	if (unlikely (dev_priv->trace_irq_seqno &&
 		      i915_seqno_passed(dev_priv->trace_irq_seqno, seqno))) {
-		i915_user_irq_put(dev);
+
+		ring->user_irq_put(dev, ring);
 		dev_priv->trace_irq_seqno = 0;
 	}
 }
@@ -1851,15 +1761,22 @@ i915_gem_retire_work_handler(struct work_struct *work)
 	dev = dev_priv->dev;
 
 	mutex_lock(&dev->struct_mutex);
-	i915_gem_retire_requests(dev);
+	i915_gem_retire_requests(dev, &dev_priv->render_ring);
+
+	if (HAS_BSD(dev))
+		i915_gem_retire_requests(dev, &dev_priv->bsd_ring);
+
 	if (!dev_priv->mm.suspended &&
-	    !list_empty(&dev_priv->mm.request_list))
+		(!list_empty(&dev_priv->render_ring.request_list) ||
+			(HAS_BSD(dev) &&
+			 !list_empty(&dev_priv->bsd_ring.request_list))))
 		queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, HZ);
 	mutex_unlock(&dev->struct_mutex);
 }
 
 int
-i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
+i915_do_wait_request(struct drm_device *dev, uint32_t seqno,
+		int interruptible, struct intel_ring_buffer *ring)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 ier;
@@ -1870,7 +1787,7 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
 	if (atomic_read(&dev_priv->mm.wedged))
 		return -EIO;
 
-	if (!i915_seqno_passed(i915_get_gem_seqno(dev), seqno)) {
+	if (!i915_seqno_passed(ring->get_gem_seqno(dev, ring), seqno)) {
 		if (HAS_PCH_SPLIT(dev))
 			ier = I915_READ(DEIER) | I915_READ(GTIER);
 		else
@@ -1884,19 +1801,21 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
 
 		trace_i915_gem_request_wait_begin(dev, seqno);
 
-		dev_priv->mm.waiting_gem_seqno = seqno;
-		i915_user_irq_get(dev);
+		ring->waiting_gem_seqno = seqno;
+		ring->user_irq_get(dev, ring);
 		if (interruptible)
-			ret = wait_event_interruptible(dev_priv->irq_queue,
-				i915_seqno_passed(i915_get_gem_seqno(dev), seqno) ||
-				atomic_read(&dev_priv->mm.wedged));
+			ret = wait_event_interruptible(ring->irq_queue,
+				i915_seqno_passed(
+					ring->get_gem_seqno(dev, ring), seqno)
+				|| atomic_read(&dev_priv->mm.wedged));
 		else
-			wait_event(dev_priv->irq_queue,
-				i915_seqno_passed(i915_get_gem_seqno(dev), seqno) ||
-				atomic_read(&dev_priv->mm.wedged));
+			wait_event(ring->irq_queue,
+				i915_seqno_passed(
+					ring->get_gem_seqno(dev, ring), seqno)
+				|| atomic_read(&dev_priv->mm.wedged));
 
-		i915_user_irq_put(dev);
-		dev_priv->mm.waiting_gem_seqno = 0;
+		ring->user_irq_put(dev, ring);
+		ring->waiting_gem_seqno = 0;
 
 		trace_i915_gem_request_wait_end(dev, seqno);
 	}
@@ -1905,7 +1824,7 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
 
 	if (ret && ret != -ERESTARTSYS)
 		DRM_ERROR("%s returns %d (awaiting %d at %d)\n",
-			  __func__, ret, seqno, i915_get_gem_seqno(dev));
+			  __func__, ret, seqno, ring->get_gem_seqno(dev, ring));
 
 	/* Directly dispatch request retiring.  While we have the work queue
 	 * to handle this, the waiter on a request often wants an associated
@@ -1913,7 +1832,7 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
 	 * a separate wait queue to handle that.
 	 */
 	if (ret == 0)
-		i915_gem_retire_requests(dev);
+		i915_gem_retire_requests(dev, ring);
 
 	return ret;
 }
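/*
 * Every wait above funnels through i915_seqno_passed(), which is unchanged
 * by this patch.  For reference, a minimal sketch of the wrap-safe
 * comparison idiom it implements:
 */
static inline bool seqno_passed_sketch(uint32_t seq1, uint32_t seq2)
{
	/* true when seq1 is at or beyond seq2, even across u32 wraparound */
	return (int32_t)(seq1 - seq2) >= 0;
}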
@@ -1923,9 +1842,10 @@ i915_do_wait_request(struct drm_device *dev, uint32_t seqno, int interruptible)
  * request and object lists appropriately for that event.
  */
 static int
-i915_wait_request(struct drm_device *dev, uint32_t seqno)
+i915_wait_request(struct drm_device *dev, uint32_t seqno,
+		struct intel_ring_buffer *ring)
 {
-	return i915_do_wait_request(dev, seqno, 1);
+	return i915_do_wait_request(dev, seqno, 1, ring);
 }
 
 static void
@@ -1934,71 +1854,29 @@ i915_gem_flush(struct drm_device *dev,
 	       uint32_t flush_domains)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	uint32_t cmd;
-	RING_LOCALS;
-
-#if WATCH_EXEC
-	DRM_INFO("%s: invalidate %08x flush %08x\n", __func__,
-		  invalidate_domains, flush_domains);
-#endif
-	trace_i915_gem_request_flush(dev, dev_priv->mm.next_gem_seqno,
-				     invalidate_domains, flush_domains);
-
 	if (flush_domains & I915_GEM_DOMAIN_CPU)
 		drm_agp_chipset_flush(dev);
+	dev_priv->render_ring.flush(dev, &dev_priv->render_ring,
+			invalidate_domains,
+			flush_domains);
+
+	if (HAS_BSD(dev))
+		dev_priv->bsd_ring.flush(dev, &dev_priv->bsd_ring,
+				invalidate_domains,
+				flush_domains);
+}
 
-	if ((invalidate_domains | flush_domains) & I915_GEM_GPU_DOMAINS) {
-		/*
-		 * read/write caches:
-		 *
-		 * I915_GEM_DOMAIN_RENDER is always invalidated, but is
-		 * only flushed if MI_NO_WRITE_FLUSH is unset.  On 965, it is
-		 * also flushed at 2d versus 3d pipeline switches.
-		 *
-		 * read-only caches:
-		 *
-		 * I915_GEM_DOMAIN_SAMPLER is flushed on pre-965 if
-		 * MI_READ_FLUSH is set, and is always flushed on 965.
-		 *
-		 * I915_GEM_DOMAIN_COMMAND may not exist?
-		 *
-		 * I915_GEM_DOMAIN_INSTRUCTION, which exists on 965, is
-		 * invalidated when MI_EXE_FLUSH is set.
-		 *
-		 * I915_GEM_DOMAIN_VERTEX, which exists on 965, is
-		 * invalidated with every MI_FLUSH.
-		 *
-		 * TLBs:
-		 *
-		 * On 965, TLBs associated with I915_GEM_DOMAIN_COMMAND
-		 * and I915_GEM_DOMAIN_CPU in are invalidated at PTE write and
-		 * I915_GEM_DOMAIN_RENDER and I915_GEM_DOMAIN_SAMPLER
-		 * are flushed at any MI_FLUSH.
-		 */
-
-		cmd = MI_FLUSH | MI_NO_WRITE_FLUSH;
-		if ((invalidate_domains|flush_domains) &
-		    I915_GEM_DOMAIN_RENDER)
-			cmd &= ~MI_NO_WRITE_FLUSH;
-		if (!IS_I965G(dev)) {
-			/*
-			 * On the 965, the sampler cache always gets flushed
-			 * and this bit is reserved.
-			 */
-			if (invalidate_domains & I915_GEM_DOMAIN_SAMPLER)
-				cmd |= MI_READ_FLUSH;
-		}
-		if (invalidate_domains & I915_GEM_DOMAIN_INSTRUCTION)
-			cmd |= MI_EXE_FLUSH;
-
-#if WATCH_EXEC
-		DRM_INFO("%s: queue flush %08x to ring\n", __func__, cmd);
-#endif
-		BEGIN_LP_RING(2);
-		OUT_RING(cmd);
-		OUT_RING(MI_NOOP);
-		ADVANCE_LP_RING();
-	}
+static void
+i915_gem_flush_ring(struct drm_device *dev,
+	       uint32_t invalidate_domains,
+	       uint32_t flush_domains,
+	       struct intel_ring_buffer *ring)
+{
+	if (flush_domains & I915_GEM_DOMAIN_CPU)
+		drm_agp_chipset_flush(dev);
+	ring->flush(dev, ring,
+			invalidate_domains,
+			flush_domains);
 }
 
 /**
@@ -2025,7 +1903,8 @@ i915_gem_object_wait_rendering(struct drm_gem_object *obj)
 		DRM_INFO("%s: object %p wait for seqno %08x\n",
 			  __func__, obj, obj_priv->last_rendering_seqno);
 #endif
-		ret = i915_wait_request(dev, obj_priv->last_rendering_seqno);
+		ret = i915_wait_request(dev,
+				obj_priv->last_rendering_seqno, obj_priv->ring);
 		if (ret != 0)
 			return ret;
 	}
@@ -2119,7 +1998,7 @@ i915_gem_find_inactive_object(struct drm_device *dev, int min_size)
 
 	/* Try to find the smallest clean object */
 	list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list) {
-		struct drm_gem_object *obj = obj_priv->obj;
+		struct drm_gem_object *obj = &obj_priv->base;
 		if (obj->size >= min_size) {
 			if ((!obj_priv->dirty ||
 			     i915_gem_object_is_purgeable(obj_priv)) &&
@@ -2141,11 +2020,14 @@ i915_gpu_idle(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	bool lists_empty;
-	uint32_t seqno;
+	uint32_t seqno1, seqno2;
+	int ret;
 
 	spin_lock(&dev_priv->mm.active_list_lock);
-	lists_empty = list_empty(&dev_priv->mm.flushing_list) &&
-		      list_empty(&dev_priv->mm.active_list);
+	lists_empty = (list_empty(&dev_priv->mm.flushing_list) &&
+		       list_empty(&dev_priv->render_ring.active_list) &&
+		       (!HAS_BSD(dev) ||
+			list_empty(&dev_priv->bsd_ring.active_list)));
 	spin_unlock(&dev_priv->mm.active_list_lock);
 
 	if (lists_empty)
@@ -2153,11 +2035,25 @@ i915_gpu_idle(struct drm_device *dev)
 
 	/* Flush everything onto the inactive list. */
 	i915_gem_flush(dev, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
-	seqno = i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS);
-	if (seqno == 0)
+	seqno1 = i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS,
+			&dev_priv->render_ring);
+	if (seqno1 == 0)
 		return -ENOMEM;
+	ret = i915_wait_request(dev, seqno1, &dev_priv->render_ring);
 
-	return i915_wait_request(dev, seqno);
+	if (HAS_BSD(dev)) {
+		seqno2 = i915_add_request(dev, NULL, I915_GEM_GPU_DOMAINS,
+				&dev_priv->bsd_ring);
+		if (seqno2 == 0)
+			return -ENOMEM;
+
+		ret = i915_wait_request(dev, seqno2, &dev_priv->bsd_ring);
+		if (ret)
+			return ret;
+	}
+
+	return ret;
 }
 
 static int
@@ -2170,7 +2066,9 @@ i915_gem_evict_everything(struct drm_device *dev)
 	spin_lock(&dev_priv->mm.active_list_lock);
 	lists_empty = (list_empty(&dev_priv->mm.inactive_list) &&
 		       list_empty(&dev_priv->mm.flushing_list) &&
-		       list_empty(&dev_priv->mm.active_list));
+		       list_empty(&dev_priv->render_ring.active_list) &&
+		       (!HAS_BSD(dev)
+			|| list_empty(&dev_priv->bsd_ring.active_list)));
 	spin_unlock(&dev_priv->mm.active_list_lock);
 
 	if (lists_empty)
@@ -2190,7 +2088,9 @@ i915_gem_evict_everything(struct drm_device *dev)
 	spin_lock(&dev_priv->mm.active_list_lock);
 	lists_empty = (list_empty(&dev_priv->mm.inactive_list) &&
 		       list_empty(&dev_priv->mm.flushing_list) &&
-		       list_empty(&dev_priv->mm.active_list));
+		       list_empty(&dev_priv->render_ring.active_list) &&
+		       (!HAS_BSD(dev)
+			|| list_empty(&dev_priv->bsd_ring.active_list)));
 	spin_unlock(&dev_priv->mm.active_list_lock);
 	BUG_ON(!lists_empty);
 
@@ -2204,8 +2104,13 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
 	struct drm_gem_object *obj;
 	int ret;
 
+	struct intel_ring_buffer *render_ring = &dev_priv->render_ring;
+	struct intel_ring_buffer *bsd_ring = &dev_priv->bsd_ring;
 	for (;;) {
-		i915_gem_retire_requests(dev);
+		i915_gem_retire_requests(dev, render_ring);
+
+		if (HAS_BSD(dev))
+			i915_gem_retire_requests(dev, bsd_ring);
 
 		/* If there's an inactive buffer available now, grab it
 		 * and be done.
@@ -2229,14 +2134,30 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
 		 * things, wait for the next to finish and hopefully leave us
 		 * a buffer to evict.
 		 */
-		if (!list_empty(&dev_priv->mm.request_list)) {
+		if (!list_empty(&render_ring->request_list)) {
+			struct drm_i915_gem_request *request;
+
+			request = list_first_entry(&render_ring->request_list,
+						   struct drm_i915_gem_request,
+						   list);
+
+			ret = i915_wait_request(dev,
+					request->seqno, request->ring);
+			if (ret)
+				return ret;
+
+			continue;
+		}
+
+		if (HAS_BSD(dev) && !list_empty(&bsd_ring->request_list)) {
 			struct drm_i915_gem_request *request;
 
-			request = list_first_entry(&dev_priv->mm.request_list,
+			request = list_first_entry(&bsd_ring->request_list,
 						   struct drm_i915_gem_request,
 						   list);
 
-			ret = i915_wait_request(dev, request->seqno);
+			ret = i915_wait_request(dev,
+					request->seqno, request->ring);
 			if (ret)
 				return ret;
 
@@ -2253,7 +2174,7 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
 
 			/* Find an object that we can immediately reuse */
 			list_for_each_entry(obj_priv, &dev_priv->mm.flushing_list, list) {
-				obj = obj_priv->obj;
+				obj = &obj_priv->base;
 				if (obj->size >= min_size)
 					break;
 
@@ -2263,10 +2184,13 @@ i915_gem_evict_something(struct drm_device *dev, int min_size)
 			if (obj != NULL) {
 				uint32_t seqno;
 
-				i915_gem_flush(dev,
+				i915_gem_flush_ring(dev,
 					       obj->write_domain,
-					       obj->write_domain);
-				seqno = i915_add_request(dev, NULL, obj->write_domain);
+					       obj->write_domain,
+					       obj_priv->ring);
+				seqno = i915_add_request(dev, NULL,
+						obj->write_domain,
+						obj_priv->ring);
 				if (seqno == 0)
 					return -ENOMEM;
 				continue;
@@ -2294,6 +2218,9 @@ i915_gem_object_get_pages(struct drm_gem_object *obj,
 	struct inode *inode;
 	struct page *page;
 
+	BUG_ON(obj_priv->pages_refcount
+			== DRM_I915_GEM_OBJECT_MAX_PAGES_REFCOUNT);
+
 	if (obj_priv->pages_refcount++ != 0)
 		return 0;
 
@@ -2485,9 +2412,10 @@ static int i915_find_fence_reg(struct drm_device *dev)
 
 	/* None available, try to steal one or wait for a user to finish */
 	i = I915_FENCE_REG_NONE;
-	list_for_each_entry(obj_priv, &dev_priv->mm.fence_list,
-			    fence_list) {
-		obj = obj_priv->obj;
+	list_for_each_entry(reg, &dev_priv->mm.fence_list,
+			    lru_list) {
+		obj = reg->obj;
+		obj_priv = to_intel_bo(obj);
 
 		if (obj_priv->pin_count)
 			continue;
@@ -2536,7 +2464,8 @@ i915_gem_object_get_fence_reg(struct drm_gem_object *obj)
 
 	/* Just update our place in the LRU if our fence is getting used. */
 	if (obj_priv->fence_reg != I915_FENCE_REG_NONE) {
-		list_move_tail(&obj_priv->fence_list, &dev_priv->mm.fence_list);
+		reg = &dev_priv->fence_regs[obj_priv->fence_reg];
+		list_move_tail(&reg->lru_list, &dev_priv->mm.fence_list);
 		return 0;
 	}
 
@@ -2566,7 +2495,7 @@ i915_gem_object_get_fence_reg(struct drm_gem_object *obj)
 
 	obj_priv->fence_reg = ret;
 	reg = &dev_priv->fence_regs[obj_priv->fence_reg];
-	list_add_tail(&obj_priv->fence_list, &dev_priv->mm.fence_list);
+	list_add_tail(&reg->lru_list, &dev_priv->mm.fence_list);
 
 	reg->obj = obj;
 
@@ -2598,6 +2527,8 @@ i915_gem_clear_fence_reg(struct drm_gem_object *obj)
 	struct drm_device *dev = obj->dev;
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_gem_object *obj_priv = to_intel_bo(obj);
+	struct drm_i915_fence_reg *reg =
+		&dev_priv->fence_regs[obj_priv->fence_reg];
 
 	if (IS_GEN6(dev)) {
 		I915_WRITE64(FENCE_REG_SANDYBRIDGE_0 +
@@ -2616,9 +2547,9 @@ i915_gem_clear_fence_reg(struct drm_gem_object *obj)
 		I915_WRITE(fence_reg, 0);
 	}
 
-	dev_priv->fence_regs[obj_priv->fence_reg].obj = NULL;
+	reg->obj = NULL;
 	obj_priv->fence_reg = I915_FENCE_REG_NONE;
-	list_del_init(&obj_priv->fence_list);
+	list_del_init(&reg->lru_list);
 }
 
 /**
@@ -2688,6 +2619,14 @@ i915_gem_object_bind_to_gtt(struct drm_gem_object *obj, unsigned alignment)
 		return -EINVAL;
 	}
 
+	/* If the object is bigger than the entire aperture, reject it early
+	 * before evicting everything in a vain attempt to find space.
+	 */
+	if (obj->size > dev->gtt_total) {
+		DRM_ERROR("Attempting to bind an object larger than the aperture\n");
+		return -E2BIG;
+	}
+
  search_free:
 	free_space = drm_mm_search_free(&dev_priv->mm.gtt_space,
 					obj->size, alignment, 0);
@@ -2798,6 +2737,7 @@ i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
 	uint32_t old_write_domain;
+	struct drm_i915_gem_object *obj_priv = to_intel_bo(obj);
 
 	if ((obj->write_domain & I915_GEM_GPU_DOMAINS) == 0)
 		return;
@@ -2805,7 +2745,7 @@ i915_gem_object_flush_gpu_write_domain(struct drm_gem_object *obj)
 	/* Queue the GPU write cache flushing we need. */
 	old_write_domain = obj->write_domain;
 	i915_gem_flush(dev, 0, obj->write_domain);
-	(void) i915_add_request(dev, NULL, obj->write_domain);
+	(void) i915_add_request(dev, NULL, obj->write_domain, obj_priv->ring);
 	BUG_ON(obj->write_domain);
 
 	trace_i915_gem_object_change_domain(obj,
@@ -2945,23 +2885,24 @@ i915_gem_object_set_to_display_plane(struct drm_gem_object *obj)
 		DRM_INFO("%s: object %p wait for seqno %08x\n",
 			  __func__, obj, obj_priv->last_rendering_seqno);
 #endif
-		ret = i915_do_wait_request(dev, obj_priv->last_rendering_seqno, 0);
+		ret = i915_do_wait_request(dev,
+				obj_priv->last_rendering_seqno,
+				0,
+				obj_priv->ring);
 		if (ret != 0)
 			return ret;
 	}
 
+	i915_gem_object_flush_cpu_write_domain(obj);
+
 	old_write_domain = obj->write_domain;
 	old_read_domains = obj->read_domains;
 
-	obj->read_domains &= I915_GEM_DOMAIN_GTT;
-
-	i915_gem_object_flush_cpu_write_domain(obj);
-
 	/* It should now be out of any other write domains, and we can update
 	 * the domain values for our changes.
 	 */
 	BUG_ON((obj->write_domain & ~I915_GEM_DOMAIN_GTT) != 0);
-	obj->read_domains |= I915_GEM_DOMAIN_GTT;
+	obj->read_domains = I915_GEM_DOMAIN_GTT;
 	obj->write_domain = I915_GEM_DOMAIN_GTT;
 	obj_priv->dirty = 1;
 
@@ -3345,9 +3286,13 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
 	             obj_priv->tiling_mode != I915_TILING_NONE;
 
 	/* Check fence reg constraints and rebind if necessary */
-	if (need_fence && !i915_gem_object_fence_offset_ok(obj,
-	    obj_priv->tiling_mode))
-		i915_gem_object_unbind(obj);
+	if (need_fence &&
+	    !i915_gem_object_fence_offset_ok(obj,
+					     obj_priv->tiling_mode)) {
+		ret = i915_gem_object_unbind(obj);
+		if (ret)
+			return ret;
+	}
 
 	/* Choose the GTT offset for our buffer and put it there. */
 	ret = i915_gem_object_pin(obj, (uint32_t) entry->alignment);
@@ -3361,9 +3306,6 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
 	if (need_fence) {
 		ret = i915_gem_object_get_fence_reg(obj);
 		if (ret != 0) {
-			if (ret != -EBUSY && ret != -ERESTARTSYS)
-				DRM_ERROR("Failure to install fence: %d\n",
-					  ret);
 			i915_gem_object_unpin(obj);
 			return ret;
 		}
@@ -3536,62 +3478,6 @@ i915_gem_object_pin_and_relocate(struct drm_gem_object *obj,
 	return 0;
 }
 
-/** Dispatch a batchbuffer to the ring
- */
-static int
-i915_dispatch_gem_execbuffer(struct drm_device *dev,
-			      struct drm_i915_gem_execbuffer2 *exec,
-			      struct drm_clip_rect *cliprects,
-			      uint64_t exec_offset)
-{
-	drm_i915_private_t *dev_priv = dev->dev_private;
-	int nbox = exec->num_cliprects;
-	int i = 0, count;
-	uint32_t exec_start, exec_len;
-	RING_LOCALS;
-
-	exec_start = (uint32_t) exec_offset + exec->batch_start_offset;
-	exec_len = (uint32_t) exec->batch_len;
-
-	trace_i915_gem_request_submit(dev, dev_priv->mm.next_gem_seqno + 1);
-
-	count = nbox ? nbox : 1;
-
-	for (i = 0; i < count; i++) {
-		if (i < nbox) {
-			int ret = i915_emit_box(dev, cliprects, i,
-						exec->DR1, exec->DR4);
-			if (ret)
-				return ret;
-		}
-
-		if (IS_I830(dev) || IS_845G(dev)) {
-			BEGIN_LP_RING(4);
-			OUT_RING(MI_BATCH_BUFFER);
-			OUT_RING(exec_start | MI_BATCH_NON_SECURE);
-			OUT_RING(exec_start + exec_len - 4);
-			OUT_RING(0);
-			ADVANCE_LP_RING();
-		} else {
-			BEGIN_LP_RING(2);
-			if (IS_I965G(dev)) {
-				OUT_RING(MI_BATCH_BUFFER_START |
-					 (2 << 6) |
-					 MI_BATCH_NON_SECURE_I965);
-				OUT_RING(exec_start);
-			} else {
-				OUT_RING(MI_BATCH_BUFFER_START |
-					 (2 << 6));
-				OUT_RING(exec_start | MI_BATCH_NON_SECURE);
-			}
-			ADVANCE_LP_RING();
-		}
-	}
-
-	/* XXX breadcrumb */
-	return 0;
-}
-
 /* Throttle our rendering by waiting until the ring has completed our requests
  * emitted over 20 msec ago.
  *
@@ -3620,7 +3506,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file_priv)
 		if (time_after_eq(request->emitted_jiffies, recent_enough))
 			break;
 
-		ret = i915_wait_request(dev, request->seqno);
+		ret = i915_wait_request(dev, request->seqno, request->ring);
 		if (ret != 0)
 			break;
 	}
@@ -3777,10 +3663,22 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	uint32_t seqno, flush_domains, reloc_index;
 	int pin_tries, flips;
 
+	struct intel_ring_buffer *ring = NULL;
+
 #if WATCH_EXEC
 	DRM_INFO("buffers_ptr %d buffer_count %d len %08x\n",
 		  (int) args->buffers_ptr, args->buffer_count, args->batch_len);
 #endif
+	if (args->flags & I915_EXEC_BSD) {
+		if (!HAS_BSD(dev)) {
+			DRM_ERROR("execbuf with I915_EXEC_BSD but no BSD ring on this device\n");
+			return -EINVAL;
+		}
+		ring = &dev_priv->bsd_ring;
+	} else {
+		ring = &dev_priv->render_ring;
+	}
+
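/*
 * From userspace, engine selection is just a flag on execbuffer2.  A
 * minimal sketch, assuming libdrm's drmIoctl() and the i915_drm.h uapi of
 * this era; fd, exec_objects and nr_objects are hypothetical:
 */
static int submit_to_bsd_ring(int fd,
		struct drm_i915_gem_exec_object2 *exec_objects,
		int nr_objects, uint32_t batch_len)
{
	struct drm_i915_gem_execbuffer2 execbuf;

	memset(&execbuf, 0, sizeof(execbuf));
	execbuf.buffers_ptr = (uintptr_t)exec_objects;
	execbuf.buffer_count = nr_objects;
	execbuf.batch_len = batch_len;
	/* omit the flag (or pass I915_EXEC_RENDER) for the render ring */
	execbuf.flags = I915_EXEC_BSD;

	return drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
}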
 
 	if (args->buffer_count < 1) {
 		DRM_ERROR("execbuf with %d buffers\n", args->buffer_count);
@@ -3893,11 +3791,19 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		if (ret != -ENOSPC || pin_tries >= 1) {
 			if (ret != -ERESTARTSYS) {
 				unsigned long long total_size = 0;
-				for (i = 0; i < args->buffer_count; i++)
+				int num_fences = 0;
+				for (i = 0; i < args->buffer_count; i++) {
+					obj_priv = to_intel_bo(object_list[i]);
+
 					total_size += object_list[i]->size;
-				DRM_ERROR("Failed to pin buffer %d of %d, total %llu bytes: %d\n",
+					num_fences +=
+						exec_list[i].flags & EXEC_OBJECT_NEEDS_FENCE &&
+						obj_priv->tiling_mode != I915_TILING_NONE;
+				}
+				DRM_ERROR("Failed to pin buffer %d of %d, total %llu bytes, %d fences: %d\n",
 					  pinned+1, args->buffer_count,
-					  total_size, ret);
+					  total_size, num_fences,
+					  ret);
 				DRM_ERROR("%d objects [%d pinned], "
 					  "%d object bytes [%d pinned], "
 					  "%d/%d gtt bytes\n",
@@ -3967,9 +3873,16 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 		i915_gem_flush(dev,
 			       dev->invalidate_domains,
 			       dev->flush_domains);
-		if (dev->flush_domains & I915_GEM_GPU_DOMAINS)
+		if (dev->flush_domains & I915_GEM_GPU_DOMAINS) {
 			(void)i915_add_request(dev, file_priv,
-					       dev->flush_domains);
+					dev->flush_domains,
+					&dev_priv->render_ring);
+
+			if (HAS_BSD(dev))
+				(void)i915_add_request(dev, file_priv,
+						dev->flush_domains,
+						&dev_priv->bsd_ring);
+		}
 	}
 
 	for (i = 0; i < args->buffer_count; i++) {
@@ -4006,7 +3919,8 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 #endif
 
 	/* Exec the batchbuffer */
-	ret = i915_dispatch_gem_execbuffer(dev, args, cliprects, exec_offset);
+	ret = ring->dispatch_gem_execbuffer(dev, ring, args,
+			cliprects, exec_offset);
 	if (ret) {
 		DRM_ERROR("dispatch failed %d\n", ret);
 		goto err;
@@ -4016,7 +3930,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	 * Ensure that the commands in the batch buffer are
 	 * finished before the interrupt fires
 	 */
-	flush_domains = i915_retire_commands(dev);
+	flush_domains = i915_retire_commands(dev, ring);
 
 	i915_verify_inactive(dev, __FILE__, __LINE__);
 
@@ -4027,12 +3941,13 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
 	 * *some* interrupts representing completion of buffers that we can
 	 * wait on when trying to clear up gtt space).
 	 */
-	seqno = i915_add_request(dev, file_priv, flush_domains);
+	seqno = i915_add_request(dev, file_priv, flush_domains, ring);
 	BUG_ON(seqno == 0);
 	for (i = 0; i < args->buffer_count; i++) {
 		struct drm_gem_object *obj = object_list[i];
+		obj_priv = to_intel_bo(obj);
 
-		i915_gem_object_move_to_active(obj, seqno);
+		i915_gem_object_move_to_active(obj, seqno, ring);
 #if WATCH_LRU
 		DRM_INFO("%s: move to exec list %p\n", __func__, obj);
 #endif
@@ -4144,7 +4059,7 @@ i915_gem_execbuffer(struct drm_device *dev, void *data,
 	exec2.DR4 = args->DR4;
 	exec2.num_cliprects = args->num_cliprects;
 	exec2.cliprects_ptr = args->cliprects_ptr;
-	exec2.flags = 0;
+	exec2.flags = I915_EXEC_RENDER;
 
 	ret = i915_gem_do_execbuffer(dev, data, file_priv, &exec2, exec2_list);
 	if (!ret) {
@@ -4230,7 +4145,20 @@ i915_gem_object_pin(struct drm_gem_object *obj, uint32_t alignment)
 	struct drm_i915_gem_object *obj_priv = to_intel_bo(obj);
 	int ret;
 
+	BUG_ON(obj_priv->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT);
+
 	i915_verify_inactive(dev, __FILE__, __LINE__);
+
+	if (obj_priv->gtt_space != NULL) {
+		if (alignment == 0)
+			alignment = i915_gem_get_gtt_alignment(obj);
+		if (obj_priv->gtt_offset & (alignment - 1)) {
+			ret = i915_gem_object_unbind(obj);
+			if (ret)
+				return ret;
+		}
+	}
+
 	if (obj_priv->gtt_space == NULL) {
 		ret = i915_gem_object_bind_to_gtt(obj, alignment);
 		if (ret)
@@ -4383,6 +4311,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	struct drm_i915_gem_busy *args = data;
 	struct drm_gem_object *obj;
 	struct drm_i915_gem_object *obj_priv;
+	drm_i915_private_t *dev_priv = dev->dev_private;
 
 	obj = drm_gem_object_lookup(dev, file_priv, args->handle);
 	if (obj == NULL) {
@@ -4397,7 +4326,10 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
 	 * actually unmasked, and our working set ends up being larger than
 	 * required.
 	 */
-	i915_gem_retire_requests(dev);
+	i915_gem_retire_requests(dev, &dev_priv->render_ring);
+
+	if (HAS_BSD(dev))
+		i915_gem_retire_requests(dev, &dev_priv->bsd_ring);
 
 	obj_priv = to_intel_bo(obj);
 	/* Don't count being on the flushing list against the object being
@@ -4471,34 +4403,38 @@ i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
 	return 0;
 }
 
-int i915_gem_init_object(struct drm_gem_object *obj)
+struct drm_gem_object * i915_gem_alloc_object(struct drm_device *dev,
+					      size_t size)
 {
-	struct drm_i915_gem_object *obj_priv;
+	struct drm_i915_gem_object *obj;
 
-	obj_priv = kzalloc(sizeof(*obj_priv), GFP_KERNEL);
-	if (obj_priv == NULL)
-		return -ENOMEM;
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (obj == NULL)
+		return NULL;
 
-	/*
-	 * We've just allocated pages from the kernel,
-	 * so they've just been written by the CPU with
-	 * zeros. They'll need to be clflushed before we
-	 * use them with the GPU.
-	 */
-	obj->write_domain = I915_GEM_DOMAIN_CPU;
-	obj->read_domains = I915_GEM_DOMAIN_CPU;
+	if (drm_gem_object_init(dev, &obj->base, size) != 0) {
+		kfree(obj);
+		return NULL;
+	}
 
-	obj_priv->agp_type = AGP_USER_MEMORY;
+	obj->base.write_domain = I915_GEM_DOMAIN_CPU;
+	obj->base.read_domains = I915_GEM_DOMAIN_CPU;
 
-	obj->driver_private = obj_priv;
-	obj_priv->obj = obj;
-	obj_priv->fence_reg = I915_FENCE_REG_NONE;
-	INIT_LIST_HEAD(&obj_priv->list);
-	INIT_LIST_HEAD(&obj_priv->gpu_write_list);
-	INIT_LIST_HEAD(&obj_priv->fence_list);
-	obj_priv->madv = I915_MADV_WILLNEED;
+	obj->agp_type = AGP_USER_MEMORY;
+	obj->base.driver_private = NULL;
+	obj->fence_reg = I915_FENCE_REG_NONE;
+	INIT_LIST_HEAD(&obj->list);
+	INIT_LIST_HEAD(&obj->gpu_write_list);
+	obj->madv = I915_MADV_WILLNEED;
+
+	trace_i915_gem_object_create(&obj->base);
 
-	trace_i915_gem_object_create(obj);
+	return &obj->base;
+}
+
+int i915_gem_init_object(struct drm_gem_object *obj)
+{
+	BUG();
 
 	return 0;
 }
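/*
 * i915_gem_init_object() becomes a should-never-happen stub because every
 * object is now allocated through i915_gem_alloc_object() with the GEM
 * object embedded as obj_priv->base.  Correspondingly, to_intel_bo() is
 * presumably a container_of() conversion rather than a driver_private
 * dereference -- the header change is outside this excerpt:
 *
 *	#define to_intel_bo(obj) \
 *		container_of(obj, struct drm_i915_gem_object, base)
 */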
@@ -4521,9 +4457,11 @@ void i915_gem_free_object(struct drm_gem_object *obj)
 	if (obj_priv->mmap_offset)
 		i915_gem_free_mmap_offset(obj);
 
+	drm_gem_object_release(obj);
+
 	kfree(obj_priv->page_cpu_valid);
 	kfree(obj_priv->bit_17);
-	kfree(obj->driver_private);
+	kfree(obj_priv);
 }
 
 /** Unbinds all inactive objects. */
@@ -4536,9 +4474,9 @@ i915_gem_evict_from_inactive_list(struct drm_device *dev)
 		struct drm_gem_object *obj;
 		int ret;
 
-		obj = list_first_entry(&dev_priv->mm.inactive_list,
-				       struct drm_i915_gem_object,
-				       list)->obj;
+		obj = &list_first_entry(&dev_priv->mm.inactive_list,
+					struct drm_i915_gem_object,
+					list)->base;
 
 		ret = i915_gem_object_unbind(obj);
 		if (ret != 0) {
@@ -4558,7 +4496,10 @@ i915_gem_idle(struct drm_device *dev)
 
 	mutex_lock(&dev->struct_mutex);
 
-	if (dev_priv->mm.suspended || dev_priv->ring.ring_obj == NULL) {
+	if (dev_priv->mm.suspended ||
+			(dev_priv->render_ring.gem_object == NULL) ||
+			(HAS_BSD(dev) &&
+			 dev_priv->bsd_ring.gem_object == NULL)) {
 		mutex_unlock(&dev->struct_mutex);
 		return 0;
 	}
@@ -4608,7 +4549,7 @@ i915_gem_init_pipe_control(struct drm_device *dev)
 	struct drm_i915_gem_object *obj_priv;
 	int ret;
 
-	obj = drm_gem_object_alloc(dev, 4096);
+	obj = i915_gem_alloc_object(dev, 4096);
 	if (obj == NULL) {
 		DRM_ERROR("Failed to allocate seqno page\n");
 		ret = -ENOMEM;
@@ -4639,71 +4580,6 @@ err:
 	return ret;
 }
 
-static int
-i915_gem_init_hws(struct drm_device *dev)
-{
-	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct drm_gem_object *obj;
-	struct drm_i915_gem_object *obj_priv;
-	int ret;
-
-	/* If we need a physical address for the status page, it's already
-	 * initialized at driver load time.
-	 */
-	if (!I915_NEED_GFX_HWS(dev))
-		return 0;
-
-	obj = drm_gem_object_alloc(dev, 4096);
-	if (obj == NULL) {
-		DRM_ERROR("Failed to allocate status page\n");
-		ret = -ENOMEM;
-		goto err;
-	}
-	obj_priv = to_intel_bo(obj);
-	obj_priv->agp_type = AGP_USER_CACHED_MEMORY;
-
-	ret = i915_gem_object_pin(obj, 4096);
-	if (ret != 0) {
-		drm_gem_object_unreference(obj);
-		goto err_unref;
-	}
-
-	dev_priv->status_gfx_addr = obj_priv->gtt_offset;
-
-	dev_priv->hw_status_page = kmap(obj_priv->pages[0]);
-	if (dev_priv->hw_status_page == NULL) {
-		DRM_ERROR("Failed to map status page.\n");
-		memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map));
-		ret = -EINVAL;
-		goto err_unpin;
-	}
-
-	if (HAS_PIPE_CONTROL(dev)) {
-		ret = i915_gem_init_pipe_control(dev);
-		if (ret)
-			goto err_unpin;
-	}
-
-	dev_priv->hws_obj = obj;
-	memset(dev_priv->hw_status_page, 0, PAGE_SIZE);
-	if (IS_GEN6(dev)) {
-		I915_WRITE(HWS_PGA_GEN6, dev_priv->status_gfx_addr);
-		I915_READ(HWS_PGA_GEN6); /* posting read */
-	} else {
-		I915_WRITE(HWS_PGA, dev_priv->status_gfx_addr);
-		I915_READ(HWS_PGA); /* posting read */
-	}
-	DRM_DEBUG_DRIVER("hws offset: 0x%08x\n", dev_priv->status_gfx_addr);
-
-	return 0;
-
-err_unpin:
-	i915_gem_object_unpin(obj);
-err_unref:
-	drm_gem_object_unreference(obj);
-err:
-	return 0;
-}
 
 static void
 i915_gem_cleanup_pipe_control(struct drm_device *dev)
@@ -4722,146 +4598,46 @@ i915_gem_cleanup_pipe_control(struct drm_device *dev)
 	dev_priv->seqno_page = NULL;
 }
 
-static void
-i915_gem_cleanup_hws(struct drm_device *dev)
-{
-	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct drm_gem_object *obj;
-	struct drm_i915_gem_object *obj_priv;
-
-	if (dev_priv->hws_obj == NULL)
-		return;
-
-	obj = dev_priv->hws_obj;
-	obj_priv = to_intel_bo(obj);
-
-	kunmap(obj_priv->pages[0]);
-	i915_gem_object_unpin(obj);
-	drm_gem_object_unreference(obj);
-	dev_priv->hws_obj = NULL;
-
-	memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map));
-	dev_priv->hw_status_page = NULL;
-
-	if (HAS_PIPE_CONTROL(dev))
-		i915_gem_cleanup_pipe_control(dev);
-
-	/* Write high address into HWS_PGA when disabling. */
-	I915_WRITE(HWS_PGA, 0x1ffff000);
-}
-
 int
 i915_gem_init_ringbuffer(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	struct drm_gem_object *obj;
-	struct drm_i915_gem_object *obj_priv;
-	drm_i915_ring_buffer_t *ring = &dev_priv->ring;
 	int ret;
-	u32 head;
 
-	ret = i915_gem_init_hws(dev);
-	if (ret != 0)
-		return ret;
+	dev_priv->render_ring = render_ring;
 
-	obj = drm_gem_object_alloc(dev, 128 * 1024);
-	if (obj == NULL) {
-		DRM_ERROR("Failed to allocate ringbuffer\n");
-		i915_gem_cleanup_hws(dev);
-		return -ENOMEM;
-	}
-	obj_priv = to_intel_bo(obj);
-
-	ret = i915_gem_object_pin(obj, 4096);
-	if (ret != 0) {
-		drm_gem_object_unreference(obj);
-		i915_gem_cleanup_hws(dev);
-		return ret;
+	if (!I915_NEED_GFX_HWS(dev)) {
+		dev_priv->render_ring.status_page.page_addr
+			= dev_priv->status_page_dmah->vaddr;
+		memset(dev_priv->render_ring.status_page.page_addr,
+				0, PAGE_SIZE);
 	}
 
-	/* Set up the kernel mapping for the ring. */
-	ring->Size = obj->size;
-
-	ring->map.offset = dev->agp->base + obj_priv->gtt_offset;
-	ring->map.size = obj->size;
-	ring->map.type = 0;
-	ring->map.flags = 0;
-	ring->map.mtrr = 0;
-
-	drm_core_ioremap_wc(&ring->map, dev);
-	if (ring->map.handle == NULL) {
-		DRM_ERROR("Failed to map ringbuffer.\n");
-		memset(&dev_priv->ring, 0, sizeof(dev_priv->ring));
-		i915_gem_object_unpin(obj);
-		drm_gem_object_unreference(obj);
-		i915_gem_cleanup_hws(dev);
-		return -EINVAL;
-	}
-	ring->ring_obj = obj;
-	ring->virtual_start = ring->map.handle;
-
-	/* Stop the ring if it's running. */
-	I915_WRITE(PRB0_CTL, 0);
-	I915_WRITE(PRB0_TAIL, 0);
-	I915_WRITE(PRB0_HEAD, 0);
-
-	/* Initialize the ring. */
-	I915_WRITE(PRB0_START, obj_priv->gtt_offset);
-	head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-
-	/* G45 ring initialization fails to reset head to zero */
-	if (head != 0) {
-		DRM_ERROR("Ring head not reset to zero "
-			  "ctl %08x head %08x tail %08x start %08x\n",
-			  I915_READ(PRB0_CTL),
-			  I915_READ(PRB0_HEAD),
-			  I915_READ(PRB0_TAIL),
-			  I915_READ(PRB0_START));
-		I915_WRITE(PRB0_HEAD, 0);
-
-		DRM_ERROR("Ring head forced to zero "
-			  "ctl %08x head %08x tail %08x start %08x\n",
-			  I915_READ(PRB0_CTL),
-			  I915_READ(PRB0_HEAD),
-			  I915_READ(PRB0_TAIL),
-			  I915_READ(PRB0_START));
-	}
-
-	I915_WRITE(PRB0_CTL,
-		   ((obj->size - 4096) & RING_NR_PAGES) |
-		   RING_NO_REPORT |
-		   RING_VALID);
-
-	head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-
-	/* If the head is still not zero, the ring is dead */
-	if (head != 0) {
-		DRM_ERROR("Ring initialization failed "
-			  "ctl %08x head %08x tail %08x start %08x\n",
-			  I915_READ(PRB0_CTL),
-			  I915_READ(PRB0_HEAD),
-			  I915_READ(PRB0_TAIL),
-			  I915_READ(PRB0_START));
-		return -EIO;
+	if (HAS_PIPE_CONTROL(dev)) {
+		ret = i915_gem_init_pipe_control(dev);
+		if (ret)
+			return ret;
 	}
 
-	/* Update our cache of the ring state */
-	if (!drm_core_check_feature(dev, DRIVER_MODESET))
-		i915_kernel_lost_context(dev);
-	else {
-		ring->head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-		ring->tail = I915_READ(PRB0_TAIL) & TAIL_ADDR;
-		ring->space = ring->head - (ring->tail + 8);
-		if (ring->space < 0)
-			ring->space += ring->Size;
-	}
+	ret = intel_init_ring_buffer(dev, &dev_priv->render_ring);
+	if (ret)
+		goto cleanup_pipe_control;
 
-	if (IS_I9XX(dev) && !IS_GEN3(dev)) {
-		I915_WRITE(MI_MODE,
-			   (VS_TIMER_DISPATCH) << 16 | VS_TIMER_DISPATCH);
+	if (HAS_BSD(dev)) {
+		dev_priv->bsd_ring = bsd_ring;
+		ret = intel_init_ring_buffer(dev, &dev_priv->bsd_ring);
+		if (ret)
+			goto cleanup_render_ring;
 	}
 
 	return 0;
+
+cleanup_render_ring:
+	intel_cleanup_ring_buffer(dev, &dev_priv->render_ring);
+cleanup_pipe_control:
+	if (HAS_PIPE_CONTROL(dev))
+		i915_gem_cleanup_pipe_control(dev);
+	return ret;
 }
 
 void
@@ -4869,17 +4645,11 @@ i915_gem_cleanup_ringbuffer(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 
-	if (dev_priv->ring.ring_obj == NULL)
-		return;
-
-	drm_core_ioremapfree(&dev_priv->ring.map, dev);
-
-	i915_gem_object_unpin(dev_priv->ring.ring_obj);
-	drm_gem_object_unreference(dev_priv->ring.ring_obj);
-	dev_priv->ring.ring_obj = NULL;
-	memset(&dev_priv->ring, 0, sizeof(dev_priv->ring));
-
-	i915_gem_cleanup_hws(dev);
+	intel_cleanup_ring_buffer(dev, &dev_priv->render_ring);
+	if (HAS_BSD(dev))
+		intel_cleanup_ring_buffer(dev, &dev_priv->bsd_ring);
+	if (HAS_PIPE_CONTROL(dev))
+		i915_gem_cleanup_pipe_control(dev);
 }
 
 int
@@ -4907,12 +4677,14 @@ i915_gem_entervt_ioctl(struct drm_device *dev, void *data,
 	}
 
 	spin_lock(&dev_priv->mm.active_list_lock);
-	BUG_ON(!list_empty(&dev_priv->mm.active_list));
+	BUG_ON(!list_empty(&dev_priv->render_ring.active_list));
+	BUG_ON(HAS_BSD(dev) && !list_empty(&dev_priv->bsd_ring.active_list));
 	spin_unlock(&dev_priv->mm.active_list_lock);
 
 	BUG_ON(!list_empty(&dev_priv->mm.flushing_list));
 	BUG_ON(!list_empty(&dev_priv->mm.inactive_list));
-	BUG_ON(!list_empty(&dev_priv->mm.request_list));
+	BUG_ON(!list_empty(&dev_priv->render_ring.request_list));
+	BUG_ON(HAS_BSD(dev) && !list_empty(&dev_priv->bsd_ring.request_list));
 	mutex_unlock(&dev->struct_mutex);
 
 	drm_irq_install(dev);
@@ -4951,16 +4723,20 @@ i915_gem_load(struct drm_device *dev)
 	drm_i915_private_t *dev_priv = dev->dev_private;
 
 	spin_lock_init(&dev_priv->mm.active_list_lock);
-	INIT_LIST_HEAD(&dev_priv->mm.active_list);
 	INIT_LIST_HEAD(&dev_priv->mm.flushing_list);
 	INIT_LIST_HEAD(&dev_priv->mm.gpu_write_list);
 	INIT_LIST_HEAD(&dev_priv->mm.inactive_list);
-	INIT_LIST_HEAD(&dev_priv->mm.request_list);
 	INIT_LIST_HEAD(&dev_priv->mm.fence_list);
+	INIT_LIST_HEAD(&dev_priv->render_ring.active_list);
+	INIT_LIST_HEAD(&dev_priv->render_ring.request_list);
+	if (HAS_BSD(dev)) {
+		INIT_LIST_HEAD(&dev_priv->bsd_ring.active_list);
+		INIT_LIST_HEAD(&dev_priv->bsd_ring.request_list);
+	}
+	for (i = 0; i < 16; i++)
+		INIT_LIST_HEAD(&dev_priv->fence_regs[i].lru_list);
 	INIT_DELAYED_WORK(&dev_priv->mm.retire_work,
 			  i915_gem_retire_work_handler);
-	dev_priv->mm.next_gem_seqno = 1;
-
 	spin_lock(&shrink_list_lock);
 	list_add(&dev_priv->mm.shrink_list, &shrink_list);
 	spin_unlock(&shrink_list_lock);
@@ -5185,6 +4961,22 @@ void i915_gem_release(struct drm_device * dev, struct drm_file *file_priv)
 }
 
 static int
+i915_gpu_is_active(struct drm_device *dev)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	int lists_empty;
+
+	spin_lock(&dev_priv->mm.active_list_lock);
+	lists_empty = list_empty(&dev_priv->mm.flushing_list) &&
+		      list_empty(&dev_priv->render_ring.active_list);
+	if (HAS_BSD(dev))
+		lists_empty &= list_empty(&dev_priv->bsd_ring.active_list);
+	spin_unlock(&dev_priv->mm.active_list_lock);
+
+	return !lists_empty;
+}
+
+static int
 i915_gem_shrink(int nr_to_scan, gfp_t gfp_mask)
 {
 	drm_i915_private_t *dev_priv, *next_dev;
@@ -5213,6 +5005,7 @@ i915_gem_shrink(int nr_to_scan, gfp_t gfp_mask)
 
 	spin_lock(&shrink_list_lock);
 
+rescan:
 	/* first scan for clean buffers */
 	list_for_each_entry_safe(dev_priv, next_dev,
 				 &shrink_list, mm.shrink_list) {
@@ -5222,14 +5015,16 @@ i915_gem_shrink(int nr_to_scan, gfp_t gfp_mask)
 			continue;
 
 		spin_unlock(&shrink_list_lock);
+		i915_gem_retire_requests(dev, &dev_priv->render_ring);
 
-		i915_gem_retire_requests(dev);
+		if (HAS_BSD(dev))
+			i915_gem_retire_requests(dev, &dev_priv->bsd_ring);
 
 		list_for_each_entry_safe(obj_priv, next_obj,
 					 &dev_priv->mm.inactive_list,
 					 list) {
 			if (i915_gem_object_is_purgeable(obj_priv)) {
-				i915_gem_object_unbind(obj_priv->obj);
+				i915_gem_object_unbind(&obj_priv->base);
 				if (--nr_to_scan <= 0)
 					break;
 			}
@@ -5258,7 +5053,7 @@ i915_gem_shrink(int nr_to_scan, gfp_t gfp_mask)
 					 &dev_priv->mm.inactive_list,
 					 list) {
 			if (nr_to_scan > 0) {
-				i915_gem_object_unbind(obj_priv->obj);
+				i915_gem_object_unbind(&obj_priv->base);
 				nr_to_scan--;
 			} else
 				cnt++;
@@ -5270,6 +5065,36 @@ i915_gem_shrink(int nr_to_scan, gfp_t gfp_mask)
 		would_deadlock = 0;
 	}
 
+	if (nr_to_scan) {
+		int active = 0;
+
+		/*
+		 * We are desperate for pages, so as a last resort, wait
+		 * for the GPU to finish and discard whatever we can.
+		 * This dramatically reduces the number of OOM-killer
+		 * events whilst running the GPU aggressively.
+		 */
+		list_for_each_entry(dev_priv, &shrink_list, mm.shrink_list) {
+			struct drm_device *dev = dev_priv->dev;
+
+			if (!mutex_trylock(&dev->struct_mutex))
+				continue;
+
+			spin_unlock(&shrink_list_lock);
+
+			if (i915_gpu_is_active(dev)) {
+				i915_gpu_idle(dev);
+				active++;
+			}
+
+			spin_lock(&shrink_list_lock);
+			mutex_unlock(&dev->struct_mutex);
+		}
+
+		if (active)
+			goto rescan;
+	}
+
 	spin_unlock(&shrink_list_lock);
 
 	if (would_deadlock)
diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
index 35507cf..80f380b 100644
--- a/drivers/gpu/drm/i915/i915_gem_debug.c
+++ b/drivers/gpu/drm/i915/i915_gem_debug.c
@@ -39,7 +39,7 @@ i915_verify_inactive(struct drm_device *dev, char *file, int line)
 	struct drm_i915_gem_object *obj_priv;
 
 	list_for_each_entry(obj_priv, &dev_priv->mm.inactive_list, list) {
-		obj = obj_priv->obj;
+		obj = &obj_priv->base;
 		if (obj_priv->pin_count || obj_priv->active ||
 		    (obj->write_domain & ~(I915_GEM_DOMAIN_CPU |
 					   I915_GEM_DOMAIN_GTT)))
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index 4bdccef..4b7c49d 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -283,6 +283,11 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}
 
+	if (obj_priv->pin_count) {
+		drm_gem_object_unreference_unlocked(obj);
+		return -EBUSY;
+	}
+
 	if (args->tiling_mode == I915_TILING_NONE) {
 		args->swizzle_mode = I915_BIT_6_SWIZZLE_NONE;
 		args->stride = 0;
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index df6a9cd..2479be0 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -53,7 +53,7 @@
 	 I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT)
 
 /** Interrupts that we mask and unmask at runtime. */
-#define I915_INTERRUPT_ENABLE_VAR (I915_USER_INTERRUPT)
+#define I915_INTERRUPT_ENABLE_VAR (I915_USER_INTERRUPT | I915_BSD_USER_INTERRUPT)
 
 #define I915_PIPE_VBLANK_STATUS	(PIPE_START_VBLANK_INTERRUPT_STATUS |\
 				 PIPE_VBLANK_INTERRUPT_STATUS)
@@ -74,7 +74,7 @@ ironlake_enable_graphics_irq(drm_i915_private_t *dev_priv, u32 mask)
 	}
 }
 
-static inline void
+void
 ironlake_disable_graphics_irq(drm_i915_private_t *dev_priv, u32 mask)
 {
 	if ((dev_priv->gt_irq_mask_reg & mask) != mask) {
@@ -115,7 +115,7 @@ i915_enable_irq(drm_i915_private_t *dev_priv, u32 mask)
 	}
 }
 
-static inline void
+void
 i915_disable_irq(drm_i915_private_t *dev_priv, u32 mask)
 {
 	if ((dev_priv->irq_mask_reg & mask) != mask) {
@@ -169,9 +169,13 @@ void intel_enable_asle (struct drm_device *dev)
 
 	if (HAS_PCH_SPLIT(dev))
 		ironlake_enable_display_irq(dev_priv, DE_GSE);
-	else
+	else {
 		i915_enable_pipestat(dev_priv, 1,
 				     I915_LEGACY_BLC_EVENT_ENABLE);
+		if (IS_I965G(dev))
+			i915_enable_pipestat(dev_priv, 0,
+					     I915_LEGACY_BLC_EVENT_ENABLE);
+	}
 }
 
 /**
@@ -256,28 +260,27 @@ static void i915_hotplug_work_func(struct work_struct *work)
 						    hotplug_work);
 	struct drm_device *dev = dev_priv->dev;
 	struct drm_mode_config *mode_config = &dev->mode_config;
-	struct drm_connector *connector;
+	struct drm_encoder *encoder;
 
-	if (mode_config->num_connector) {
-		list_for_each_entry(connector, &mode_config->connector_list, head) {
-			struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	if (mode_config->num_encoder) {
+		list_for_each_entry(encoder, &mode_config->encoder_list, head) {
+			struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	
 			if (intel_encoder->hot_plug)
 				(*intel_encoder->hot_plug) (intel_encoder);
 		}
 	}
 	/* Just fire off a uevent and let userspace tell us what to do */
-	drm_sysfs_hotplug_event(dev);
+	drm_helper_hpd_irq_event(dev);
 }
 
 static void i915_handle_rps_change(struct drm_device *dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 busy_up, busy_down, max_avg, min_avg;
-	u16 rgvswctl;
 	u8 new_delay = dev_priv->cur_delay;
 
-	I915_WRITE(MEMINTRSTS, I915_READ(MEMINTRSTS) & ~MEMINT_EVAL_CHG);
+	I915_WRITE16(MEMINTRSTS, MEMINT_EVAL_CHG);
 	busy_up = I915_READ(RCPREVBSYTUPAVG);
 	busy_down = I915_READ(RCPREVBSYTDNAVG);
 	max_avg = I915_READ(RCBMAXAVG);
@@ -296,27 +299,8 @@ static void i915_handle_rps_change(struct drm_device *dev)
 			new_delay = dev_priv->min_delay;
 	}
 
-	DRM_DEBUG("rps change requested: %d -> %d\n",
-		  dev_priv->cur_delay, new_delay);
-
-	rgvswctl = I915_READ(MEMSWCTL);
-	if (rgvswctl & MEMCTL_CMD_STS) {
-		DRM_ERROR("gpu busy, RCS change rejected\n");
-		return; /* still busy with another command */
-	}
-
-	/* Program the new state */
-	rgvswctl = (MEMCTL_CMD_CHFREQ << MEMCTL_CMD_SHIFT) |
-		(new_delay << MEMCTL_FREQ_SHIFT) | MEMCTL_SFCAVM;
-	I915_WRITE(MEMSWCTL, rgvswctl);
-	POSTING_READ(MEMSWCTL);
-
-	rgvswctl |= MEMCTL_CMD_STS;
-	I915_WRITE(MEMSWCTL, rgvswctl);
-
-	dev_priv->cur_delay = new_delay;
-
-	DRM_DEBUG("rps changed\n");
+	if (ironlake_set_drps(dev, new_delay))
+		dev_priv->cur_delay = new_delay;
 
 	return;
 }
@@ -327,6 +311,7 @@ irqreturn_t ironlake_irq_handler(struct drm_device *dev)
 	int ret = IRQ_NONE;
 	u32 de_iir, gt_iir, de_ier, pch_iir;
 	struct drm_i915_master_private *master_priv;
+	struct intel_ring_buffer *render_ring = &dev_priv->render_ring;
 
 	/* disable master interrupt before clearing iir  */
 	de_ier = I915_READ(DEIER);
@@ -350,13 +335,16 @@ irqreturn_t ironlake_irq_handler(struct drm_device *dev)
 	}
 
 	if (gt_iir & GT_PIPE_NOTIFY) {
-		u32 seqno = i915_get_gem_seqno(dev);
-		dev_priv->mm.irq_gem_seqno = seqno;
+		u32 seqno = render_ring->get_gem_seqno(dev, render_ring);
+		render_ring->irq_gem_seqno = seqno;
 		trace_i915_gem_request_complete(dev, seqno);
-		DRM_WAKEUP(&dev_priv->irq_queue);
+		DRM_WAKEUP(&dev_priv->render_ring.irq_queue);
 		dev_priv->hangcheck_count = 0;
 		mod_timer(&dev_priv->hangcheck_timer, jiffies + DRM_I915_HANGCHECK_PERIOD);
 	}
+
+	if (gt_iir & GT_BSD_USER_INTERRUPT)
+		DRM_WAKEUP(&dev_priv->bsd_ring.irq_queue);
 
 	if (de_iir & DE_GSE)
 		ironlake_opregion_gse_intr(dev);
@@ -384,7 +372,7 @@ irqreturn_t ironlake_irq_handler(struct drm_device *dev)
 	}
 
 	if (de_iir & DE_PCU_EVENT) {
-		I915_WRITE(MEMINTRSTS, I915_READ(MEMINTRSTS));
+		I915_WRITE16(MEMINTRSTS, I915_READ(MEMINTRSTS));
 		i915_handle_rps_change(dev);
 	}
 
@@ -532,17 +520,18 @@ i915_ringbuffer_last_batch(struct drm_device *dev)
 	 */
 	bbaddr = 0;
 	head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
-	ring = (u32 *)(dev_priv->ring.virtual_start + head);
+	ring = (u32 *)(dev_priv->render_ring.virtual_start + head);
 
-	while (--ring >= (u32 *)dev_priv->ring.virtual_start) {
+	while (--ring >= (u32 *)dev_priv->render_ring.virtual_start) {
 		bbaddr = i915_get_bbaddr(dev, ring);
 		if (bbaddr)
 			break;
 	}
 
 	if (bbaddr == 0) {
-		ring = (u32 *)(dev_priv->ring.virtual_start + dev_priv->ring.Size);
-		while (--ring >= (u32 *)dev_priv->ring.virtual_start) {
+		ring = (u32 *)(dev_priv->render_ring.virtual_start
+				+ dev_priv->render_ring.size);
+		while (--ring >= (u32 *)dev_priv->render_ring.virtual_start) {
 			bbaddr = i915_get_bbaddr(dev, ring);
 			if (bbaddr)
 				break;
@@ -583,7 +572,7 @@ static void i915_capture_error_state(struct drm_device *dev)
 		return;
 	}
 
-	error->seqno = i915_get_gem_seqno(dev);
+	error->seqno = i915_get_gem_seqno(dev, &dev_priv->render_ring);
 	error->eir = I915_READ(EIR);
 	error->pgtbl_er = I915_READ(PGTBL_ER);
 	error->pipeastat = I915_READ(PIPEASTAT);
@@ -611,8 +600,10 @@ static void i915_capture_error_state(struct drm_device *dev)
 	batchbuffer[0] = NULL;
 	batchbuffer[1] = NULL;
 	count = 0;
-	list_for_each_entry(obj_priv, &dev_priv->mm.active_list, list) {
-		struct drm_gem_object *obj = obj_priv->obj;
+	list_for_each_entry(obj_priv,
+			&dev_priv->render_ring.active_list, list) {
+
+		struct drm_gem_object *obj = &obj_priv->base;
 
 		if (batchbuffer[0] == NULL &&
 		    bbaddr >= obj_priv->gtt_offset &&
@@ -635,7 +626,8 @@ static void i915_capture_error_state(struct drm_device *dev)
 	error->batchbuffer[1] = i915_error_object_create(dev, batchbuffer[1]);
 
 	/* Record the ringbuffer */
-	error->ringbuffer = i915_error_object_create(dev, dev_priv->ring.ring_obj);
+	error->ringbuffer = i915_error_object_create(dev,
+			dev_priv->render_ring.gem_object);
 
 	/* Record buffers on the active list. */
 	error->active_bo = NULL;
@@ -647,8 +639,9 @@ static void i915_capture_error_state(struct drm_device *dev)
 
 	if (error->active_bo) {
 		int i = 0;
-		list_for_each_entry(obj_priv, &dev_priv->mm.active_list, list) {
-			struct drm_gem_object *obj = obj_priv->obj;
+		list_for_each_entry(obj_priv,
+				&dev_priv->render_ring.active_list, list) {
+			struct drm_gem_object *obj = &obj_priv->base;
 
 			error->active_bo[i].size = obj->size;
 			error->active_bo[i].name = obj->name;
@@ -699,24 +692,13 @@ void i915_destroy_error_state(struct drm_device *dev)
 		i915_error_state_free(dev, error);
 }
 
-/**
- * i915_handle_error - handle an error interrupt
- * @dev: drm device
- *
- * Do some basic checking of regsiter state at error interrupt time and
- * dump it to the syslog.  Also call i915_capture_error_state() to make
- * sure we get a record and make it available in debugfs.  Fire a uevent
- * so userspace knows something bad happened (should trigger collection
- * of a ring dump etc.).
- */
-static void i915_handle_error(struct drm_device *dev, bool wedged)
+static void i915_report_and_clear_eir(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	u32 eir = I915_READ(EIR);
-	u32 pipea_stats = I915_READ(PIPEASTAT);
-	u32 pipeb_stats = I915_READ(PIPEBSTAT);
 
-	i915_capture_error_state(dev);
+	if (!eir)
+		return;
 
 	printk(KERN_ERR "render error detected, EIR: 0x%08x\n",
 	       eir);
@@ -762,6 +744,9 @@ static void i915_handle_error(struct drm_device *dev, bool wedged)
 	}
 
 	if (eir & I915_ERROR_MEMORY_REFRESH) {
+		u32 pipea_stats = I915_READ(PIPEASTAT);
+		u32 pipeb_stats = I915_READ(PIPEBSTAT);
+
 		printk(KERN_ERR "memory refresh error\n");
 		printk(KERN_ERR "PIPEASTAT: 0x%08x\n",
 		       pipea_stats);
@@ -818,6 +803,24 @@ static void i915_handle_error(struct drm_device *dev, bool wedged)
 		I915_WRITE(EMR, I915_READ(EMR) | eir);
 		I915_WRITE(IIR, I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT);
 	}
+}
+
+/**
+ * i915_handle_error - handle an error interrupt
+ * @dev: drm device
+ *
+ * Do some basic checking of register state at error interrupt time and
+ * dump it to the syslog.  Also call i915_capture_error_state() to make
+ * sure we get a record and make it available in debugfs.  Fire a uevent
+ * so userspace knows something bad happened (should trigger collection
+ * of a ring dump etc.).
+ */
+static void i915_handle_error(struct drm_device *dev, bool wedged)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	i915_capture_error_state(dev);
+	i915_report_and_clear_eir(dev);
 
 	if (wedged) {
 		atomic_set(&dev_priv->mm.wedged, 1);
@@ -825,7 +828,7 @@ static void i915_handle_error(struct drm_device *dev, bool wedged)
 		/*
 		 * Wakeup waiting processes so they don't hang
 		 */
-		DRM_WAKEUP(&dev_priv->irq_queue);
+		DRM_WAKEUP(&dev_priv->render_ring.irq_queue);
 	}
 
 	queue_work(dev_priv->wq, &dev_priv->error_work);
@@ -844,6 +847,7 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS)
 	unsigned long irqflags;
 	int irq_received;
 	int ret = IRQ_NONE;
+	struct intel_ring_buffer *render_ring = &dev_priv->render_ring;
 
 	atomic_inc(&dev_priv->irq_received);
 
@@ -924,14 +928,18 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS)
 		}
 
 		if (iir & I915_USER_INTERRUPT) {
-			u32 seqno = i915_get_gem_seqno(dev);
-			dev_priv->mm.irq_gem_seqno = seqno;
+			u32 seqno =
+				render_ring->get_gem_seqno(dev, render_ring);
+			render_ring->irq_gem_seqno = seqno;
 			trace_i915_gem_request_complete(dev, seqno);
-			DRM_WAKEUP(&dev_priv->irq_queue);
+			DRM_WAKEUP(&dev_priv->render_ring.irq_queue);
 			dev_priv->hangcheck_count = 0;
 			mod_timer(&dev_priv->hangcheck_timer, jiffies + DRM_I915_HANGCHECK_PERIOD);
 		}
 
+
+		if (HAS_BSD(dev) && (iir & I915_BSD_USER_INTERRUPT))
+			DRM_WAKEUP(&dev_priv->bsd_ring.irq_queue);
+
 		if (iir & I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT) {
 			intel_prepare_page_flip(dev, 0);
 			if (dev_priv->flip_pending_is_done)
@@ -950,7 +958,8 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS)
 			intel_finish_page_flip(dev, 1);
 		}
 
-		if ((pipeb_stats & I915_LEGACY_BLC_EVENT_STATUS) ||
+		if ((pipea_stats & I915_LEGACY_BLC_EVENT_STATUS) ||
+		    (pipeb_stats & I915_LEGACY_BLC_EVENT_STATUS) ||
 		    (iir & I915_ASLE_INTERRUPT))
 			opregion_asle_intr(dev);
 
@@ -979,7 +988,6 @@ static int i915_emit_irq(struct drm_device * dev)
 {
 	drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
-	RING_LOCALS;
 
 	i915_kernel_lost_context(dev);
 
@@ -1001,43 +1009,13 @@ static int i915_emit_irq(struct drm_device * dev)
 	return dev_priv->counter;
 }
 
-void i915_user_irq_get(struct drm_device *dev)
-{
-	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
-	unsigned long irqflags;
-
-	spin_lock_irqsave(&dev_priv->user_irq_lock, irqflags);
-	if (dev->irq_enabled && (++dev_priv->user_irq_refcount == 1)) {
-		if (HAS_PCH_SPLIT(dev))
-			ironlake_enable_graphics_irq(dev_priv, GT_PIPE_NOTIFY);
-		else
-			i915_enable_irq(dev_priv, I915_USER_INTERRUPT);
-	}
-	spin_unlock_irqrestore(&dev_priv->user_irq_lock, irqflags);
-}
-
-void i915_user_irq_put(struct drm_device *dev)
-{
-	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
-	unsigned long irqflags;
-
-	spin_lock_irqsave(&dev_priv->user_irq_lock, irqflags);
-	BUG_ON(dev->irq_enabled && dev_priv->user_irq_refcount <= 0);
-	if (dev->irq_enabled && (--dev_priv->user_irq_refcount == 0)) {
-		if (HAS_PCH_SPLIT(dev))
-			ironlake_disable_graphics_irq(dev_priv, GT_PIPE_NOTIFY);
-		else
-			i915_disable_irq(dev_priv, I915_USER_INTERRUPT);
-	}
-	spin_unlock_irqrestore(&dev_priv->user_irq_lock, irqflags);
-}
-
 void i915_trace_irq_get(struct drm_device *dev, u32 seqno)
 {
 	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
+	struct intel_ring_buffer *render_ring = &dev_priv->render_ring;
 
 	if (dev_priv->trace_irq_seqno == 0)
-		i915_user_irq_get(dev);
+		render_ring->user_irq_get(dev, render_ring);
 
 	dev_priv->trace_irq_seqno = seqno;
 }
@@ -1047,6 +1025,7 @@ static int i915_wait_irq(struct drm_device * dev, int irq_nr)
 	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
 	struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
 	int ret = 0;
+	struct intel_ring_buffer *render_ring = &dev_priv->render_ring;
 
 	DRM_DEBUG_DRIVER("irq_nr=%d breadcrumb=%d\n", irq_nr,
 		  READ_BREADCRUMB(dev_priv));
@@ -1060,10 +1039,10 @@ static int i915_wait_irq(struct drm_device * dev, int irq_nr)
 	if (master_priv->sarea_priv)
 		master_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT;
 
-	i915_user_irq_get(dev);
-	DRM_WAIT_ON(ret, dev_priv->irq_queue, 3 * DRM_HZ,
+	render_ring->user_irq_get(dev, render_ring);
+	DRM_WAIT_ON(ret, dev_priv->render_ring.irq_queue, 3 * DRM_HZ,
 		    READ_BREADCRUMB(dev_priv) >= irq_nr);
-	i915_user_irq_put(dev);
+	render_ring->user_irq_put(dev, render_ring);
 
 	if (ret == -EBUSY) {
 		DRM_ERROR("EBUSY -- rec: %d emitted: %d\n",
@@ -1082,7 +1061,7 @@ int i915_irq_emit(struct drm_device *dev, void *data,
 	drm_i915_irq_emit_t *emit = data;
 	int result;
 
-	if (!dev_priv || !dev_priv->ring.virtual_start) {
+	if (!dev_priv || !dev_priv->render_ring.virtual_start) {
 		DRM_ERROR("called with no initialization\n");
 		return -EINVAL;
 	}
@@ -1228,9 +1207,12 @@ int i915_vblank_swap(struct drm_device *dev, void *data,
 	return -EINVAL;
 }
 
-struct drm_i915_gem_request *i915_get_tail_request(struct drm_device *dev) {
+struct drm_i915_gem_request *
+i915_get_tail_request(struct drm_device *dev)
+{
 	drm_i915_private_t *dev_priv = dev->dev_private;
-	return list_entry(dev_priv->mm.request_list.prev, struct drm_i915_gem_request, list);
+	return list_entry(dev_priv->render_ring.request_list.prev,
+			struct drm_i915_gem_request, list);
 }
 
 /**
@@ -1255,8 +1237,10 @@ void i915_hangcheck_elapsed(unsigned long data)
 		acthd = I915_READ(ACTHD_I965);
 
 	/* If all work is done then ACTHD clearly hasn't advanced. */
-	if (list_empty(&dev_priv->mm.request_list) ||
-		       i915_seqno_passed(i915_get_gem_seqno(dev), i915_get_tail_request(dev)->seqno)) {
+	if (list_empty(&dev_priv->render_ring.request_list) ||
+		i915_seqno_passed(i915_get_gem_seqno(dev,
+				&dev_priv->render_ring),
+			i915_get_tail_request(dev)->seqno)) {
 		dev_priv->hangcheck_count = 0;
 		return;
 	}
@@ -1309,7 +1293,7 @@ static int ironlake_irq_postinstall(struct drm_device *dev)
 	/* enable kind of interrupts always enabled */
 	u32 display_mask = DE_MASTER_IRQ_CONTROL | DE_GSE | DE_PCH_EVENT |
 			   DE_PLANEA_FLIP_DONE | DE_PLANEB_FLIP_DONE;
-	u32 render_mask = GT_PIPE_NOTIFY;
+	u32 render_mask = GT_PIPE_NOTIFY | GT_BSD_USER_INTERRUPT;
 	u32 hotplug_mask = SDE_CRT_HOTPLUG | SDE_PORTB_HOTPLUG |
 			   SDE_PORTC_HOTPLUG | SDE_PORTD_HOTPLUG;
 
@@ -1323,7 +1307,7 @@ static int ironlake_irq_postinstall(struct drm_device *dev)
 	(void) I915_READ(DEIER);
 
 	/* user interrupt should be enabled, but masked initial */
-	dev_priv->gt_irq_mask_reg = 0xffffffff;
+	dev_priv->gt_irq_mask_reg = ~render_mask;
 	dev_priv->gt_irq_enable_reg = render_mask;
 
 	I915_WRITE(GTIIR, I915_READ(GTIIR));
@@ -1386,7 +1370,10 @@ int i915_driver_irq_postinstall(struct drm_device *dev)
 	u32 enable_mask = I915_INTERRUPT_ENABLE_FIX | I915_INTERRUPT_ENABLE_VAR;
 	u32 error_mask;
 
-	DRM_INIT_WAITQUEUE(&dev_priv->irq_queue);
+	DRM_INIT_WAITQUEUE(&dev_priv->render_ring.irq_queue);
+
+	if (HAS_BSD(dev))
+		DRM_INIT_WAITQUEUE(&dev_priv->bsd_ring.irq_queue);
 
 	dev_priv->vblank_pipe = DRM_I915_VBLANK_PIPE_A | DRM_I915_VBLANK_PIPE_B;
 
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 4cbc521..64b0a3a 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -334,6 +334,7 @@
 #define   I915_DEBUG_INTERRUPT				(1<<2)
 #define   I915_USER_INTERRUPT				(1<<1)
 #define   I915_ASLE_INTERRUPT				(1<<0)
+#define   I915_BSD_USER_INTERRUPT                      (1<<25)
 #define EIR		0x020b0
 #define EMR		0x020b4
 #define ESR		0x020b8
@@ -368,6 +369,36 @@
 #define   ECO_GATING_CX_ONLY	(1<<3)
 #define   ECO_FLIP_DONE		(1<<0)
 
+/* GEN6 interrupt control */
+#define GEN6_RENDER_HWSTAM	0x2098
+#define GEN6_RENDER_IMR		0x20a8
+#define   GEN6_RENDER_CONTEXT_SWITCH_INTERRUPT		(1 << 8)
+#define   GEN6_RENDER_PPGTT_PAGE_FAULT			(1 << 7)
+#define   GEN6_RENDER_TIMEOUT_COUNTER_EXPIRED		(1 << 6)
+#define   GEN6_RENDER_L3_PARITY_ERROR			(1 << 5)
+#define   GEN6_RENDER_PIPE_CONTROL_NOTIFY_INTERRUPT	(1 << 4)
+#define   GEN6_RENDER_COMMAND_PARSER_MASTER_ERROR	(1 << 3)
+#define   GEN6_RENDER_SYNC_STATUS			(1 << 2)
+#define   GEN6_RENDER_DEBUG_INTERRUPT			(1 << 1)
+#define   GEN6_RENDER_USER_INTERRUPT			(1 << 0)
+
+#define GEN6_BLITTER_HWSTAM	0x22098
+#define GEN6_BLITTER_IMR	0x220a8
+#define   GEN6_BLITTER_MI_FLUSH_DW_NOTIFY_INTERRUPT	(1 << 26)
+#define   GEN6_BLITTER_COMMAND_PARSER_MASTER_ERROR	(1 << 25)
+#define   GEN6_BLITTER_SYNC_STATUS			(1 << 24)
+#define   GEN6_BLITTER_USER_INTERRUPT			(1 << 22)
+/*
+ * BSD (bit stream decoder) instruction and interrupt control register
+ * defines (G4X and Ironlake only)
+ */
+
+#define BSD_RING_TAIL          0x04030
+#define BSD_RING_HEAD          0x04034
+#define BSD_RING_START         0x04038
+#define BSD_RING_CTL           0x0403c
+#define BSD_RING_ACTHD         0x04074
+#define BSD_HWS_PGA            0x04080
 
 /*
  * Framebuffer compression (915+ only)
@@ -805,6 +836,10 @@
 #define DCC_CHANNEL_XOR_DISABLE				(1 << 10)
 #define DCC_CHANNEL_XOR_BIT_17				(1 << 9)
 
+/** Pineview MCH register contains DDR3 setting */
+#define CSHRDDR3CTL            0x101a8
+#define CSHRDDR3CTL_DDR3       (1 << 2)
+
 /** 965 MCH register controlling DRAM channel configuration */
 #define C0DRB3			0x10206
 #define C1DRB3			0x10606
@@ -826,6 +861,12 @@
 #define CLKCFG_MEM_800					(3 << 4)
 #define CLKCFG_MEM_MASK					(7 << 4)
 
+#define TR1			0x11006
+#define TSFS			0x11020
+#define   TSFS_SLOPE_MASK	0x0000ff00
+#define   TSFS_SLOPE_SHIFT	8
+#define   TSFS_INTR_MASK	0x000000ff
+
 #define CRSTANDVID		0x11100
 #define PXVFREQ_BASE		0x11110 /* P[0-15]VIDFREQ (0x1114c) (Ironlake) */
 #define   PXVFREQ_PX_MASK	0x7f000000
@@ -964,6 +1005,41 @@
 #define   MEMSTAT_SRC_CTL_STDBY 3
 #define RCPREVBSYTUPAVG		0x113b8
 #define RCPREVBSYTDNAVG		0x113bc
+#define SDEW			0x1124c
+#define CSIEW0			0x11250
+#define CSIEW1			0x11254
+#define CSIEW2			0x11258
+#define PEW			0x1125c
+#define DEW			0x11270
+#define MCHAFE			0x112c0
+#define CSIEC			0x112e0
+#define DMIEC			0x112e4
+#define DDREC			0x112e8
+#define PEG0EC			0x112ec
+#define PEG1EC			0x112f0
+#define GFXEC			0x112f4
+#define RPPREVBSYTUPAVG		0x113b8
+#define RPPREVBSYTDNAVG		0x113bc
+#define ECR			0x11600
+#define   ECR_GPFE		(1<<31)
+#define   ECR_IMONE		(1<<30)
+#define   ECR_CAP_MASK		0x0000001f /* Event range, 0-31 */
+#define OGW0			0x11608
+#define OGW1			0x1160c
+#define EG0			0x11610
+#define EG1			0x11614
+#define EG2			0x11618
+#define EG3			0x1161c
+#define EG4			0x11620
+#define EG5			0x11624
+#define EG6			0x11628
+#define EG7			0x1162c
+#define PXW			0x11664
+#define PXWL			0x11680
+#define LCFUSE02		0x116c0
+#define   LCFUSE_HIV_MASK	0x000000ff
+#define CSIPLL0			0x12c10
+#define DDRMPLL1		0x12c20
 #define PEG_BAND_GAP_DATA	0x14d68
 
 /*
@@ -1055,7 +1131,6 @@
 #define CRT_HOTPLUG_DETECT_VOLTAGE_325MV	(0 << 2)
 #define CRT_HOTPLUG_DETECT_VOLTAGE_475MV	(1 << 2)
 #define CRT_HOTPLUG_MASK			(0x3fc) /* Bits 9-2 */
-#define CRT_FORCE_HOTPLUG_MASK			0xfffffe1f
 
 #define PORT_HOTPLUG_STAT	0x61114
 #define   HDMIB_HOTPLUG_INT_STATUS		(1 << 29)
@@ -1764,6 +1839,14 @@
 #define   DP_LINK_TRAIN_MASK		(3 << 28)
 #define   DP_LINK_TRAIN_SHIFT		28
 
+/* CPT Link training mode */
+#define   DP_LINK_TRAIN_PAT_1_CPT	(0 << 8)
+#define   DP_LINK_TRAIN_PAT_2_CPT	(1 << 8)
+#define   DP_LINK_TRAIN_PAT_IDLE_CPT	(2 << 8)
+#define   DP_LINK_TRAIN_OFF_CPT		(3 << 8)
+#define   DP_LINK_TRAIN_MASK_CPT	(7 << 8)
+#define   DP_LINK_TRAIN_SHIFT_CPT	8
+
 /* Signal voltages. These are mostly controlled by the other end */
 #define   DP_VOLTAGE_0_4		(0 << 25)
 #define   DP_VOLTAGE_0_6		(1 << 25)
@@ -1924,7 +2007,10 @@
 /* Display & cursor control */
 
 /* dithering flag on Ironlake */
-#define PIPE_ENABLE_DITHER	(1 << 4)
+#define PIPE_ENABLE_DITHER		(1 << 4)
+#define PIPE_DITHER_TYPE_MASK		(3 << 2)
+#define PIPE_DITHER_TYPE_SPATIAL	(0 << 2)
+#define PIPE_DITHER_TYPE_ST01		(1 << 2)
 /* Pipe A */
 #define PIPEADSL		0x70000
 #define PIPEACONF		0x70008
@@ -1988,15 +2074,24 @@
 
 #define DSPFW1			0x70034
 #define   DSPFW_SR_SHIFT	23
+#define   DSPFW_SR_MASK 	(0x1ff<<23)
 #define   DSPFW_CURSORB_SHIFT	16
+#define   DSPFW_CURSORB_MASK	(0x3f<<16)
 #define   DSPFW_PLANEB_SHIFT	8
+#define   DSPFW_PLANEB_MASK	(0x7f<<8)
+#define   DSPFW_PLANEA_MASK	(0x7f)
 #define DSPFW2			0x70038
 #define   DSPFW_CURSORA_MASK	0x00003f00
 #define   DSPFW_CURSORA_SHIFT	8
+#define   DSPFW_PLANEC_MASK	(0x7f)
 #define DSPFW3			0x7003c
 #define   DSPFW_HPLL_SR_EN	(1<<31)
 #define   DSPFW_CURSOR_SR_SHIFT	24
 #define   PINEVIEW_SELF_REFRESH_EN	(1<<30)
+#define   DSPFW_CURSOR_SR_MASK		(0x3f<<24)
+#define   DSPFW_HPLL_CURSOR_SHIFT	16
+#define   DSPFW_HPLL_CURSOR_MASK	(0x3f<<16)
+#define   DSPFW_HPLL_SR_MASK		(0x1ff)
 
 /* FIFO watermark sizes etc */
 #define G4X_FIFO_LINE_SIZE	64
@@ -2023,6 +2118,43 @@
 #define PINEVIEW_CURSOR_DFT_WM	0
 #define PINEVIEW_CURSOR_GUARD_WM	5
 
+
+/* Watermark registers on Ironlake */
+#define WM0_PIPEA_ILK		0x45100
+#define  WM0_PIPE_PLANE_MASK	(0x7f<<16)
+#define  WM0_PIPE_PLANE_SHIFT	16
+#define  WM0_PIPE_SPRITE_MASK	(0x3f<<8)
+#define  WM0_PIPE_SPRITE_SHIFT	8
+#define  WM0_PIPE_CURSOR_MASK	(0x1f)
+
+#define WM0_PIPEB_ILK		0x45104
+#define WM1_LP_ILK		0x45108
+#define  WM1_LP_SR_EN		(1<<31)
+#define  WM1_LP_LATENCY_SHIFT	24
+#define  WM1_LP_LATENCY_MASK	(0x7f<<24)
+#define  WM1_LP_SR_MASK		(0x1ff<<8)
+#define  WM1_LP_SR_SHIFT	8
+#define  WM1_LP_CURSOR_MASK	(0x3f)
+
+/* Memory latency timer register */
+#define MLTR_ILK		0x11222
+/* memory self-refresh latency is in units of 0.5us */
+#define  ILK_SRLT_MASK		0x3f
+
+/* FIFO sizes on Ironlake */
+#define ILK_DISPLAY_FIFO	128
+#define ILK_DISPLAY_MAXWM	64
+#define ILK_DISPLAY_DFTWM	8
+
+#define ILK_DISPLAY_SR_FIFO	512
+#define ILK_DISPLAY_MAX_SRWM	0x1ff
+#define ILK_DISPLAY_DFT_SRWM	0x3f
+#define ILK_CURSOR_SR_FIFO	64
+#define ILK_CURSOR_MAX_SRWM	0x3f
+#define ILK_CURSOR_DFT_SRWM	8
+
+#define ILK_FIFO_LINE_SIZE	64
+
 /*
  * The two pipe frame counter registers are not synchronized, so
  * reading a stable value is somewhat tricky. The following code
@@ -2298,14 +2430,23 @@
 #define GT_PIPE_NOTIFY		(1 << 4)
 #define GT_SYNC_STATUS          (1 << 2)
 #define GT_USER_INTERRUPT       (1 << 0)
+#define GT_BSD_USER_INTERRUPT   (1 << 5)
+
 
 #define GTISR   0x44010
 #define GTIMR   0x44014
 #define GTIIR   0x44018
 #define GTIER   0x4401c
 
+#define ILK_DISPLAY_CHICKEN2	0x42004
+#define  ILK_DPARB_GATE	(1<<22)
+#define  ILK_VSDPFD_FULL	(1<<21)
+#define ILK_DSPCLK_GATE		0x42020
+#define  ILK_DPARB_CLK_GATE	(1<<5)
+
 #define DISP_ARB_CTL	0x45000
 #define  DISP_TILE_SURFACE_SWIZZLING	(1<<13)
+#define  DISP_FBC_WM_DIS		(1<<15)
 
 /* PCH */
 
@@ -2316,6 +2457,11 @@
 #define SDE_PORTB_HOTPLUG       (1 << 8)
 #define SDE_SDVOB_HOTPLUG       (1 << 6)
 #define SDE_HOTPLUG_MASK	(0xf << 8)
+/* CPT */
+#define SDE_CRT_HOTPLUG_CPT	(1 << 19)
+#define SDE_PORTD_HOTPLUG_CPT	(1 << 23)
+#define SDE_PORTC_HOTPLUG_CPT	(1 << 22)
+#define SDE_PORTB_HOTPLUG_CPT	(1 << 21)
 
 #define SDEISR  0xc4000
 #define SDEIMR  0xc4004
@@ -2407,6 +2553,17 @@
 #define PCH_SSC4_PARMS          0xc6210
 #define PCH_SSC4_AUX_PARMS      0xc6214
 
+#define PCH_DPLL_SEL		0xc7000
+#define  TRANSA_DPLL_ENABLE	(1<<3)
+#define	 TRANSA_DPLLB_SEL	(1<<0)
+#define	 TRANSA_DPLLA_SEL	0
+#define  TRANSB_DPLL_ENABLE	(1<<7)
+#define	 TRANSB_DPLLB_SEL	(1<<4)
+#define	 TRANSB_DPLLA_SEL	(0)
+#define  TRANSC_DPLL_ENABLE	(1<<11)
+#define	 TRANSC_DPLLB_SEL	(1<<8)
+#define	 TRANSC_DPLLA_SEL	(0)
+
 /* transcoder */
 
 #define TRANS_HTOTAL_A          0xe0000
@@ -2493,6 +2650,19 @@
 #define  FDI_LINK_TRAIN_PRE_EMPHASIS_1_5X (1<<22)
 #define  FDI_LINK_TRAIN_PRE_EMPHASIS_2X   (2<<22)
 #define  FDI_LINK_TRAIN_PRE_EMPHASIS_3X   (3<<22)
+/* ILK always uses 400mV 0dB for voltage swing and pre-emphasis level;
+   SNB has different settings. */
+/* SNB A-stepping */
+#define  FDI_LINK_TRAIN_400MV_0DB_SNB_A		(0x38<<22)
+#define  FDI_LINK_TRAIN_400MV_6DB_SNB_A		(0x02<<22)
+#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_A	(0x01<<22)
+#define  FDI_LINK_TRAIN_800MV_0DB_SNB_A		(0x0<<22)
+/* SNB B-stepping */
+#define  FDI_LINK_TRAIN_400MV_0DB_SNB_B		(0x0<<22)
+#define  FDI_LINK_TRAIN_400MV_6DB_SNB_B		(0x3a<<22)
+#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_B	(0x39<<22)
+#define  FDI_LINK_TRAIN_800MV_0DB_SNB_B		(0x38<<22)
+#define  FDI_LINK_TRAIN_VOL_EMP_MASK		(0x3f<<22)
 #define  FDI_DP_PORT_WIDTH_X1           (0<<19)
 #define  FDI_DP_PORT_WIDTH_X2           (1<<19)
 #define  FDI_DP_PORT_WIDTH_X3           (2<<19)
@@ -2525,6 +2695,13 @@
 #define  FDI_RX_ENHANCE_FRAME_ENABLE    (1<<6)
 #define  FDI_SEL_RAWCLK                 (0<<4)
 #define  FDI_SEL_PCDCLK                 (1<<4)
+/* CPT */
+#define  FDI_AUTO_TRAINING			(1<<10)
+#define  FDI_LINK_TRAIN_PATTERN_1_CPT		(0<<8)
+#define  FDI_LINK_TRAIN_PATTERN_2_CPT		(1<<8)
+#define  FDI_LINK_TRAIN_PATTERN_IDLE_CPT	(2<<8)
+#define  FDI_LINK_TRAIN_NORMAL_CPT		(3<<8)
+#define  FDI_LINK_TRAIN_PATTERN_MASK_CPT	(3<<8)
 
 #define FDI_RXA_MISC            0xf0010
 #define FDI_RXB_MISC            0xf1010
@@ -2590,12 +2767,18 @@
 #define  SDVO_ENCODING          (0)
 #define  TMDS_ENCODING          (2 << 10)
 #define  NULL_PACKET_VSYNC_ENABLE       (1 << 9)
+/* CPT */
+#define  HDMI_MODE_SELECT	(1 << 9)
+#define  DVI_MODE_SELECT	(0)
 #define  SDVOB_BORDER_ENABLE    (1 << 7)
 #define  AUDIO_ENABLE           (1 << 6)
 #define  VSYNC_ACTIVE_HIGH      (1 << 4)
 #define  HSYNC_ACTIVE_HIGH      (1 << 3)
 #define  PORT_DETECTED          (1 << 2)
 
+/* PCH SDVOB multiplex with HDMIB */
+#define PCH_SDVOB	HDMIB
+
 #define HDMIC   0xe1150
 #define HDMID   0xe1160
 
@@ -2653,4 +2836,42 @@
 #define PCH_DPD_AUX_CH_DATA4	0xe4320
 #define PCH_DPD_AUX_CH_DATA5	0xe4324
 
+/* CPT */
+#define  PORT_TRANS_A_SEL_CPT	0
+#define  PORT_TRANS_B_SEL_CPT	(1<<29)
+#define  PORT_TRANS_C_SEL_CPT	(2<<29)
+#define  PORT_TRANS_SEL_MASK	(3<<29)
+
+#define TRANS_DP_CTL_A		0xe0300
+#define TRANS_DP_CTL_B		0xe1300
+#define TRANS_DP_CTL_C		0xe2300
+#define  TRANS_DP_OUTPUT_ENABLE	(1<<31)
+#define  TRANS_DP_PORT_SEL_B	(0<<29)
+#define  TRANS_DP_PORT_SEL_C	(1<<29)
+#define  TRANS_DP_PORT_SEL_D	(2<<29)
+#define  TRANS_DP_PORT_SEL_MASK	(3<<29)
+#define  TRANS_DP_AUDIO_ONLY	(1<<26)
+#define  TRANS_DP_ENH_FRAMING	(1<<18)
+#define  TRANS_DP_8BPC		(0<<9)
+#define  TRANS_DP_10BPC		(1<<9)
+#define  TRANS_DP_6BPC		(2<<9)
+#define  TRANS_DP_12BPC		(3<<9)
+#define  TRANS_DP_VSYNC_ACTIVE_HIGH	(1<<4)
+#define  TRANS_DP_VSYNC_ACTIVE_LOW	0
+#define  TRANS_DP_HSYNC_ACTIVE_HIGH	(1<<3)
+#define  TRANS_DP_HSYNC_ACTIVE_LOW	0
+
+/* SNB eDP training params */
+/* SNB A-stepping */
+#define  EDP_LINK_TRAIN_400MV_0DB_SNB_A		(0x38<<22)
+#define  EDP_LINK_TRAIN_400MV_6DB_SNB_A		(0x02<<22)
+#define  EDP_LINK_TRAIN_600MV_3_5DB_SNB_A	(0x01<<22)
+#define  EDP_LINK_TRAIN_800MV_0DB_SNB_A		(0x0<<22)
+/* SNB B-stepping */
+#define  EDP_LINK_TRAIN_400MV_0DB_SNB_B		(0x0<<22)
+#define  EDP_LINK_TRAIN_400MV_6DB_SNB_B		(0x3a<<22)
+#define  EDP_LINK_TRAIN_600MV_3_5DB_SNB_B	(0x39<<22)
+#define  EDP_LINK_TRAIN_800MV_0DB_SNB_B		(0x38<<22)
+#define  EDP_LINK_TRAIN_VOL_EMP_MASK_SNB	(0x3f<<22)
+
 #endif /* _I915_REG_H_ */
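
The DSPFW_*, WM0_* and WM1_* additions above come in mask/shift pairs and are
consumed with the usual read-modify-write idiom when a watermark field is
reprogrammed. A minimal sketch, assuming a computed watermark value wm and the
driver's I915_READ/I915_WRITE accessors:

	u32 reg = I915_READ(DSPFW1);
	reg &= ~DSPFW_SR_MASK;				/* clear the 9-bit SR field */
	reg |= (wm << DSPFW_SR_SHIFT) & DSPFW_SR_MASK;	/* install the new value */
	I915_WRITE(DSPFW1, reg);
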
diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
index ac0d1a7..60a5800 100644
--- a/drivers/gpu/drm/i915/i915_suspend.c
+++ b/drivers/gpu/drm/i915/i915_suspend.c
@@ -600,14 +600,16 @@ void i915_save_display(struct drm_device *dev)
 	}
 	/* FIXME: save TV & SDVO state */
 
-	/* FBC state */
-	if (IS_GM45(dev)) {
-		dev_priv->saveDPFC_CB_BASE = I915_READ(DPFC_CB_BASE);
-	} else {
-		dev_priv->saveFBC_CFB_BASE = I915_READ(FBC_CFB_BASE);
-		dev_priv->saveFBC_LL_BASE = I915_READ(FBC_LL_BASE);
-		dev_priv->saveFBC_CONTROL2 = I915_READ(FBC_CONTROL2);
-		dev_priv->saveFBC_CONTROL = I915_READ(FBC_CONTROL);
+	/* Only save FBC state on platforms that support FBC */
+	if (I915_HAS_FBC(dev)) {
+		if (IS_GM45(dev)) {
+			dev_priv->saveDPFC_CB_BASE = I915_READ(DPFC_CB_BASE);
+		} else {
+			dev_priv->saveFBC_CFB_BASE = I915_READ(FBC_CFB_BASE);
+			dev_priv->saveFBC_LL_BASE = I915_READ(FBC_LL_BASE);
+			dev_priv->saveFBC_CONTROL2 = I915_READ(FBC_CONTROL2);
+			dev_priv->saveFBC_CONTROL = I915_READ(FBC_CONTROL);
+		}
 	}
 
 	/* VGA state */
@@ -702,18 +704,19 @@ void i915_restore_display(struct drm_device *dev)
 	}
 	/* FIXME: restore TV & SDVO state */
 
-	/* FBC info */
-	if (IS_GM45(dev)) {
-		g4x_disable_fbc(dev);
-		I915_WRITE(DPFC_CB_BASE, dev_priv->saveDPFC_CB_BASE);
-	} else {
-		i8xx_disable_fbc(dev);
-		I915_WRITE(FBC_CFB_BASE, dev_priv->saveFBC_CFB_BASE);
-		I915_WRITE(FBC_LL_BASE, dev_priv->saveFBC_LL_BASE);
-		I915_WRITE(FBC_CONTROL2, dev_priv->saveFBC_CONTROL2);
-		I915_WRITE(FBC_CONTROL, dev_priv->saveFBC_CONTROL);
+	/* Only restore FBC state on platforms that support FBC */
+	if (I915_HAS_FBC(dev)) {
+		if (IS_GM45(dev)) {
+			g4x_disable_fbc(dev);
+			I915_WRITE(DPFC_CB_BASE, dev_priv->saveDPFC_CB_BASE);
+		} else {
+			i8xx_disable_fbc(dev);
+			I915_WRITE(FBC_CFB_BASE, dev_priv->saveFBC_CFB_BASE);
+			I915_WRITE(FBC_LL_BASE, dev_priv->saveFBC_LL_BASE);
+			I915_WRITE(FBC_CONTROL2, dev_priv->saveFBC_CONTROL2);
+			I915_WRITE(FBC_CONTROL, dev_priv->saveFBC_CONTROL);
+		}
 	}
-
 	/* VGA state */
 	if (IS_IRONLAKE(dev))
 		I915_WRITE(CPU_VGACNTRL, dev_priv->saveVGACNTRL);
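
Both the save and restore paths are now gated on I915_HAS_FBC(), so FBC
registers are never touched on hardware without a compressor, while the
GM45/DPFC versus 8xx split is kept inside the guard. Condensed, both functions
now share this shape (save_dpfc_state/save_i8xx_fbc_state are hypothetical
names standing in for the register accesses above):

	if (I915_HAS_FBC(dev)) {		/* platform has an FBC unit at all */
		if (IS_GM45(dev))
			save_dpfc_state(dev);	  /* DPFC-style: one base register */
		else
			save_i8xx_fbc_state(dev); /* 8xx-style: CFB/LL base + control */
	}
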
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 01840d9..fab2176 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -53,23 +53,6 @@ TRACE_EVENT(i915_gem_object_bind,
 		      __entry->obj, __entry->gtt_offset)
 );
 
-TRACE_EVENT(i915_gem_object_clflush,
-
-	    TP_PROTO(struct drm_gem_object *obj),
-
-	    TP_ARGS(obj),
-
-	    TP_STRUCT__entry(
-			     __field(struct drm_gem_object *, obj)
-			     ),
-
-	    TP_fast_assign(
-			   __entry->obj = obj;
-			   ),
-
-	    TP_printk("obj=%p", __entry->obj)
-);
-
 TRACE_EVENT(i915_gem_object_change_domain,
 
 	    TP_PROTO(struct drm_gem_object *obj, uint32_t old_read_domains, uint32_t old_write_domain),
@@ -115,7 +98,7 @@ TRACE_EVENT(i915_gem_object_get_fence,
 		      __entry->obj, __entry->fence, __entry->tiling_mode)
 );
 
-TRACE_EVENT(i915_gem_object_unbind,
+DECLARE_EVENT_CLASS(i915_gem_object,
 
 	    TP_PROTO(struct drm_gem_object *obj),
 
@@ -132,21 +115,25 @@ TRACE_EVENT(i915_gem_object_unbind,
 	    TP_printk("obj=%p", __entry->obj)
 );
 
-TRACE_EVENT(i915_gem_object_destroy,
+DEFINE_EVENT(i915_gem_object, i915_gem_object_clflush,
 
 	    TP_PROTO(struct drm_gem_object *obj),
 
-	    TP_ARGS(obj),
+	    TP_ARGS(obj)
+);
 
-	    TP_STRUCT__entry(
-			     __field(struct drm_gem_object *, obj)
-			     ),
+DEFINE_EVENT(i915_gem_object, i915_gem_object_unbind,
 
-	    TP_fast_assign(
-			   __entry->obj = obj;
-			   ),
+	    TP_PROTO(struct drm_gem_object *obj),
 
-	    TP_printk("obj=%p", __entry->obj)
+	    TP_ARGS(obj)
+);
+
+DEFINE_EVENT(i915_gem_object, i915_gem_object_destroy,
+
+	    TP_PROTO(struct drm_gem_object *obj),
+
+	    TP_ARGS(obj)
 );
 
 /* batch tracing */
@@ -197,8 +184,7 @@ TRACE_EVENT(i915_gem_request_flush,
 		      __entry->flush_domains, __entry->invalidate_domains)
 );
 
-
-TRACE_EVENT(i915_gem_request_complete,
+DECLARE_EVENT_CLASS(i915_gem_request,
 
 	    TP_PROTO(struct drm_device *dev, u32 seqno),
 
@@ -217,64 +203,35 @@ TRACE_EVENT(i915_gem_request_complete,
 	    TP_printk("dev=%u, seqno=%u", __entry->dev, __entry->seqno)
 );
 
-TRACE_EVENT(i915_gem_request_retire,
+DEFINE_EVENT(i915_gem_request, i915_gem_request_complete,
 
 	    TP_PROTO(struct drm_device *dev, u32 seqno),
 
-	    TP_ARGS(dev, seqno),
-
-	    TP_STRUCT__entry(
-			     __field(u32, dev)
-			     __field(u32, seqno)
-			     ),
-
-	    TP_fast_assign(
-			   __entry->dev = dev->primary->index;
-			   __entry->seqno = seqno;
-			   ),
-
-	    TP_printk("dev=%u, seqno=%u", __entry->dev, __entry->seqno)
+	    TP_ARGS(dev, seqno)
 );
 
-TRACE_EVENT(i915_gem_request_wait_begin,
+DEFINE_EVENT(i915_gem_request, i915_gem_request_retire,
 
 	    TP_PROTO(struct drm_device *dev, u32 seqno),
 
-	    TP_ARGS(dev, seqno),
-
-	    TP_STRUCT__entry(
-			     __field(u32, dev)
-			     __field(u32, seqno)
-			     ),
-
-	    TP_fast_assign(
-			   __entry->dev = dev->primary->index;
-			   __entry->seqno = seqno;
-			   ),
-
-	    TP_printk("dev=%u, seqno=%u", __entry->dev, __entry->seqno)
+	    TP_ARGS(dev, seqno)
 );
 
-TRACE_EVENT(i915_gem_request_wait_end,
+DEFINE_EVENT(i915_gem_request, i915_gem_request_wait_begin,
 
 	    TP_PROTO(struct drm_device *dev, u32 seqno),
 
-	    TP_ARGS(dev, seqno),
+	    TP_ARGS(dev, seqno)
+);
 
-	    TP_STRUCT__entry(
-			     __field(u32, dev)
-			     __field(u32, seqno)
-			     ),
+DEFINE_EVENT(i915_gem_request, i915_gem_request_wait_end,
 
-	    TP_fast_assign(
-			   __entry->dev = dev->primary->index;
-			   __entry->seqno = seqno;
-			   ),
+	    TP_PROTO(struct drm_device *dev, u32 seqno),
 
-	    TP_printk("dev=%u, seqno=%u", __entry->dev, __entry->seqno)
+	    TP_ARGS(dev, seqno)
 );
 
-TRACE_EVENT(i915_ring_wait_begin,
+DECLARE_EVENT_CLASS(i915_ring,
 
 	    TP_PROTO(struct drm_device *dev),
 
@@ -291,26 +248,23 @@ TRACE_EVENT(i915_ring_wait_begin,
 	    TP_printk("dev=%u", __entry->dev)
 );
 
-TRACE_EVENT(i915_ring_wait_end,
+DEFINE_EVENT(i915_ring, i915_ring_wait_begin,
 
 	    TP_PROTO(struct drm_device *dev),
 
-	    TP_ARGS(dev),
+	    TP_ARGS(dev)
+);
 
-	    TP_STRUCT__entry(
-			     __field(u32, dev)
-			     ),
+DEFINE_EVENT(i915_ring, i915_ring_wait_end,
 
-	    TP_fast_assign(
-			   __entry->dev = dev->primary->index;
-			   ),
+	    TP_PROTO(struct drm_device *dev),
 
-	    TP_printk("dev=%u", __entry->dev)
+	    TP_ARGS(dev)
 );
 
 #endif /* _I915_TRACE_H_ */
 
 /* This part must be outside protection */
 #undef TRACE_INCLUDE_PATH
-#define TRACE_INCLUDE_PATH ../../drivers/gpu/drm/i915
+#define TRACE_INCLUDE_PATH .
 #include <trace/define_trace.h>
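
The rewrite above converts the i915 tracepoints to the
DECLARE_EVENT_CLASS()/DEFINE_EVENT() API: one event class carries the shared
prototype, entry struct, assignment and printk format, and each concrete event
then only supplies its name and arguments, shrinking the generated code
compared with a full TRACE_EVENT() per event. A minimal sketch of the pattern,
with hypothetical event names:

	DECLARE_EVENT_CLASS(i915_example,	/* shared layout and format */
		    TP_PROTO(struct drm_device *dev),
		    TP_ARGS(dev),
		    TP_STRUCT__entry(__field(u32, dev)),
		    TP_fast_assign(__entry->dev = dev->primary->index;),
		    TP_printk("dev=%u", __entry->dev)
	);

	DEFINE_EVENT(i915_example, i915_example_begin,	/* name + args only */
		    TP_PROTO(struct drm_device *dev),
		    TP_ARGS(dev)
	);

The TRACE_INCLUDE_PATH change at the end makes the header self-locating; this
presumably relies on a matching include path in the i915 Makefile, which is
not part of this hunk.
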
diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
index f9ba452..96f75d7 100644
--- a/drivers/gpu/drm/i915/intel_bios.c
+++ b/drivers/gpu/drm/i915/intel_bios.c
@@ -95,6 +95,16 @@ fill_detail_timing_data(struct drm_display_mode *panel_fixed_mode,
 	panel_fixed_mode->clock = dvo_timing->clock * 10;
 	panel_fixed_mode->type = DRM_MODE_TYPE_PREFERRED;
 
+	if (dvo_timing->hsync_positive)
+		panel_fixed_mode->flags |= DRM_MODE_FLAG_PHSYNC;
+	else
+		panel_fixed_mode->flags |= DRM_MODE_FLAG_NHSYNC;
+
+	if (dvo_timing->vsync_positive)
+		panel_fixed_mode->flags |= DRM_MODE_FLAG_PVSYNC;
+	else
+		panel_fixed_mode->flags |= DRM_MODE_FLAG_NVSYNC;
+
 	/* Some VBTs have bogus h/vtotal values */
 	if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal)
 		panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1;
@@ -366,6 +376,7 @@ parse_sdvo_device_mapping(struct drm_i915_private *dev_priv,
 			p_mapping->dvo_port = p_child->dvo_port;
 			p_mapping->slave_addr = p_child->slave_addr;
 			p_mapping->dvo_wiring = p_child->dvo_wiring;
+			p_mapping->ddc_pin = p_child->ddc_pin;
 			p_mapping->initialized = 1;
 		} else {
 			DRM_DEBUG_KMS("Maybe one SDVO port is shared by "
diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c
index 759c2ef..22ff384 100644
--- a/drivers/gpu/drm/i915/intel_crt.c
+++ b/drivers/gpu/drm/i915/intel_crt.c
@@ -136,11 +136,17 @@ static void intel_crt_mode_set(struct drm_encoder *encoder,
 		adpa |= ADPA_VSYNC_ACTIVE_HIGH;
 
 	if (intel_crtc->pipe == 0) {
-		adpa |= ADPA_PIPE_A_SELECT;
+		if (HAS_PCH_CPT(dev))
+			adpa |= PORT_TRANS_A_SEL_CPT;
+		else
+			adpa |= ADPA_PIPE_A_SELECT;
 		if (!HAS_PCH_SPLIT(dev))
 			I915_WRITE(BCLRPAT_A, 0);
 	} else {
-		adpa |= ADPA_PIPE_B_SELECT;
+		if (HAS_PCH_CPT(dev))
+			adpa |= PORT_TRANS_B_SEL_CPT;
+		else
+			adpa |= ADPA_PIPE_B_SELECT;
 		if (!HAS_PCH_SPLIT(dev))
 			I915_WRITE(BCLRPAT_B, 0);
 	}
@@ -152,15 +158,21 @@ static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector)
 {
 	struct drm_device *dev = connector->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 adpa;
+	u32 adpa, temp;
 	bool ret;
 
-	adpa = I915_READ(PCH_ADPA);
+	temp = adpa = I915_READ(PCH_ADPA);
 
-	adpa &= ~ADPA_CRT_HOTPLUG_MASK;
-	/* disable HPD first */
-	I915_WRITE(PCH_ADPA, adpa);
-	(void)I915_READ(PCH_ADPA);
+	if (HAS_PCH_CPT(dev)) {
+		/* Disable DAC before force detect */
+		I915_WRITE(PCH_ADPA, adpa & ~ADPA_DAC_ENABLE);
+		(void)I915_READ(PCH_ADPA);
+	} else {
+		adpa &= ~ADPA_CRT_HOTPLUG_MASK;
+		/* disable HPD first */
+		I915_WRITE(PCH_ADPA, adpa);
+		(void)I915_READ(PCH_ADPA);
+	}
 
 	adpa |= (ADPA_CRT_HOTPLUG_PERIOD_128 |
 			ADPA_CRT_HOTPLUG_WARMUP_10MS |
@@ -176,6 +188,11 @@ static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector)
 	while ((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) != 0)
 		;
 
+	if (HAS_PCH_CPT(dev)) {
+		I915_WRITE(PCH_ADPA, temp);
+		(void)I915_READ(PCH_ADPA);
+	}
+
 	/* Check the status to see if both blue and green are on now */
 	adpa = I915_READ(PCH_ADPA);
 	adpa &= ADPA_CRT_HOTPLUG_MONITOR_MASK;
@@ -200,7 +217,8 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
 {
 	struct drm_device *dev = connector->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 hotplug_en;
+	u32 hotplug_en, orig, stat;
+	bool ret = false;
 	int i, tries = 0;
 
 	if (HAS_PCH_SPLIT(dev))
@@ -215,8 +233,8 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
 		tries = 2;
 	else
 		tries = 1;
-	hotplug_en = I915_READ(PORT_HOTPLUG_EN);
-	hotplug_en &= CRT_FORCE_HOTPLUG_MASK;
+	hotplug_en = orig = I915_READ(PORT_HOTPLUG_EN);
+	hotplug_en &= CRT_HOTPLUG_MASK;
 	hotplug_en |= CRT_HOTPLUG_FORCE_DETECT;
 
 	if (IS_G4X(dev))
@@ -238,16 +256,22 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
 		} while (time_after(timeout, jiffies));
 	}
 
-	if ((I915_READ(PORT_HOTPLUG_STAT) & CRT_HOTPLUG_MONITOR_MASK) !=
-	    CRT_HOTPLUG_MONITOR_NONE)
-		return true;
+	stat = I915_READ(PORT_HOTPLUG_STAT);
+	if ((stat & CRT_HOTPLUG_MONITOR_MASK) != CRT_HOTPLUG_MONITOR_NONE)
+		ret = true;
+
+	/* clear the interrupt we just generated, if any */
+	I915_WRITE(PORT_HOTPLUG_STAT, CRT_HOTPLUG_INT_STATUS);
+
+	/* and put the bits back */
+	I915_WRITE(PORT_HOTPLUG_EN, orig);
 
-	return false;
+	return ret;
 }
 
-static bool intel_crt_detect_ddc(struct drm_connector *connector)
+static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 
 	/* CRT should always be at 0, but check anyway */
 	if (intel_encoder->type != INTEL_OUTPUT_ANALOG)
@@ -387,8 +411,8 @@ intel_crt_load_detect(struct drm_crtc *crtc, struct intel_encoder *intel_encoder
 static enum drm_connector_status intel_crt_detect(struct drm_connector *connector)
 {
 	struct drm_device *dev = connector->dev;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct drm_encoder *encoder = &intel_encoder->enc;
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct drm_crtc *crtc;
 	int dpms_mode;
 	enum drm_connector_status status;
@@ -400,18 +424,19 @@ static enum drm_connector_status intel_crt_detect(struct drm_connector *connecto
 			return connector_status_disconnected;
 	}
 
-	if (intel_crt_detect_ddc(connector))
+	if (intel_crt_detect_ddc(encoder))
 		return connector_status_connected;
 
 	/* for pre-945g platforms use load detect */
 	if (encoder->crtc && encoder->crtc->enabled) {
 		status = intel_crt_load_detect(encoder->crtc, intel_encoder);
 	} else {
-		crtc = intel_get_load_detect_pipe(intel_encoder,
+		crtc = intel_get_load_detect_pipe(intel_encoder, connector,
 						  NULL, &dpms_mode);
 		if (crtc) {
 			status = intel_crt_load_detect(crtc, intel_encoder);
-			intel_release_load_detect_pipe(intel_encoder, dpms_mode);
+			intel_release_load_detect_pipe(intel_encoder,
+						       connector, dpms_mode);
 		} else
 			status = connector_status_unknown;
 	}
@@ -421,9 +446,6 @@ static enum drm_connector_status intel_crt_detect(struct drm_connector *connecto
 
 static void intel_crt_destroy(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-
-	intel_i2c_destroy(intel_encoder->ddc_bus);
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
 	kfree(connector);
@@ -432,29 +454,27 @@ static void intel_crt_destroy(struct drm_connector *connector)
 static int intel_crt_get_modes(struct drm_connector *connector)
 {
 	int ret;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct i2c_adapter *ddcbus;
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct i2c_adapter *ddc_bus;
 	struct drm_device *dev = connector->dev;
 
 
-	ret = intel_ddc_get_modes(intel_encoder);
+	ret = intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 	if (ret || !IS_G4X(dev))
 		goto end;
 
-	ddcbus = intel_encoder->ddc_bus;
 	/* Try to probe digital port for output in DVI-I -> VGA mode. */
-	intel_encoder->ddc_bus =
-		intel_i2c_create(connector->dev, GPIOD, "CRTDDC_D");
+	ddc_bus = intel_i2c_create(connector->dev, GPIOD, "CRTDDC_D");
 
-	if (!intel_encoder->ddc_bus) {
-		intel_encoder->ddc_bus = ddcbus;
+	if (!ddc_bus) {
 		dev_printk(KERN_ERR, &connector->dev->pdev->dev,
 			   "DDC bus registration failed for CRTDDC_D.\n");
 		goto end;
 	}
 	/* Try to get modes by GPIOD port */
-	ret = intel_ddc_get_modes(intel_encoder);
-	intel_i2c_destroy(ddcbus);
+	ret = intel_ddc_get_modes(connector, ddc_bus);
+	intel_i2c_destroy(ddc_bus);
 
 end:
 	return ret;
@@ -491,12 +511,16 @@ static const struct drm_connector_funcs intel_crt_connector_funcs = {
 static const struct drm_connector_helper_funcs intel_crt_connector_helper_funcs = {
 	.mode_valid = intel_crt_mode_valid,
 	.get_modes = intel_crt_get_modes,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static void intel_crt_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+
+	intel_i2c_destroy(intel_encoder->ddc_bus);
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_crt_enc_funcs = {
@@ -507,6 +531,7 @@ void intel_crt_init(struct drm_device *dev)
 {
 	struct drm_connector *connector;
 	struct intel_encoder *intel_encoder;
+	struct intel_connector *intel_connector;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	u32 i2c_reg;
 
@@ -514,14 +539,20 @@ void intel_crt_init(struct drm_device *dev)
 	if (!intel_encoder)
 		return;
 
-	connector = &intel_encoder->base;
-	drm_connector_init(dev, &intel_encoder->base,
+	intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
+	if (!intel_connector) {
+		kfree(intel_encoder);
+		return;
+	}
+
+	connector = &intel_connector->base;
+	drm_connector_init(dev, &intel_connector->base,
 			   &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
 
 	drm_encoder_init(dev, &intel_encoder->enc, &intel_crt_enc_funcs,
 			 DRM_MODE_ENCODER_DAC);
 
-	drm_mode_connector_attach_encoder(&intel_encoder->base,
+	drm_mode_connector_attach_encoder(&intel_connector->base,
 					  &intel_encoder->enc);
 
 	/* Set up the DDC bus. */
@@ -545,7 +576,7 @@ void intel_crt_init(struct drm_device *dev)
 				   (1 << INTEL_ANALOG_CLONE_BIT) |
 				   (1 << INTEL_SDVO_LVDS_CLONE_BIT);
 	intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
-	connector->interlace_allowed = 0;
+	connector->interlace_allowed = 1;
 	connector->doublescan_allowed = 0;
 
 	drm_encoder_helper_add(&intel_encoder->enc, &intel_crt_helper_funcs);
@@ -553,5 +584,10 @@ void intel_crt_init(struct drm_device *dev)
 
 	drm_sysfs_connector_add(connector);
 
+	if (I915_HAS_HOTPLUG(dev))
+		connector->polled = DRM_CONNECTOR_POLL_HPD;
+	else
+		connector->polled = DRM_CONNECTOR_POLL_CONNECT;
+
 	dev_priv->hotplug_supported_mask |= CRT_HOTPLUG_INT_STATUS;
 }
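
Taken together, the intel_crt.c hunks split the previously fused
connector/encoder object: struct intel_connector and struct intel_encoder are
now allocated separately, connector callbacks reach the encoder through
intel_attached_encoder()/enc_to_intel_encoder(), and the DDC bus is released
with the encoder rather than the connector. Detection is a three-step ladder;
a condensed sketch of intel_crt_detect() above, with error handling and the
borrowed-CRTC bookkeeping omitted:

	if (I915_HAS_HOTPLUG(dev) && !intel_crt_detect_hotplug(connector))
		return connector_status_disconnected;	/* HPD saw nothing */
	if (intel_crt_detect_ddc(encoder))
		return connector_status_connected;	/* EDID answered over DDC */
	/* last resort: drive a test pattern and sense the DAC load */
	status = intel_crt_load_detect(crtc, intel_encoder);
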
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index c7502b6..d753257 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -742,12 +742,11 @@ bool intel_pipe_has_type (struct drm_crtc *crtc, int type)
 {
     struct drm_device *dev = crtc->dev;
     struct drm_mode_config *mode_config = &dev->mode_config;
-    struct drm_connector *l_entry;
+    struct drm_encoder *l_entry;
 
-    list_for_each_entry(l_entry, &mode_config->connector_list, head) {
-	    if (l_entry->encoder &&
-	        l_entry->encoder->crtc == crtc) {
-		    struct intel_encoder *intel_encoder = to_intel_encoder(l_entry);
+    list_for_each_entry(l_entry, &mode_config->encoder_list, head) {
+	    if (l_entry && l_entry->crtc == crtc) {
+		    struct intel_encoder *intel_encoder = enc_to_intel_encoder(l_entry);
 		    if (intel_encoder->type == type)
 			    return true;
 	    }
@@ -755,23 +754,6 @@ bool intel_pipe_has_type (struct drm_crtc *crtc, int type)
     return false;
 }
 
-static struct drm_connector *
-intel_pipe_get_connector (struct drm_crtc *crtc)
-{
-    struct drm_device *dev = crtc->dev;
-    struct drm_mode_config *mode_config = &dev->mode_config;
-    struct drm_connector *l_entry, *ret = NULL;
-
-    list_for_each_entry(l_entry, &mode_config->connector_list, head) {
-	    if (l_entry->encoder &&
-	        l_entry->encoder->crtc == crtc) {
-		    ret = l_entry;
-		    break;
-	    }
-    }
-    return ret;
-}
-
 #define INTELPllInvalid(s)   do { /* DRM_DEBUG(s); */ return false; } while (0)
 /**
  * Returns whether the given set of divisors are valid for a given refclk with
@@ -1047,28 +1029,36 @@ static void i8xx_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
 void i8xx_disable_fbc(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	unsigned long timeout = jiffies + msecs_to_jiffies(1);
 	u32 fbc_ctl;
 
 	if (!I915_HAS_FBC(dev))
 		return;
 
+	if (!(I915_READ(FBC_CONTROL) & FBC_CTL_EN))
+		return;	/* Already off, just return */
+
 	/* Disable compression */
 	fbc_ctl = I915_READ(FBC_CONTROL);
 	fbc_ctl &= ~FBC_CTL_EN;
 	I915_WRITE(FBC_CONTROL, fbc_ctl);
 
 	/* Wait for compressing bit to clear */
-	while (I915_READ(FBC_STATUS) & FBC_STAT_COMPRESSING)
-		; /* nothing */
+	while (I915_READ(FBC_STATUS) & FBC_STAT_COMPRESSING) {
+		if (time_after(jiffies, timeout)) {
+			DRM_DEBUG_DRIVER("FBC idle timed out\n");
+			break;
+		}
+		; /* do nothing */
+	}
 
 	intel_wait_for_vblank(dev);
 
 	DRM_DEBUG_KMS("disabled FBC\n");
 }
 
-static bool i8xx_fbc_enabled(struct drm_crtc *crtc)
+static bool i8xx_fbc_enabled(struct drm_device *dev)
 {
-	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
 	return I915_READ(FBC_CONTROL) & FBC_CTL_EN;
@@ -1125,14 +1115,43 @@ void g4x_disable_fbc(struct drm_device *dev)
 	DRM_DEBUG_KMS("disabled FBC\n");
 }
 
-static bool g4x_fbc_enabled(struct drm_crtc *crtc)
+static bool g4x_fbc_enabled(struct drm_device *dev)
 {
-	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
 	return I915_READ(DPFC_CONTROL) & DPFC_CTL_EN;
 }
 
+bool intel_fbc_enabled(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	if (!dev_priv->display.fbc_enabled)
+		return false;
+
+	return dev_priv->display.fbc_enabled(dev);
+}
+
+void intel_enable_fbc(struct drm_crtc *crtc, unsigned long interval)
+{
+	struct drm_i915_private *dev_priv = crtc->dev->dev_private;
+
+	if (!dev_priv->display.enable_fbc)
+		return;
+
+	dev_priv->display.enable_fbc(crtc, interval);
+}
+
+void intel_disable_fbc(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	if (!dev_priv->display.disable_fbc)
+		return;
+
+	dev_priv->display.disable_fbc(dev);
+}
+
 /**
  * intel_update_fbc - enable/disable FBC as needed
  * @crtc: CRTC to point the compressor at
@@ -1167,9 +1186,7 @@ static void intel_update_fbc(struct drm_crtc *crtc,
 	if (!i915_powersave)
 		return;
 
-	if (!dev_priv->display.fbc_enabled ||
-	    !dev_priv->display.enable_fbc ||
-	    !dev_priv->display.disable_fbc)
+	if (!I915_HAS_FBC(dev))
 		return;
 
 	if (!crtc->fb)
@@ -1216,28 +1233,26 @@ static void intel_update_fbc(struct drm_crtc *crtc,
 		goto out_disable;
 	}
 
-	if (dev_priv->display.fbc_enabled(crtc)) {
+	if (intel_fbc_enabled(dev)) {
 		/* We can re-enable it in this case, but need to update pitch */
-		if (fb->pitch > dev_priv->cfb_pitch)
-			dev_priv->display.disable_fbc(dev);
-		if (obj_priv->fence_reg != dev_priv->cfb_fence)
-			dev_priv->display.disable_fbc(dev);
-		if (plane != dev_priv->cfb_plane)
-			dev_priv->display.disable_fbc(dev);
+		if ((fb->pitch > dev_priv->cfb_pitch) ||
+		    (obj_priv->fence_reg != dev_priv->cfb_fence) ||
+		    (plane != dev_priv->cfb_plane))
+			intel_disable_fbc(dev);
 	}
 
-	if (!dev_priv->display.fbc_enabled(crtc)) {
-		/* Now try to turn it back on if possible */
-		dev_priv->display.enable_fbc(crtc, 500);
-	}
+	/* Now try to turn it back on if possible */
+	if (!intel_fbc_enabled(dev))
+		intel_enable_fbc(crtc, 500);
 
 	return;
 
 out_disable:
-	DRM_DEBUG_KMS("unsupported config, disabling FBC\n");
 	/* Multiple disables should be harmless */
-	if (dev_priv->display.fbc_enabled(crtc))
-		dev_priv->display.disable_fbc(dev);
+	if (intel_fbc_enabled(dev)) {
+		DRM_DEBUG_KMS("unsupported config, disabling FBC\n");
+		intel_disable_fbc(dev);
+	}
 }
 
 static int
@@ -1381,7 +1396,8 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
 	Start = obj_priv->gtt_offset;
 	Offset = y * crtc->fb->pitch + x * (crtc->fb->bits_per_pixel / 8);
 
-	DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d\n", Start, Offset, x, y);
+	DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
+		      Start, Offset, x, y, crtc->fb->pitch);
 	I915_WRITE(dspstride, crtc->fb->pitch);
 	if (IS_I965G(dev)) {
 		I915_WRITE(dspbase, Offset);
@@ -1510,6 +1526,219 @@ static void ironlake_set_pll_edp (struct drm_crtc *crtc, int clock)
 	udelay(500);
 }
 
+/* The FDI link training functions for ILK/Ibexpeak. */
+static void ironlake_fdi_link_train(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	int pipe = intel_crtc->pipe;
+	int fdi_tx_reg = (pipe == 0) ? FDI_TXA_CTL : FDI_TXB_CTL;
+	int fdi_rx_reg = (pipe == 0) ? FDI_RXA_CTL : FDI_RXB_CTL;
+	int fdi_rx_iir_reg = (pipe == 0) ? FDI_RXA_IIR : FDI_RXB_IIR;
+	int fdi_rx_imr_reg = (pipe == 0) ? FDI_RXA_IMR : FDI_RXB_IMR;
+	u32 temp, tries = 0;
+
+	/* enable CPU FDI TX and PCH FDI RX */
+	temp = I915_READ(fdi_tx_reg);
+	temp |= FDI_TX_ENABLE;
+	temp &= ~(7 << 19);
+	temp |= (intel_crtc->fdi_lanes - 1) << 19;
+	temp &= ~FDI_LINK_TRAIN_NONE;
+	temp |= FDI_LINK_TRAIN_PATTERN_1;
+	I915_WRITE(fdi_tx_reg, temp);
+	I915_READ(fdi_tx_reg);
+
+	temp = I915_READ(fdi_rx_reg);
+	temp &= ~FDI_LINK_TRAIN_NONE;
+	temp |= FDI_LINK_TRAIN_PATTERN_1;
+	I915_WRITE(fdi_rx_reg, temp | FDI_RX_ENABLE);
+	I915_READ(fdi_rx_reg);
+	udelay(150);
+
+	/* Train 1: unmask FDI RX interrupt symbol_lock and bit_lock bits
+	   for the training result */
+	temp = I915_READ(fdi_rx_imr_reg);
+	temp &= ~FDI_RX_SYMBOL_LOCK;
+	temp &= ~FDI_RX_BIT_LOCK;
+	I915_WRITE(fdi_rx_imr_reg, temp);
+	I915_READ(fdi_rx_imr_reg);
+	udelay(150);
+
+	for (;;) {
+		temp = I915_READ(fdi_rx_iir_reg);
+		DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
+
+		if ((temp & FDI_RX_BIT_LOCK)) {
+			DRM_DEBUG_KMS("FDI train 1 done.\n");
+			I915_WRITE(fdi_rx_iir_reg,
+				   temp | FDI_RX_BIT_LOCK);
+			break;
+		}
+
+		tries++;
+
+		if (tries > 5) {
+			DRM_DEBUG_KMS("FDI train 1 fail!\n");
+			break;
+		}
+	}
+
+	/* Train 2 */
+	temp = I915_READ(fdi_tx_reg);
+	temp &= ~FDI_LINK_TRAIN_NONE;
+	temp |= FDI_LINK_TRAIN_PATTERN_2;
+	I915_WRITE(fdi_tx_reg, temp);
+
+	temp = I915_READ(fdi_rx_reg);
+	temp &= ~FDI_LINK_TRAIN_NONE;
+	temp |= FDI_LINK_TRAIN_PATTERN_2;
+	I915_WRITE(fdi_rx_reg, temp);
+	udelay(150);
+
+	tries = 0;
+
+	for (;;) {
+		temp = I915_READ(fdi_rx_iir_reg);
+		DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
+
+		if (temp & FDI_RX_SYMBOL_LOCK) {
+			I915_WRITE(fdi_rx_iir_reg,
+				   temp | FDI_RX_SYMBOL_LOCK);
+			DRM_DEBUG_KMS("FDI train 2 done.\n");
+			break;
+		}
+
+		tries++;
+
+		if (tries > 5) {
+			DRM_DEBUG_KMS("FDI train 2 fail!\n");
+			break;
+		}
+	}
+
+	DRM_DEBUG_KMS("FDI train done\n");
+}
+
+static int snb_b_fdi_train_param[] = {
+	FDI_LINK_TRAIN_400MV_0DB_SNB_B,
+	FDI_LINK_TRAIN_400MV_6DB_SNB_B,
+	FDI_LINK_TRAIN_600MV_3_5DB_SNB_B,
+	FDI_LINK_TRAIN_800MV_0DB_SNB_B,
+};
+
+/* The FDI link training functions for SNB/Cougarpoint. */
+static void gen6_fdi_link_train(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	int pipe = intel_crtc->pipe;
+	int fdi_tx_reg = (pipe == 0) ? FDI_TXA_CTL : FDI_TXB_CTL;
+	int fdi_rx_reg = (pipe == 0) ? FDI_RXA_CTL : FDI_RXB_CTL;
+	int fdi_rx_iir_reg = (pipe == 0) ? FDI_RXA_IIR : FDI_RXB_IIR;
+	int fdi_rx_imr_reg = (pipe == 0) ? FDI_RXA_IMR : FDI_RXB_IMR;
+	u32 temp, i;
+
+	/* enable CPU FDI TX and PCH FDI RX */
+	temp = I915_READ(fdi_tx_reg);
+	temp |= FDI_TX_ENABLE;
+	temp &= ~(7 << 19);
+	temp |= (intel_crtc->fdi_lanes - 1) << 19;
+	temp &= ~FDI_LINK_TRAIN_NONE;
+	temp |= FDI_LINK_TRAIN_PATTERN_1;
+	temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
+	/* SNB-B */
+	temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B;
+	I915_WRITE(fdi_tx_reg, temp);
+	I915_READ(fdi_tx_reg);
+
+	temp = I915_READ(fdi_rx_reg);
+	if (HAS_PCH_CPT(dev)) {
+		temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
+		temp |= FDI_LINK_TRAIN_PATTERN_1_CPT;
+	} else {
+		temp &= ~FDI_LINK_TRAIN_NONE;
+		temp |= FDI_LINK_TRAIN_PATTERN_1;
+	}
+	I915_WRITE(fdi_rx_reg, temp | FDI_RX_ENABLE);
+	I915_READ(fdi_rx_reg);
+	udelay(150);
+
+	/* Train 1: unmask FDI RX interrupt symbol_lock and bit_lock bits
+	   for the training result */
+	temp = I915_READ(fdi_rx_imr_reg);
+	temp &= ~FDI_RX_SYMBOL_LOCK;
+	temp &= ~FDI_RX_BIT_LOCK;
+	I915_WRITE(fdi_rx_imr_reg, temp);
+	I915_READ(fdi_rx_imr_reg);
+	udelay(150);
+
+	for (i = 0; i < 4; i++) {
+		temp = I915_READ(fdi_tx_reg);
+		temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
+		temp |= snb_b_fdi_train_param[i];
+		I915_WRITE(fdi_tx_reg, temp);
+		udelay(500);
+
+		temp = I915_READ(fdi_rx_iir_reg);
+		DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
+
+		if (temp & FDI_RX_BIT_LOCK) {
+			I915_WRITE(fdi_rx_iir_reg,
+				   temp | FDI_RX_BIT_LOCK);
+			DRM_DEBUG_KMS("FDI train 1 done.\n");
+			break;
+		}
+	}
+	if (i == 4)
+		DRM_DEBUG_KMS("FDI train 1 fail!\n");
+
+	/* Train 2 */
+	temp = I915_READ(fdi_tx_reg);
+	temp &= ~FDI_LINK_TRAIN_NONE;
+	temp |= FDI_LINK_TRAIN_PATTERN_2;
+	if (IS_GEN6(dev)) {
+		temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
+		/* SNB-B */
+		temp |= FDI_LINK_TRAIN_400MV_0DB_SNB_B;
+	}
+	I915_WRITE(fdi_tx_reg, temp);
+
+	temp = I915_READ(fdi_rx_reg);
+	if (HAS_PCH_CPT(dev)) {
+		temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
+		temp |= FDI_LINK_TRAIN_PATTERN_2_CPT;
+	} else {
+		temp &= ~FDI_LINK_TRAIN_NONE;
+		temp |= FDI_LINK_TRAIN_PATTERN_2;
+	}
+	I915_WRITE(fdi_rx_reg, temp);
+	udelay(150);
+
+	for (i = 0; i < 4; i++) {
+		temp = I915_READ(fdi_tx_reg);
+		temp &= ~FDI_LINK_TRAIN_VOL_EMP_MASK;
+		temp |= snb_b_fdi_train_param[i];
+		I915_WRITE(fdi_tx_reg, temp);
+		udelay(500);
+
+		temp = I915_READ(fdi_rx_iir_reg);
+		DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
+
+		if (temp & FDI_RX_SYMBOL_LOCK) {
+			I915_WRITE(fdi_rx_iir_reg,
+				   temp | FDI_RX_SYMBOL_LOCK);
+			DRM_DEBUG_KMS("FDI train 2 done.\n");
+			break;
+		}
+	}
+	if (i == 4)
+		DRM_DEBUG_KMS("FDI train 2 fail!\n");
+
+	DRM_DEBUG_KMS("FDI train done.\n");
+}
+
 static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 {
 	struct drm_device *dev = crtc->dev;
@@ -1523,8 +1752,6 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 	int dspbase_reg = (plane == 0) ? DSPAADDR : DSPBADDR;
 	int fdi_tx_reg = (pipe == 0) ? FDI_TXA_CTL : FDI_TXB_CTL;
 	int fdi_rx_reg = (pipe == 0) ? FDI_RXA_CTL : FDI_RXB_CTL;
-	int fdi_rx_iir_reg = (pipe == 0) ? FDI_RXA_IIR : FDI_RXB_IIR;
-	int fdi_rx_imr_reg = (pipe == 0) ? FDI_RXA_IMR : FDI_RXB_IMR;
 	int transconf_reg = (pipe == 0) ? TRANSACONF : TRANSBCONF;
 	int pf_ctl_reg = (pipe == 0) ? PFA_CTL_1 : PFB_CTL_1;
 	int pf_win_size = (pipe == 0) ? PFA_WIN_SZ : PFB_WIN_SZ;
@@ -1541,8 +1768,9 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 	int trans_vtot_reg = (pipe == 0) ? TRANS_VTOTAL_A : TRANS_VTOTAL_B;
 	int trans_vblank_reg = (pipe == 0) ? TRANS_VBLANK_A : TRANS_VBLANK_B;
 	int trans_vsync_reg = (pipe == 0) ? TRANS_VSYNC_A : TRANS_VSYNC_B;
+	int trans_dpll_sel = (pipe == 0) ? 0 : 1;
 	u32 temp;
-	int tries = 5, j, n;
+	int n;
 	u32 pipe_bpc;
 
 	temp = I915_READ(pipeconf_reg);
@@ -1569,12 +1797,6 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 			/* enable eDP PLL */
 			ironlake_enable_pll_edp(crtc);
 		} else {
-			/* enable PCH DPLL */
-			temp = I915_READ(pch_dpll_reg);
-			if ((temp & DPLL_VCO_ENABLE) == 0) {
-				I915_WRITE(pch_dpll_reg, temp | DPLL_VCO_ENABLE);
-				I915_READ(pch_dpll_reg);
-			}
 
 			/* enable PCH FDI RX PLL, wait warmup plus DMI latency */
 			temp = I915_READ(fdi_rx_reg);
@@ -1584,9 +1806,15 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 			 */
 			temp &= ~(0x7 << 16);
 			temp |= (pipe_bpc << 11);
-			I915_WRITE(fdi_rx_reg, temp | FDI_RX_PLL_ENABLE |
-					FDI_SEL_PCDCLK |
-					FDI_DP_PORT_WIDTH_X4); /* default 4 lanes */
+			temp &= ~(7 << 19);
+			temp |= (intel_crtc->fdi_lanes - 1) << 19;
+			I915_WRITE(fdi_rx_reg, temp | FDI_RX_PLL_ENABLE);
+			I915_READ(fdi_rx_reg);
+			udelay(200);
+
+			/* Switch from Rawclk to PCDclk */
+			temp = I915_READ(fdi_rx_reg);
+			I915_WRITE(fdi_rx_reg, temp | FDI_SEL_PCDCLK);
 			I915_READ(fdi_rx_reg);
 			udelay(200);
 
@@ -1629,91 +1857,32 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 		}
 
 		if (!HAS_eDP) {
-			/* enable CPU FDI TX and PCH FDI RX */
-			temp = I915_READ(fdi_tx_reg);
-			temp |= FDI_TX_ENABLE;
-			temp |= FDI_DP_PORT_WIDTH_X4; /* default */
-			temp &= ~FDI_LINK_TRAIN_NONE;
-			temp |= FDI_LINK_TRAIN_PATTERN_1;
-			I915_WRITE(fdi_tx_reg, temp);
-			I915_READ(fdi_tx_reg);
-
-			temp = I915_READ(fdi_rx_reg);
-			temp &= ~FDI_LINK_TRAIN_NONE;
-			temp |= FDI_LINK_TRAIN_PATTERN_1;
-			I915_WRITE(fdi_rx_reg, temp | FDI_RX_ENABLE);
-			I915_READ(fdi_rx_reg);
-
-			udelay(150);
-
-			/* Train FDI. */
-			/* umask FDI RX Interrupt symbol_lock and bit_lock bit
-			   for train result */
-			temp = I915_READ(fdi_rx_imr_reg);
-			temp &= ~FDI_RX_SYMBOL_LOCK;
-			temp &= ~FDI_RX_BIT_LOCK;
-			I915_WRITE(fdi_rx_imr_reg, temp);
-			I915_READ(fdi_rx_imr_reg);
-			udelay(150);
+			/* For PCH output, train the FDI link */
+			if (IS_GEN6(dev))
+				gen6_fdi_link_train(crtc);
+			else
+				ironlake_fdi_link_train(crtc);
 
-			temp = I915_READ(fdi_rx_iir_reg);
-			DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
-
-			if ((temp & FDI_RX_BIT_LOCK) == 0) {
-				for (j = 0; j < tries; j++) {
-					temp = I915_READ(fdi_rx_iir_reg);
-					DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n",
-								temp);
-					if (temp & FDI_RX_BIT_LOCK)
-						break;
-					udelay(200);
-				}
-				if (j != tries)
-					I915_WRITE(fdi_rx_iir_reg,
-							temp | FDI_RX_BIT_LOCK);
-				else
-					DRM_DEBUG_KMS("train 1 fail\n");
-			} else {
-				I915_WRITE(fdi_rx_iir_reg,
-						temp | FDI_RX_BIT_LOCK);
-				DRM_DEBUG_KMS("train 1 ok 2!\n");
+			/* enable PCH DPLL */
+			temp = I915_READ(pch_dpll_reg);
+			if ((temp & DPLL_VCO_ENABLE) == 0) {
+				I915_WRITE(pch_dpll_reg, temp | DPLL_VCO_ENABLE);
+				I915_READ(pch_dpll_reg);
 			}
-			temp = I915_READ(fdi_tx_reg);
-			temp &= ~FDI_LINK_TRAIN_NONE;
-			temp |= FDI_LINK_TRAIN_PATTERN_2;
-			I915_WRITE(fdi_tx_reg, temp);
-
-			temp = I915_READ(fdi_rx_reg);
-			temp &= ~FDI_LINK_TRAIN_NONE;
-			temp |= FDI_LINK_TRAIN_PATTERN_2;
-			I915_WRITE(fdi_rx_reg, temp);
-
-			udelay(150);
+			udelay(200);
 
-			temp = I915_READ(fdi_rx_iir_reg);
-			DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n", temp);
-
-			if ((temp & FDI_RX_SYMBOL_LOCK) == 0) {
-				for (j = 0; j < tries; j++) {
-					temp = I915_READ(fdi_rx_iir_reg);
-					DRM_DEBUG_KMS("FDI_RX_IIR 0x%x\n",
-								temp);
-					if (temp & FDI_RX_SYMBOL_LOCK)
-						break;
-					udelay(200);
-				}
-				if (j != tries) {
-					I915_WRITE(fdi_rx_iir_reg,
-							temp | FDI_RX_SYMBOL_LOCK);
-					DRM_DEBUG_KMS("train 2 ok 1!\n");
-				} else
-					DRM_DEBUG_KMS("train 2 fail\n");
-			} else {
-				I915_WRITE(fdi_rx_iir_reg,
-						temp | FDI_RX_SYMBOL_LOCK);
-				DRM_DEBUG_KMS("train 2 ok 2!\n");
+			if (HAS_PCH_CPT(dev)) {
+				/* Be sure PCH DPLL SEL is set */
+				temp = I915_READ(PCH_DPLL_SEL);
+				if (trans_dpll_sel == 0 &&
+						(temp & TRANSA_DPLL_ENABLE) == 0)
+					temp |= (TRANSA_DPLL_ENABLE | TRANSA_DPLLA_SEL);
+				else if (trans_dpll_sel == 1 &&
+						(temp & TRANSB_DPLL_ENABLE) == 0)
+					temp |= (TRANSB_DPLL_ENABLE | TRANSB_DPLLB_SEL);
+				I915_WRITE(PCH_DPLL_SEL, temp);
+				I915_READ(PCH_DPLL_SEL);
 			}
-			DRM_DEBUG_KMS("train done\n");
 
 			/* set transcoder timing */
 			I915_WRITE(trans_htot_reg, I915_READ(cpu_htot_reg));
@@ -1724,6 +1893,60 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 			I915_WRITE(trans_vblank_reg, I915_READ(cpu_vblank_reg));
 			I915_WRITE(trans_vsync_reg, I915_READ(cpu_vsync_reg));
 
+			/* enable normal train */
+			temp = I915_READ(fdi_tx_reg);
+			temp &= ~FDI_LINK_TRAIN_NONE;
+			I915_WRITE(fdi_tx_reg, temp | FDI_LINK_TRAIN_NONE |
+					FDI_TX_ENHANCE_FRAME_ENABLE);
+			I915_READ(fdi_tx_reg);
+
+			temp = I915_READ(fdi_rx_reg);
+			if (HAS_PCH_CPT(dev)) {
+				temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
+				temp |= FDI_LINK_TRAIN_NORMAL_CPT;
+			} else {
+				temp &= ~FDI_LINK_TRAIN_NONE;
+				temp |= FDI_LINK_TRAIN_NONE;
+			}
+			I915_WRITE(fdi_rx_reg, temp | FDI_RX_ENHANCE_FRAME_ENABLE);
+			I915_READ(fdi_rx_reg);
+
+			/* wait one idle pattern time */
+			udelay(100);
+
+			/* For PCH DP, enable TRANS_DP_CTL */
+			if (HAS_PCH_CPT(dev) &&
+			    intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
+				int trans_dp_ctl = (pipe == 0) ? TRANS_DP_CTL_A : TRANS_DP_CTL_B;
+				int reg;
+
+				reg = I915_READ(trans_dp_ctl);
+				reg &= ~TRANS_DP_PORT_SEL_MASK;
+				reg = TRANS_DP_OUTPUT_ENABLE |
+				      TRANS_DP_ENH_FRAMING |
+				      TRANS_DP_VSYNC_ACTIVE_HIGH |
+				      TRANS_DP_HSYNC_ACTIVE_HIGH;
+
+				switch (intel_trans_dp_port_sel(crtc)) {
+				case PCH_DP_B:
+					reg |= TRANS_DP_PORT_SEL_B;
+					break;
+				case PCH_DP_C:
+					reg |= TRANS_DP_PORT_SEL_C;
+					break;
+				case PCH_DP_D:
+					reg |= TRANS_DP_PORT_SEL_D;
+					break;
+				default:
+					DRM_DEBUG_KMS("Wrong PCH DP port returned, guessing port B\n");
+					reg |= TRANS_DP_PORT_SEL_B;
+					break;
+				}
+
+				I915_WRITE(trans_dp_ctl, reg);
+				POSTING_READ(trans_dp_ctl);
+			}
+
 			/* enable PCH transcoder */
 			temp = I915_READ(transconf_reg);
 			/*
@@ -1738,23 +1961,6 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 			while ((I915_READ(transconf_reg) & TRANS_STATE_ENABLE) == 0)
 				;
 
-			/* enable normal */
-
-			temp = I915_READ(fdi_tx_reg);
-			temp &= ~FDI_LINK_TRAIN_NONE;
-			I915_WRITE(fdi_tx_reg, temp | FDI_LINK_TRAIN_NONE |
-					FDI_TX_ENHANCE_FRAME_ENABLE);
-			I915_READ(fdi_tx_reg);
-
-			temp = I915_READ(fdi_rx_reg);
-			temp &= ~FDI_LINK_TRAIN_NONE;
-			I915_WRITE(fdi_rx_reg, temp | FDI_LINK_TRAIN_NONE |
-					FDI_RX_ENHANCE_FRAME_ENABLE);
-			I915_READ(fdi_rx_reg);
-
-			/* wait one idle pattern time */
-			udelay(100);
-
 		}
 
 		intel_crtc_load_lut(crtc);
@@ -1805,6 +2011,8 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 			I915_READ(pf_ctl_reg);
 		}
 		I915_WRITE(pf_win_size, 0);
+		POSTING_READ(pf_win_size);
+
 
 		/* disable CPU FDI tx and PCH FDI rx */
 		temp = I915_READ(fdi_tx_reg);
@@ -1825,11 +2033,18 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 		temp &= ~FDI_LINK_TRAIN_NONE;
 		temp |= FDI_LINK_TRAIN_PATTERN_1;
 		I915_WRITE(fdi_tx_reg, temp);
+		POSTING_READ(fdi_tx_reg);
 
 		temp = I915_READ(fdi_rx_reg);
-		temp &= ~FDI_LINK_TRAIN_NONE;
-		temp |= FDI_LINK_TRAIN_PATTERN_1;
+		if (HAS_PCH_CPT(dev)) {
+			temp &= ~FDI_LINK_TRAIN_PATTERN_MASK_CPT;
+			temp |= FDI_LINK_TRAIN_PATTERN_1_CPT;
+		} else {
+			temp &= ~FDI_LINK_TRAIN_NONE;
+			temp |= FDI_LINK_TRAIN_PATTERN_1;
+		}
 		I915_WRITE(fdi_rx_reg, temp);
+		POSTING_READ(fdi_rx_reg);
 
 		udelay(100);
 
@@ -1859,6 +2074,7 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 				}
 			}
 		}
+
 		temp = I915_READ(transconf_reg);
 		/* BPC in transcoder is consistent with that in pipeconf */
 		temp &= ~PIPE_BPC_MASK;
@@ -1867,35 +2083,53 @@ static void ironlake_crtc_dpms(struct drm_crtc *crtc, int mode)
 		I915_READ(transconf_reg);
 		udelay(100);
 
+		if (HAS_PCH_CPT(dev)) {
+			/* disable TRANS_DP_CTL */
+			int trans_dp_ctl = (pipe == 0) ? TRANS_DP_CTL_A : TRANS_DP_CTL_B;
+			int reg;
+
+			reg = I915_READ(trans_dp_ctl);
+			reg &= ~(TRANS_DP_OUTPUT_ENABLE | TRANS_DP_PORT_SEL_MASK);
+			I915_WRITE(trans_dp_ctl, reg);
+			POSTING_READ(trans_dp_ctl);
+
+			/* disable DPLL_SEL */
+			temp = I915_READ(PCH_DPLL_SEL);
+			if (trans_dpll_sel == 0)
+				temp &= ~(TRANSA_DPLL_ENABLE | TRANSA_DPLLB_SEL);
+			else
+				temp &= ~(TRANSB_DPLL_ENABLE | TRANSB_DPLLB_SEL);
+			I915_WRITE(PCH_DPLL_SEL, temp);
+			I915_READ(PCH_DPLL_SEL);
+
+		}
+
 		/* disable PCH DPLL */
 		temp = I915_READ(pch_dpll_reg);
-		if ((temp & DPLL_VCO_ENABLE) != 0) {
-			I915_WRITE(pch_dpll_reg, temp & ~DPLL_VCO_ENABLE);
-			I915_READ(pch_dpll_reg);
-		}
+		I915_WRITE(pch_dpll_reg, temp & ~DPLL_VCO_ENABLE);
+		I915_READ(pch_dpll_reg);
 
 		if (HAS_eDP) {
 			ironlake_disable_pll_edp(crtc);
 		}
 
+		/* Switch from PCDclk to Rawclk */
 		temp = I915_READ(fdi_rx_reg);
 		temp &= ~FDI_SEL_PCDCLK;
 		I915_WRITE(fdi_rx_reg, temp);
 		I915_READ(fdi_rx_reg);
 
+		/* Disable CPU FDI TX PLL */
+		temp = I915_READ(fdi_tx_reg);
+		I915_WRITE(fdi_tx_reg, temp & ~FDI_TX_PLL_ENABLE);
+		I915_READ(fdi_tx_reg);
+		udelay(100);
+
 		temp = I915_READ(fdi_rx_reg);
 		temp &= ~FDI_RX_PLL_ENABLE;
 		I915_WRITE(fdi_rx_reg, temp);
 		I915_READ(fdi_rx_reg);
 
-		/* Disable CPU FDI TX PLL */
-		temp = I915_READ(fdi_tx_reg);
-		if ((temp & FDI_TX_PLL_ENABLE) != 0) {
-			I915_WRITE(fdi_tx_reg, temp & ~FDI_TX_PLL_ENABLE);
-			I915_READ(fdi_tx_reg);
-			udelay(100);
-		}
-
 		/* Wait for the clocks to turn off. */
 		udelay(100);
 		break;
@@ -2122,6 +2356,8 @@ static bool intel_crtc_mode_fixup(struct drm_crtc *crtc,
 		if (mode->clock * 3 > 27000 * 4)
 			return MODE_CLOCK_HIGH;
 	}
+
+	drm_mode_set_crtcinfo(adjusted_mode, 0);
 	return true;
 }
 
@@ -2331,6 +2567,30 @@ static struct intel_watermark_params i830_wm_info = {
 	I830_FIFO_LINE_SIZE
 };
 
+static struct intel_watermark_params ironlake_display_wm_info = {
+	ILK_DISPLAY_FIFO,
+	ILK_DISPLAY_MAXWM,
+	ILK_DISPLAY_DFTWM,
+	2,
+	ILK_FIFO_LINE_SIZE
+};
+
+static struct intel_watermark_params ironlake_display_srwm_info = {
+	ILK_DISPLAY_SR_FIFO,
+	ILK_DISPLAY_MAX_SRWM,
+	ILK_DISPLAY_DFT_SRWM,
+	2,
+	ILK_FIFO_LINE_SIZE
+};
+
+static struct intel_watermark_params ironlake_cursor_srwm_info = {
+	ILK_CURSOR_SR_FIFO,
+	ILK_CURSOR_MAX_SRWM,
+	ILK_CURSOR_DFT_SRWM,
+	2,
+	ILK_FIFO_LINE_SIZE
+};
+
 /**
  * intel_calculate_wm - calculate watermark level
  * @clock_in_khz: pixel clock
@@ -2382,6 +2642,7 @@ static unsigned long intel_calculate_wm(unsigned long clock_in_khz,
 
 struct cxsr_latency {
 	int is_desktop;
+	int is_ddr3;
 	unsigned long fsb_freq;
 	unsigned long mem_freq;
 	unsigned long display_sr;
@@ -2391,33 +2652,45 @@ struct cxsr_latency {
 };
 
 static struct cxsr_latency cxsr_latency_table[] = {
-	{1, 800, 400, 3382, 33382, 3983, 33983},    /* DDR2-400 SC */
-	{1, 800, 667, 3354, 33354, 3807, 33807},    /* DDR2-667 SC */
-	{1, 800, 800, 3347, 33347, 3763, 33763},    /* DDR2-800 SC */
-
-	{1, 667, 400, 3400, 33400, 4021, 34021},    /* DDR2-400 SC */
-	{1, 667, 667, 3372, 33372, 3845, 33845},    /* DDR2-667 SC */
-	{1, 667, 800, 3386, 33386, 3822, 33822},    /* DDR2-800 SC */
-
-	{1, 400, 400, 3472, 33472, 4173, 34173},    /* DDR2-400 SC */
-	{1, 400, 667, 3443, 33443, 3996, 33996},    /* DDR2-667 SC */
-	{1, 400, 800, 3430, 33430, 3946, 33946},    /* DDR2-800 SC */
-
-	{0, 800, 400, 3438, 33438, 4065, 34065},    /* DDR2-400 SC */
-	{0, 800, 667, 3410, 33410, 3889, 33889},    /* DDR2-667 SC */
-	{0, 800, 800, 3403, 33403, 3845, 33845},    /* DDR2-800 SC */
-
-	{0, 667, 400, 3456, 33456, 4103, 34106},    /* DDR2-400 SC */
-	{0, 667, 667, 3428, 33428, 3927, 33927},    /* DDR2-667 SC */
-	{0, 667, 800, 3443, 33443, 3905, 33905},    /* DDR2-800 SC */
-
-	{0, 400, 400, 3528, 33528, 4255, 34255},    /* DDR2-400 SC */
-	{0, 400, 667, 3500, 33500, 4079, 34079},    /* DDR2-667 SC */
-	{0, 400, 800, 3487, 33487, 4029, 34029},    /* DDR2-800 SC */
+	{1, 0, 800, 400, 3382, 33382, 3983, 33983},    /* DDR2-400 SC */
+	{1, 0, 800, 667, 3354, 33354, 3807, 33807},    /* DDR2-667 SC */
+	{1, 0, 800, 800, 3347, 33347, 3763, 33763},    /* DDR2-800 SC */
+	{1, 1, 800, 667, 6420, 36420, 6873, 36873},    /* DDR3-667 SC */
+	{1, 1, 800, 800, 5902, 35902, 6318, 36318},    /* DDR3-800 SC */
+
+	{1, 0, 667, 400, 3400, 33400, 4021, 34021},    /* DDR2-400 SC */
+	{1, 0, 667, 667, 3372, 33372, 3845, 33845},    /* DDR2-667 SC */
+	{1, 0, 667, 800, 3386, 33386, 3822, 33822},    /* DDR2-800 SC */
+	{1, 1, 667, 667, 6438, 36438, 6911, 36911},    /* DDR3-667 SC */
+	{1, 1, 667, 800, 5941, 35941, 6377, 36377},    /* DDR3-800 SC */
+
+	{1, 0, 400, 400, 3472, 33472, 4173, 34173},    /* DDR2-400 SC */
+	{1, 0, 400, 667, 3443, 33443, 3996, 33996},    /* DDR2-667 SC */
+	{1, 0, 400, 800, 3430, 33430, 3946, 33946},    /* DDR2-800 SC */
+	{1, 1, 400, 667, 6509, 36509, 7062, 37062},    /* DDR3-667 SC */
+	{1, 1, 400, 800, 5985, 35985, 6501, 36501},    /* DDR3-800 SC */
+
+	{0, 0, 800, 400, 3438, 33438, 4065, 34065},    /* DDR2-400 SC */
+	{0, 0, 800, 667, 3410, 33410, 3889, 33889},    /* DDR2-667 SC */
+	{0, 0, 800, 800, 3403, 33403, 3845, 33845},    /* DDR2-800 SC */
+	{0, 1, 800, 667, 6476, 36476, 6955, 36955},    /* DDR3-667 SC */
+	{0, 1, 800, 800, 5958, 35958, 6400, 36400},    /* DDR3-800 SC */
+
+	{0, 0, 667, 400, 3456, 33456, 4103, 34106},    /* DDR2-400 SC */
+	{0, 0, 667, 667, 3428, 33428, 3927, 33927},    /* DDR2-667 SC */
+	{0, 0, 667, 800, 3443, 33443, 3905, 33905},    /* DDR2-800 SC */
+	{0, 1, 667, 667, 6494, 36494, 6993, 36993},    /* DDR3-667 SC */
+	{0, 1, 667, 800, 5998, 35998, 6460, 36460},    /* DDR3-800 SC */
+
+	{0, 0, 400, 400, 3528, 33528, 4255, 34255},    /* DDR2-400 SC */
+	{0, 0, 400, 667, 3500, 33500, 4079, 34079},    /* DDR2-667 SC */
+	{0, 0, 400, 800, 3487, 33487, 4029, 34029},    /* DDR2-800 SC */
+	{0, 1, 400, 667, 6566, 36566, 7145, 37145},    /* DDR3-667 SC */
+	{0, 1, 400, 800, 6042, 36042, 6584, 36584},    /* DDR3-800 SC */
 };
 
-static struct cxsr_latency *intel_get_cxsr_latency(int is_desktop, int fsb,
-						   int mem)
+static struct cxsr_latency *intel_get_cxsr_latency(int is_desktop, int is_ddr3, 
+						   int fsb, int mem)
 {
 	int i;
 	struct cxsr_latency *latency;
@@ -2428,6 +2701,7 @@ static struct cxsr_latency *intel_get_cxsr_latency(int is_desktop, int fsb,
 	for (i = 0; i < ARRAY_SIZE(cxsr_latency_table); i++) {
 		latency = &cxsr_latency_table[i];
 		if (is_desktop == latency->is_desktop &&
+		    is_ddr3 == latency->is_ddr3 &&
 		    fsb == latency->fsb_freq && mem == latency->mem_freq)
 			return latency;
 	}
@@ -2449,66 +2723,6 @@ static void pineview_disable_cxsr(struct drm_device *dev)
 	DRM_INFO("Big FIFO is disabled\n");
 }
 
-static void pineview_enable_cxsr(struct drm_device *dev, unsigned long clock,
-				 int pixel_size)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 reg;
-	unsigned long wm;
-	struct cxsr_latency *latency;
-
-	latency = intel_get_cxsr_latency(IS_PINEVIEW_G(dev), dev_priv->fsb_freq,
-		dev_priv->mem_freq);
-	if (!latency) {
-		DRM_DEBUG_KMS("Unknown FSB/MEM found, disable CxSR\n");
-		pineview_disable_cxsr(dev);
-		return;
-	}
-
-	/* Display SR */
-	wm = intel_calculate_wm(clock, &pineview_display_wm, pixel_size,
-				latency->display_sr);
-	reg = I915_READ(DSPFW1);
-	reg &= 0x7fffff;
-	reg |= wm << 23;
-	I915_WRITE(DSPFW1, reg);
-	DRM_DEBUG_KMS("DSPFW1 register is %x\n", reg);
-
-	/* cursor SR */
-	wm = intel_calculate_wm(clock, &pineview_cursor_wm, pixel_size,
-				latency->cursor_sr);
-	reg = I915_READ(DSPFW3);
-	reg &= ~(0x3f << 24);
-	reg |= (wm & 0x3f) << 24;
-	I915_WRITE(DSPFW3, reg);
-
-	/* Display HPLL off SR */
-	wm = intel_calculate_wm(clock, &pineview_display_hplloff_wm,
-		latency->display_hpll_disable, I915_FIFO_LINE_SIZE);
-	reg = I915_READ(DSPFW3);
-	reg &= 0xfffffe00;
-	reg |= wm & 0x1ff;
-	I915_WRITE(DSPFW3, reg);
-
-	/* cursor HPLL off SR */
-	wm = intel_calculate_wm(clock, &pineview_cursor_hplloff_wm, pixel_size,
-				latency->cursor_hpll_disable);
-	reg = I915_READ(DSPFW3);
-	reg &= ~(0x3f << 16);
-	reg |= (wm & 0x3f) << 16;
-	I915_WRITE(DSPFW3, reg);
-	DRM_DEBUG_KMS("DSPFW3 register is %x\n", reg);
-
-	/* activate cxsr */
-	reg = I915_READ(DSPFW3);
-	reg |= PINEVIEW_SELF_REFRESH_EN;
-	I915_WRITE(DSPFW3, reg);
-
-	DRM_INFO("Big FIFO is enabled\n");
-
-	return;
-}
-
 /*
  * Latency for FIFO fetches is dependent on several factors:
  *   - memory configuration (speed, channels)
@@ -2593,6 +2807,71 @@ static int i830_get_fifo_size(struct drm_device *dev, int plane)
 	return size;
 }
 
+static void pineview_update_wm(struct drm_device *dev,  int planea_clock,
+			  int planeb_clock, int sr_hdisplay, int pixel_size)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	u32 reg;
+	unsigned long wm;
+	struct cxsr_latency *latency;
+	int sr_clock;
+
+	latency = intel_get_cxsr_latency(IS_PINEVIEW_G(dev), dev_priv->is_ddr3, 
+					 dev_priv->fsb_freq, dev_priv->mem_freq);
+	if (!latency) {
+		DRM_DEBUG_KMS("Unknown FSB/MEM found, disable CxSR\n");
+		pineview_disable_cxsr(dev);
+		return;
+	}
+
+	if (!planea_clock || !planeb_clock) {
+		sr_clock = planea_clock ? planea_clock : planeb_clock;
+
+		/* Display SR */
+		wm = intel_calculate_wm(sr_clock, &pineview_display_wm,
+					pixel_size, latency->display_sr);
+		reg = I915_READ(DSPFW1);
+		reg &= ~DSPFW_SR_MASK;
+		reg |= wm << DSPFW_SR_SHIFT;
+		I915_WRITE(DSPFW1, reg);
+		DRM_DEBUG_KMS("DSPFW1 register is %x\n", reg);
+
+		/* cursor SR */
+		wm = intel_calculate_wm(sr_clock, &pineview_cursor_wm,
+					pixel_size, latency->cursor_sr);
+		reg = I915_READ(DSPFW3);
+		reg &= ~DSPFW_CURSOR_SR_MASK;
+		reg |= (wm & 0x3f) << DSPFW_CURSOR_SR_SHIFT;
+		I915_WRITE(DSPFW3, reg);
+
+		/* Display HPLL off SR */
+		wm = intel_calculate_wm(sr_clock, &pineview_display_hplloff_wm,
+					pixel_size, latency->display_hpll_disable);
+		reg = I915_READ(DSPFW3);
+		reg &= ~DSPFW_HPLL_SR_MASK;
+		reg |= wm & DSPFW_HPLL_SR_MASK;
+		I915_WRITE(DSPFW3, reg);
+
+		/* cursor HPLL off SR */
+		wm = intel_calculate_wm(sr_clock, &pineview_cursor_hplloff_wm,
+					pixel_size, latency->cursor_hpll_disable);
+		reg = I915_READ(DSPFW3);
+		reg &= ~DSPFW_HPLL_CURSOR_MASK;
+		reg |= (wm & 0x3f) << DSPFW_HPLL_CURSOR_SHIFT;
+		I915_WRITE(DSPFW3, reg);
+		DRM_DEBUG_KMS("DSPFW3 register is %x\n", reg);
+
+		/* activate cxsr */
+		reg = I915_READ(DSPFW3);
+		reg |= PINEVIEW_SELF_REFRESH_EN;
+		I915_WRITE(DSPFW3, reg);
+		DRM_DEBUG_KMS("Self-refresh is enabled\n");
+	} else {
+		pineview_disable_cxsr(dev);
+		DRM_DEBUG_KMS("Self-refresh is disabled\n");
+	}
+}
+
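The rewritten Pineview path leans entirely on intel_calculate_wm(), whose FIFO-sizing rule is worth spelling out: reserve enough cachelines to cover the memory latency at the current dot clock, and the watermark is whatever of the FIFO remains. A minimal sketch of that rule, assuming the fifo_size/max_wm/default_wm/guard_size/cacheline_size fields of the intel_watermark_params tables referenced above (an illustration, not the exact in-tree body):

	static long example_calculate_wm(unsigned long clock_khz, int pixel_size,
					 unsigned long latency_ns,
					 const struct intel_watermark_params *wm)
	{
		/* Bytes fetched while riding out the latency window... */
		long entries = ((clock_khz / 1000) * pixel_size * latency_ns) / 1000;
		long wm_size;

		/* ...rounded up to whole FIFO cachelines. */
		entries = DIV_ROUND_UP(entries, wm->cacheline_size);

		/* The watermark is the FIFO space left after entries + guard. */
		wm_size = wm->fifo_size - (entries + wm->guard_size);
		if (wm_size > (long)wm->max_wm)
			wm_size = wm->max_wm;
		if (wm_size <= 0)		/* latency swamped the FIFO */
			wm_size = wm->default_wm;
		return wm_size;
	}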
 static void g4x_update_wm(struct drm_device *dev,  int planea_clock,
 			  int planeb_clock, int sr_hdisplay, int pixel_size)
 {
@@ -2813,6 +3092,108 @@ static void i830_update_wm(struct drm_device *dev, int planea_clock, int unused,
 	I915_WRITE(FW_BLC, fwater_lo);
 }
 
+#define ILK_LP0_PLANE_LATENCY		700
+
+static void ironlake_update_wm(struct drm_device *dev, int planea_clock,
+		       int planeb_clock, int sr_hdisplay, int pixel_size)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	int planea_wm, planeb_wm, cursora_wm, cursorb_wm;
+	int sr_wm, cursor_wm;
+	unsigned long line_time_us;
+	int sr_clock, entries_required;
+	u32 reg_value;
+
+	/* Calculate and update the watermark for plane A */
+	if (planea_clock) {
+		entries_required = ((planea_clock / 1000) * pixel_size *
+				     ILK_LP0_PLANE_LATENCY) / 1000;
+		entries_required = DIV_ROUND_UP(entries_required,
+				   ironlake_display_wm_info.cacheline_size);
+		planea_wm = entries_required +
+			    ironlake_display_wm_info.guard_size;
+
+		if (planea_wm > (int)ironlake_display_wm_info.max_wm)
+			planea_wm = ironlake_display_wm_info.max_wm;
+
+		cursora_wm = 16;
+		reg_value = I915_READ(WM0_PIPEA_ILK);
+		reg_value &= ~(WM0_PIPE_PLANE_MASK | WM0_PIPE_CURSOR_MASK);
+		reg_value |= (planea_wm << WM0_PIPE_PLANE_SHIFT) |
+			     (cursora_wm & WM0_PIPE_CURSOR_MASK);
+		I915_WRITE(WM0_PIPEA_ILK, reg_value);
+		DRM_DEBUG_KMS("FIFO watermarks for pipe A - plane %d, "
+				"cursor: %d\n", planea_wm, cursora_wm);
+	}
+	/* Calculate and update the watermark for plane B */
+	if (planeb_clock) {
+		entries_required = ((planeb_clock / 1000) * pixel_size *
+				     ILK_LP0_PLANE_LATENCY) / 1000;
+		entries_required = DIV_ROUND_UP(entries_required,
+				   ironlake_display_wm_info.cacheline_size);
+		planeb_wm = entries_required +
+			    ironlake_display_wm_info.guard_size;
+
+		if (planeb_wm > (int)ironlake_display_wm_info.max_wm)
+			planeb_wm = ironlake_display_wm_info.max_wm;
+
+		cursorb_wm = 16;
+		reg_value = I915_READ(WM0_PIPEB_ILK);
+		reg_value &= ~(WM0_PIPE_PLANE_MASK | WM0_PIPE_CURSOR_MASK);
+		reg_value |= (planeb_wm << WM0_PIPE_PLANE_SHIFT) |
+			     (cursorb_wm & WM0_PIPE_CURSOR_MASK);
+		I915_WRITE(WM0_PIPEB_ILK, reg_value);
+		DRM_DEBUG_KMS("FIFO watermarks for pipe B - plane %d, "
+				"cursor: %d\n", planeb_wm, cursorb_wm);
+	}
+
+	/*
+	 * Calculate and update the self-refresh watermark only when one
+	 * display plane is used.
+	 */
+	if (!planea_clock || !planeb_clock) {
+		int line_count;
+		/* Read the self-refresh latency. The unit is 0.5us */
+		int ilk_sr_latency = I915_READ(MLTR_ILK) & ILK_SRLT_MASK;
+
+		sr_clock = planea_clock ? planea_clock : planeb_clock;
+		line_time_us = ((sr_hdisplay * 1000) / sr_clock);
+
+		/* Use ns/us then divide to preserve precision */
+		line_count = ((ilk_sr_latency * 500) / line_time_us + 1000)
+			       / 1000;
+
+		/* calculate the self-refresh watermark for display plane */
+		entries_required = line_count * sr_hdisplay * pixel_size;
+		entries_required = DIV_ROUND_UP(entries_required,
+				   ironlake_display_srwm_info.cacheline_size);
+		sr_wm = entries_required +
+			ironlake_display_srwm_info.guard_size;
+
+		/* calculate the self-refresh watermark for display cursor */
+		entries_required = line_count * pixel_size * 64;
+		entries_required = DIV_ROUND_UP(entries_required,
+				   ironlake_cursor_srwm_info.cacheline_size);
+		cursor_wm = entries_required +
+			    ironlake_cursor_srwm_info.guard_size;
+
+		/* configure watermark and enable self-refresh */
+		reg_value = I915_READ(WM1_LP_ILK);
+		reg_value &= ~(WM1_LP_LATENCY_MASK | WM1_LP_SR_MASK |
+			       WM1_LP_CURSOR_MASK);
+		reg_value |= WM1_LP_SR_EN |
+			     (ilk_sr_latency << WM1_LP_LATENCY_SHIFT) |
+			     (sr_wm << WM1_LP_SR_SHIFT) | cursor_wm;
+
+		I915_WRITE(WM1_LP_ILK, reg_value);
+		DRM_DEBUG_KMS("self-refresh watermark: display plane %d "
+				"cursor %d\n", sr_wm, cursor_wm);
+
+	} else {
+		/* Turn off self refresh if both pipes are enabled */
+		I915_WRITE(WM1_LP_ILK, I915_READ(WM1_LP_ILK) & ~WM1_LP_SR_EN);
+	}
+}
+
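To make the 0.5 µs latency units and the ns/µs juggling concrete, take a hypothetical single-pipe mode: sr_hdisplay = 1920, sr_clock = 148500 kHz, pixel_size = 4, and MLTR_ILK reporting ilk_sr_latency = 64 (32 µs). Then line_time_us = (1920 * 1000) / 148500 = 12, line_count = ((64 * 500) / 12 + 1000) / 1000 = 3, and the display self-refresh watermark needs 3 * 1920 * 4 = 23040 bytes, i.e. 360 cachelines at an assumed 64-byte cacheline size, plus the guard band.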
 /**
  * intel_update_watermarks - update FIFO watermark values based on current modes
  *
@@ -2882,12 +3263,6 @@ static void intel_update_watermarks(struct drm_device *dev)
 	if (enabled <= 0)
 		return;
 
-	/* Single plane configs can enable self refresh */
-	if (enabled == 1 && IS_PINEVIEW(dev))
-		pineview_enable_cxsr(dev, sr_clock, pixel_size);
-	else if (IS_PINEVIEW(dev))
-		pineview_disable_cxsr(dev);
-
 	dev_priv->display.update_wm(dev, planea_clock, planeb_clock,
 				    sr_hdisplay, pixel_size);
 }
@@ -2924,7 +3299,8 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 	bool is_crt = false, is_lvds = false, is_tv = false, is_dp = false;
 	bool is_edp = false;
 	struct drm_mode_config *mode_config = &dev->mode_config;
-	struct drm_connector *connector;
+	struct drm_encoder *encoder;
+	struct intel_encoder *intel_encoder = NULL;
 	const intel_limit_t *limit;
 	int ret;
 	struct fdi_m_n m_n = {0};
@@ -2935,6 +3311,8 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 	int pch_fp_reg = (pipe == 0) ? PCH_FPA0 : PCH_FPB0;
 	int pch_dpll_reg = (pipe == 0) ? PCH_DPLL_A : PCH_DPLL_B;
 	int fdi_rx_reg = (pipe == 0) ? FDI_RXA_CTL : FDI_RXB_CTL;
+	int fdi_tx_reg = (pipe == 0) ? FDI_TXA_CTL : FDI_TXB_CTL;
+	int trans_dpll_sel = (pipe == 0) ? 0 : 1;
 	int lvds_reg = LVDS;
 	u32 temp;
 	int sdvo_pixel_multiply;
@@ -2942,12 +3320,13 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 
 	drm_vblank_pre_modeset(dev, pipe);
 
-	list_for_each_entry(connector, &mode_config->connector_list, head) {
-		struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	list_for_each_entry(encoder, &mode_config->encoder_list, head) {
 
-		if (!connector->encoder || connector->encoder->crtc != crtc)
+		if (!encoder || encoder->crtc != crtc)
 			continue;
 
+		intel_encoder = enc_to_intel_encoder(encoder);
+
 		switch (intel_encoder->type) {
 		case INTEL_OUTPUT_LVDS:
 			is_lvds = true;
@@ -3043,14 +3422,12 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 
 	/* FDI link */
 	if (HAS_PCH_SPLIT(dev)) {
-		int lane, link_bw, bpp;
+		int lane = 0, link_bw, bpp;
 		/* eDP doesn't require FDI link, so just set DP M/N
 		   according to current link config */
 		if (is_edp) {
-			struct drm_connector *edp;
 			target_clock = mode->clock;
-			edp = intel_pipe_get_connector(crtc);
-			intel_edp_link_config(to_intel_encoder(edp),
+			intel_edp_link_config(intel_encoder,
 					&lane, &link_bw);
 		} else {
 			/* DP over FDI requires target mode clock
@@ -3059,7 +3436,6 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 				target_clock = mode->clock;
 			else
 				target_clock = adjusted_mode->clock;
-			lane = 4;
 			link_bw = 270000;
 		}
 
@@ -3111,6 +3487,18 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 			bpp = 24;
 		}
 
+		if (!lane) {
+		/*
+			 * Account for spread spectrum to avoid
+			 * oversubscribing the link. Max center spread
+			 * is 2.5%; use 5% for safety's sake.
+			 */
+			u32 bps = target_clock * bpp * 21 / 20;
+			lane = bps / (link_bw * 8) + 1;
+		}
+
+		intel_crtc->fdi_lanes = lane;
+
 		ironlake_compute_m_n(bpp, lane, target_clock, link_bw, &m_n);
 	}
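Worked through with hypothetical numbers, the lane calculation behaves as expected: a 1080p stream with target_clock = 148500 (kHz) at bpp = 24 needs bps = 148500 * 24 * 21 / 20 = 3,742,200 kbit/s once the 5% spread-spectrum margin is added; each FDI lane carries link_bw * 8 = 270000 * 8 = 2,160,000 kbit/s, so lane = 3742200 / 2160000 + 1 = 2, comfortably inside the four lanes the old hard-coded value always claimed.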
 
@@ -3292,6 +3680,18 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 		udelay(150);
 	}
 
+	/* enable transcoder DPLL */
+	if (HAS_PCH_CPT(dev)) {
+		temp = I915_READ(PCH_DPLL_SEL);
+		if (trans_dpll_sel == 0)
+			temp |= (TRANSA_DPLL_ENABLE | TRANSA_DPLLA_SEL);
+		else
+			temp |= (TRANSB_DPLL_ENABLE | TRANSB_DPLLB_SEL);
+		I915_WRITE(PCH_DPLL_SEL, temp);
+		I915_READ(PCH_DPLL_SEL);
+		udelay(150);
+	}
+
 	/* The LVDS pin pair needs to be on before the DPLLs are enabled.
 	 * This is an exception to the general rule that mode_set doesn't turn
 	 * things on.
@@ -3303,7 +3703,18 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 			lvds_reg = PCH_LVDS;
 
 		lvds = I915_READ(lvds_reg);
-		lvds |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP | LVDS_PIPEB_SELECT;
+		lvds |= LVDS_PORT_EN | LVDS_A0A2_CLKA_POWER_UP;
+		if (pipe == 1) {
+			if (HAS_PCH_CPT(dev))
+				lvds |= PORT_TRANS_B_SEL_CPT;
+			else
+				lvds |= LVDS_PIPEB_SELECT;
+		} else {
+			if (HAS_PCH_CPT(dev))
+				lvds &= ~PORT_TRANS_SEL_MASK;
+			else
+				lvds &= ~LVDS_PIPEB_SELECT;
+		}
 		/* set the corresponding LVDS_BORDER bit */
 		lvds |= dev_priv->lvds_border_bits;
 		/* Set the B0-B3 data pairs corresponding to whether we're going to
@@ -3321,14 +3732,16 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 		/* set the dithering flag */
 		if (IS_I965G(dev)) {
 			if (dev_priv->lvds_dither) {
-				if (HAS_PCH_SPLIT(dev))
+				if (HAS_PCH_SPLIT(dev)) {
 					pipeconf |= PIPE_ENABLE_DITHER;
-				else
+					pipeconf |= PIPE_DITHER_TYPE_ST01;
+				} else
 					lvds |= LVDS_ENABLE_DITHER;
 			} else {
-				if (HAS_PCH_SPLIT(dev))
+				if (HAS_PCH_SPLIT(dev)) {
 					pipeconf &= ~PIPE_ENABLE_DITHER;
-				else
+					pipeconf &= ~PIPE_DITHER_TYPE_MASK;
+				} else
 					lvds &= ~LVDS_ENABLE_DITHER;
 			}
 		}
@@ -3337,6 +3750,20 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 	}
 	if (is_dp)
 		intel_dp_set_m_n(crtc, mode, adjusted_mode);
+	else if (HAS_PCH_SPLIT(dev)) {
+		/* For non-DP output, clear any trans DP clock recovery setting. */
+		if (pipe == 0) {
+			I915_WRITE(TRANSA_DATA_M1, 0);
+			I915_WRITE(TRANSA_DATA_N1, 0);
+			I915_WRITE(TRANSA_DP_LINK_M1, 0);
+			I915_WRITE(TRANSA_DP_LINK_N1, 0);
+		} else {
+			I915_WRITE(TRANSB_DATA_M1, 0);
+			I915_WRITE(TRANSB_DATA_N1, 0);
+			I915_WRITE(TRANSB_DP_LINK_M1, 0);
+			I915_WRITE(TRANSB_DP_LINK_N1, 0);
+		}
+	}
 
 	if (!is_edp) {
 		I915_WRITE(fp_reg, fp);
@@ -3377,6 +3804,18 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 		}
 	}
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE) {
+		pipeconf |= PIPECONF_INTERLACE_W_FIELD_INDICATION;
+		/* the chip adds 2 halflines automatically */
+		adjusted_mode->crtc_vdisplay -= 1;
+		adjusted_mode->crtc_vtotal -= 1;
+		adjusted_mode->crtc_vblank_start -= 1;
+		adjusted_mode->crtc_vblank_end -= 1;
+		adjusted_mode->crtc_vsync_end -= 1;
+		adjusted_mode->crtc_vsync_start -= 1;
+	} else
+		pipeconf &= ~PIPECONF_INTERLACE_W_FIELD_INDICATION; /* progressive */
+
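Since the hardware inserts the two extra half-lines itself, every vertical timing is handed over one line short for interlaced modes: a hypothetical 1080i mode with crtc_vtotal = 1125 is adjusted to 1124 here, and the usual minus-one register encoding below then writes 1123.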
 	I915_WRITE(htot_reg, (adjusted_mode->crtc_hdisplay - 1) |
 		   ((adjusted_mode->crtc_htotal - 1) << 16));
 	I915_WRITE(hblank_reg, (adjusted_mode->crtc_hblank_start - 1) |
@@ -3411,6 +3850,18 @@ static int intel_crtc_mode_set(struct drm_crtc *crtc,
 			/* enable FDI RX PLL too */
 			temp = I915_READ(fdi_rx_reg);
 			I915_WRITE(fdi_rx_reg, temp | FDI_RX_PLL_ENABLE);
+			I915_READ(fdi_rx_reg);
+			udelay(200);
+
+			/* enable FDI TX PLL too */
+			temp = I915_READ(fdi_tx_reg);
+			I915_WRITE(fdi_tx_reg, temp | FDI_TX_PLL_ENABLE);
+			I915_READ(fdi_tx_reg);
+
+			/* enable FDI RX PCDCLK */
+			temp = I915_READ(fdi_rx_reg);
+			I915_WRITE(fdi_rx_reg, temp | FDI_SEL_PCDCLK);
+			I915_READ(fdi_rx_reg);
 			udelay(200);
 		}
 	}
@@ -3527,6 +3978,13 @@ static int intel_crtc_cursor_set(struct drm_crtc *crtc,
 			DRM_ERROR("failed to pin cursor bo\n");
 			goto fail_locked;
 		}
+
+		ret = i915_gem_object_set_to_gtt_domain(bo, 0);
+		if (ret) {
+			DRM_ERROR("failed to move cursor bo into the GTT\n");
+			goto fail_unpin;
+		}
+
 		addr = obj_priv->gtt_offset;
 	} else {
 		ret = i915_gem_attach_phys_object(dev, bo, (pipe == 0) ? I915_GEM_PHYS_CURSOR_0 : I915_GEM_PHYS_CURSOR_1);
@@ -3570,6 +4028,8 @@ static int intel_crtc_cursor_set(struct drm_crtc *crtc,
 	intel_crtc->cursor_bo = bo;
 
 	return 0;
+fail_unpin:
+	i915_gem_object_unpin(bo);
 fail_locked:
 	mutex_unlock(&dev->struct_mutex);
 fail:
@@ -3671,6 +4131,7 @@ static struct drm_display_mode load_detect_mode = {
 };
 
 struct drm_crtc *intel_get_load_detect_pipe(struct intel_encoder *intel_encoder,
+					    struct drm_connector *connector,
 					    struct drm_display_mode *mode,
 					    int *dpms_mode)
 {
@@ -3729,7 +4190,7 @@ struct drm_crtc *intel_get_load_detect_pipe(struct intel_encoder *intel_encoder,
 	}
 
 	encoder->crtc = crtc;
-	intel_encoder->base.encoder = encoder;
+	connector->encoder = encoder;
 	intel_encoder->load_detect_temp = true;
 
 	intel_crtc = to_intel_crtc(crtc);
@@ -3755,7 +4216,8 @@ struct drm_crtc *intel_get_load_detect_pipe(struct intel_encoder *intel_encoder,
 	return crtc;
 }
 
-void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder, int dpms_mode)
+void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder,
+				    struct drm_connector *connector, int dpms_mode)
 {
 	struct drm_encoder *encoder = &intel_encoder->enc;
 	struct drm_device *dev = encoder->dev;
@@ -3765,7 +4227,7 @@ void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder, int dpm
 
 	if (intel_encoder->load_detect_temp) {
 		encoder->crtc = NULL;
-		intel_encoder->base.encoder = NULL;
+		connector->encoder = NULL;
 		intel_encoder->load_detect_temp = false;
 		crtc->enabled = drm_helper_crtc_in_use(crtc);
 		drm_helper_disable_unused_functions(dev);
@@ -4027,6 +4489,8 @@ static void intel_idle_update(struct work_struct *work)
 
 	mutex_lock(&dev->struct_mutex);
 
+	i915_update_gfx_val(dev_priv);
+
 	if (IS_I945G(dev) || IS_I945GM(dev)) {
 		DRM_DEBUG_DRIVER("enable memory self refresh on 945\n");
 		I915_WRITE(FW_BLC_SELF, FW_BLC_SELF_EN_MASK | FW_BLC_SELF_EN);
@@ -4155,12 +4619,6 @@ void intel_finish_page_flip(struct drm_device *dev, int pipe)
 	spin_lock_irqsave(&dev->event_lock, flags);
 	work = intel_crtc->unpin_work;
 	if (work == NULL || !work->pending) {
-		if (work && !work->pending) {
-			obj_priv = to_intel_bo(work->pending_flip_obj);
-			DRM_DEBUG_DRIVER("flip finish: %p (%d) not pending?\n",
-					 obj_priv,
-					 atomic_read(&obj_priv->pending_flip));
-		}
 		spin_unlock_irqrestore(&dev->event_lock, flags);
 		return;
 	}
@@ -4220,14 +4678,11 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	int pipesrc_reg = (intel_crtc->pipe == 0) ? PIPEASRC : PIPEBSRC;
 	int ret, pipesrc;
 	u32 flip_mask;
-	RING_LOCALS;
 
 	work = kzalloc(sizeof *work, GFP_KERNEL);
 	if (work == NULL)
 		return -ENOMEM;
 
-	mutex_lock(&dev->struct_mutex);
-
 	work->event = event;
 	work->dev = crtc->dev;
 	intel_fb = to_intel_framebuffer(crtc->fb);
@@ -4237,10 +4692,10 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	/* We borrow the event spin lock for protecting unpin_work */
 	spin_lock_irqsave(&dev->event_lock, flags);
 	if (intel_crtc->unpin_work) {
-		DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
 		spin_unlock_irqrestore(&dev->event_lock, flags);
 		kfree(work);
-		mutex_unlock(&dev->struct_mutex);
+
+		DRM_DEBUG_DRIVER("flip queue: crtc already busy\n");
 		return -EBUSY;
 	}
 	intel_crtc->unpin_work = work;
@@ -4249,13 +4704,19 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
 	intel_fb = to_intel_framebuffer(fb);
 	obj = intel_fb->obj;
 
+	mutex_lock(&dev->struct_mutex);
 	ret = intel_pin_and_fence_fb_obj(dev, obj);
 	if (ret != 0) {
-		DRM_DEBUG_DRIVER("flip queue: %p pin & fence failed\n",
-			  to_intel_bo(obj));
-		kfree(work);
-		intel_crtc->unpin_work = NULL;
 		mutex_unlock(&dev->struct_mutex);
+
+		spin_lock_irqsave(&dev->event_lock, flags);
+		intel_crtc->unpin_work = NULL;
+		spin_unlock_irqrestore(&dev->event_lock, flags);
+
+		kfree(work);
+
+		DRM_DEBUG_DRIVER("flip queue: %p pin & fence failed\n",
+				 to_intel_bo(obj));
 		return ret;
 	}
 
@@ -4392,14 +4853,14 @@ struct drm_crtc *intel_get_crtc_from_pipe(struct drm_device *dev, int pipe)
 	return crtc;
 }
 
-static int intel_connector_clones(struct drm_device *dev, int type_mask)
+static int intel_encoder_clones(struct drm_device *dev, int type_mask)
 {
 	int index_mask = 0;
-	struct drm_connector *connector;
+	struct drm_encoder *encoder;
 	int entry = 0;
 
-        list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 		if (type_mask & intel_encoder->clone_mask)
 			index_mask |= (1 << entry);
 		entry++;
@@ -4411,7 +4872,7 @@ static int intel_connector_clones(struct drm_device *dev, int type_mask)
 static void intel_setup_outputs(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_connector *connector;
+	struct drm_encoder *encoder;
 
 	intel_crt_init(dev);
 
@@ -4426,9 +4887,8 @@ static void intel_setup_outputs(struct drm_device *dev)
 			intel_dp_init(dev, DP_A);
 
 		if (I915_READ(HDMIB) & PORT_DETECTED) {
-			/* check SDVOB */
-			/* found = intel_sdvo_init(dev, HDMIB); */
-			found = 0;
+			/* PCH SDVOB multiplex with HDMIB */
+			found = intel_sdvo_init(dev, PCH_SDVOB);
 			if (!found)
 				intel_hdmi_init(dev, HDMIB);
 			if (!found && (I915_READ(PCH_DP_B) & DP_DETECTED))
@@ -4494,12 +4954,11 @@ static void intel_setup_outputs(struct drm_device *dev)
 	if (SUPPORTS_TV(dev))
 		intel_tv_init(dev);
 
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-		struct drm_encoder *encoder = &intel_encoder->enc;
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 
 		encoder->possible_crtcs = intel_encoder->crtc_mask;
-		encoder->possible_clones = intel_connector_clones(dev,
+		encoder->possible_clones = intel_encoder_clones(dev,
 						intel_encoder->clone_mask);
 	}
 }
@@ -4507,10 +4966,6 @@ static void intel_setup_outputs(struct drm_device *dev)
 static void intel_user_framebuffer_destroy(struct drm_framebuffer *fb)
 {
 	struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
-	struct drm_device *dev = fb->dev;
-
-	if (fb->fbdev)
-		intelfb_remove(dev, fb);
 
 	drm_framebuffer_cleanup(fb);
 	drm_gem_object_unreference_unlocked(intel_fb->obj);
@@ -4533,18 +4988,13 @@ static const struct drm_framebuffer_funcs intel_fb_funcs = {
 	.create_handle = intel_user_framebuffer_create_handle,
 };
 
-int intel_framebuffer_create(struct drm_device *dev,
-			     struct drm_mode_fb_cmd *mode_cmd,
-			     struct drm_framebuffer **fb,
-			     struct drm_gem_object *obj)
+int intel_framebuffer_init(struct drm_device *dev,
+			   struct intel_framebuffer *intel_fb,
+			   struct drm_mode_fb_cmd *mode_cmd,
+			   struct drm_gem_object *obj)
 {
-	struct intel_framebuffer *intel_fb;
 	int ret;
 
-	intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
-	if (!intel_fb)
-		return -ENOMEM;
-
 	ret = drm_framebuffer_init(dev, &intel_fb->base, &intel_fb_funcs);
 	if (ret) {
 		DRM_ERROR("framebuffer init failed %d\n", ret);
@@ -4552,40 +5002,41 @@ int intel_framebuffer_create(struct drm_device *dev,
 	}
 
 	drm_helper_mode_fill_fb_struct(&intel_fb->base, mode_cmd);
-
 	intel_fb->obj = obj;
-
-	*fb = &intel_fb->base;
-
 	return 0;
 }
 
-
 static struct drm_framebuffer *
 intel_user_framebuffer_create(struct drm_device *dev,
 			      struct drm_file *filp,
 			      struct drm_mode_fb_cmd *mode_cmd)
 {
 	struct drm_gem_object *obj;
-	struct drm_framebuffer *fb;
+	struct intel_framebuffer *intel_fb;
 	int ret;
 
 	obj = drm_gem_object_lookup(dev, filp, mode_cmd->handle);
 	if (!obj)
 		return NULL;
 
-	ret = intel_framebuffer_create(dev, mode_cmd, &fb, obj);
+	intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
+	if (!intel_fb) {
+		drm_gem_object_unreference_unlocked(obj);
+		return NULL;
+	}
+
+	ret = intel_framebuffer_init(dev, intel_fb,
+				     mode_cmd, obj);
 	if (ret) {
 		drm_gem_object_unreference_unlocked(obj);
+		kfree(intel_fb);
 		return NULL;
 	}
 
-	return fb;
+	return &intel_fb->base;
 }
 
 static const struct drm_mode_config_funcs intel_mode_funcs = {
 	.fb_create = intel_user_framebuffer_create,
-	.fb_changed = intelfb_probe,
+	.output_poll_changed = intel_fb_output_poll_changed,
 };
 
 static struct drm_gem_object *
@@ -4594,7 +5045,7 @@ intel_alloc_power_context(struct drm_device *dev)
 	struct drm_gem_object *pwrctx;
 	int ret;
 
-	pwrctx = drm_gem_object_alloc(dev, 4096);
+	pwrctx = i915_gem_alloc_object(dev, 4096);
 	if (!pwrctx) {
 		DRM_DEBUG("failed to alloc power context, RC6 disabled\n");
 		return NULL;
@@ -4624,10 +5075,32 @@ err_unref:
 	return NULL;
 }
 
+bool ironlake_set_drps(struct drm_device *dev, u8 val)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	u16 rgvswctl;
+
+	rgvswctl = I915_READ16(MEMSWCTL);
+	if (rgvswctl & MEMCTL_CMD_STS) {
+		DRM_DEBUG("gpu busy, RCS change rejected\n");
+		return false; /* still busy with another command */
+	}
+
+	rgvswctl = (MEMCTL_CMD_CHFREQ << MEMCTL_CMD_SHIFT) |
+		(val << MEMCTL_FREQ_SHIFT) | MEMCTL_SFCAVM;
+	I915_WRITE16(MEMSWCTL, rgvswctl);
+	POSTING_READ16(MEMSWCTL);
+
+	rgvswctl |= MEMCTL_CMD_STS;
+	I915_WRITE16(MEMSWCTL, rgvswctl);
+
+	return true;
+}
+
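ironlake_set_drps() is the primitive the rest of DRPS drives: bail out while a previous command is still latched in MEMSWCTL, write the new frequency point, then re-arm the status bit. A hypothetical caller, sketching the intended stepping pattern within the window ironlake_enable_drps() discovers below (a numerically smaller delay means a faster clock, so max_delay < min_delay):

	/* Hypothetical helper, not part of this patch: nudge the render
	 * clock one frequency point up or down, tracking it on success. */
	static void example_drps_step(struct drm_device *dev, bool up)
	{
		struct drm_i915_private *dev_priv = dev->dev_private;
		u8 new_delay = dev_priv->cur_delay;

		if (up && new_delay > dev_priv->max_delay)
			new_delay--;		/* smaller delay == faster */
		else if (!up && new_delay < dev_priv->min_delay)
			new_delay++;

		if (ironlake_set_drps(dev, new_delay))
			dev_priv->cur_delay = new_delay;
	}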
 void ironlake_enable_drps(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 rgvmodectl = I915_READ(MEMMODECTL), rgvswctl;
+	u32 rgvmodectl = I915_READ(MEMMODECTL);
 	u8 fmax, fmin, fstart, vstart;
 	int i = 0;
 
@@ -4646,13 +5119,21 @@ void ironlake_enable_drps(struct drm_device *dev)
 	fmin = (rgvmodectl & MEMMODE_FMIN_MASK);
 	fstart = (rgvmodectl & MEMMODE_FSTART_MASK) >>
 		MEMMODE_FSTART_SHIFT;
+	fstart = fmax;
+
 	vstart = (I915_READ(PXVFREQ_BASE + (fstart * 4)) & PXVFREQ_PX_MASK) >>
 		PXVFREQ_PX_SHIFT;
 
-	dev_priv->max_delay = fstart; /* can't go to fmax w/o IPS */
+	dev_priv->fmax = fstart; /* IPS callback will increase this */
+	dev_priv->fstart = fstart;
+
+	dev_priv->max_delay = fmax;
 	dev_priv->min_delay = fmin;
 	dev_priv->cur_delay = fstart;
 
+	DRM_DEBUG_DRIVER("fmax: %d, fmin: %d, fstart: %d\n", fmax, fmin,
+			 fstart);
+
 	I915_WRITE(MEMINTREN, MEMINT_CX_SUPR_EN | MEMINT_EVAL_CHG_EN);
 
 	/*
@@ -4674,20 +5155,19 @@ void ironlake_enable_drps(struct drm_device *dev)
 	}
 	msleep(1);
 
-	rgvswctl = (MEMCTL_CMD_CHFREQ << MEMCTL_CMD_SHIFT) |
-		(fstart << MEMCTL_FREQ_SHIFT) | MEMCTL_SFCAVM;
-	I915_WRITE(MEMSWCTL, rgvswctl);
-	POSTING_READ(MEMSWCTL);
+	ironlake_set_drps(dev, fstart);
 
-	rgvswctl |= MEMCTL_CMD_STS;
-	I915_WRITE(MEMSWCTL, rgvswctl);
+	dev_priv->last_count1 = I915_READ(0x112e4) + I915_READ(0x112e8) +
+		I915_READ(0x112e0);
+	dev_priv->last_time1 = jiffies_to_msecs(jiffies);
+	dev_priv->last_count2 = I915_READ(0x112f4);
+	getrawmonotonic(&dev_priv->last_time2);
 }
 
 void ironlake_disable_drps(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 rgvswctl;
-	u8 fstart;
+	u16 rgvswctl = I915_READ16(MEMSWCTL);
 
 	/* Ack interrupts, disable EFC interrupt */
 	I915_WRITE(MEMINTREN, I915_READ(MEMINTREN) & ~MEMINT_EVAL_CHG_EN);
@@ -4697,11 +5177,7 @@ void ironlake_disable_drps(struct drm_device *dev)
 	I915_WRITE(DEIMR, I915_READ(DEIMR) | DE_PCU_EVENT);
 
 	/* Go back to the starting frequency */
-	fstart = (I915_READ(MEMMODECTL) & MEMMODE_FSTART_MASK) >>
-		MEMMODE_FSTART_SHIFT;
-	rgvswctl = (MEMCTL_CMD_CHFREQ << MEMCTL_CMD_SHIFT) |
-		(fstart << MEMCTL_FREQ_SHIFT) | MEMCTL_SFCAVM;
-	I915_WRITE(MEMSWCTL, rgvswctl);
+	ironlake_set_drps(dev, dev_priv->fstart);
 	msleep(1);
 	rgvswctl |= MEMCTL_CMD_STS;
 	I915_WRITE(MEMSWCTL, rgvswctl);
@@ -4709,6 +5185,92 @@ void ironlake_disable_drps(struct drm_device *dev)
 
 }
 
+static unsigned long intel_pxfreq(u32 vidfreq)
+{
+	unsigned long freq;
+	int div = (vidfreq & 0x3f0000) >> 16;
+	int post = (vidfreq & 0x3000) >> 12;
+	int pre = (vidfreq & 0x7);
+
+	if (!pre)
+		return 0;
+
+	freq = ((div * 133333) / ((1<<post) * pre));
+
+	return freq;
+}
+
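Decoding one hypothetical PXVFREQ value through intel_pxfreq(): div = 18 (bits 21:16), post = 1 (bits 13:12) and pre = 3 (bits 2:0) give freq = (18 * 133333) / ((1 << 1) * 3) = 399,999, i.e. roughly 400 MHz assuming the 133333 constant is a kHz reference clock; a pre divider field of 0 is treated as invalid and yields 0.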
+void intel_init_emon(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	u32 lcfuse;
+	u8 pxw[16];
+	int i;
+
+	/* Disable PMON while we program the event weights */
+	I915_WRITE(ECR, 0);
+	POSTING_READ(ECR);
+
+	/* Program energy weights for various events */
+	I915_WRITE(SDEW, 0x15040d00);
+	I915_WRITE(CSIEW0, 0x007f0000);
+	I915_WRITE(CSIEW1, 0x1e220004);
+	I915_WRITE(CSIEW2, 0x04000004);
+
+	for (i = 0; i < 5; i++)
+		I915_WRITE(PEW + (i * 4), 0);
+	for (i = 0; i < 3; i++)
+		I915_WRITE(DEW + (i * 4), 0);
+
+	/* Program P-state weights to account for frequency power adjustment */
+	for (i = 0; i < 16; i++) {
+		u32 pxvidfreq = I915_READ(PXVFREQ_BASE + (i * 4));
+		unsigned long freq = intel_pxfreq(pxvidfreq);
+		unsigned long vid = (pxvidfreq & PXVFREQ_PX_MASK) >>
+			PXVFREQ_PX_SHIFT;
+		unsigned long val;
+
+		val = vid * vid;
+		val *= (freq / 1000);
+		val *= 255;
+		val /= (127*127*900);
+		if (val > 0xff)
+			DRM_ERROR("bad pxval: %ld\n", val);
+		pxw[i] = val;
+	}
+	/* Render standby states get 0 weight */
+	pxw[14] = 0;
+	pxw[15] = 0;
+
+	for (i = 0; i < 4; i++) {
+		u32 val = (pxw[i*4] << 24) | (pxw[(i*4)+1] << 16) |
+			(pxw[(i*4)+2] << 8) | (pxw[(i*4)+3]);
+		I915_WRITE(PXW + (i * 4), val);
+	}
+
+	/* Adjust magic regs to magic values (more experimental results) */
+	I915_WRITE(OGW0, 0);
+	I915_WRITE(OGW1, 0);
+	I915_WRITE(EG0, 0x00007f00);
+	I915_WRITE(EG1, 0x0000000e);
+	I915_WRITE(EG2, 0x000e0000);
+	I915_WRITE(EG3, 0x68000300);
+	I915_WRITE(EG4, 0x42000000);
+	I915_WRITE(EG5, 0x00140031);
+	I915_WRITE(EG6, 0);
+	I915_WRITE(EG7, 0);
+
+	for (i = 0; i < 8; i++)
+		I915_WRITE(PXWL + (i * 4), 0);
+
+	/* Enable PMON + select events */
+	I915_WRITE(ECR, 0x80000019);
+
+	lcfuse = I915_READ(LCFUSE02);
+
+	dev_priv->corr = (lcfuse & LCFUSE_HIV_MASK);
+}
+
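The P-state weight loop squares the voltage ID and scales by frequency, so pxw[] roughly tracks power (P ~ V^2 * f) normalized against the 127 * 127 * 900 reference in the divisor. With hypothetical values vid = 100 and freq = 400000 kHz: val = 100 * 100 * 400 * 255 / (127 * 127 * 900) = 70, safely inside the byte-wide field; anything over 0xff trips the "bad pxval" error above.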
 void intel_init_clock_gating(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
@@ -4732,6 +5294,25 @@ void intel_init_clock_gating(struct drm_device *dev)
 		}
 
 		I915_WRITE(PCH_DSPCLK_GATE_D, dspclk_gate);
+
+		/*
+		 * According to the spec the following bits should be set
+		 * to enable memory self-refresh:
+		 *   bits 22/21 of 0x42004 (ILK_DISPLAY_CHICKEN2)
+		 *   bit 5 of 0x42020 (ILK_DSPCLK_GATE)
+		 *   bit 15 of 0x45000 (DISP_ARB_CTL)
+		 */
+		if (IS_IRONLAKE(dev)) {
+			I915_WRITE(ILK_DISPLAY_CHICKEN2,
+					(I915_READ(ILK_DISPLAY_CHICKEN2) |
+					ILK_DPARB_GATE | ILK_VSDPFD_FULL));
+			I915_WRITE(ILK_DSPCLK_GATE,
+					(I915_READ(ILK_DSPCLK_GATE) |
+						ILK_DPARB_CLK_GATE));
+			I915_WRITE(DISP_ARB_CTL,
+					(I915_READ(DISP_ARB_CTL) |
+						DISP_FBC_WM_DIS));
+		}
 		return;
 	} else if (IS_G4X(dev)) {
 		uint32_t dspclk_gate;
@@ -4809,8 +5390,7 @@ static void intel_init_display(struct drm_device *dev)
 	else
 		dev_priv->display.dpms = i9xx_crtc_dpms;
 
-	/* Only mobile has FBC, leave pointers NULL for other chips */
-	if (IS_MOBILE(dev)) {
+	if (I915_HAS_FBC(dev)) {
 		if (IS_GM45(dev)) {
 			dev_priv->display.fbc_enabled = g4x_fbc_enabled;
 			dev_priv->display.enable_fbc = g4x_enable_fbc;
@@ -4847,9 +5427,33 @@ static void intel_init_display(struct drm_device *dev)
 			i830_get_display_clock_speed;
 
 	/* For FIFO watermark updates */
-	if (HAS_PCH_SPLIT(dev))
-		dev_priv->display.update_wm = NULL;
-	else if (IS_G4X(dev))
+	if (HAS_PCH_SPLIT(dev)) {
+		if (IS_IRONLAKE(dev)) {
+			if (I915_READ(MLTR_ILK) & ILK_SRLT_MASK)
+				dev_priv->display.update_wm = ironlake_update_wm;
+			else {
+				DRM_DEBUG_KMS("Failed to get proper latency. "
+					      "Disable CxSR\n");
+				dev_priv->display.update_wm = NULL;
+			}
+		} else
+			dev_priv->display.update_wm = NULL;
+	} else if (IS_PINEVIEW(dev)) {
+		if (!intel_get_cxsr_latency(IS_PINEVIEW_G(dev),
+					    dev_priv->is_ddr3,
+					    dev_priv->fsb_freq,
+					    dev_priv->mem_freq)) {
+			DRM_INFO("failed to find known CxSR latency "
+				 "(found ddr%s fsb freq %d, mem freq %d), "
+				 "disabling CxSR\n",
+				 (dev_priv->is_ddr3 == 1) ? "3": "2",
+				 dev_priv->fsb_freq, dev_priv->mem_freq);
+			/* Disable CxSR and never update its watermark again */
+			pineview_disable_cxsr(dev);
+			dev_priv->display.update_wm = NULL;
+		} else
+			dev_priv->display.update_wm = pineview_update_wm;
+	} else if (IS_G4X(dev))
 		dev_priv->display.update_wm = g4x_update_wm;
 	else if (IS_I965G(dev))
 		dev_priv->display.update_wm = i965_update_wm;
@@ -4871,7 +5475,6 @@ static void intel_init_display(struct drm_device *dev)
 void intel_modeset_init(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	int num_pipe;
 	int i;
 
 	drm_mode_config_init(dev);
@@ -4901,13 +5504,13 @@ void intel_modeset_init(struct drm_device *dev)
 		dev->mode_config.fb_base = pci_resource_start(dev->pdev, 0);
 
 	if (IS_MOBILE(dev) || IS_I9XX(dev))
-		num_pipe = 2;
+		dev_priv->num_pipe = 2;
 	else
-		num_pipe = 1;
+		dev_priv->num_pipe = 1;
 	DRM_DEBUG_KMS("%d display pipe%s available.\n",
-		  num_pipe, num_pipe > 1 ? "s" : "");
+		      dev_priv->num_pipe, dev_priv->num_pipe > 1 ? "s" : "");
 
-	for (i = 0; i < num_pipe; i++) {
+	for (i = 0; i < dev_priv->num_pipe; i++) {
 		intel_crtc_init(dev, i);
 	}
 
@@ -4915,21 +5518,16 @@ void intel_modeset_init(struct drm_device *dev)
 
 	intel_init_clock_gating(dev);
 
-	if (IS_IRONLAKE_M(dev))
+	if (IS_IRONLAKE_M(dev)) {
 		ironlake_enable_drps(dev);
+		intel_init_emon(dev);
+	}
 
 	INIT_WORK(&dev_priv->idle_work, intel_idle_update);
 	setup_timer(&dev_priv->idle_timer, intel_gpu_idle_timer,
 		    (unsigned long)dev);
 
 	intel_setup_overlay(dev);
-
-	if (IS_PINEVIEW(dev) && !intel_get_cxsr_latency(IS_PINEVIEW_G(dev),
-							dev_priv->fsb_freq,
-							dev_priv->mem_freq))
-		DRM_INFO("failed to find known CxSR latency "
-			 "(found fsb freq %d, mem freq %d), disabling CxSR\n",
-			 dev_priv->fsb_freq, dev_priv->mem_freq);
 }
 
 void intel_modeset_cleanup(struct drm_device *dev)
@@ -4940,6 +5538,9 @@ void intel_modeset_cleanup(struct drm_device *dev)
 
 	mutex_lock(&dev->struct_mutex);
 
+	drm_kms_helper_poll_fini(dev);
+	intel_fbdev_fini(dev);
+
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
 		/* Skip inactive CRTCs */
 		if (!crtc->fb)
@@ -4974,14 +5575,29 @@ void intel_modeset_cleanup(struct drm_device *dev)
 }
 
 
-/* current intel driver doesn't take advantage of encoders
-   always give back the encoder for the connector
-*/
-struct drm_encoder *intel_best_encoder(struct drm_connector *connector)
+/*
+ * Return the encoder currently attached to the connector.
+ */
+struct drm_encoder *intel_attached_encoder(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_mode_object *obj;
+	struct drm_encoder *encoder;
+	int i;
 
-	return &intel_encoder->enc;
+	for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) {
+		if (connector->encoder_ids[i] == 0)
+			break;
+
+		obj = drm_mode_object_find(connector->dev,
+                                           connector->encoder_ids[i],
+                                           DRM_MODE_OBJECT_ENCODER);
+		if (!obj)
+			continue;
+
+		encoder = obj_to_encoder(obj);
+		return encoder;
+	}
+	return NULL;
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 77e40cf..49b54f0 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -48,8 +48,6 @@ struct intel_dp_priv {
 	uint32_t output_reg;
 	uint32_t DP;
 	uint8_t  link_configuration[DP_LINK_CONFIGURATION_SIZE];
-	uint32_t save_DP;
-	uint8_t  save_link_configuration[DP_LINK_CONFIGURATION_SIZE];
 	bool has_audio;
 	int dpms_mode;
 	uint8_t link_bw;
@@ -141,7 +139,8 @@ static int
 intel_dp_mode_valid(struct drm_connector *connector,
 		    struct drm_display_mode *mode)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_encoder));
 	int max_lanes = intel_dp_max_lane_count(intel_encoder);
 
@@ -215,7 +214,7 @@ intel_dp_aux_ch(struct intel_encoder *intel_encoder,
 {
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 	uint32_t output_reg = dp_priv->output_reg;
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	uint32_t ch_ctl = output_reg + 0x10;
 	uint32_t ch_data = ch_ctl + 4;
@@ -224,19 +223,27 @@ intel_dp_aux_ch(struct intel_encoder *intel_encoder,
 	uint32_t ctl;
 	uint32_t status;
 	uint32_t aux_clock_divider;
-	int try;
+	int try, precharge;
 
 	/* The clock divider is based off the hrawclk,
 	 * and would like to run at 2MHz. So, take the
 	 * hrawclk value and divide by 2 and use that
 	 */
-	if (IS_eDP(intel_encoder))
-		aux_clock_divider = 225; /* eDP input clock at 450Mhz */
-	else if (HAS_PCH_SPLIT(dev))
+	if (IS_eDP(intel_encoder)) {
+		if (IS_GEN6(dev))
+			aux_clock_divider = 200; /* SNB eDP input clock at 400Mhz */
+		else
+			aux_clock_divider = 225; /* eDP input clock at 450Mhz */
+	} else if (HAS_PCH_SPLIT(dev))
 		aux_clock_divider = 62; /* IRL input clock fixed at 125Mhz */
 	else
 		aux_clock_divider = intel_hrawclk(dev) / 2;
 
+	if (IS_GEN6(dev))
+		precharge = 3;
+	else
+		precharge = 5;
+
 	/* Must try at least 3 times according to DP spec */
 	for (try = 0; try < 5; try++) {
 		/* Load the send data into the aux channel data registers */
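All three divider choices above aim the AUX channel at the same ~2 MHz bit clock by halving the source clock in MHz: 450 MHz eDP / 225 = 2 MHz, Sandy Bridge's 400 MHz / 200 = 2 MHz, and the PCH's fixed 125 MHz rawclk / 62 ≈ 2.02 MHz, which is the "divide by 2" rule the comment describes.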
@@ -249,7 +256,7 @@ intel_dp_aux_ch(struct intel_encoder *intel_encoder,
 		ctl = (DP_AUX_CH_CTL_SEND_BUSY |
 		       DP_AUX_CH_CTL_TIME_OUT_400us |
 		       (send_bytes << DP_AUX_CH_CTL_MESSAGE_SIZE_SHIFT) |
-		       (5 << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) |
+		       (precharge << DP_AUX_CH_CTL_PRECHARGE_2US_SHIFT) |
 		       (aux_clock_divider << DP_AUX_CH_CTL_BIT_CLOCK_2X_SHIFT) |
 		       DP_AUX_CH_CTL_DONE |
 		       DP_AUX_CH_CTL_TIME_OUT_ERROR |
@@ -465,7 +472,8 @@ intel_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
 }
 
 static int
-intel_dp_i2c_init(struct intel_encoder *intel_encoder, const char *name)
+intel_dp_i2c_init(struct intel_encoder *intel_encoder,
+		  struct intel_connector *intel_connector, const char *name)
 {
 	struct intel_dp_priv   *dp_priv = intel_encoder->dev_priv;
 
@@ -480,7 +488,7 @@ intel_dp_i2c_init(struct intel_encoder *intel_encoder, const char *name)
 	strncpy (dp_priv->adapter.name, name, sizeof(dp_priv->adapter.name) - 1);
 	dp_priv->adapter.name[sizeof(dp_priv->adapter.name) - 1] = '\0';
 	dp_priv->adapter.algo_data = &dp_priv->algo;
-	dp_priv->adapter.dev.parent = &intel_encoder->base.kdev;
+	dp_priv->adapter.dev.parent = &intel_connector->base.kdev;
 	
 	return i2c_dp_aux_add_bus(&dp_priv->adapter);
 }
@@ -555,7 +563,7 @@ intel_dp_set_m_n(struct drm_crtc *crtc, struct drm_display_mode *mode,
 {
 	struct drm_device *dev = crtc->dev;
 	struct drm_mode_config *mode_config = &dev->mode_config;
-	struct drm_connector *connector;
+	struct drm_encoder *encoder;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 	int lane_count = 4;
@@ -564,13 +572,16 @@ intel_dp_set_m_n(struct drm_crtc *crtc, struct drm_display_mode *mode,
 	/*
 	 * Find the lane count in the intel_encoder private
 	 */
-	list_for_each_entry(connector, &mode_config->connector_list, head) {
-		struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-		struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
+	list_for_each_entry(encoder, &mode_config->encoder_list, head) {
+		struct intel_encoder *intel_encoder;
+		struct intel_dp_priv *dp_priv;
 
-		if (!connector->encoder || connector->encoder->crtc != crtc)
+		if (encoder->crtc != crtc)
 			continue;
 
+		intel_encoder = enc_to_intel_encoder(encoder);
+		dp_priv = intel_encoder->dev_priv;
+
 		if (intel_encoder->type == INTEL_OUTPUT_DISPLAYPORT) {
 			lane_count = dp_priv->lane_count;
 			break;
@@ -626,16 +637,24 @@ static void
 intel_dp_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode,
 		  struct drm_display_mode *adjusted_mode)
 {
+	struct drm_device *dev = encoder->dev;
 	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 	struct drm_crtc *crtc = intel_encoder->enc.crtc;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 
-	dp_priv->DP = (DP_LINK_TRAIN_OFF |
-			DP_VOLTAGE_0_4 |
-			DP_PRE_EMPHASIS_0 |
-			DP_SYNC_VS_HIGH |
-			DP_SYNC_HS_HIGH);
+	dp_priv->DP = (DP_VOLTAGE_0_4 |
+		       DP_PRE_EMPHASIS_0);
+
+	if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
+		dp_priv->DP |= DP_SYNC_HS_HIGH;
+	if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
+		dp_priv->DP |= DP_SYNC_VS_HIGH;
+
+	if (HAS_PCH_CPT(dev) && !IS_eDP(intel_encoder))
+		dp_priv->DP |= DP_LINK_TRAIN_OFF_CPT;
+	else
+		dp_priv->DP |= DP_LINK_TRAIN_OFF;
 
 	switch (dp_priv->lane_count) {
 	case 1:
@@ -656,15 +675,15 @@ intel_dp_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode,
 	dp_priv->link_configuration[1] = dp_priv->lane_count;
 
 	/*
-	 * Check for DPCD version > 1.1,
-	 * enable enahanced frame stuff in that case
+	 * Check for DPCD version > 1.1 and enhanced framing support
 	 */
-	if (dp_priv->dpcd[0] >= 0x11) {
+	if (dp_priv->dpcd[0] >= 0x11 && (dp_priv->dpcd[2] & DP_ENHANCED_FRAME_CAP)) {
 		dp_priv->link_configuration[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
 		dp_priv->DP |= DP_ENHANCED_FRAMING;
 	}
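dpcd[0] is the DPCD revision byte (0x11 meaning DisplayPort 1.1), and DP_ENHANCED_FRAME_CAP is the capability bit in the MAX_LANE_COUNT byte at dpcd[2], so enhanced framing is now negotiated only when the sink actually advertises support rather than inferred from the revision alone.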
 
-	if (intel_crtc->pipe == 1)
+	/* CPT DP's pipe select is decided in TRANS_DP_CTL */
+	if (intel_crtc->pipe == 1 && !HAS_PCH_CPT(dev))
 		dp_priv->DP |= DP_PIPEB_SELECT;
 
 	if (IS_eDP(intel_encoder)) {
@@ -704,7 +723,7 @@ intel_dp_dpms(struct drm_encoder *encoder, int mode)
 {
 	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	uint32_t dp_reg = I915_READ(dp_priv->output_reg);
 
@@ -749,20 +768,6 @@ intel_dp_link_status(uint8_t link_status[DP_LINK_STATUS_SIZE],
 	return link_status[r - DP_LANE0_1_STATUS];
 }
 
-static void
-intel_dp_save(struct drm_connector *connector)
-{
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct drm_device *dev = intel_encoder->base.dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
-
-	dp_priv->save_DP = I915_READ(dp_priv->output_reg);
-	intel_dp_aux_native_read(intel_encoder, DP_LINK_BW_SET,
-				 dp_priv->save_link_configuration,
-				 sizeof (dp_priv->save_link_configuration));
-}
-
 static uint8_t
 intel_get_adjust_request_voltage(uint8_t link_status[DP_LINK_STATUS_SIZE],
 				 int lane)
@@ -892,6 +897,25 @@ intel_dp_signal_levels(uint8_t train_set, int lane_count)
 	return signal_levels;
 }
 
+/* Gen6's DP voltage swing and pre-emphasis control */
+static uint32_t
+intel_gen6_edp_signal_levels(uint8_t train_set)
+{
+	switch (train_set & (DP_TRAIN_VOLTAGE_SWING_MASK|DP_TRAIN_PRE_EMPHASIS_MASK)) {
+	case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_0:
+		return EDP_LINK_TRAIN_400MV_0DB_SNB_B;
+	case DP_TRAIN_VOLTAGE_SWING_400 | DP_TRAIN_PRE_EMPHASIS_6:
+		return EDP_LINK_TRAIN_400MV_6DB_SNB_B;
+	case DP_TRAIN_VOLTAGE_SWING_600 | DP_TRAIN_PRE_EMPHASIS_3_5:
+		return EDP_LINK_TRAIN_600MV_3_5DB_SNB_B;
+	case DP_TRAIN_VOLTAGE_SWING_800 | DP_TRAIN_PRE_EMPHASIS_0:
+		return EDP_LINK_TRAIN_800MV_0DB_SNB_B;
+	default:
+		DRM_DEBUG_KMS("Unsupported voltage swing/pre-emphasis level\n");
+		return EDP_LINK_TRAIN_400MV_0DB_SNB_B;
+	}
+}
+
 static uint8_t
 intel_get_lane_status(uint8_t link_status[DP_LINK_STATUS_SIZE],
 		      int lane)
@@ -948,7 +972,7 @@ intel_dp_set_link_train(struct intel_encoder *intel_encoder,
 			uint8_t train_set[4],
 			bool first)
 {
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 	int ret;
@@ -974,7 +998,7 @@ static void
 intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
 		    uint8_t link_configuration[DP_LINK_CONFIGURATION_SIZE])
 {
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 	uint8_t	train_set[4];
@@ -985,23 +1009,38 @@ intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
 	bool channel_eq = false;
 	bool first = true;
 	int tries;
+	u32 reg;
 
 	/* Write the link configuration data */
-	intel_dp_aux_native_write(intel_encoder, 0x100,
+	intel_dp_aux_native_write(intel_encoder, DP_LINK_BW_SET,
 				  link_configuration, DP_LINK_CONFIGURATION_SIZE);
 
 	DP |= DP_PORT_EN;
-	DP &= ~DP_LINK_TRAIN_MASK;
+	if (HAS_PCH_CPT(dev) && !IS_eDP(intel_encoder))
+		DP &= ~DP_LINK_TRAIN_MASK_CPT;
+	else
+		DP &= ~DP_LINK_TRAIN_MASK;
 	memset(train_set, 0, 4);
 	voltage = 0xff;
 	tries = 0;
 	clock_recovery = false;
 	for (;;) {
 		/* Use train_set[0] to set the voltage and pre emphasis values */
-		uint32_t    signal_levels = intel_dp_signal_levels(train_set[0], dp_priv->lane_count);
-		DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
+		uint32_t    signal_levels;
+		if (IS_GEN6(dev) && IS_eDP(intel_encoder)) {
+			signal_levels = intel_gen6_edp_signal_levels(train_set[0]);
+			DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels;
+		} else {
+			signal_levels = intel_dp_signal_levels(train_set[0], dp_priv->lane_count);
+			DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
+		}
+
+		if (HAS_PCH_CPT(dev) && !IS_eDP(intel_encoder))
+			reg = DP | DP_LINK_TRAIN_PAT_1_CPT;
+		else
+			reg = DP | DP_LINK_TRAIN_PAT_1;
 
-		if (!intel_dp_set_link_train(intel_encoder, DP | DP_LINK_TRAIN_PAT_1,
+		if (!intel_dp_set_link_train(intel_encoder, reg,
 					     DP_TRAINING_PATTERN_1, train_set, first))
 			break;
 		first = false;
@@ -1041,11 +1080,23 @@ intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
 	channel_eq = false;
 	for (;;) {
 		/* Use train_set[0] to set the voltage and pre emphasis values */
-		uint32_t    signal_levels = intel_dp_signal_levels(train_set[0], dp_priv->lane_count);
-		DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
+		uint32_t    signal_levels;
+
+		if (IS_GEN6(dev) && IS_eDP(intel_encoder)) {
+			signal_levels = intel_gen6_edp_signal_levels(train_set[0]);
+			DP = (DP & ~EDP_LINK_TRAIN_VOL_EMP_MASK_SNB) | signal_levels;
+		} else {
+			signal_levels = intel_dp_signal_levels(train_set[0], dp_priv->lane_count);
+			DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
+		}
+
+		if (HAS_PCH_CPT(dev) && !IS_eDP(intel_encoder))
+			reg = DP | DP_LINK_TRAIN_PAT_2_CPT;
+		else
+			reg = DP | DP_LINK_TRAIN_PAT_2;
 
 		/* channel eq pattern */
-		if (!intel_dp_set_link_train(intel_encoder, DP | DP_LINK_TRAIN_PAT_2,
+		if (!intel_dp_set_link_train(intel_encoder, reg,
 					     DP_TRAINING_PATTERN_2, train_set,
 					     false))
 			break;
@@ -1068,7 +1119,12 @@ intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
 		++tries;
 	}
 
-	I915_WRITE(dp_priv->output_reg, DP | DP_LINK_TRAIN_OFF);
+	if (HAS_PCH_CPT(dev) && !IS_eDP(intel_encoder))
+		reg = DP | DP_LINK_TRAIN_OFF_CPT;
+	else
+		reg = DP | DP_LINK_TRAIN_OFF;
+
+	I915_WRITE(dp_priv->output_reg, reg);
 	POSTING_READ(dp_priv->output_reg);
 	intel_dp_aux_native_write_1(intel_encoder,
 				    DP_TRAINING_PATTERN_SET, DP_TRAINING_PATTERN_DISABLE);
@@ -1077,7 +1133,7 @@ intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
 static void
 intel_dp_link_down(struct intel_encoder *intel_encoder, uint32_t DP)
 {
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 
@@ -1090,9 +1146,15 @@ intel_dp_link_down(struct intel_encoder *intel_encoder, uint32_t DP)
 		udelay(100);
 	}
 
-	DP &= ~DP_LINK_TRAIN_MASK;
-	I915_WRITE(dp_priv->output_reg, DP | DP_LINK_TRAIN_PAT_IDLE);
-	POSTING_READ(dp_priv->output_reg);
+	if (HAS_PCH_CPT(dev) && !IS_eDP(intel_encoder)) {
+		DP &= ~DP_LINK_TRAIN_MASK_CPT;
+		I915_WRITE(dp_priv->output_reg, DP | DP_LINK_TRAIN_PAT_IDLE_CPT);
+		POSTING_READ(dp_priv->output_reg);
+	} else {
+		DP &= ~DP_LINK_TRAIN_MASK;
+		I915_WRITE(dp_priv->output_reg, DP | DP_LINK_TRAIN_PAT_IDLE);
+		POSTING_READ(dp_priv->output_reg);
+	}
 
 	udelay(17000);
 
@@ -1102,18 +1164,6 @@ intel_dp_link_down(struct intel_encoder *intel_encoder, uint32_t DP)
 	POSTING_READ(dp_priv->output_reg);
 }
 
-static void
-intel_dp_restore(struct drm_connector *connector)
-{
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
-
-	if (dp_priv->save_DP & DP_PORT_EN)
-		intel_dp_link_train(intel_encoder, dp_priv->save_DP, dp_priv->save_link_configuration);
-	else
-		intel_dp_link_down(intel_encoder,  dp_priv->save_DP);
-}
-
 /*
  * According to DP spec
  * 5.1.2:
@@ -1144,7 +1194,8 @@ intel_dp_check_link_status(struct intel_encoder *intel_encoder)
 static enum drm_connector_status
 ironlake_dp_detect(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 	enum drm_connector_status status;
 
@@ -1156,6 +1207,8 @@ ironlake_dp_detect(struct drm_connector *connector)
 		if (dp_priv->dpcd[0] != 0)
 			status = connector_status_connected;
 	}
+	DRM_DEBUG_KMS("DPCD: %hx%hx%hx%hx\n", dp_priv->dpcd[0],
+		      dp_priv->dpcd[1], dp_priv->dpcd[2], dp_priv->dpcd[3]);
 	return status;
 }
 
@@ -1168,8 +1221,9 @@ ironlake_dp_detect(struct drm_connector *connector)
 static enum drm_connector_status
 intel_dp_detect(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
 	uint32_t temp, bit;
@@ -1180,16 +1234,6 @@ intel_dp_detect(struct drm_connector *connector)
 	if (HAS_PCH_SPLIT(dev))
 		return ironlake_dp_detect(connector);
 
-	temp = I915_READ(PORT_HOTPLUG_EN);
-
-	I915_WRITE(PORT_HOTPLUG_EN,
-	       temp |
-	       DPB_HOTPLUG_INT_EN |
-	       DPC_HOTPLUG_INT_EN |
-	       DPD_HOTPLUG_INT_EN);
-
-	POSTING_READ(PORT_HOTPLUG_EN);
-
 	switch (dp_priv->output_reg) {
 	case DP_B:
 		bit = DPB_HOTPLUG_INT_STATUS;
@@ -1222,15 +1266,16 @@ intel_dp_detect(struct drm_connector *connector)
 
 static int intel_dp_get_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	int ret;
 
 	/* We should parse the EDID data and find out if it has an audio sink
 	 */
 
-	ret = intel_ddc_get_modes(intel_encoder);
+	ret = intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 	if (ret)
 		return ret;
 
@@ -1249,13 +1294,9 @@ static int intel_dp_get_modes(struct drm_connector *connector)
 static void
 intel_dp_destroy (struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-
-	if (intel_encoder->i2c_bus)
-		intel_i2c_destroy(intel_encoder->i2c_bus);
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
-	kfree(intel_encoder);
+	kfree(connector);
 }
 
 static const struct drm_encoder_helper_funcs intel_dp_helper_funcs = {
@@ -1268,8 +1309,6 @@ static const struct drm_encoder_helper_funcs intel_dp_helper_funcs = {
 
 static const struct drm_connector_funcs intel_dp_connector_funcs = {
 	.dpms = drm_helper_connector_dpms,
-	.save = intel_dp_save,
-	.restore = intel_dp_restore,
 	.detect = intel_dp_detect,
 	.fill_modes = drm_helper_probe_single_connector_modes,
 	.destroy = intel_dp_destroy,
@@ -1278,12 +1317,17 @@ static const struct drm_connector_funcs intel_dp_connector_funcs = {
 static const struct drm_connector_helper_funcs intel_dp_connector_helper_funcs = {
 	.get_modes = intel_dp_get_modes,
 	.mode_valid = intel_dp_mode_valid,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static void intel_dp_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+
+	if (intel_encoder->i2c_bus)
+		intel_i2c_destroy(intel_encoder->i2c_bus);
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_dp_enc_funcs = {
@@ -1299,12 +1343,35 @@ intel_dp_hot_plug(struct intel_encoder *intel_encoder)
 		intel_dp_check_link_status(intel_encoder);
 }
 
+/* Return which DP Port should be selected for Transcoder DP control */
+int
+intel_trans_dp_port_sel(struct drm_crtc *crtc)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_mode_config *mode_config = &dev->mode_config;
+	struct drm_encoder *encoder;
+	struct intel_encoder *intel_encoder = NULL;
+
+	list_for_each_entry(encoder, &mode_config->encoder_list, head) {
+		if (encoder->crtc != crtc)
+			continue;
+
+		intel_encoder = enc_to_intel_encoder(encoder);
+		if (intel_encoder->type == INTEL_OUTPUT_DISPLAYPORT) {
+			struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
+			return dp_priv->output_reg;
+		}
+	}
+	return -1;
+}
+
 void
 intel_dp_init(struct drm_device *dev, int output_reg)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_connector *connector;
 	struct intel_encoder *intel_encoder;
+	struct intel_connector *intel_connector;
 	struct intel_dp_priv *dp_priv;
 	const char *name = NULL;
 
@@ -1313,13 +1380,21 @@ intel_dp_init(struct drm_device *dev, int output_reg)
 	if (!intel_encoder)
 		return;
 
+	intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
+	if (!intel_connector) {
+		kfree(intel_encoder);
+		return;
+	}
+
 	dp_priv = (struct intel_dp_priv *)(intel_encoder + 1);
 
-	connector = &intel_encoder->base;
+	connector = &intel_connector->base;
 	drm_connector_init(dev, connector, &intel_dp_connector_funcs,
 			   DRM_MODE_CONNECTOR_DisplayPort);
 	drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs);
 
+	connector->polled = DRM_CONNECTOR_POLL_HPD;
+
 	if (output_reg == DP_A)
 		intel_encoder->type = INTEL_OUTPUT_EDP;
 	else
@@ -1349,7 +1424,7 @@ intel_dp_init(struct drm_device *dev, int output_reg)
 			 DRM_MODE_ENCODER_TMDS);
 	drm_encoder_helper_add(&intel_encoder->enc, &intel_dp_helper_funcs);
 
-	drm_mode_connector_attach_encoder(&intel_encoder->base,
+	drm_mode_connector_attach_encoder(&intel_connector->base,
 					  &intel_encoder->enc);
 	drm_sysfs_connector_add(connector);
 
@@ -1378,7 +1453,7 @@ intel_dp_init(struct drm_device *dev, int output_reg)
 			break;
 	}
 
-	intel_dp_i2c_init(intel_encoder, name);
+	intel_dp_i2c_init(intel_encoder, intel_connector, name);
 
 	intel_encoder->ddc_bus = &dp_priv->adapter;
 	intel_encoder->hot_plug = intel_dp_hot_plug;
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index e302537..df931f7 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -96,8 +96,6 @@ struct intel_framebuffer {
 
 
 struct intel_encoder {
-	struct drm_connector base;
-
 	struct drm_encoder enc;
 	int type;
 	struct i2c_adapter *i2c_bus;
@@ -110,6 +108,11 @@ struct intel_encoder {
 	int clone_mask;
 };
 
+struct intel_connector {
+	struct drm_connector base;
+	void *dev_priv;
+};
+
 struct intel_crtc;
 struct intel_overlay {
 	struct drm_device *dev;
@@ -149,23 +152,24 @@ struct intel_crtc {
 	bool lowfreq_avail;
 	struct intel_overlay *overlay;
 	struct intel_unpin_work *unpin_work;
+	int fdi_lanes;
 
 	struct drm_gem_object *cursor_bo;
 	uint32_t cursor_addr;
 	int16_t cursor_x, cursor_y;
 	int16_t cursor_width, cursor_height;
 	bool cursor_visble;
 };
 
 #define to_intel_crtc(x) container_of(x, struct intel_crtc, base)
-#define to_intel_encoder(x) container_of(x, struct intel_encoder, base)
+#define to_intel_connector(x) container_of(x, struct intel_connector, base)
 #define enc_to_intel_encoder(x) container_of(x, struct intel_encoder, enc)
 #define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base)
 
 struct i2c_adapter *intel_i2c_create(struct drm_device *dev, const u32 reg,
 				     const char *name);
 void intel_i2c_destroy(struct i2c_adapter *adapter);
-int intel_ddc_get_modes(struct intel_encoder *intel_encoder);
+int intel_ddc_get_modes(struct drm_connector *c, struct i2c_adapter *adapter);
 extern bool intel_ddc_probe(struct intel_encoder *intel_encoder);
 void intel_i2c_quirk_set(struct drm_device *dev, bool enable);
 void intel_i2c_reset_gmbus(struct drm_device *dev);
@@ -183,7 +187,7 @@ extern void intel_crtc_load_lut(struct drm_crtc *crtc);
 extern void intel_encoder_prepare (struct drm_encoder *encoder);
 extern void intel_encoder_commit (struct drm_encoder *encoder);
 
-extern struct drm_encoder *intel_best_encoder(struct drm_connector *connector);
+extern struct drm_encoder *intel_attached_encoder(struct drm_connector *connector);
 
 extern struct drm_display_mode *intel_crtc_mode_get(struct drm_device *dev,
 						    struct drm_crtc *crtc);
@@ -192,17 +196,16 @@ int intel_get_pipe_from_crtc_id(struct drm_device *dev, void *data,
 extern void intel_wait_for_vblank(struct drm_device *dev);
 extern struct drm_crtc *intel_get_crtc_from_pipe(struct drm_device *dev, int pipe);
 extern struct drm_crtc *intel_get_load_detect_pipe(struct intel_encoder *intel_encoder,
+						   struct drm_connector *connector,
 						   struct drm_display_mode *mode,
 						   int *dpms_mode);
 extern void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder,
+					   struct drm_connector *connector,
 					   int dpms_mode);
 
 extern struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB);
 extern int intel_sdvo_supports_hotplug(struct drm_connector *connector);
 extern void intel_sdvo_set_hotplug(struct drm_connector *connector, int enable);
-extern int intelfb_probe(struct drm_device *dev);
-extern int intelfb_remove(struct drm_device *dev, struct drm_framebuffer *fb);
-extern int intelfb_resize(struct drm_device *dev, struct drm_crtc *crtc);
 extern void intelfb_restore(void);
 extern void intel_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
 				    u16 blue, int regno);
@@ -212,10 +215,12 @@ extern void intel_init_clock_gating(struct drm_device *dev);
 extern void ironlake_enable_drps(struct drm_device *dev);
 extern void ironlake_disable_drps(struct drm_device *dev);
 
-extern int intel_framebuffer_create(struct drm_device *dev,
-				    struct drm_mode_fb_cmd *mode_cmd,
-				    struct drm_framebuffer **fb,
-				    struct drm_gem_object *obj);
+extern int intel_framebuffer_init(struct drm_device *dev,
+				  struct intel_framebuffer *ifb,
+				  struct drm_mode_fb_cmd *mode_cmd,
+				  struct drm_gem_object *obj);
+extern int intel_fbdev_init(struct drm_device *dev);
+extern void intel_fbdev_fini(struct drm_device *dev);
 
 extern void intel_prepare_page_flip(struct drm_device *dev, int plane);
 extern void intel_finish_page_flip(struct drm_device *dev, int pipe);
@@ -229,4 +234,6 @@ extern int intel_overlay_put_image(struct drm_device *dev, void *data,
 				   struct drm_file *file_priv);
 extern int intel_overlay_attrs(struct drm_device *dev, void *data,
 			       struct drm_file *file_priv);
+
+extern void intel_fb_output_poll_changed(struct drm_device *dev);
 #endif /* __INTEL_DRV_H__ */
diff --git a/drivers/gpu/drm/i915/intel_dvo.c b/drivers/gpu/drm/i915/intel_dvo.c
index ebf213c..227feca 100644
--- a/drivers/gpu/drm/i915/intel_dvo.c
+++ b/drivers/gpu/drm/i915/intel_dvo.c
@@ -96,39 +96,11 @@ static void intel_dvo_dpms(struct drm_encoder *encoder, int mode)
 	}
 }
 
-static void intel_dvo_save(struct drm_connector *connector)
-{
-	struct drm_i915_private *dev_priv = connector->dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
-
-	/* Each output should probably just save the registers it touches,
-	 * but for now, use more overkill.
-	 */
-	dev_priv->saveDVOA = I915_READ(DVOA);
-	dev_priv->saveDVOB = I915_READ(DVOB);
-	dev_priv->saveDVOC = I915_READ(DVOC);
-
-	dvo->dev_ops->save(dvo);
-}
-
-static void intel_dvo_restore(struct drm_connector *connector)
-{
-	struct drm_i915_private *dev_priv = connector->dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
-
-	dvo->dev_ops->restore(dvo);
-
-	I915_WRITE(DVOA, dev_priv->saveDVOA);
-	I915_WRITE(DVOB, dev_priv->saveDVOB);
-	I915_WRITE(DVOC, dev_priv->saveDVOC);
-}
-
 static int intel_dvo_mode_valid(struct drm_connector *connector,
 				struct drm_display_mode *mode)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
 
 	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
@@ -241,7 +213,8 @@ static void intel_dvo_mode_set(struct drm_encoder *encoder,
  */
 static enum drm_connector_status intel_dvo_detect(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
 
 	return dvo->dev_ops->detect(dvo);
@@ -249,7 +222,8 @@ static enum drm_connector_status intel_dvo_detect(struct drm_connector *connecto
 
 static int intel_dvo_get_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
 
 	/* We should probably have an i2c driver get_modes function for those
@@ -257,7 +231,7 @@ static int intel_dvo_get_modes(struct drm_connector *connector)
 	 * (TV-out, for example), but for now with just TMDS and LVDS,
 	 * that's not the case.
 	 */
-	intel_ddc_get_modes(intel_encoder);
+	intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 	if (!list_empty(&connector->probed_modes))
 		return 1;
 
@@ -275,38 +249,10 @@ static int intel_dvo_get_modes(struct drm_connector *connector)
 
 static void intel_dvo_destroy (struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
-
-	if (dvo) {
-		if (dvo->dev_ops->destroy)
-			dvo->dev_ops->destroy(dvo);
-		if (dvo->panel_fixed_mode)
-			kfree(dvo->panel_fixed_mode);
-		/* no need, in i830_dvoices[] now */
-		//kfree(dvo);
-	}
-	if (intel_encoder->i2c_bus)
-		intel_i2c_destroy(intel_encoder->i2c_bus);
-	if (intel_encoder->ddc_bus)
-		intel_i2c_destroy(intel_encoder->ddc_bus);
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
-	kfree(intel_encoder);
-}
-
-#ifdef RANDR_GET_CRTC_INTERFACE
-static struct drm_crtc *intel_dvo_get_crtc(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
-	int pipe = !!(I915_READ(dvo->dvo_reg) & SDVO_PIPE_B_SELECT);
-
-	return intel_pipe_to_crtc(pScrn, pipe);
+	kfree(connector);
 }
-#endif
 
 static const struct drm_encoder_helper_funcs intel_dvo_helper_funcs = {
 	.dpms = intel_dvo_dpms,
@@ -318,8 +264,6 @@ static const struct drm_encoder_helper_funcs intel_dvo_helper_funcs = {
 
 static const struct drm_connector_funcs intel_dvo_connector_funcs = {
 	.dpms = drm_helper_connector_dpms,
-	.save = intel_dvo_save,
-	.restore = intel_dvo_restore,
 	.detect = intel_dvo_detect,
 	.destroy = intel_dvo_destroy,
 	.fill_modes = drm_helper_probe_single_connector_modes,
@@ -328,12 +272,26 @@ static const struct drm_connector_funcs intel_dvo_connector_funcs = {
 static const struct drm_connector_helper_funcs intel_dvo_connector_helper_funcs = {
 	.mode_valid = intel_dvo_mode_valid,
 	.get_modes = intel_dvo_get_modes,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static void intel_dvo_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
+
+	if (dvo) {
+		if (dvo->dev_ops->destroy)
+			dvo->dev_ops->destroy(dvo);
+		if (dvo->panel_fixed_mode)
+			kfree(dvo->panel_fixed_mode);
+	}
+	if (intel_encoder->i2c_bus)
+		intel_i2c_destroy(intel_encoder->i2c_bus);
+	if (intel_encoder->ddc_bus)
+		intel_i2c_destroy(intel_encoder->ddc_bus);
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_dvo_enc_funcs = {
@@ -352,7 +310,8 @@ intel_dvo_get_current_mode (struct drm_connector *connector)
 {
 	struct drm_device *dev = connector->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_dvo_device *dvo = intel_encoder->dev_priv;
 	uint32_t dvo_reg = dvo->dvo_reg;
 	uint32_t dvo_val = I915_READ(dvo_reg);
@@ -384,6 +343,7 @@ intel_dvo_get_current_mode (struct drm_connector *connector)
 void intel_dvo_init(struct drm_device *dev)
 {
 	struct intel_encoder *intel_encoder;
+	struct intel_connector *intel_connector;
 	struct intel_dvo_device *dvo;
 	struct i2c_adapter *i2cbus = NULL;
 	int ret = 0;
@@ -393,6 +353,12 @@ void intel_dvo_init(struct drm_device *dev)
 	if (!intel_encoder)
 		return;
 
+	intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
+	if (!intel_connector) {
+		kfree(intel_encoder);
+		return;
+	}
+
 	/* Set up the DDC bus */
 	intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOD, "DVODDC_D");
 	if (!intel_encoder->ddc_bus)
@@ -400,7 +366,7 @@ void intel_dvo_init(struct drm_device *dev)
 
 	/* Now, try to find a controller */
 	for (i = 0; i < ARRAY_SIZE(intel_dvo_devices); i++) {
-		struct drm_connector *connector = &intel_encoder->base;
+		struct drm_connector *connector = &intel_connector->base;
 		int gpio;
 
 		dvo = &intel_dvo_devices[i];
@@ -471,7 +437,7 @@ void intel_dvo_init(struct drm_device *dev)
 		drm_encoder_helper_add(&intel_encoder->enc,
 				       &intel_dvo_helper_funcs);
 
-		drm_mode_connector_attach_encoder(&intel_encoder->base,
+		drm_mode_connector_attach_encoder(&intel_connector->base,
 						  &intel_encoder->enc);
 		if (dvo->type == INTEL_DVO_CHIP_LVDS) {
 			/* For our LVDS chipsets, we should hopefully be able
@@ -496,4 +462,5 @@ void intel_dvo_init(struct drm_device *dev)
 		intel_i2c_destroy(i2cbus);
 free_intel:
 	kfree(intel_encoder);
+	kfree(intel_connector);
 }
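
The pattern running through this hunk, and repeated in the HDMI and LVDS hunks below, is the connector/encoder split: the connector no longer embeds the encoder, the two are allocated and freed independently, and helpers such as intel_attached_encoder() and enc_to_intel_encoder() recover the driver's wrapper from a base pointer. That recovery is the usual embed-and-container_of idiom; here is a standalone sketch of it with illustrative struct names, not the kernel's:

#include <stdio.h>
#include <stddef.h>

/* container_of: recover the wrapper from a pointer to its embedded member */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct base_encoder { int id; };	/* stands in for drm_encoder */

struct wrap_encoder {			/* stands in for intel_encoder */
	struct base_encoder base;
	void *dev_priv;			/* per-output private data */
};

/* analogous to enc_to_intel_encoder() in the patch */
static struct wrap_encoder *to_wrap(struct base_encoder *enc)
{
	return container_of(enc, struct wrap_encoder, base);
}

int main(void)
{
	struct wrap_encoder w = { .base = { .id = 42 } };
	struct base_encoder *b = &w.base;	/* only the base is passed around */

	printf("recovered wrapper, id = %d\n", to_wrap(b)->base.id);
	return 0;
}
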
diff --git a/drivers/gpu/drm/i915/intel_fb.c b/drivers/gpu/drm/i915/intel_fb.c
index 8a0b3bc..c3c5052 100644
--- a/drivers/gpu/drm/i915/intel_fb.c
+++ b/drivers/gpu/drm/i915/intel_fb.c
@@ -44,9 +44,10 @@
 #include "i915_drm.h"
 #include "i915_drv.h"
 
-struct intelfb_par {
+struct intel_fbdev {
 	struct drm_fb_helper helper;
-	struct intel_framebuffer *intel_fb;
+	struct intel_framebuffer ifb;
+	struct list_head fbdev_list;
 	struct drm_display_mode *our_mode;
 };
 
@@ -54,7 +55,6 @@ static struct fb_ops intelfb_ops = {
 	.owner = THIS_MODULE,
 	.fb_check_var = drm_fb_helper_check_var,
 	.fb_set_par = drm_fb_helper_set_par,
-	.fb_setcolreg = drm_fb_helper_setcolreg,
 	.fb_fillrect = cfb_fillrect,
 	.fb_copyarea = cfb_copyarea,
 	.fb_imageblit = cfb_imageblit,
@@ -63,62 +63,12 @@ static struct fb_ops intelfb_ops = {
 	.fb_setcmap = drm_fb_helper_setcmap,
 };
 
-static struct drm_fb_helper_funcs intel_fb_helper_funcs = {
-	.gamma_set = intel_crtc_fb_gamma_set,
-	.gamma_get = intel_crtc_fb_gamma_get,
-};
-
-
-/**
- * Currently it is assumed that the old framebuffer is reused.
- *
- * LOCKING
- * caller should hold the mode config lock.
- *
- */
-int intelfb_resize(struct drm_device *dev, struct drm_crtc *crtc)
-{
-	struct fb_info *info;
-	struct drm_framebuffer *fb;
-	struct drm_display_mode *mode = crtc->desired_mode;
-
-	fb = crtc->fb;
-	if (!fb)
-		return 1;
-
-	info = fb->fbdev;
-	if (!info)
-		return 1;
-
-	if (!mode)
-		return 1;
-
-	info->var.xres = mode->hdisplay;
-	info->var.right_margin = mode->hsync_start - mode->hdisplay;
-	info->var.hsync_len = mode->hsync_end - mode->hsync_start;
-	info->var.left_margin = mode->htotal - mode->hsync_end;
-	info->var.yres = mode->vdisplay;
-	info->var.lower_margin = mode->vsync_start - mode->vdisplay;
-	info->var.vsync_len = mode->vsync_end - mode->vsync_start;
-	info->var.upper_margin = mode->vtotal - mode->vsync_end;
-	info->var.pixclock = 10000000 / mode->htotal * 1000 / mode->vtotal * 100;
-	/* avoid overflow */
-	info->var.pixclock = info->var.pixclock * 1000 / mode->vrefresh;
-
-	return 0;
-}
-EXPORT_SYMBOL(intelfb_resize);
-
-static int intelfb_create(struct drm_device *dev, uint32_t fb_width,
-			  uint32_t fb_height, uint32_t surface_width,
-			  uint32_t surface_height,
-			  uint32_t surface_depth, uint32_t surface_bpp,
-			  struct drm_framebuffer **fb_p)
+static int intelfb_create(struct intel_fbdev *ifbdev,
+			  struct drm_fb_helper_surface_size *sizes)
 {
+	struct drm_device *dev = ifbdev->helper.dev;
 	struct fb_info *info;
-	struct intelfb_par *par;
 	struct drm_framebuffer *fb;
-	struct intel_framebuffer *intel_fb;
 	struct drm_mode_fb_cmd mode_cmd;
 	struct drm_gem_object *fbo = NULL;
 	struct drm_i915_gem_object *obj_priv;
@@ -126,19 +76,19 @@ static int intelfb_create(struct drm_device *dev, uint32_t fb_width,
 	int size, ret, mmio_bar = IS_I9XX(dev) ? 0 : 1;
 
 	/* we don't do packed 24bpp */
-	if (surface_bpp == 24)
-		surface_bpp = 32;
+	if (sizes->surface_bpp == 24)
+		sizes->surface_bpp = 32;
 
-	mode_cmd.width = surface_width;
-	mode_cmd.height = surface_height;
+	mode_cmd.width = sizes->surface_width;
+	mode_cmd.height = sizes->surface_height;
 
-	mode_cmd.bpp = surface_bpp;
+	mode_cmd.bpp = sizes->surface_bpp;
 	mode_cmd.pitch = ALIGN(mode_cmd.width * ((mode_cmd.bpp + 1) / 8), 64);
-	mode_cmd.depth = surface_depth;
+	mode_cmd.depth = sizes->surface_depth;
 
 	size = mode_cmd.pitch * mode_cmd.height;
 	size = ALIGN(size, PAGE_SIZE);
-	fbo = drm_gem_object_alloc(dev, size);
+	fbo = i915_gem_alloc_object(dev, size);
 	if (!fbo) {
 		DRM_ERROR("failed to allocate framebuffer\n");
 		ret = -ENOMEM;
@@ -155,47 +105,43 @@ static int intelfb_create(struct drm_device *dev, uint32_t fb_width,
 	}
 
 	/* Flush everything out, we'll be doing GTT only from now on */
-	i915_gem_object_set_to_gtt_domain(fbo, 1);
-
-	ret = intel_framebuffer_create(dev, &mode_cmd, &fb, fbo);
+	ret = i915_gem_object_set_to_gtt_domain(fbo, 1);
 	if (ret) {
-		DRM_ERROR("failed to allocate fb.\n");
+		DRM_ERROR("failed to bind fb: %d.\n", ret);
 		goto out_unpin;
 	}
 
-	list_add(&fb->filp_head, &dev->mode_config.fb_kernel_list);
-
-	intel_fb = to_intel_framebuffer(fb);
-	*fb_p = fb;
-
-	info = framebuffer_alloc(sizeof(struct intelfb_par), device);
+	info = framebuffer_alloc(0, device);
 	if (!info) {
 		ret = -ENOMEM;
 		goto out_unpin;
 	}
 
-	par = info->par;
+	info->par = ifbdev;
 
-	par->helper.funcs = &intel_fb_helper_funcs;
-	par->helper.dev = dev;
-	ret = drm_fb_helper_init_crtc_count(&par->helper, 2,
-					    INTELFB_CONN_LIMIT);
-	if (ret)
-		goto out_unref;
+	intel_framebuffer_init(dev, &ifbdev->ifb, &mode_cmd, fbo);
+
+	fb = &ifbdev->ifb.base;
+
+	ifbdev->helper.fb = fb;
+	ifbdev->helper.fbdev = info;
 
 	strcpy(info->fix.id, "inteldrmfb");
 
 	info->flags = FBINFO_DEFAULT;
-
 	info->fbops = &intelfb_ops;
 
-
 	/* setup aperture base/size for vesafb takeover */
-	info->aperture_base = dev->mode_config.fb_base;
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures) {
+		ret = -ENOMEM;
+		goto out_unpin;
+	}
+	info->apertures->ranges[0].base = dev->mode_config.fb_base;
 	if (IS_I9XX(dev))
-		info->aperture_size = pci_resource_len(dev->pdev, 2);
+		info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 2);
 	else
-		info->aperture_size = pci_resource_len(dev->pdev, 0);
+		info->apertures->ranges[0].size = pci_resource_len(dev->pdev, 0);
 
 	info->fix.smem_start = dev->mode_config.fb_base + obj_priv->gtt_offset;
 	info->fix.smem_len = size;
@@ -208,12 +154,18 @@ static int intelfb_create(struct drm_device *dev, uint32_t fb_width,
 		ret = -ENOSPC;
 		goto out_unpin;
 	}
+
+	ret = fb_alloc_cmap(&info->cmap, 256, 0);
+	if (ret) {
+		ret = -ENOMEM;
+		goto out_unpin;
+	}
 	info->screen_size = size;
 
 //	memset(info->screen_base, 0, size);
 
 	drm_fb_helper_fill_fix(info, fb->pitch, fb->depth);
-	drm_fb_helper_fill_var(info, fb, fb_width, fb_height);
+	drm_fb_helper_fill_var(info, &ifbdev->helper, sizes->fb_width, sizes->fb_height);
 
 	/* FIXME: we really shouldn't expose mmio space at all */
 	info->fix.mmio_start = pci_resource_start(dev->pdev, mmio_bar);
@@ -225,14 +177,10 @@ static int intelfb_create(struct drm_device *dev, uint32_t fb_width,
 	info->pixmap.flags = FB_PIXMAP_SYSTEM;
 	info->pixmap.scan_align = 1;
 
-	fb->fbdev = info;
-
-	par->intel_fb = intel_fb;
-
-	/* To allow resizeing without swapping buffers */
 	DRM_DEBUG_KMS("allocated %dx%d fb: 0x%08x, bo %p\n",
-			intel_fb->base.width, intel_fb->base.height,
-			obj_priv->gtt_offset, fbo);
+		      fb->width, fb->height,
+		      obj_priv->gtt_offset, fbo);
+
 
 	mutex_unlock(&dev->struct_mutex);
 	vga_switcheroo_client_fb_set(dev->pdev, info);
@@ -247,35 +195,92 @@ out:
 	return ret;
 }
 
-int intelfb_probe(struct drm_device *dev)
+static int intel_fb_find_or_create_single(struct drm_fb_helper *helper,
+					  struct drm_fb_helper_surface_size *sizes)
 {
+	struct intel_fbdev *ifbdev = (struct intel_fbdev *)helper;
+	int new_fb = 0;
 	int ret;
 
-	DRM_DEBUG_KMS("\n");
-	ret = drm_fb_helper_single_fb_probe(dev, 32, intelfb_create);
-	return ret;
+	if (!helper->fb) {
+		ret = intelfb_create(ifbdev, sizes);
+		if (ret)
+			return ret;
+		new_fb = 1;
+	}
+	return new_fb;
 }
-EXPORT_SYMBOL(intelfb_probe);
 
-int intelfb_remove(struct drm_device *dev, struct drm_framebuffer *fb)
+static struct drm_fb_helper_funcs intel_fb_helper_funcs = {
+	.gamma_set = intel_crtc_fb_gamma_set,
+	.gamma_get = intel_crtc_fb_gamma_get,
+	.fb_probe = intel_fb_find_or_create_single,
+};
+
+int intel_fbdev_destroy(struct drm_device *dev,
+			struct intel_fbdev *ifbdev)
 {
 	struct fb_info *info;
+	struct intel_framebuffer *ifb = &ifbdev->ifb;
 
-	if (!fb)
-		return -EINVAL;
-
-	info = fb->fbdev;
-
-	if (info) {
-		struct intelfb_par *par = info->par;
+	if (ifbdev->helper.fbdev) {
+		info = ifbdev->helper.fbdev;
 		unregister_framebuffer(info);
 		iounmap(info->screen_base);
-		if (info->par)
-			drm_fb_helper_free(&par->helper);
+		if (info->cmap.len)
+			fb_dealloc_cmap(&info->cmap);
 		framebuffer_release(info);
 	}
 
+	drm_fb_helper_fini(&ifbdev->helper);
+
+	drm_framebuffer_cleanup(&ifb->base);
+	if (ifb->obj)
+		drm_gem_object_unreference_unlocked(ifb->obj);
+
+	return 0;
+}
+
+int intel_fbdev_init(struct drm_device *dev)
+{
+	struct intel_fbdev *ifbdev;
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	int ret;
+
+	ifbdev = kzalloc(sizeof(struct intel_fbdev), GFP_KERNEL);
+	if (!ifbdev)
+		return -ENOMEM;
+
+	dev_priv->fbdev = ifbdev;
+	ifbdev->helper.funcs = &intel_fb_helper_funcs;
+
+	ret = drm_fb_helper_init(dev, &ifbdev->helper,
+				 dev_priv->num_pipe,
+				 INTELFB_CONN_LIMIT);
+	if (ret) {
+		kfree(ifbdev);
+		return ret;
+	}
+
+	drm_fb_helper_single_add_all_connectors(&ifbdev->helper);
+	drm_fb_helper_initial_config(&ifbdev->helper, 32);
 	return 0;
 }
-EXPORT_SYMBOL(intelfb_remove);
+
+void intel_fbdev_fini(struct drm_device *dev)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	if (!dev_priv->fbdev)
+		return;
+
+	intel_fbdev_destroy(dev, dev_priv->fbdev);
+	kfree(dev_priv->fbdev);
+	dev_priv->fbdev = NULL;
+}
 MODULE_LICENSE("GPL and additional rights");
+
+void intel_fb_output_poll_changed(struct drm_device *dev)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	drm_fb_helper_hotplug_event(&dev_priv->fbdev->helper);
+}
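
The deleted intelfb_resize() filled in fbdev timings by hand, staging the pixclock arithmetic across several 32-bit operations to dodge overflow; the helper-based path now leaves this to drm_fb_helper_fill_var(). For reference, fb_var_screeninfo.pixclock is the pixel period in picoseconds, so the direct computation, done in 64 bits to sidestep the overflow the staged version worked around, looks like this sketch:

#include <stdint.h>
#include <stdio.h>

/* Pixel period in picoseconds from mode totals and refresh rate (Hz):
 * dotclock = htotal * vtotal * vrefresh pixels per second,
 * pixclock = 1e12 / dotclock picoseconds per pixel. */
static uint32_t mode_to_pixclock_ps(uint32_t htotal, uint32_t vtotal,
				    uint32_t vrefresh_hz)
{
	uint64_t dotclock = (uint64_t)htotal * vtotal * vrefresh_hz;

	return dotclock ? (uint32_t)(1000000000000ULL / dotclock) : 0;
}

int main(void)
{
	/* 1920x1080@60 with its standard blanking totals of 2200x1125 */
	printf("pixclock = %u ps\n", mode_to_pixclock_ps(2200, 1125, 60));
	return 0;
}

With those totals this prints 6734 ps, i.e. the familiar 148.5 MHz 1080p60 dot clock.
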
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 48cade0..83bd764 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -39,7 +39,6 @@
 
 struct intel_hdmi_priv {
 	u32 sdvox_reg;
-	u32 save_SDVOX;
 	bool has_hdmi_sink;
 };
 
@@ -60,11 +59,18 @@ static void intel_hdmi_mode_set(struct drm_encoder *encoder,
 		SDVO_VSYNC_ACTIVE_HIGH |
 		SDVO_HSYNC_ACTIVE_HIGH;
 
-	if (hdmi_priv->has_hdmi_sink)
+	if (hdmi_priv->has_hdmi_sink) {
 		sdvox |= SDVO_AUDIO_ENABLE;
+		if (HAS_PCH_CPT(dev))
+			sdvox |= HDMI_MODE_SELECT;
+	}
 
-	if (intel_crtc->pipe == 1)
-		sdvox |= SDVO_PIPE_B_SELECT;
+	if (intel_crtc->pipe == 1) {
+		if (HAS_PCH_CPT(dev))
+			sdvox |= PORT_TRANS_B_SEL_CPT;
+		else
+			sdvox |= SDVO_PIPE_B_SELECT;
+	}
 
 	I915_WRITE(hdmi_priv->sdvox_reg, sdvox);
 	POSTING_READ(hdmi_priv->sdvox_reg);
@@ -106,27 +112,6 @@ static void intel_hdmi_dpms(struct drm_encoder *encoder, int mode)
 	}
 }
 
-static void intel_hdmi_save(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv;
-
-	hdmi_priv->save_SDVOX = I915_READ(hdmi_priv->sdvox_reg);
-}
-
-static void intel_hdmi_restore(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv;
-
-	I915_WRITE(hdmi_priv->sdvox_reg, hdmi_priv->save_SDVOX);
-	POSTING_READ(hdmi_priv->sdvox_reg);
-}
-
 static int intel_hdmi_mode_valid(struct drm_connector *connector,
 				 struct drm_display_mode *mode)
 {
@@ -151,13 +136,14 @@ static bool intel_hdmi_mode_fixup(struct drm_encoder *encoder,
 static enum drm_connector_status
 intel_hdmi_detect(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv;
 	struct edid *edid = NULL;
 	enum drm_connector_status status = connector_status_disconnected;
 
 	hdmi_priv->has_hdmi_sink = false;
-	edid = drm_get_edid(&intel_encoder->base,
+	edid = drm_get_edid(connector,
 			    intel_encoder->ddc_bus);
 
 	if (edid) {
@@ -165,7 +151,7 @@ intel_hdmi_detect(struct drm_connector *connector)
 			status = connector_status_connected;
 			hdmi_priv->has_hdmi_sink = drm_detect_hdmi_monitor(edid);
 		}
-		intel_encoder->base.display_info.raw_edid = NULL;
+		connector->display_info.raw_edid = NULL;
 		kfree(edid);
 	}
 
@@ -174,24 +160,21 @@ intel_hdmi_detect(struct drm_connector *connector)
 
 static int intel_hdmi_get_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 
 	/* We should parse the EDID data and find out if it's an HDMI sink so
 	 * we can send audio to it.
 	 */
 
-	return intel_ddc_get_modes(intel_encoder);
+	return intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 }
 
 static void intel_hdmi_destroy(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-
-	if (intel_encoder->i2c_bus)
-		intel_i2c_destroy(intel_encoder->i2c_bus);
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
-	kfree(intel_encoder);
+	kfree(connector);
 }
 
 static const struct drm_encoder_helper_funcs intel_hdmi_helper_funcs = {
@@ -204,8 +187,6 @@ static const struct drm_encoder_helper_funcs intel_hdmi_helper_funcs = {
 
 static const struct drm_connector_funcs intel_hdmi_connector_funcs = {
 	.dpms = drm_helper_connector_dpms,
-	.save = intel_hdmi_save,
-	.restore = intel_hdmi_restore,
 	.detect = intel_hdmi_detect,
 	.fill_modes = drm_helper_probe_single_connector_modes,
 	.destroy = intel_hdmi_destroy,
@@ -214,12 +195,17 @@ static const struct drm_connector_funcs intel_hdmi_connector_funcs = {
 static const struct drm_connector_helper_funcs intel_hdmi_connector_helper_funcs = {
 	.get_modes = intel_hdmi_get_modes,
 	.mode_valid = intel_hdmi_mode_valid,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static void intel_hdmi_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+
+	if (intel_encoder->i2c_bus)
+		intel_i2c_destroy(intel_encoder->i2c_bus);
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_hdmi_enc_funcs = {
@@ -231,21 +217,30 @@ void intel_hdmi_init(struct drm_device *dev, int sdvox_reg)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_connector *connector;
 	struct intel_encoder *intel_encoder;
+	struct intel_connector *intel_connector;
 	struct intel_hdmi_priv *hdmi_priv;
 
 	intel_encoder = kcalloc(sizeof(struct intel_encoder) +
 			       sizeof(struct intel_hdmi_priv), 1, GFP_KERNEL);
 	if (!intel_encoder)
 		return;
+
+	intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
+	if (!intel_connector) {
+		kfree(intel_encoder);
+		return;
+	}
+
 	hdmi_priv = (struct intel_hdmi_priv *)(intel_encoder + 1);
 
-	connector = &intel_encoder->base;
+	connector = &intel_connector->base;
 	drm_connector_init(dev, connector, &intel_hdmi_connector_funcs,
 			   DRM_MODE_CONNECTOR_HDMIA);
 	drm_connector_helper_add(connector, &intel_hdmi_connector_helper_funcs);
 
 	intel_encoder->type = INTEL_OUTPUT_HDMI;
 
+	connector->polled = DRM_CONNECTOR_POLL_HPD;
 	connector->interlace_allowed = 0;
 	connector->doublescan_allowed = 0;
 	intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
@@ -285,7 +280,7 @@ void intel_hdmi_init(struct drm_device *dev, int sdvox_reg)
 			 DRM_MODE_ENCODER_TMDS);
 	drm_encoder_helper_add(&intel_encoder->enc, &intel_hdmi_helper_funcs);
 
-	drm_mode_connector_attach_encoder(&intel_encoder->base,
+	drm_mode_connector_attach_encoder(&intel_connector->base,
 					  &intel_encoder->enc);
 	drm_sysfs_connector_add(connector);
 
@@ -303,6 +298,7 @@ void intel_hdmi_init(struct drm_device *dev, int sdvox_reg)
 err_connector:
 	drm_connector_cleanup(connector);
 	kfree(intel_encoder);
+	kfree(intel_connector);
 
 	return;
 }
diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
index b66806a..6a1accd 100644
--- a/drivers/gpu/drm/i915/intel_lvds.c
+++ b/drivers/gpu/drm/i915/intel_lvds.c
@@ -139,75 +139,6 @@ static void intel_lvds_dpms(struct drm_encoder *encoder, int mode)
 	/* XXX: We never power down the LVDS pairs. */
 }
 
-static void intel_lvds_save(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 pp_on_reg, pp_off_reg, pp_ctl_reg, pp_div_reg;
-	u32 pwm_ctl_reg;
-
-	if (HAS_PCH_SPLIT(dev)) {
-		pp_on_reg = PCH_PP_ON_DELAYS;
-		pp_off_reg = PCH_PP_OFF_DELAYS;
-		pp_ctl_reg = PCH_PP_CONTROL;
-		pp_div_reg = PCH_PP_DIVISOR;
-		pwm_ctl_reg = BLC_PWM_CPU_CTL;
-	} else {
-		pp_on_reg = PP_ON_DELAYS;
-		pp_off_reg = PP_OFF_DELAYS;
-		pp_ctl_reg = PP_CONTROL;
-		pp_div_reg = PP_DIVISOR;
-		pwm_ctl_reg = BLC_PWM_CTL;
-	}
-
-	dev_priv->savePP_ON = I915_READ(pp_on_reg);
-	dev_priv->savePP_OFF = I915_READ(pp_off_reg);
-	dev_priv->savePP_CONTROL = I915_READ(pp_ctl_reg);
-	dev_priv->savePP_DIVISOR = I915_READ(pp_div_reg);
-	dev_priv->saveBLC_PWM_CTL = I915_READ(pwm_ctl_reg);
-	dev_priv->backlight_duty_cycle = (dev_priv->saveBLC_PWM_CTL &
-				       BACKLIGHT_DUTY_CYCLE_MASK);
-
-	/*
-	 * If the light is off at server startup, just make it full brightness
-	 */
-	if (dev_priv->backlight_duty_cycle == 0)
-		dev_priv->backlight_duty_cycle =
-			intel_lvds_get_max_backlight(dev);
-}
-
-static void intel_lvds_restore(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 pp_on_reg, pp_off_reg, pp_ctl_reg, pp_div_reg;
-	u32 pwm_ctl_reg;
-
-	if (HAS_PCH_SPLIT(dev)) {
-		pp_on_reg = PCH_PP_ON_DELAYS;
-		pp_off_reg = PCH_PP_OFF_DELAYS;
-		pp_ctl_reg = PCH_PP_CONTROL;
-		pp_div_reg = PCH_PP_DIVISOR;
-		pwm_ctl_reg = BLC_PWM_CPU_CTL;
-	} else {
-		pp_on_reg = PP_ON_DELAYS;
-		pp_off_reg = PP_OFF_DELAYS;
-		pp_ctl_reg = PP_CONTROL;
-		pp_div_reg = PP_DIVISOR;
-		pwm_ctl_reg = BLC_PWM_CTL;
-	}
-
-	I915_WRITE(pwm_ctl_reg, dev_priv->saveBLC_PWM_CTL);
-	I915_WRITE(pp_on_reg, dev_priv->savePP_ON);
-	I915_WRITE(pp_off_reg, dev_priv->savePP_OFF);
-	I915_WRITE(pp_div_reg, dev_priv->savePP_DIVISOR);
-	I915_WRITE(pp_ctl_reg, dev_priv->savePP_CONTROL);
-	if (dev_priv->savePP_CONTROL & POWER_TARGET_ON)
-		intel_lvds_set_power(dev, true);
-	else
-		intel_lvds_set_power(dev, false);
-}
-
 static int intel_lvds_mode_valid(struct drm_connector *connector,
 				 struct drm_display_mode *mode)
 {
@@ -635,12 +566,13 @@ static enum drm_connector_status intel_lvds_detect(struct drm_connector *connect
 static int intel_lvds_get_modes(struct drm_connector *connector)
 {
 	struct drm_device *dev = connector->dev;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	int ret = 0;
 
 	if (dev_priv->lvds_edid_good) {
-		ret = intel_ddc_get_modes(intel_encoder);
+		ret = intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 
 		if (ret)
 			return ret;
@@ -717,11 +649,8 @@ static int intel_lid_notify(struct notifier_block *nb, unsigned long val,
 static void intel_lvds_destroy(struct drm_connector *connector)
 {
 	struct drm_device *dev = connector->dev;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
-	if (intel_encoder->ddc_bus)
-		intel_i2c_destroy(intel_encoder->ddc_bus);
 	if (dev_priv->lid_notifier.notifier_call)
 		acpi_lid_notifier_unregister(&dev_priv->lid_notifier);
 	drm_sysfs_connector_remove(connector);
@@ -734,13 +663,14 @@ static int intel_lvds_set_property(struct drm_connector *connector,
 				   uint64_t value)
 {
 	struct drm_device *dev = connector->dev;
-	struct intel_encoder *intel_encoder =
-			to_intel_encoder(connector);
 
 	if (property == dev->mode_config.scaling_mode_property &&
 				connector->encoder) {
 		struct drm_crtc *crtc = connector->encoder->crtc;
+		struct drm_encoder *encoder = connector->encoder;
+		struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 		struct intel_lvds_priv *lvds_priv = intel_encoder->dev_priv;
+
 		if (value == DRM_MODE_SCALE_NONE) {
 			DRM_DEBUG_KMS("no scaling not supported\n");
 			return 0;
@@ -774,13 +704,11 @@ static const struct drm_encoder_helper_funcs intel_lvds_helper_funcs = {
 static const struct drm_connector_helper_funcs intel_lvds_connector_helper_funcs = {
 	.get_modes = intel_lvds_get_modes,
 	.mode_valid = intel_lvds_mode_valid,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static const struct drm_connector_funcs intel_lvds_connector_funcs = {
 	.dpms = drm_helper_connector_dpms,
-	.save = intel_lvds_save,
-	.restore = intel_lvds_restore,
 	.detect = intel_lvds_detect,
 	.fill_modes = drm_helper_probe_single_connector_modes,
 	.set_property = intel_lvds_set_property,
@@ -790,7 +718,12 @@ static const struct drm_connector_funcs intel_lvds_connector_funcs = {
 
 static void intel_lvds_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+
+	if (intel_encoder->ddc_bus)
+		intel_i2c_destroy(intel_encoder->ddc_bus);
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_lvds_enc_funcs = {
@@ -979,6 +912,7 @@ void intel_lvds_init(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_encoder *intel_encoder;
+	struct intel_connector *intel_connector;
 	struct drm_connector *connector;
 	struct drm_encoder *encoder;
 	struct drm_display_mode *scan; /* *modes, *bios_mode; */
@@ -1012,19 +946,27 @@ void intel_lvds_init(struct drm_device *dev)
 		return;
 	}
 
-	connector = &intel_encoder->base;
+	intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
+	if (!intel_connector) {
+		kfree(intel_encoder);
+		return;
+	}
+
+	connector = &intel_connector->base;
 	encoder = &intel_encoder->enc;
-	drm_connector_init(dev, &intel_encoder->base, &intel_lvds_connector_funcs,
+	drm_connector_init(dev, &intel_connector->base, &intel_lvds_connector_funcs,
 			   DRM_MODE_CONNECTOR_LVDS);
 
 	drm_encoder_init(dev, &intel_encoder->enc, &intel_lvds_enc_funcs,
 			 DRM_MODE_ENCODER_LVDS);
 
-	drm_mode_connector_attach_encoder(&intel_encoder->base, &intel_encoder->enc);
+	drm_mode_connector_attach_encoder(&intel_connector->base, &intel_encoder->enc);
 	intel_encoder->type = INTEL_OUTPUT_LVDS;
 
 	intel_encoder->clone_mask = (1 << INTEL_LVDS_CLONE_BIT);
 	intel_encoder->crtc_mask = (1 << 1);
+	if (IS_I965G(dev))
+		intel_encoder->crtc_mask |= (1 << 0);
 	drm_encoder_helper_add(encoder, &intel_lvds_helper_funcs);
 	drm_connector_helper_add(connector, &intel_lvds_connector_helper_funcs);
 	connector->display_info.subpixel_order = SubPixelHorizontalRGB;
@@ -1039,7 +981,7 @@ void intel_lvds_init(struct drm_device *dev)
 	 * the initial panel fitting mode will be FULL_SCREEN.
 	 */
 
-	drm_connector_attach_property(&intel_encoder->base,
+	drm_connector_attach_property(&intel_connector->base,
 				      dev->mode_config.scaling_mode_property,
 				      DRM_MODE_SCALE_FULLSCREEN);
 	lvds_priv->fitting_mode = DRM_MODE_SCALE_FULLSCREEN;
@@ -1067,7 +1009,7 @@ void intel_lvds_init(struct drm_device *dev)
 	 */
 	dev_priv->lvds_edid_good = true;
 
-	if (!intel_ddc_get_modes(intel_encoder))
+	if (!intel_ddc_get_modes(connector, intel_encoder->ddc_bus))
 		dev_priv->lvds_edid_good = false;
 
 	list_for_each_entry(scan, &connector->probed_modes, head) {
@@ -1151,4 +1093,5 @@ failed:
 	drm_connector_cleanup(connector);
 	drm_encoder_cleanup(encoder);
 	kfree(intel_encoder);
+	kfree(intel_connector);
 }
diff --git a/drivers/gpu/drm/i915/intel_modes.c b/drivers/gpu/drm/i915/intel_modes.c
index 8e5c83b..4b1fd3d 100644
--- a/drivers/gpu/drm/i915/intel_modes.c
+++ b/drivers/gpu/drm/i915/intel_modes.c
@@ -54,9 +54,9 @@ bool intel_ddc_probe(struct intel_encoder *intel_encoder)
 		}
 	};
 
-	intel_i2c_quirk_set(intel_encoder->base.dev, true);
+	intel_i2c_quirk_set(intel_encoder->enc.dev, true);
 	ret = i2c_transfer(intel_encoder->ddc_bus, msgs, 2);
-	intel_i2c_quirk_set(intel_encoder->base.dev, false);
+	intel_i2c_quirk_set(intel_encoder->enc.dev, false);
 	if (ret == 2)
 		return true;
 
@@ -66,22 +66,23 @@ bool intel_ddc_probe(struct intel_encoder *intel_encoder)
 /**
  * intel_ddc_get_modes - get modelist from monitor
  * @connector: DRM connector device to use
+ * @adapter: i2c adapter
  *
  * Fetch the EDID information from @connector using the DDC bus.
  */
-int intel_ddc_get_modes(struct intel_encoder *intel_encoder)
+int intel_ddc_get_modes(struct drm_connector *connector,
+			struct i2c_adapter *adapter)
 {
 	struct edid *edid;
 	int ret = 0;
 
-	intel_i2c_quirk_set(intel_encoder->base.dev, true);
-	edid = drm_get_edid(&intel_encoder->base, intel_encoder->ddc_bus);
-	intel_i2c_quirk_set(intel_encoder->base.dev, false);
+	intel_i2c_quirk_set(connector->dev, true);
+	edid = drm_get_edid(connector, adapter);
+	intel_i2c_quirk_set(connector->dev, false);
 	if (edid) {
-		drm_mode_connector_update_edid_property(&intel_encoder->base,
-							edid);
-		ret = drm_add_edid_modes(&intel_encoder->base, edid);
-		intel_encoder->base.display_info.raw_edid = NULL;
+		drm_mode_connector_update_edid_property(connector, edid);
+		ret = drm_add_edid_modes(connector, edid);
+		connector->display_info.raw_edid = NULL;
 		kfree(edid);
 	}
 
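
Both helpers above come down to the same wire transaction: address the DDC slave at 0x50, write the EDID offset, then read back (intel_ddc_probe() reads a single byte, drm_get_edid() a full block). A userspace sketch of that two-message transfer through i2c-dev follows; the /dev/i2c-0 path is an assumption for illustration, since which i2c bus carries DDC depends on the hardware:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

#define DDC_ADDR 0x50	/* 7-bit i2c address of the EDID EEPROM */

int main(void)
{
	uint8_t offset = 0, edid[128];
	struct i2c_msg msgs[2] = {
		{ .addr = DDC_ADDR, .flags = 0,        .len = 1,   .buf = &offset },
		{ .addr = DDC_ADDR, .flags = I2C_M_RD, .len = 128, .buf = edid },
	};
	struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };
	int fd = open("/dev/i2c-0", O_RDWR);	/* assumed DDC-carrying bus */

	if (fd < 0 || ioctl(fd, I2C_RDWR, &xfer) < 0) {
		perror("ddc transfer");
		return 1;
	}
	/* A connected sink acks both messages; EDID begins 00 ff ff ff ff ff ff 00. */
	printf("EDID header bytes: %02x %02x\n", edid[0], edid[1]);
	close(fd);
	return 0;
}
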
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 6d524a1..d7ad513 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -211,9 +211,8 @@ static void intel_overlay_unmap_regs_atomic(struct intel_overlay *overlay)
 static int intel_overlay_on(struct intel_overlay *overlay)
 {
 	struct drm_device *dev = overlay->dev;
-        drm_i915_private_t *dev_priv = dev->dev_private;
 	int ret;
-	RING_LOCALS;
+	drm_i915_private_t *dev_priv = dev->dev_private;
 
 	BUG_ON(overlay->active);
 
@@ -227,11 +226,13 @@ static int intel_overlay_on(struct intel_overlay *overlay)
 	OUT_RING(MI_NOOP);
 	ADVANCE_LP_RING();
 
-	overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+	overlay->last_flip_req =
+		i915_add_request(dev, NULL, 0, &dev_priv->render_ring);
 	if (overlay->last_flip_req == 0)
 		return -ENOMEM;
 
-	ret = i915_do_wait_request(dev, overlay->last_flip_req, 1);
+	ret = i915_do_wait_request(dev,
+			overlay->last_flip_req, 1, &dev_priv->render_ring);
 	if (ret != 0)
 		return ret;
 
@@ -248,7 +249,6 @@ static void intel_overlay_continue(struct intel_overlay *overlay,
         drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 flip_addr = overlay->flip_addr;
 	u32 tmp;
-	RING_LOCALS;
 
 	BUG_ON(!overlay->active);
 
@@ -265,7 +265,8 @@ static void intel_overlay_continue(struct intel_overlay *overlay,
 	OUT_RING(flip_addr);
         ADVANCE_LP_RING();
 
-	overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+	overlay->last_flip_req =
+		i915_add_request(dev, NULL, 0, &dev_priv->render_ring);
 }
 
 static int intel_overlay_wait_flip(struct intel_overlay *overlay)
@@ -274,10 +275,10 @@ static int intel_overlay_wait_flip(struct intel_overlay *overlay)
         drm_i915_private_t *dev_priv = dev->dev_private;
 	int ret;
 	u32 tmp;
-	RING_LOCALS;
 
 	if (overlay->last_flip_req != 0) {
-		ret = i915_do_wait_request(dev, overlay->last_flip_req, 1);
+		ret = i915_do_wait_request(dev, overlay->last_flip_req,
+				1, &dev_priv->render_ring);
 		if (ret == 0) {
 			overlay->last_flip_req = 0;
 
@@ -296,11 +297,13 @@ static int intel_overlay_wait_flip(struct intel_overlay *overlay)
         OUT_RING(MI_NOOP);
         ADVANCE_LP_RING();
 
-	overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+	overlay->last_flip_req =
+		i915_add_request(dev, NULL, 0, &dev_priv->render_ring);
 	if (overlay->last_flip_req == 0)
 		return -ENOMEM;
 
-	ret = i915_do_wait_request(dev, overlay->last_flip_req, 1);
+	ret = i915_do_wait_request(dev, overlay->last_flip_req,
+			1, &dev_priv->render_ring);
 	if (ret != 0)
 		return ret;
 
@@ -314,9 +317,8 @@ static int intel_overlay_off(struct intel_overlay *overlay)
 {
 	u32 flip_addr = overlay->flip_addr;
 	struct drm_device *dev = overlay->dev;
-        drm_i915_private_t *dev_priv = dev->dev_private;
+	drm_i915_private_t *dev_priv = dev->dev_private;
 	int ret;
-	RING_LOCALS;
 
 	BUG_ON(!overlay->active);
 
@@ -336,11 +338,13 @@ static int intel_overlay_off(struct intel_overlay *overlay)
         OUT_RING(MI_NOOP);
         ADVANCE_LP_RING();
 
-	overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+	overlay->last_flip_req =
+		i915_add_request(dev, NULL, 0, &dev_priv->render_ring);
 	if (overlay->last_flip_req == 0)
 		return -ENOMEM;
 
-	ret = i915_do_wait_request(dev, overlay->last_flip_req, 1);
+	ret = i915_do_wait_request(dev, overlay->last_flip_req,
+			1, &dev_priv->render_ring);
 	if (ret != 0)
 		return ret;
 
@@ -354,11 +358,13 @@ static int intel_overlay_off(struct intel_overlay *overlay)
         OUT_RING(MI_NOOP);
 	ADVANCE_LP_RING();
 
-	overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+	overlay->last_flip_req =
+		i915_add_request(dev, NULL, 0, &dev_priv->render_ring);
 	if (overlay->last_flip_req == 0)
 		return -ENOMEM;
 
-	ret = i915_do_wait_request(dev, overlay->last_flip_req, 1);
+	ret = i915_do_wait_request(dev, overlay->last_flip_req,
+			1, &dev_priv->render_ring);
 	if (ret != 0)
 		return ret;
 
@@ -373,7 +379,7 @@ static void intel_overlay_off_tail(struct intel_overlay *overlay)
 
 	/* never have the overlay hw on without showing a frame */
 	BUG_ON(!overlay->vid_bo);
-	obj = overlay->vid_bo->obj;
+	obj = &overlay->vid_bo->base;
 
 	i915_gem_object_unpin(obj);
 	drm_gem_object_unreference(obj);
@@ -390,28 +396,29 @@ int intel_overlay_recover_from_interrupt(struct intel_overlay *overlay,
 					 int interruptible)
 {
 	struct drm_device *dev = overlay->dev;
-        drm_i915_private_t *dev_priv = dev->dev_private;
 	struct drm_gem_object *obj;
+	drm_i915_private_t *dev_priv = dev->dev_private;
 	u32 flip_addr;
 	int ret;
-	RING_LOCALS;
 
 	if (overlay->hw_wedged == HW_WEDGED)
 		return -EIO;
 
 	if (overlay->last_flip_req == 0) {
-		overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+		overlay->last_flip_req =
+			i915_add_request(dev, NULL, 0, &dev_priv->render_ring);
 		if (overlay->last_flip_req == 0)
 			return -ENOMEM;
 	}
 
-	ret = i915_do_wait_request(dev, overlay->last_flip_req, interruptible);
+	ret = i915_do_wait_request(dev, overlay->last_flip_req,
+			interruptible, &dev_priv->render_ring);
 	if (ret != 0)
 		return ret;
 
 	switch (overlay->hw_wedged) {
 		case RELEASE_OLD_VID:
-			obj = overlay->old_vid_bo->obj;
+			obj = &overlay->old_vid_bo->base;
 			i915_gem_object_unpin(obj);
 			drm_gem_object_unreference(obj);
 			overlay->old_vid_bo = NULL;
@@ -429,12 +436,13 @@ int intel_overlay_recover_from_interrupt(struct intel_overlay *overlay,
 			OUT_RING(MI_NOOP);
 			ADVANCE_LP_RING();
 
-			overlay->last_flip_req = i915_add_request(dev, NULL, 0);
+			overlay->last_flip_req = i915_add_request(dev, NULL,
+					0, &dev_priv->render_ring);
 			if (overlay->last_flip_req == 0)
 				return -ENOMEM;
 
 			ret = i915_do_wait_request(dev, overlay->last_flip_req,
-					interruptible);
+					interruptible, &dev_priv->render_ring);
 			if (ret != 0)
 				return ret;
 
@@ -467,7 +475,7 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
 	if (ret != 0)
 		return ret;
 
-	obj = overlay->old_vid_bo->obj;
+	obj = &overlay->old_vid_bo->base;
 	i915_gem_object_unpin(obj);
 	drm_gem_object_unreference(obj);
 	overlay->old_vid_bo = NULL;
@@ -1341,7 +1349,7 @@ void intel_setup_overlay(struct drm_device *dev)
 		return;
 	overlay->dev = dev;
 
-	reg_bo = drm_gem_object_alloc(dev, PAGE_SIZE);
+	reg_bo = i915_gem_alloc_object(dev, PAGE_SIZE);
 	if (!reg_bo)
 		goto out_free;
 	overlay->reg_bo = to_intel_bo(reg_bo);
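
Every overlay request above now names the ring it runs on, and the new intel_ringbuffer.c that follows centralizes the bookkeeping behind that. Its core is the free-space computation that appears twice below, at init time and in the wait loop: head minus tail, modulo the ring size, less a small guard so a full ring is never mistaken for an empty one (head == tail must mean empty). A standalone sketch of that arithmetic, mirroring the driver's head - (tail + 8):

#include <stdio.h>

struct ring { int head, tail, size; };	/* size in bytes, a power of two */

/* Free bytes between tail and head, keeping an 8-byte guard band so the
 * tail can never advance all the way up to the head. */
static int ring_space(const struct ring *r)
{
	int space = r->head - (r->tail + 8);

	if (space < 0)
		space += r->size;
	return space;
}

int main(void)
{
	struct ring r = { .head = 64, .tail = 4064, .size = 4096 };

	/* wraps: 64 - (4064 + 8) + 4096 = 88 free bytes */
	printf("free = %d bytes\n", ring_space(&r));
	return 0;
}
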
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
new file mode 100644
index 0000000..cea4f1a
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -0,0 +1,849 @@
+/*
+ * Copyright © 2008-2010 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ * Authors:
+ *    Eric Anholt <eric@anholt.net>
+ *    Zou Nan hai <nanhai.zou@intel.com>
+ *    Xiang Hai hao <haihao.xiang@intel.com>
+ *
+ */
+
+#include "drmP.h"
+#include "drm.h"
+#include "i915_drv.h"
+#include "i915_drm.h"
+#include "i915_trace.h"
+
+static void
+render_ring_flush(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		u32	invalidate_domains,
+		u32	flush_domains)
+{
+	u32 cmd;
+#if WATCH_EXEC
+	DRM_INFO("%s: invalidate %08x flush %08x\n", __func__,
+		  invalidate_domains, flush_domains);
+#endif
+	trace_i915_gem_request_flush(dev, ring->next_seqno,
+				     invalidate_domains, flush_domains);
+
+	if ((invalidate_domains | flush_domains) & I915_GEM_GPU_DOMAINS) {
+		/*
+		 * read/write caches:
+		 *
+		 * I915_GEM_DOMAIN_RENDER is always invalidated, but is
+		 * only flushed if MI_NO_WRITE_FLUSH is unset.  On 965, it is
+		 * also flushed at 2d versus 3d pipeline switches.
+		 *
+		 * read-only caches:
+		 *
+		 * I915_GEM_DOMAIN_SAMPLER is flushed on pre-965 if
+		 * MI_READ_FLUSH is set, and is always flushed on 965.
+		 *
+		 * I915_GEM_DOMAIN_COMMAND may not exist?
+		 *
+		 * I915_GEM_DOMAIN_INSTRUCTION, which exists on 965, is
+		 * invalidated when MI_EXE_FLUSH is set.
+		 *
+		 * I915_GEM_DOMAIN_VERTEX, which exists on 965, is
+		 * invalidated with every MI_FLUSH.
+		 *
+		 * TLBs:
+		 *
+		 * On 965, TLBs associated with I915_GEM_DOMAIN_COMMAND
+		 * and I915_GEM_DOMAIN_CPU are invalidated at PTE write, and
+		 * I915_GEM_DOMAIN_RENDER and I915_GEM_DOMAIN_SAMPLER
+		 * are flushed at any MI_FLUSH.
+		 */
+
+		cmd = MI_FLUSH | MI_NO_WRITE_FLUSH;
+		if ((invalidate_domains|flush_domains) &
+		    I915_GEM_DOMAIN_RENDER)
+			cmd &= ~MI_NO_WRITE_FLUSH;
+		if (!IS_I965G(dev)) {
+			/*
+			 * On the 965, the sampler cache always gets flushed
+			 * and this bit is reserved.
+			 */
+			if (invalidate_domains & I915_GEM_DOMAIN_SAMPLER)
+				cmd |= MI_READ_FLUSH;
+		}
+		if (invalidate_domains & I915_GEM_DOMAIN_INSTRUCTION)
+			cmd |= MI_EXE_FLUSH;
+
+#if WATCH_EXEC
+		DRM_INFO("%s: queue flush %08x to ring\n", __func__, cmd);
+#endif
+		intel_ring_begin(dev, ring, 8);
+		intel_ring_emit(dev, ring, cmd);
+		intel_ring_emit(dev, ring, MI_NOOP);
+		intel_ring_advance(dev, ring);
+	}
+}
+
+static unsigned int render_ring_get_head(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	return I915_READ(PRB0_HEAD) & HEAD_ADDR;
+}
+
+static unsigned int render_ring_get_tail(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	return I915_READ(PRB0_TAIL) & TAIL_ADDR;
+}
+
+static unsigned int render_ring_get_active_head(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	u32 acthd_reg = IS_I965G(dev) ? ACTHD_I965 : ACTHD;
+
+	return I915_READ(acthd_reg);
+}
+
+static void render_ring_advance_ring(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	I915_WRITE(PRB0_TAIL, ring->tail);
+}
+
+static int init_ring_common(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	u32 head;
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	struct drm_i915_gem_object *obj_priv;
+	obj_priv = to_intel_bo(ring->gem_object);
+
+	/* Stop the ring if it's running. */
+	I915_WRITE(ring->regs.ctl, 0);
+	I915_WRITE(ring->regs.head, 0);
+	I915_WRITE(ring->regs.tail, 0);
+
+	/* Initialize the ring. */
+	I915_WRITE(ring->regs.start, obj_priv->gtt_offset);
+	head = ring->get_head(dev, ring);
+
+	/* G45 ring initialization fails to reset head to zero */
+	if (head != 0) {
+		DRM_ERROR("%s head not reset to zero "
+				"ctl %08x head %08x tail %08x start %08x\n",
+				ring->name,
+				I915_READ(ring->regs.ctl),
+				I915_READ(ring->regs.head),
+				I915_READ(ring->regs.tail),
+				I915_READ(ring->regs.start));
+
+		I915_WRITE(ring->regs.head, 0);
+
+		DRM_ERROR("%s head forced to zero "
+				"ctl %08x head %08x tail %08x start %08x\n",
+				ring->name,
+				I915_READ(ring->regs.ctl),
+				I915_READ(ring->regs.head),
+				I915_READ(ring->regs.tail),
+				I915_READ(ring->regs.start));
+	}
+
+	I915_WRITE(ring->regs.ctl,
+			((ring->gem_object->size - PAGE_SIZE) & RING_NR_PAGES)
+			| RING_NO_REPORT | RING_VALID);
+
+	head = I915_READ(ring->regs.head) & HEAD_ADDR;
+	/* If the head is still not zero, the ring is dead */
+	if (head != 0) {
+		DRM_ERROR("%s initialization failed "
+				"ctl %08x head %08x tail %08x start %08x\n",
+				ring->name,
+				I915_READ(ring->regs.ctl),
+				I915_READ(ring->regs.head),
+				I915_READ(ring->regs.tail),
+				I915_READ(ring->regs.start));
+		return -EIO;
+	}
+
+	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+		i915_kernel_lost_context(dev);
+	else {
+		ring->head = ring->get_head(dev, ring);
+		ring->tail = ring->get_tail(dev, ring);
+		ring->space = ring->head - (ring->tail + 8);
+		if (ring->space < 0)
+			ring->space += ring->size;
+	}
+	return 0;
+}
+
+static int init_render_ring(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	int ret = init_ring_common(dev, ring);
+	if (IS_I9XX(dev) && !IS_GEN3(dev)) {
+		I915_WRITE(MI_MODE,
+				(VS_TIMER_DISPATCH) << 16 | VS_TIMER_DISPATCH);
+	}
+	return ret;
+}
+
+#define PIPE_CONTROL_FLUSH(addr)					\
+do {									\
+	OUT_RING(GFX_OP_PIPE_CONTROL | PIPE_CONTROL_QW_WRITE |		\
+		 PIPE_CONTROL_DEPTH_STALL | 2);				\
+	OUT_RING(addr | PIPE_CONTROL_GLOBAL_GTT);			\
+	OUT_RING(0);							\
+	OUT_RING(0);							\
+} while (0)
+
+/**
+ * Creates a new sequence number, emitting a write of it to the status page
+ * plus an interrupt, which will trigger i915_user_interrupt_handler.
+ *
+ * Must be called with struct_lock held.
+ *
+ * Returned sequence numbers are nonzero on success.
+ */
+static u32
+render_ring_add_request(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		struct drm_file *file_priv,
+		u32 flush_domains)
+{
+	u32 seqno;
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	seqno = intel_ring_get_seqno(dev, ring);
+
+	if (IS_GEN6(dev)) {
+		BEGIN_LP_RING(6);
+		OUT_RING(GFX_OP_PIPE_CONTROL | 3);
+		OUT_RING(PIPE_CONTROL_QW_WRITE |
+			 PIPE_CONTROL_WC_FLUSH | PIPE_CONTROL_IS_FLUSH |
+			 PIPE_CONTROL_NOTIFY);
+		OUT_RING(dev_priv->seqno_gfx_addr | PIPE_CONTROL_GLOBAL_GTT);
+		OUT_RING(seqno);
+		OUT_RING(0);
+		OUT_RING(0);
+		ADVANCE_LP_RING();
+	} else if (HAS_PIPE_CONTROL(dev)) {
+		u32 scratch_addr = dev_priv->seqno_gfx_addr + 128;
+
+		/*
+		 * Workaround qword write incoherence by flushing the
+		 * PIPE_NOTIFY buffers out to memory before requesting
+		 * an interrupt.
+		 */
+		BEGIN_LP_RING(32);
+		OUT_RING(GFX_OP_PIPE_CONTROL | PIPE_CONTROL_QW_WRITE |
+			 PIPE_CONTROL_WC_FLUSH | PIPE_CONTROL_TC_FLUSH);
+		OUT_RING(dev_priv->seqno_gfx_addr | PIPE_CONTROL_GLOBAL_GTT);
+		OUT_RING(seqno);
+		OUT_RING(0);
+		PIPE_CONTROL_FLUSH(scratch_addr);
+		scratch_addr += 128; /* write to separate cachelines */
+		PIPE_CONTROL_FLUSH(scratch_addr);
+		scratch_addr += 128;
+		PIPE_CONTROL_FLUSH(scratch_addr);
+		scratch_addr += 128;
+		PIPE_CONTROL_FLUSH(scratch_addr);
+		scratch_addr += 128;
+		PIPE_CONTROL_FLUSH(scratch_addr);
+		scratch_addr += 128;
+		PIPE_CONTROL_FLUSH(scratch_addr);
+		OUT_RING(GFX_OP_PIPE_CONTROL | PIPE_CONTROL_QW_WRITE |
+			 PIPE_CONTROL_WC_FLUSH | PIPE_CONTROL_TC_FLUSH |
+			 PIPE_CONTROL_NOTIFY);
+		OUT_RING(dev_priv->seqno_gfx_addr | PIPE_CONTROL_GLOBAL_GTT);
+		OUT_RING(seqno);
+		OUT_RING(0);
+		ADVANCE_LP_RING();
+	} else {
+		BEGIN_LP_RING(4);
+		OUT_RING(MI_STORE_DWORD_INDEX);
+		OUT_RING(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+		OUT_RING(seqno);
+
+		OUT_RING(MI_USER_INTERRUPT);
+		ADVANCE_LP_RING();
+	}
+	return seqno;
+}
+
+static u32
+render_ring_get_gem_seqno(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
+	if (HAS_PIPE_CONTROL(dev))
+		return ((volatile u32 *)(dev_priv->seqno_page))[0];
+	else
+		return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+}
+
+static void
+render_ring_get_user_irq(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&dev_priv->user_irq_lock, irqflags);
+	if (dev->irq_enabled && (++ring->user_irq_refcount == 1)) {
+		if (HAS_PCH_SPLIT(dev))
+			ironlake_enable_graphics_irq(dev_priv, GT_PIPE_NOTIFY);
+		else
+			i915_enable_irq(dev_priv, I915_USER_INTERRUPT);
+	}
+	spin_unlock_irqrestore(&dev_priv->user_irq_lock, irqflags);
+}
+
+static void
+render_ring_put_user_irq(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
+	unsigned long irqflags;
+
+	spin_lock_irqsave(&dev_priv->user_irq_lock, irqflags);
+	BUG_ON(dev->irq_enabled && ring->user_irq_refcount <= 0);
+	if (dev->irq_enabled && (--ring->user_irq_refcount == 0)) {
+		if (HAS_PCH_SPLIT(dev))
+			ironlake_disable_graphics_irq(dev_priv, GT_PIPE_NOTIFY);
+		else
+			i915_disable_irq(dev_priv, I915_USER_INTERRUPT);
+	}
+	spin_unlock_irqrestore(&dev_priv->user_irq_lock, irqflags);
+}
+
+static void render_setup_status_page(struct drm_device *dev,
+	struct	intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	if (IS_GEN6(dev)) {
+		I915_WRITE(HWS_PGA_GEN6, ring->status_page.gfx_addr);
+		I915_READ(HWS_PGA_GEN6); /* posting read */
+	} else {
+		I915_WRITE(HWS_PGA, ring->status_page.gfx_addr);
+		I915_READ(HWS_PGA); /* posting read */
+	}
+
+}
+
+void
+bsd_ring_flush(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		u32     invalidate_domains,
+		u32     flush_domains)
+{
+	intel_ring_begin(dev, ring, 8);
+	intel_ring_emit(dev, ring, MI_FLUSH);
+	intel_ring_emit(dev, ring, MI_NOOP);
+	intel_ring_advance(dev, ring);
+}
+
+static inline unsigned int bsd_ring_get_head(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	return I915_READ(BSD_RING_HEAD) & HEAD_ADDR;
+}
+
+static inline unsigned int bsd_ring_get_tail(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	return I915_READ(BSD_RING_TAIL) & TAIL_ADDR;
+}
+
+static inline unsigned int bsd_ring_get_active_head(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	return I915_READ(BSD_RING_ACTHD);
+}
+
+static inline void bsd_ring_advance_ring(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	I915_WRITE(BSD_RING_TAIL, ring->tail);
+}
+
+static int init_bsd_ring(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	return init_ring_common(dev, ring);
+}
+
+static u32
+bsd_ring_add_request(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		struct drm_file *file_priv,
+		u32 flush_domains)
+{
+	u32 seqno;
+	seqno = intel_ring_get_seqno(dev, ring);
+	intel_ring_begin(dev, ring, 4);
+	intel_ring_emit(dev, ring, MI_STORE_DWORD_INDEX);
+	intel_ring_emit(dev, ring,
+			I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+	intel_ring_emit(dev, ring, seqno);
+	intel_ring_emit(dev, ring, MI_USER_INTERRUPT);
+	intel_ring_advance(dev, ring);
+
+	DRM_DEBUG_DRIVER("%s %d\n", ring->name, seqno);
+
+	return seqno;
+}
+
+static void bsd_setup_status_page(struct drm_device *dev,
+		struct  intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	I915_WRITE(BSD_HWS_PGA, ring->status_page.gfx_addr);
+	I915_READ(BSD_HWS_PGA);
+}
+
+static void
+bsd_ring_get_user_irq(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	/* do nothing */
+}
+static void
+bsd_ring_put_user_irq(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	/* do nothing */
+}
+
+static u32
+bsd_ring_get_gem_seqno(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+}
+
+static int
+bsd_ring_dispatch_gem_execbuffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		struct drm_i915_gem_execbuffer2 *exec,
+		struct drm_clip_rect *cliprects,
+		uint64_t exec_offset)
+{
+	uint32_t exec_start;
+	exec_start = (uint32_t) exec_offset + exec->batch_start_offset;
+	intel_ring_begin(dev, ring, 2);
+	intel_ring_emit(dev, ring, MI_BATCH_BUFFER_START |
+			(2 << 6) | MI_BATCH_NON_SECURE_I965);
+	intel_ring_emit(dev, ring, exec_start);
+	intel_ring_advance(dev, ring);
+	return 0;
+}
+
+
+static int
+render_ring_dispatch_gem_execbuffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		struct drm_i915_gem_execbuffer2 *exec,
+		struct drm_clip_rect *cliprects,
+		uint64_t exec_offset)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	int nbox = exec->num_cliprects;
+	int i = 0, count;
+	uint32_t exec_start, exec_len;
+	exec_start = (uint32_t) exec_offset + exec->batch_start_offset;
+	exec_len = (uint32_t) exec->batch_len;
+
+	trace_i915_gem_request_submit(dev, dev_priv->mm.next_gem_seqno + 1);
+
+	count = nbox ? nbox : 1;
+
+	for (i = 0; i < count; i++) {
+		if (i < nbox) {
+			int ret = i915_emit_box(dev, cliprects, i,
+						exec->DR1, exec->DR4);
+			if (ret)
+				return ret;
+		}
+
+		if (IS_I830(dev) || IS_845G(dev)) {
+			intel_ring_begin(dev, ring, 4);
+			intel_ring_emit(dev, ring, MI_BATCH_BUFFER);
+			intel_ring_emit(dev, ring,
+					exec_start | MI_BATCH_NON_SECURE);
+			intel_ring_emit(dev, ring, exec_start + exec_len - 4);
+			intel_ring_emit(dev, ring, 0);
+		} else {
+			intel_ring_begin(dev, ring, 4);
+			if (IS_I965G(dev)) {
+				intel_ring_emit(dev, ring,
+						MI_BATCH_BUFFER_START | (2 << 6)
+						| MI_BATCH_NON_SECURE_I965);
+				intel_ring_emit(dev, ring, exec_start);
+			} else {
+				intel_ring_emit(dev, ring, MI_BATCH_BUFFER_START
+						| (2 << 6));
+				intel_ring_emit(dev, ring, exec_start |
+						MI_BATCH_NON_SECURE);
+			}
+		}
+		intel_ring_advance(dev, ring);
+	}
+
+	/* XXX breadcrumb */
+	return 0;
+}
+
+static void cleanup_status_page(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	struct drm_gem_object *obj;
+	struct drm_i915_gem_object *obj_priv;
+
+	obj = ring->status_page.obj;
+	if (obj == NULL)
+		return;
+	obj_priv = to_intel_bo(obj);
+
+	kunmap(obj_priv->pages[0]);
+	i915_gem_object_unpin(obj);
+	drm_gem_object_unreference(obj);
+	ring->status_page.obj = NULL;
+
+	memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map));
+}
+
+static int init_status_page(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	drm_i915_private_t *dev_priv = dev->dev_private;
+	struct drm_gem_object *obj;
+	struct drm_i915_gem_object *obj_priv;
+	int ret;
+
+	obj = i915_gem_alloc_object(dev, 4096);
+	if (obj == NULL) {
+		DRM_ERROR("Failed to allocate status page\n");
+		ret = -ENOMEM;
+		goto err;
+	}
+	obj_priv = to_intel_bo(obj);
+	obj_priv->agp_type = AGP_USER_CACHED_MEMORY;
+
+	ret = i915_gem_object_pin(obj, 4096);
+	if (ret != 0) {
+		goto err_unref;
+	}
+
+	ring->status_page.gfx_addr = obj_priv->gtt_offset;
+	ring->status_page.page_addr = kmap(obj_priv->pages[0]);
+	if (ring->status_page.page_addr == NULL) {
+		memset(&dev_priv->hws_map, 0, sizeof(dev_priv->hws_map));
+		ret = -ENOMEM;
+		goto err_unpin;
+	}
+	ring->status_page.obj = obj;
+	memset(ring->status_page.page_addr, 0, PAGE_SIZE);
+
+	ring->setup_status_page(dev, ring);
+	DRM_DEBUG_DRIVER("%s hws offset: 0x%08x\n",
+			ring->name, ring->status_page.gfx_addr);
+
+	return 0;
+
+err_unpin:
+	i915_gem_object_unpin(obj);
+err_unref:
+	drm_gem_object_unreference(obj);
+err:
+	return ret;
+}
+
+
+int intel_init_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	int ret;
+	struct drm_i915_gem_object *obj_priv;
+	struct drm_gem_object *obj;
+	ring->dev = dev;
+
+	if (I915_NEED_GFX_HWS(dev)) {
+		ret = init_status_page(dev, ring);
+		if (ret)
+			return ret;
+	}
+
+	obj = i915_gem_alloc_object(dev, ring->size);
+	if (obj == NULL) {
+		DRM_ERROR("Failed to allocate ringbuffer\n");
+		ret = -ENOMEM;
+		goto cleanup;
+	}
+
+	ring->gem_object = obj;
+
+	ret = i915_gem_object_pin(obj, ring->alignment);
+	if (ret != 0) {
+		drm_gem_object_unreference(obj);
+		goto cleanup;
+	}
+
+	obj_priv = to_intel_bo(obj);
+	ring->map.size = ring->size;
+	ring->map.offset = dev->agp->base + obj_priv->gtt_offset;
+	ring->map.type = 0;
+	ring->map.flags = 0;
+	ring->map.mtrr = 0;
+
+	drm_core_ioremap_wc(&ring->map, dev);
+	if (ring->map.handle == NULL) {
+		DRM_ERROR("Failed to map ringbuffer.\n");
+		i915_gem_object_unpin(obj);
+		drm_gem_object_unreference(obj);
+		ret = -EINVAL;
+		goto cleanup;
+	}
+
+	ring->virtual_start = ring->map.handle;
+	ret = ring->init(dev, ring);
+	if (ret != 0) {
+		intel_cleanup_ring_buffer(dev, ring);
+		return ret;
+	}
+
+	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+		i915_kernel_lost_context(dev);
+	else {
+		ring->head = ring->get_head(dev, ring);
+		ring->tail = ring->get_tail(dev, ring);
+		ring->space = ring->head - (ring->tail + 8);
+		if (ring->space < 0)
+			ring->space += ring->size;
+	}
+	INIT_LIST_HEAD(&ring->active_list);
+	INIT_LIST_HEAD(&ring->request_list);
+	return ret;
+cleanup:
+	cleanup_status_page(dev, ring);
+	return ret;
+}
+
+void intel_cleanup_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	if (ring->gem_object == NULL)
+		return;
+
+	drm_core_ioremapfree(&ring->map, dev);
+
+	i915_gem_object_unpin(ring->gem_object);
+	drm_gem_object_unreference(ring->gem_object);
+	ring->gem_object = NULL;
+	cleanup_status_page(dev, ring);
+}
+
+int intel_wrap_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	unsigned int *virt;
+	int rem;
+	rem = ring->size - ring->tail;
+
+	if (ring->space < rem) {
+		int ret = intel_wait_ring_buffer(dev, ring, rem);
+		if (ret)
+			return ret;
+	}
+
+	virt = (unsigned int *)(ring->virtual_start + ring->tail);
+	rem /= 4;
+	while (rem--)
+		*virt++ = MI_NOOP;
+
+	ring->tail = 0;
+
+	return 0;
+}
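
The wrap path never lets a command straddle the end of the ring: the leftover bytes at the top are burned as MI_NOOPs and the tail restarts at offset 0. The free-space rule it depends on, shared with intel_wait_ring_buffer() below, is modular arithmetic with an 8-byte guard so the tail can never catch the head. A standalone restatement of that arithmetic (a sketch for illustration, not a helper this patch adds):

    /* Bytes available for writing between tail and head. */
    static int ring_space(unsigned int head, unsigned int tail,
                          unsigned long size)
    {
            int space = head - (tail + 8);
            if (space < 0)
                    space += size;  /* head is behind the tail, wrap */
            return space;
    }
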
+
+int intel_wait_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring, int n)
+{
+	unsigned long end;
+
+	trace_i915_ring_wait_begin(dev);
+	end = jiffies + 3 * HZ;
+	do {
+		ring->head = ring->get_head(dev, ring);
+		ring->space = ring->head - (ring->tail + 8);
+		if (ring->space < 0)
+			ring->space += ring->size;
+		if (ring->space >= n) {
+			trace_i915_ring_wait_end(dev);
+			return 0;
+		}
+
+		if (dev->primary->master) {
+			struct drm_i915_master_private *master_priv = dev->primary->master->driver_priv;
+			if (master_priv->sarea_priv)
+				master_priv->sarea_priv->perf_boxes |= I915_BOX_WAIT;
+		}
+
+		yield();
+	} while (!time_after(jiffies, end));
+	trace_i915_ring_wait_end(dev);
+	return -EBUSY;
+}
+
+void intel_ring_begin(struct drm_device *dev,
+		struct intel_ring_buffer *ring, int n)
+{
+	if (unlikely(ring->tail + n > ring->size))
+		intel_wrap_ring_buffer(dev, ring);
+	if (unlikely(ring->space < n))
+		intel_wait_ring_buffer(dev, ring, n);
+}
+
+void intel_ring_emit(struct drm_device *dev,
+		struct intel_ring_buffer *ring, unsigned int data)
+{
+	unsigned int *virt = ring->virtual_start + ring->tail;
+	*virt = data;
+	ring->tail += 4;
+	ring->tail &= ring->size - 1;
+	ring->space -= 4;
+}
+
+void intel_ring_advance(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	ring->advance_ring(dev, ring);
+}
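
intel_ring_begin/intel_ring_emit/intel_ring_advance form the emission protocol used by every caller in this file: begin reserves space (wrapping or waiting as needed), emit writes one dword and updates the cached tail (the mask is valid because the ring size is a power of two), and advance publishes the new tail to the hardware through the per-ring hook. A minimal usage sketch following the same calling convention as the dispatch functions above (the two commands are chosen only for familiarity):

    intel_ring_begin(dev, ring, 2);
    intel_ring_emit(dev, ring, MI_FLUSH);
    intel_ring_emit(dev, ring, MI_NOOP);
    intel_ring_advance(dev, ring);
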
+
+void intel_fill_struct(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		void *data,
+		unsigned int len)
+{
+	unsigned int *virt = ring->virtual_start + ring->tail;
+	BUG_ON((len & (4 - 1)) != 0);	/* len must be a whole number of dwords */
+	intel_ring_begin(dev, ring, len);
+	memcpy(virt, data, len);
+	ring->tail += len;
+	ring->tail &= ring->size - 1;
+	ring->space -= len;
+	intel_ring_advance(dev, ring);
+}
+
+u32 intel_ring_get_seqno(struct drm_device *dev,
+		struct intel_ring_buffer *ring)
+{
+	u32 seqno;
+	seqno = ring->next_seqno;
+
+	/* reserve 0 for non-seqno */
+	if (++ring->next_seqno == 0)
+		ring->next_seqno = 1;
+	return seqno;
+}
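
Seqnos are now handed out per ring, and 0 is skipped so a zero-initialized status page can never look like a completed request. A sketch of the wraparound behaviour, with the starting value assumed purely for illustration:

    ring->next_seqno = 0xffffffff;
    u32 a = intel_ring_get_seqno(dev, ring);  /* a == 0xffffffff */
    u32 b = intel_ring_get_seqno(dev, ring);  /* b == 1, 0 is reserved */
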
+
+struct intel_ring_buffer render_ring = {
+	.name			= "render ring",
+	.regs                   = {
+		.ctl = PRB0_CTL,
+		.head = PRB0_HEAD,
+		.tail = PRB0_TAIL,
+		.start = PRB0_START
+	},
+	.ring_flag		= I915_EXEC_RENDER,
+	.size			= 32 * PAGE_SIZE,
+	.alignment		= PAGE_SIZE,
+	.virtual_start		= NULL,
+	.dev			= NULL,
+	.gem_object		= NULL,
+	.head			= 0,
+	.tail			= 0,
+	.space			= 0,
+	.next_seqno		= 1,
+	.user_irq_refcount	= 0,
+	.irq_gem_seqno		= 0,
+	.waiting_gem_seqno	= 0,
+	.setup_status_page	= render_setup_status_page,
+	.init			= init_render_ring,
+	.get_head		= render_ring_get_head,
+	.get_tail		= render_ring_get_tail,
+	.get_active_head	= render_ring_get_active_head,
+	.advance_ring		= render_ring_advance_ring,
+	.flush			= render_ring_flush,
+	.add_request		= render_ring_add_request,
+	.get_gem_seqno		= render_ring_get_gem_seqno,
+	.user_irq_get		= render_ring_get_user_irq,
+	.user_irq_put		= render_ring_put_user_irq,
+	.dispatch_gem_execbuffer = render_ring_dispatch_gem_execbuffer,
+	.status_page		= {NULL, 0, NULL},
+	.map			= {0,}
+};
+
+/* ring buffer for bit-stream decoder */
+
+struct intel_ring_buffer bsd_ring = {
+	.name                   = "bsd ring",
+	.regs			= {
+		.ctl = BSD_RING_CTL,
+		.head = BSD_RING_HEAD,
+		.tail = BSD_RING_TAIL,
+		.start = BSD_RING_START
+	},
+	.ring_flag		= I915_EXEC_BSD,
+	.size			= 32 * PAGE_SIZE,
+	.alignment		= PAGE_SIZE,
+	.virtual_start		= NULL,
+	.dev			= NULL,
+	.gem_object		= NULL,
+	.head			= 0,
+	.tail			= 0,
+	.space			= 0,
+	.next_seqno		= 1,
+	.user_irq_refcount	= 0,
+	.irq_gem_seqno		= 0,
+	.waiting_gem_seqno	= 0,
+	.setup_status_page	= bsd_setup_status_page,
+	.init			= init_bsd_ring,
+	.get_head		= bsd_ring_get_head,
+	.get_tail		= bsd_ring_get_tail,
+	.get_active_head	= bsd_ring_get_active_head,
+	.advance_ring		= bsd_ring_advance_ring,
+	.flush			= bsd_ring_flush,
+	.add_request		= bsd_ring_add_request,
+	.get_gem_seqno		= bsd_ring_get_gem_seqno,
+	.user_irq_get		= bsd_ring_get_user_irq,
+	.user_irq_put		= bsd_ring_put_user_irq,
+	.dispatch_gem_execbuffer = bsd_ring_dispatch_gem_execbuffer,
+	.status_page		= {NULL, 0, NULL},
+	.map			= {0,}
+};
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
new file mode 100644
index 0000000..d5568d3
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -0,0 +1,124 @@
+#ifndef _INTEL_RINGBUFFER_H_
+#define _INTEL_RINGBUFFER_H_
+
+struct  intel_hw_status_page {
+	void		*page_addr;
+	unsigned int	gfx_addr;
+	struct		drm_gem_object *obj;
+};
+
+struct drm_i915_gem_execbuffer2;
+struct  intel_ring_buffer {
+	const char	*name;
+	struct		ring_regs {
+			u32 ctl;
+			u32 head;
+			u32 tail;
+			u32 start;
+	} regs;
+	unsigned int	ring_flag;
+	unsigned long	size;
+	unsigned int	alignment;
+	void		*virtual_start;
+	struct		drm_device *dev;
+	struct		drm_gem_object *gem_object;
+
+	unsigned int	head;
+	unsigned int	tail;
+	unsigned int	space;
+	u32		next_seqno;
+	struct intel_hw_status_page status_page;
+
+	u32		irq_gem_seqno;		/* last seqno seen at irq time */
+	u32		waiting_gem_seqno;
+	int		user_irq_refcount;
+	void		(*user_irq_get)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	void		(*user_irq_put)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	void		(*setup_status_page)(struct drm_device *dev,
+			struct	intel_ring_buffer *ring);
+
+	int		(*init)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+
+	unsigned int	(*get_head)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	unsigned int	(*get_tail)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	unsigned int	(*get_active_head)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	void		(*advance_ring)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	void		(*flush)(struct drm_device *dev,
+			struct intel_ring_buffer *ring,
+			u32	invalidate_domains,
+			u32	flush_domains);
+	u32		(*add_request)(struct drm_device *dev,
+			struct intel_ring_buffer *ring,
+			struct drm_file *file_priv,
+			u32 flush_domains);
+	u32		(*get_gem_seqno)(struct drm_device *dev,
+			struct intel_ring_buffer *ring);
+	int		(*dispatch_gem_execbuffer)(struct drm_device *dev,
+			struct intel_ring_buffer *ring,
+			struct drm_i915_gem_execbuffer2 *exec,
+			struct drm_clip_rect *cliprects,
+			uint64_t exec_offset);
+
+	/**
+	 * List of objects currently involved in rendering from the
+	 * ringbuffer.
+	 *
+	 * Includes buffers having the contents of their GPU caches
+	 * flushed, not necessarily primitives.  last_rendering_seqno
+	 * represents when the rendering involved will be completed.
+	 *
+	 * A reference is held on the buffer while on this list.
+	 */
+	struct list_head active_list;
+
+	/**
+	 * List of breadcrumbs associated with GPU requests currently
+	 * outstanding.
+	 */
+	struct list_head request_list;
+
+	wait_queue_head_t irq_queue;
+	drm_local_map_t map;
+};
+
+static inline u32
+intel_read_status_page(struct intel_ring_buffer *ring,
+		int reg)
+{
+	u32 *regs = ring->status_page.page_addr;
+	return regs[reg];
+}
+
+int intel_init_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring);
+void intel_cleanup_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring);
+int intel_wait_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring, int n);
+int intel_wrap_ring_buffer(struct drm_device *dev,
+		struct intel_ring_buffer *ring);
+void intel_ring_begin(struct drm_device *dev,
+		struct intel_ring_buffer *ring, int n);
+void intel_ring_emit(struct drm_device *dev,
+		struct intel_ring_buffer *ring, u32 data);
+void intel_fill_struct(struct drm_device *dev,
+		struct intel_ring_buffer *ring,
+		void *data,
+		unsigned int len);
+void intel_ring_advance(struct drm_device *dev,
+		struct intel_ring_buffer *ring);
+
+u32 intel_ring_get_seqno(struct drm_device *dev,
+		struct intel_ring_buffer *ring);
+
+extern struct intel_ring_buffer render_ring;
+extern struct intel_ring_buffer bsd_ring;
+
+#endif /* _INTEL_RINGBUFFER_H_ */
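
A driver-side sketch of how the pieces combine: request completion is detected by comparing a request's seqno against the breadcrumb the ring's add_request hook writes into the status page. The slot index and the wrap-safe comparison below are assumptions carried over from the rest of the patch (I915_GEM_HWS_INDEX and i915_seqno_passed() in i915_drv.h); this header itself defines neither:

    static bool request_done(struct intel_ring_buffer *ring, u32 seqno)
    {
            u32 hw = intel_read_status_page(ring, I915_GEM_HWS_INDEX);
            return (s32)(hw - seqno) >= 0;  /* wrap-safe "hw passed seqno" */
    }
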
diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
index 87d9536..76993ac 100644
--- a/drivers/gpu/drm/i915/intel_sdvo.c
+++ b/drivers/gpu/drm/i915/intel_sdvo.c
@@ -36,7 +36,18 @@
 #include "i915_drm.h"
 #include "i915_drv.h"
 #include "intel_sdvo_regs.h"
-#include <linux/dmi.h>
+
+#define SDVO_TMDS_MASK (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1)
+#define SDVO_RGB_MASK  (SDVO_OUTPUT_RGB0 | SDVO_OUTPUT_RGB1)
+#define SDVO_LVDS_MASK (SDVO_OUTPUT_LVDS0 | SDVO_OUTPUT_LVDS1)
+#define SDVO_TV_MASK   (SDVO_OUTPUT_CVBS0 | SDVO_OUTPUT_SVID0)
+
+#define SDVO_OUTPUT_MASK (SDVO_TMDS_MASK | SDVO_RGB_MASK | SDVO_LVDS_MASK |\
+                         SDVO_TV_MASK)
+
+#define IS_TV(c)	((c)->output_flag & SDVO_TV_MASK)
+#define IS_LVDS(c)	((c)->output_flag & SDVO_LVDS_MASK)
+
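These masks let the connector code classify an output purely by its output_flag, replacing the old per-encoder is_tv/is_lvds checks in the paths converted below. A sketch of how a detect response is routed (the flag value is illustrative):

    uint16_t flag = SDVO_OUTPUT_TMDS0;  /* e.g. from GET_ATTACHED_DISPLAYS */

    if (flag & SDVO_TMDS_MASK)
            ;  /* DVI/HDMI handling */
    else if (flag & SDVO_TV_MASK)
            ;  /* S-Video/composite handling */
    else if (flag & SDVO_LVDS_MASK)
            ;  /* LVDS handling */
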
 
 static char *tv_format_names[] = {
 	"NTSC_M"   , "NTSC_J"  , "NTSC_443",
@@ -86,12 +97,6 @@ struct intel_sdvo_priv {
 	/* This is for current tv format name */
 	char *tv_format_name;
 
-	/* This contains all current supported TV format */
-	char *tv_format_supported[TV_FORMAT_NUM];
-	int   format_supported_num;
-	struct drm_property *tv_format_property;
-	struct drm_property *tv_format_name_property[TV_FORMAT_NUM];
-
 	/**
 	 * This is set if we treat the device as HDMI, instead of DVI.
 	 */
@@ -112,12 +117,6 @@ struct intel_sdvo_priv {
 	 */
 	struct drm_display_mode *sdvo_lvds_fixed_mode;
 
-	/**
-	 * Returned SDTV resolutions allowed for the current format, if the
-	 * device reported it.
-	 */
-	struct intel_sdvo_sdtv_resolution_reply sdtv_resolutions;
-
 	/*
 	 * supported encoding mode, used to determine whether HDMI is
 	 * supported
@@ -130,11 +129,24 @@ struct intel_sdvo_priv {
 	/* Mac mini hack -- use the same DDC as the analog connector */
 	struct i2c_adapter *analog_ddc_bus;
 
-	int save_sdvo_mult;
-	u16 save_active_outputs;
-	struct intel_sdvo_dtd save_input_dtd_1, save_input_dtd_2;
-	struct intel_sdvo_dtd save_output_dtd[16];
-	u32 save_SDVOX;
+};
+
+struct intel_sdvo_connector {
+	/* Mark the type of connector */
+	uint16_t output_flag;
+
+	/* This contains all current supported TV format */
+	char *tv_format_supported[TV_FORMAT_NUM];
+	int   format_supported_num;
+	struct drm_property *tv_format_property;
+	struct drm_property *tv_format_name_property[TV_FORMAT_NUM];
+
+	/**
+	 * Returned SDTV resolutions allowed for the current format, if the
+	 * device reported it.
+	 */
+	struct intel_sdvo_sdtv_resolution_reply sdtv_resolutions;
+
 	/* add the property for the SDVO-TV */
 	struct drm_property *left_property;
 	struct drm_property *right_property;
@@ -162,7 +174,12 @@ struct intel_sdvo_priv {
 };
 
 static bool
-intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags);
+intel_sdvo_output_setup(struct intel_encoder *intel_encoder,
+			uint16_t flags);
+static void
+intel_sdvo_tv_create_property(struct drm_connector *connector, int type);
+static void
+intel_sdvo_create_enhance_property(struct drm_connector *connector);
 
 /**
  * Writes the SDVOB or SDVOC with the given value, but always writes both
@@ -171,12 +188,18 @@ intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags);
  */
 static void intel_sdvo_write_sdvox(struct intel_encoder *intel_encoder, u32 val)
 {
-	struct drm_device *dev = intel_encoder->base.dev;
+	struct drm_device *dev = intel_encoder->enc.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_sdvo_priv   *sdvo_priv = intel_encoder->dev_priv;
 	u32 bval = val, cval = val;
 	int i;
 
+	if (sdvo_priv->sdvo_reg == PCH_SDVOB) {
+		I915_WRITE(sdvo_priv->sdvo_reg, val);
+		I915_READ(sdvo_priv->sdvo_reg);
+		return;
+	}
+
 	if (sdvo_priv->sdvo_reg == SDVOB) {
 		cval = I915_READ(SDVOC);
 	} else {
@@ -353,7 +376,8 @@ static const struct _sdvo_cmd_name {
     SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_DATA),
 };
 
-#define SDVO_NAME(dev_priv) ((dev_priv)->sdvo_reg == SDVOB ? "SDVOB" : "SDVOC")
+#define IS_SDVOB(reg)	((reg) == SDVOB || (reg) == PCH_SDVOB)
+#define SDVO_NAME(dev_priv) (IS_SDVOB((dev_priv)->sdvo_reg) ? "SDVOB" : "SDVOC")
 #define SDVO_PRIV(encoder)   ((struct intel_sdvo_priv *) (encoder)->dev_priv)
 
 static void intel_sdvo_debug_write(struct intel_encoder *intel_encoder, u8 cmd,
@@ -563,17 +587,6 @@ static bool intel_sdvo_get_trained_inputs(struct intel_encoder *intel_encoder, b
 	return true;
 }
 
-static bool intel_sdvo_get_active_outputs(struct intel_encoder *intel_encoder,
-					  u16 *outputs)
-{
-	u8 status;
-
-	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_ACTIVE_OUTPUTS, NULL, 0);
-	status = intel_sdvo_read_response(intel_encoder, outputs, sizeof(*outputs));
-
-	return (status == SDVO_CMD_STATUS_SUCCESS);
-}
-
 static bool intel_sdvo_set_active_outputs(struct intel_encoder *intel_encoder,
 					  u16 outputs)
 {
@@ -646,40 +659,6 @@ static bool intel_sdvo_set_target_output(struct intel_encoder *intel_encoder,
 	return (status == SDVO_CMD_STATUS_SUCCESS);
 }
 
-static bool intel_sdvo_get_timing(struct intel_encoder *intel_encoder, u8 cmd,
-				  struct intel_sdvo_dtd *dtd)
-{
-	u8 status;
-
-	intel_sdvo_write_cmd(intel_encoder, cmd, NULL, 0);
-	status = intel_sdvo_read_response(intel_encoder, &dtd->part1,
-					  sizeof(dtd->part1));
-	if (status != SDVO_CMD_STATUS_SUCCESS)
-		return false;
-
-	intel_sdvo_write_cmd(intel_encoder, cmd + 1, NULL, 0);
-	status = intel_sdvo_read_response(intel_encoder, &dtd->part2,
-					  sizeof(dtd->part2));
-	if (status != SDVO_CMD_STATUS_SUCCESS)
-		return false;
-
-	return true;
-}
-
-static bool intel_sdvo_get_input_timing(struct intel_encoder *intel_encoder,
-					 struct intel_sdvo_dtd *dtd)
-{
-	return intel_sdvo_get_timing(intel_encoder,
-				     SDVO_CMD_GET_INPUT_TIMINGS_PART1, dtd);
-}
-
-static bool intel_sdvo_get_output_timing(struct intel_encoder *intel_encoder,
-					 struct intel_sdvo_dtd *dtd)
-{
-	return intel_sdvo_get_timing(intel_encoder,
-				     SDVO_CMD_GET_OUTPUT_TIMINGS_PART1, dtd);
-}
-
 static bool intel_sdvo_set_timing(struct intel_encoder *intel_encoder, u8 cmd,
 				  struct intel_sdvo_dtd *dtd)
 {
@@ -767,23 +746,6 @@ static bool intel_sdvo_get_preferred_input_timing(struct intel_encoder *intel_en
 	return false;
 }
 
-static int intel_sdvo_get_clock_rate_mult(struct intel_encoder *intel_encoder)
-{
-	u8 response, status;
-
-	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_CLOCK_RATE_MULT, NULL, 0);
-	status = intel_sdvo_read_response(intel_encoder, &response, 1);
-
-	if (status != SDVO_CMD_STATUS_SUCCESS) {
-		DRM_DEBUG_KMS("Couldn't get SDVO clock rate multiplier\n");
-		return SDVO_CLOCK_RATE_MULT_1X;
-	} else {
-		DRM_DEBUG_KMS("Current clock rate multiplier: %d\n", response);
-	}
-
-	return response;
-}
-
 static bool intel_sdvo_set_clock_rate_mult(struct intel_encoder *intel_encoder, u8 val)
 {
 	u8 status;
@@ -1071,7 +1033,7 @@ static void intel_sdvo_set_tv_format(struct intel_encoder *intel_encoder)
 	memcpy(&format, &format_map, sizeof(format_map) > sizeof(format) ?
 			sizeof(format) : sizeof(format_map));
 
-	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_TV_FORMAT, &format_map,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_TV_FORMAT, &format,
 			     sizeof(format));
 
 	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
@@ -1101,7 +1063,7 @@ static bool intel_sdvo_mode_fixup(struct drm_encoder *encoder,
 		/* Set output timings */
 		intel_sdvo_get_dtd_from_mode(&output_dtd, mode);
 		intel_sdvo_set_target_output(intel_encoder,
-					     dev_priv->controlled_output);
+					     dev_priv->attached_output);
 		intel_sdvo_set_output_timing(intel_encoder, &output_dtd);
 
 		/* Set the input timing to the screen. Assume always input 0. */
@@ -1139,7 +1101,7 @@ static bool intel_sdvo_mode_fixup(struct drm_encoder *encoder,
 				dev_priv->sdvo_lvds_fixed_mode);
 
 		intel_sdvo_set_target_output(intel_encoder,
-					     dev_priv->controlled_output);
+					     dev_priv->attached_output);
 		intel_sdvo_set_output_timing(intel_encoder, &output_dtd);
 
 		/* Set the input timing to the screen. Assume always input 0. */
@@ -1204,7 +1166,7 @@ static void intel_sdvo_mode_set(struct drm_encoder *encoder,
 	 * channel on the motherboard.  In a two-input device, the first input
 	 * will be SDVOB and the second SDVOC.
 	 */
-	in_out.in0 = sdvo_priv->controlled_output;
+	in_out.in0 = sdvo_priv->attached_output;
 	in_out.in1 = 0;
 
 	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_IN_OUT_MAP,
@@ -1230,7 +1192,7 @@ static void intel_sdvo_mode_set(struct drm_encoder *encoder,
 	if (!sdvo_priv->is_tv && !sdvo_priv->is_lvds) {
 		/* Set the output timing to the screen */
 		intel_sdvo_set_target_output(intel_encoder,
-					     sdvo_priv->controlled_output);
+					     sdvo_priv->attached_output);
 		intel_sdvo_set_output_timing(intel_encoder, &input_dtd);
 	}
 
@@ -1352,107 +1314,16 @@ static void intel_sdvo_dpms(struct drm_encoder *encoder, int mode)
 
 		if (0)
 			intel_sdvo_set_encoder_power_state(intel_encoder, mode);
-		intel_sdvo_set_active_outputs(intel_encoder, sdvo_priv->controlled_output);
+		intel_sdvo_set_active_outputs(intel_encoder, sdvo_priv->attached_output);
 	}
 	return;
 }
 
-static void intel_sdvo_save(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
-	int o;
-
-	sdvo_priv->save_sdvo_mult = intel_sdvo_get_clock_rate_mult(intel_encoder);
-	intel_sdvo_get_active_outputs(intel_encoder, &sdvo_priv->save_active_outputs);
-
-	if (sdvo_priv->caps.sdvo_inputs_mask & 0x1) {
-		intel_sdvo_set_target_input(intel_encoder, true, false);
-		intel_sdvo_get_input_timing(intel_encoder,
-					    &sdvo_priv->save_input_dtd_1);
-	}
-
-	if (sdvo_priv->caps.sdvo_inputs_mask & 0x2) {
-		intel_sdvo_set_target_input(intel_encoder, false, true);
-		intel_sdvo_get_input_timing(intel_encoder,
-					    &sdvo_priv->save_input_dtd_2);
-	}
-
-	for (o = SDVO_OUTPUT_FIRST; o <= SDVO_OUTPUT_LAST; o++)
-	{
-	        u16  this_output = (1 << o);
-		if (sdvo_priv->caps.output_flags & this_output)
-		{
-			intel_sdvo_set_target_output(intel_encoder, this_output);
-			intel_sdvo_get_output_timing(intel_encoder,
-						     &sdvo_priv->save_output_dtd[o]);
-		}
-	}
-	if (sdvo_priv->is_tv) {
-		/* XXX: Save TV format/enhancements. */
-	}
-
-	sdvo_priv->save_SDVOX = I915_READ(sdvo_priv->sdvo_reg);
-}
-
-static void intel_sdvo_restore(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
-	int o;
-	int i;
-	bool input1, input2;
-	u8 status;
-
-	intel_sdvo_set_active_outputs(intel_encoder, 0);
-
-	for (o = SDVO_OUTPUT_FIRST; o <= SDVO_OUTPUT_LAST; o++)
-	{
-		u16  this_output = (1 << o);
-		if (sdvo_priv->caps.output_flags & this_output) {
-			intel_sdvo_set_target_output(intel_encoder, this_output);
-			intel_sdvo_set_output_timing(intel_encoder, &sdvo_priv->save_output_dtd[o]);
-		}
-	}
-
-	if (sdvo_priv->caps.sdvo_inputs_mask & 0x1) {
-		intel_sdvo_set_target_input(intel_encoder, true, false);
-		intel_sdvo_set_input_timing(intel_encoder, &sdvo_priv->save_input_dtd_1);
-	}
-
-	if (sdvo_priv->caps.sdvo_inputs_mask & 0x2) {
-		intel_sdvo_set_target_input(intel_encoder, false, true);
-		intel_sdvo_set_input_timing(intel_encoder, &sdvo_priv->save_input_dtd_2);
-	}
-
-	intel_sdvo_set_clock_rate_mult(intel_encoder, sdvo_priv->save_sdvo_mult);
-
-	if (sdvo_priv->is_tv) {
-		/* XXX: Restore TV format/enhancements. */
-	}
-
-	intel_sdvo_write_sdvox(intel_encoder, sdvo_priv->save_SDVOX);
-
-	if (sdvo_priv->save_SDVOX & SDVO_ENABLE)
-	{
-		for (i = 0; i < 2; i++)
-			intel_wait_for_vblank(dev);
-		status = intel_sdvo_get_trained_inputs(intel_encoder, &input1, &input2);
-		if (status == SDVO_CMD_STATUS_SUCCESS && !input1)
-			DRM_DEBUG_KMS("First %s output reported failure to "
-					"sync\n", SDVO_NAME(sdvo_priv));
-	}
-
-	intel_sdvo_set_active_outputs(intel_encoder, sdvo_priv->save_active_outputs);
-}
-
 static int intel_sdvo_mode_valid(struct drm_connector *connector,
 				 struct drm_display_mode *mode)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
 
 	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
@@ -1490,6 +1361,8 @@ static bool intel_sdvo_get_capabilities(struct intel_encoder *intel_encoder, str
 	return true;
 }
 
+/* Currently unused; kept under #if 0. */
+#if 0
 struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB)
 {
 	struct drm_connector *connector = NULL;
@@ -1560,6 +1433,7 @@ void intel_sdvo_set_hotplug(struct drm_connector *connector, int on)
 	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0);
 	intel_sdvo_read_response(intel_encoder, &response, 2);
 }
+#endif
 
 static bool
 intel_sdvo_multifunc_encoder(struct intel_encoder *intel_encoder)
@@ -1598,12 +1472,17 @@ static struct drm_connector *
 intel_find_analog_connector(struct drm_device *dev)
 {
 	struct drm_connector *connector;
+	struct drm_encoder *encoder;
 	struct intel_encoder *intel_encoder;
 
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		intel_encoder = to_intel_encoder(connector);
-		if (intel_encoder->type == INTEL_OUTPUT_ANALOG)
-			return connector;
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		intel_encoder = enc_to_intel_encoder(encoder);
+		if (intel_encoder->type == INTEL_OUTPUT_ANALOG) {
+			list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+				if (encoder == intel_attached_encoder(connector))
+					return connector;
+			}
+		}
 	}
 	return NULL;
 }
@@ -1625,15 +1504,17 @@ intel_analog_is_connected(struct drm_device *dev)
 }
 
 enum drm_connector_status
-intel_sdvo_hdmi_sink_detect(struct drm_connector *connector, u16 response)
+intel_sdvo_hdmi_sink_detect(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_connector = intel_connector->dev_priv;
 	enum drm_connector_status status = connector_status_connected;
 	struct edid *edid = NULL;
 
-	edid = drm_get_edid(&intel_encoder->base,
-			    intel_encoder->ddc_bus);
+	edid = drm_get_edid(connector, intel_encoder->ddc_bus);
 
 	/* This is only applied to SDVO cards with multiple outputs */
 	if (edid == NULL && intel_sdvo_multifunc_encoder(intel_encoder)) {
@@ -1646,8 +1527,7 @@ intel_sdvo_hdmi_sink_detect(struct drm_connector *connector, u16 response)
 		 */
 		while(temp_ddc > 1) {
 			sdvo_priv->ddc_bus = temp_ddc;
-			edid = drm_get_edid(&intel_encoder->base,
-				intel_encoder->ddc_bus);
+			edid = drm_get_edid(connector, intel_encoder->ddc_bus);
 			if (edid) {
 				/*
 				 * When we can get the EDID, maybe it is the
@@ -1664,28 +1544,25 @@ intel_sdvo_hdmi_sink_detect(struct drm_connector *connector, u16 response)
 	/* when there is no edid and no monitor is connected with VGA
 	 * port, try to use the CRT ddc to read the EDID for DVI-connector
 	 */
-	if (edid == NULL &&
-	    sdvo_priv->analog_ddc_bus &&
-	    !intel_analog_is_connected(intel_encoder->base.dev))
-		edid = drm_get_edid(&intel_encoder->base,
-				    sdvo_priv->analog_ddc_bus);
+	if (edid == NULL && sdvo_priv->analog_ddc_bus &&
+	    !intel_analog_is_connected(connector->dev))
+		edid = drm_get_edid(connector, sdvo_priv->analog_ddc_bus);
+
 	if (edid != NULL) {
-		/* Don't report the output as connected if it's a DVI-I
-		 * connector with a non-digital EDID coming out.
-		 */
-		if (response & (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1)) {
-			if (edid->input & DRM_EDID_INPUT_DIGITAL)
-				sdvo_priv->is_hdmi =
-					drm_detect_hdmi_monitor(edid);
-			else
-				status = connector_status_disconnected;
-		}
+		bool is_digital = !!(edid->input & DRM_EDID_INPUT_DIGITAL);
+		bool need_digital = !!(sdvo_connector->output_flag & SDVO_TMDS_MASK);
 
-		kfree(edid);
-		intel_encoder->base.display_info.raw_edid = NULL;
+		/* DDC bus is shared, match EDID to connector type */
+		if (is_digital && need_digital)
+			sdvo_priv->is_hdmi = drm_detect_hdmi_monitor(edid);
+		else if (is_digital != need_digital)
+			status = connector_status_disconnected;
 
-	} else if (response & (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1))
+		connector->display_info.raw_edid = NULL;
+	} else
 		status = connector_status_disconnected;
+
+	kfree(edid);
 
 	return status;
 }
@@ -1694,8 +1571,12 @@ static enum drm_connector_status intel_sdvo_detect(struct drm_connector *connect
 {
 	uint16_t response;
 	u8 status;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_connector *intel_connector = to_intel_connector(connector);
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct intel_sdvo_connector *sdvo_connector = intel_connector->dev_priv;
+	enum drm_connector_status ret;
 
 	intel_sdvo_write_cmd(intel_encoder,
 			     SDVO_CMD_GET_ATTACHED_DISPLAYS, NULL, 0);
@@ -1713,24 +1594,41 @@ static enum drm_connector_status intel_sdvo_detect(struct drm_connector *connect
 	if (response == 0)
 		return connector_status_disconnected;
 
-	if (intel_sdvo_multifunc_encoder(intel_encoder) &&
-		sdvo_priv->attached_output != response) {
-		if (sdvo_priv->controlled_output != response &&
-			intel_sdvo_output_setup(intel_encoder, response) != true)
-			return connector_status_unknown;
-		sdvo_priv->attached_output = response;
+	sdvo_priv->attached_output = response;
+
+	if ((sdvo_connector->output_flag & response) == 0)
+		ret = connector_status_disconnected;
+	else if (response & SDVO_TMDS_MASK)
+		ret = intel_sdvo_hdmi_sink_detect(connector);
+	else
+		ret = connector_status_connected;
+
+	/* May need to update encoder flags, e.g. the TV clock requirement for SDVO TV. */
+	if (ret == connector_status_connected) {
+		sdvo_priv->is_tv = false;
+		sdvo_priv->is_lvds = false;
+		intel_encoder->needs_tv_clock = false;
+
+		if (response & SDVO_TV_MASK) {
+			sdvo_priv->is_tv = true;
+			intel_encoder->needs_tv_clock = true;
+		}
+		if (response & SDVO_LVDS_MASK)
+			sdvo_priv->is_lvds = true;
 	}
-	return intel_sdvo_hdmi_sink_detect(connector, response);
+
+	return ret;
 }
 
 static void intel_sdvo_get_ddc_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
 	int num_modes;
 
 	/* set the bus switch and get the modes */
-	num_modes = intel_ddc_get_modes(intel_encoder);
+	num_modes = intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 
 	/*
 	 * Mac mini hack.  On this device, the DVI-I connector shares one DDC
@@ -1740,17 +1638,10 @@ static void intel_sdvo_get_ddc_modes(struct drm_connector *connector)
 	 */
 	if (num_modes == 0 &&
 	    sdvo_priv->analog_ddc_bus &&
-	    !intel_analog_is_connected(intel_encoder->base.dev)) {
-		struct i2c_adapter *digital_ddc_bus;
-
+	    !intel_analog_is_connected(connector->dev)) {
 		/* Switch to the analog ddc bus and try that
 		 */
-		digital_ddc_bus = intel_encoder->ddc_bus;
-		intel_encoder->ddc_bus = sdvo_priv->analog_ddc_bus;
-
-		(void) intel_ddc_get_modes(intel_encoder);
-
-		intel_encoder->ddc_bus = digital_ddc_bus;
+		(void) intel_ddc_get_modes(connector, sdvo_priv->analog_ddc_bus);
 	}
 }
 
@@ -1821,8 +1712,9 @@ struct drm_display_mode sdvo_tv_modes[] = {
 
 static void intel_sdvo_get_tv_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *output = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = output->dev_priv;
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
 	struct intel_sdvo_sdtv_resolution_request tv_res;
 	uint32_t reply = 0, format_map = 0;
 	int i;
@@ -1842,11 +1734,11 @@ static void intel_sdvo_get_tv_modes(struct drm_connector *connector)
 	       sizeof(format_map) ? sizeof(format_map) :
 	       sizeof(struct intel_sdvo_sdtv_resolution_request));
 
-	intel_sdvo_set_target_output(output, sdvo_priv->controlled_output);
+	intel_sdvo_set_target_output(intel_encoder, sdvo_priv->attached_output);
 
-	intel_sdvo_write_cmd(output, SDVO_CMD_GET_SDTV_RESOLUTION_SUPPORT,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_SDTV_RESOLUTION_SUPPORT,
 			     &tv_res, sizeof(tv_res));
-	status = intel_sdvo_read_response(output, &reply, 3);
+	status = intel_sdvo_read_response(intel_encoder, &reply, 3);
 	if (status != SDVO_CMD_STATUS_SUCCESS)
 		return;
 
@@ -1863,7 +1755,8 @@ static void intel_sdvo_get_tv_modes(struct drm_connector *connector)
 
 static void intel_sdvo_get_lvds_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct drm_i915_private *dev_priv = connector->dev->dev_private;
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
 	struct drm_display_mode *newmode;
@@ -1873,7 +1766,7 @@ static void intel_sdvo_get_lvds_modes(struct drm_connector *connector)
 	 * Assume that the preferred modes are
 	 * arranged in priority order.
 	 */
-	intel_ddc_get_modes(intel_encoder);
+	intel_ddc_get_modes(connector, intel_encoder->ddc_bus);
 	if (list_empty(&connector->probed_modes) == false)
 		goto end;
 
@@ -1902,12 +1795,12 @@ end:
 
 static int intel_sdvo_get_modes(struct drm_connector *connector)
 {
-	struct intel_encoder *output = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = output->dev_priv;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_connector = intel_connector->dev_priv;
 
-	if (sdvo_priv->is_tv)
+	if (IS_TV(sdvo_connector))
 		intel_sdvo_get_tv_modes(connector);
-	else if (sdvo_priv->is_lvds == true)
+	else if (IS_LVDS(sdvo_connector))
 		intel_sdvo_get_lvds_modes(connector);
 	else
 		intel_sdvo_get_ddc_modes(connector);
@@ -1920,11 +1813,11 @@ static int intel_sdvo_get_modes(struct drm_connector *connector)
 static
 void intel_sdvo_destroy_enhance_property(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_priv = intel_connector->dev_priv;
 	struct drm_device *dev = connector->dev;
 
-	if (sdvo_priv->is_tv) {
+	if (IS_TV(sdvo_priv)) {
 		if (sdvo_priv->left_property)
 			drm_property_destroy(dev, sdvo_priv->left_property);
 		if (sdvo_priv->right_property)
@@ -1937,8 +1830,6 @@ void intel_sdvo_destroy_enhance_property(struct drm_connector *connector)
 			drm_property_destroy(dev, sdvo_priv->hpos_property);
 		if (sdvo_priv->vpos_property)
 			drm_property_destroy(dev, sdvo_priv->vpos_property);
-	}
-	if (sdvo_priv->is_tv) {
 		if (sdvo_priv->saturation_property)
 			drm_property_destroy(dev,
 					sdvo_priv->saturation_property);
@@ -1948,7 +1839,7 @@ void intel_sdvo_destroy_enhance_property(struct drm_connector *connector)
 		if (sdvo_priv->hue_property)
 			drm_property_destroy(dev, sdvo_priv->hue_property);
 	}
-	if (sdvo_priv->is_tv || sdvo_priv->is_lvds) {
+	if (IS_TV(sdvo_priv) || IS_LVDS(sdvo_priv)) {
 		if (sdvo_priv->brightness_property)
 			drm_property_destroy(dev,
 					sdvo_priv->brightness_property);
@@ -1958,31 +1849,17 @@ void intel_sdvo_destroy_enhance_property(struct drm_connector *connector)
 
 static void intel_sdvo_destroy(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
-
-	if (intel_encoder->i2c_bus)
-		intel_i2c_destroy(intel_encoder->i2c_bus);
-	if (intel_encoder->ddc_bus)
-		intel_i2c_destroy(intel_encoder->ddc_bus);
-	if (sdvo_priv->analog_ddc_bus)
-		intel_i2c_destroy(sdvo_priv->analog_ddc_bus);
-
-	if (sdvo_priv->sdvo_lvds_fixed_mode != NULL)
-		drm_mode_destroy(connector->dev,
-				 sdvo_priv->sdvo_lvds_fixed_mode);
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_connector = intel_connector->dev_priv;
 
-	if (sdvo_priv->tv_format_property)
+	if (sdvo_connector->tv_format_property)
 		drm_property_destroy(connector->dev,
-				     sdvo_priv->tv_format_property);
-
-	if (sdvo_priv->is_tv || sdvo_priv->is_lvds)
-		intel_sdvo_destroy_enhance_property(connector);
+				     sdvo_connector->tv_format_property);
 
+	intel_sdvo_destroy_enhance_property(connector);
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
-
-	kfree(intel_encoder);
+	kfree(connector);
 }
 
 static int
@@ -1990,9 +1867,11 @@ intel_sdvo_set_property(struct drm_connector *connector,
 			struct drm_property *property,
 			uint64_t val)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
-	struct drm_encoder *encoder = &intel_encoder->enc;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_connector = intel_connector->dev_priv;
 	struct drm_crtc *crtc = encoder->crtc;
 	int ret = 0;
 	bool changed = false;
@@ -2003,101 +1882,101 @@ intel_sdvo_set_property(struct drm_connector *connector,
 	if (ret < 0)
 		goto out;
 
-	if (property == sdvo_priv->tv_format_property) {
+	if (property == sdvo_connector->tv_format_property) {
 		if (val >= TV_FORMAT_NUM) {
 			ret = -EINVAL;
 			goto out;
 		}
 		if (sdvo_priv->tv_format_name ==
-		    sdvo_priv->tv_format_supported[val])
+		    sdvo_connector->tv_format_supported[val])
 			goto out;
 
-		sdvo_priv->tv_format_name = sdvo_priv->tv_format_supported[val];
+		sdvo_priv->tv_format_name = sdvo_connector->tv_format_supported[val];
 		changed = true;
 	}
 
-	if (sdvo_priv->is_tv || sdvo_priv->is_lvds) {
+	if (IS_TV(sdvo_connector) || IS_LVDS(sdvo_connector)) {
 		cmd = 0;
 		temp_value = val;
-		if (sdvo_priv->left_property == property) {
+		if (sdvo_connector->left_property == property) {
 			drm_connector_property_set_value(connector,
-				sdvo_priv->right_property, val);
-			if (sdvo_priv->left_margin == temp_value)
+				sdvo_connector->right_property, val);
+			if (sdvo_connector->left_margin == temp_value)
 				goto out;
 
-			sdvo_priv->left_margin = temp_value;
-			sdvo_priv->right_margin = temp_value;
-			temp_value = sdvo_priv->max_hscan -
-					sdvo_priv->left_margin;
+			sdvo_connector->left_margin = temp_value;
+			sdvo_connector->right_margin = temp_value;
+			temp_value = sdvo_connector->max_hscan -
+					sdvo_connector->left_margin;
 			cmd = SDVO_CMD_SET_OVERSCAN_H;
-		} else if (sdvo_priv->right_property == property) {
+		} else if (sdvo_connector->right_property == property) {
 			drm_connector_property_set_value(connector,
-				sdvo_priv->left_property, val);
-			if (sdvo_priv->right_margin == temp_value)
+				sdvo_connector->left_property, val);
+			if (sdvo_connector->right_margin == temp_value)
 				goto out;
 
-			sdvo_priv->left_margin = temp_value;
-			sdvo_priv->right_margin = temp_value;
-			temp_value = sdvo_priv->max_hscan -
-				sdvo_priv->left_margin;
+			sdvo_connector->left_margin = temp_value;
+			sdvo_connector->right_margin = temp_value;
+			temp_value = sdvo_connector->max_hscan -
+				sdvo_connector->left_margin;
 			cmd = SDVO_CMD_SET_OVERSCAN_H;
-		} else if (sdvo_priv->top_property == property) {
+		} else if (sdvo_connector->top_property == property) {
 			drm_connector_property_set_value(connector,
-				sdvo_priv->bottom_property, val);
-			if (sdvo_priv->top_margin == temp_value)
+				sdvo_connector->bottom_property, val);
+			if (sdvo_connector->top_margin == temp_value)
 				goto out;
 
-			sdvo_priv->top_margin = temp_value;
-			sdvo_priv->bottom_margin = temp_value;
-			temp_value = sdvo_priv->max_vscan -
-					sdvo_priv->top_margin;
+			sdvo_connector->top_margin = temp_value;
+			sdvo_connector->bottom_margin = temp_value;
+			temp_value = sdvo_connector->max_vscan -
+					sdvo_connector->top_margin;
 			cmd = SDVO_CMD_SET_OVERSCAN_V;
-		} else if (sdvo_priv->bottom_property == property) {
+		} else if (sdvo_connector->bottom_property == property) {
 			drm_connector_property_set_value(connector,
-				sdvo_priv->top_property, val);
-			if (sdvo_priv->bottom_margin == temp_value)
+				sdvo_connector->top_property, val);
+			if (sdvo_connector->bottom_margin == temp_value)
 				goto out;
-			sdvo_priv->top_margin = temp_value;
-			sdvo_priv->bottom_margin = temp_value;
-			temp_value = sdvo_priv->max_vscan -
-					sdvo_priv->top_margin;
+			sdvo_connector->top_margin = temp_value;
+			sdvo_connector->bottom_margin = temp_value;
+			temp_value = sdvo_connector->max_vscan -
+					sdvo_connector->top_margin;
 			cmd = SDVO_CMD_SET_OVERSCAN_V;
-		} else if (sdvo_priv->hpos_property == property) {
-			if (sdvo_priv->cur_hpos == temp_value)
+		} else if (sdvo_connector->hpos_property == property) {
+			if (sdvo_connector->cur_hpos == temp_value)
 				goto out;
 
 			cmd = SDVO_CMD_SET_POSITION_H;
-			sdvo_priv->cur_hpos = temp_value;
-		} else if (sdvo_priv->vpos_property == property) {
-			if (sdvo_priv->cur_vpos == temp_value)
+			sdvo_connector->cur_hpos = temp_value;
+		} else if (sdvo_connector->vpos_property == property) {
+			if (sdvo_connector->cur_vpos == temp_value)
 				goto out;
 
 			cmd = SDVO_CMD_SET_POSITION_V;
-			sdvo_priv->cur_vpos = temp_value;
-		} else if (sdvo_priv->saturation_property == property) {
-			if (sdvo_priv->cur_saturation == temp_value)
+			sdvo_connector->cur_vpos = temp_value;
+		} else if (sdvo_connector->saturation_property == property) {
+			if (sdvo_connector->cur_saturation == temp_value)
 				goto out;
 
 			cmd = SDVO_CMD_SET_SATURATION;
-			sdvo_priv->cur_saturation = temp_value;
-		} else if (sdvo_priv->contrast_property == property) {
-			if (sdvo_priv->cur_contrast == temp_value)
+			sdvo_connector->cur_saturation = temp_value;
+		} else if (sdvo_connector->contrast_property == property) {
+			if (sdvo_connector->cur_contrast == temp_value)
 				goto out;
 
 			cmd = SDVO_CMD_SET_CONTRAST;
-			sdvo_priv->cur_contrast = temp_value;
-		} else if (sdvo_priv->hue_property == property) {
-			if (sdvo_priv->cur_hue == temp_value)
+			sdvo_connector->cur_contrast = temp_value;
+		} else if (sdvo_connector->hue_property == property) {
+			if (sdvo_connector->cur_hue == temp_value)
 				goto out;
 
 			cmd = SDVO_CMD_SET_HUE;
-			sdvo_priv->cur_hue = temp_value;
-		} else if (sdvo_priv->brightness_property == property) {
-			if (sdvo_priv->cur_brightness == temp_value)
+			sdvo_connector->cur_hue = temp_value;
+		} else if (sdvo_connector->brightness_property == property) {
+			if (sdvo_connector->cur_brightness == temp_value)
 				goto out;
 
 			cmd = SDVO_CMD_SET_BRIGHTNESS;
-			sdvo_priv->cur_brightness = temp_value;
+			sdvo_connector->cur_brightness = temp_value;
 		}
 		if (cmd) {
 			intel_sdvo_write_cmd(intel_encoder, cmd, &temp_value, 2);
@@ -2127,8 +2006,6 @@ static const struct drm_encoder_helper_funcs intel_sdvo_helper_funcs = {
 
 static const struct drm_connector_funcs intel_sdvo_connector_funcs = {
 	.dpms = drm_helper_connector_dpms,
-	.save = intel_sdvo_save,
-	.restore = intel_sdvo_restore,
 	.detect = intel_sdvo_detect,
 	.fill_modes = drm_helper_probe_single_connector_modes,
 	.set_property = intel_sdvo_set_property,
@@ -2138,12 +2015,27 @@ static const struct drm_connector_funcs intel_sdvo_connector_funcs = {
 static const struct drm_connector_helper_funcs intel_sdvo_connector_helper_funcs = {
 	.get_modes = intel_sdvo_get_modes,
 	.mode_valid = intel_sdvo_mode_valid,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static void intel_sdvo_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+
+	if (intel_encoder->i2c_bus)
+		intel_i2c_destroy(intel_encoder->i2c_bus);
+	if (intel_encoder->ddc_bus)
+		intel_i2c_destroy(intel_encoder->ddc_bus);
+	if (sdvo_priv->analog_ddc_bus)
+		intel_i2c_destroy(sdvo_priv->analog_ddc_bus);
+
+	if (sdvo_priv->sdvo_lvds_fixed_mode != NULL)
+		drm_mode_destroy(encoder->dev,
+				 sdvo_priv->sdvo_lvds_fixed_mode);
+
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_sdvo_enc_funcs = {
@@ -2159,49 +2051,29 @@ static const struct drm_encoder_funcs intel_sdvo_enc_funcs = {
  * outputs, then LVDS outputs.
  */
 static void
-intel_sdvo_select_ddc_bus(struct intel_sdvo_priv *dev_priv)
+intel_sdvo_select_ddc_bus(struct drm_i915_private *dev_priv,
+			  struct intel_sdvo_priv *sdvo, u32 reg)
 {
-	uint16_t mask = 0;
-	unsigned int num_bits;
-
-	/* Make a mask of outputs less than or equal to our own priority in the
-	 * list.
-	 */
-	switch (dev_priv->controlled_output) {
-	case SDVO_OUTPUT_LVDS1:
-		mask |= SDVO_OUTPUT_LVDS1;
-	case SDVO_OUTPUT_LVDS0:
-		mask |= SDVO_OUTPUT_LVDS0;
-	case SDVO_OUTPUT_TMDS1:
-		mask |= SDVO_OUTPUT_TMDS1;
-	case SDVO_OUTPUT_TMDS0:
-		mask |= SDVO_OUTPUT_TMDS0;
-	case SDVO_OUTPUT_RGB1:
-		mask |= SDVO_OUTPUT_RGB1;
-	case SDVO_OUTPUT_RGB0:
-		mask |= SDVO_OUTPUT_RGB0;
-		break;
-	}
+	struct sdvo_device_mapping *mapping;
 
-	/* Count bits to find what number we are in the priority list. */
-	mask &= dev_priv->caps.output_flags;
-	num_bits = hweight16(mask);
-	if (num_bits > 3) {
-		/* if more than 3 outputs, default to DDC bus 3 for now */
-		num_bits = 3;
-	}
+	if (IS_SDVOB(reg))
+		mapping = &(dev_priv->sdvo_mappings[0]);
+	else
+		mapping = &(dev_priv->sdvo_mappings[1]);
 
-	/* Corresponds to SDVO_CONTROL_BUS_DDCx */
-	dev_priv->ddc_bus = 1 << num_bits;
+	sdvo->ddc_bus = 1 << ((mapping->ddc_pin & 0xf0) >> 4);
 }
 
 static bool
-intel_sdvo_get_digital_encoding_mode(struct intel_encoder *output)
+intel_sdvo_get_digital_encoding_mode(struct intel_encoder *output, int device)
 {
 	struct intel_sdvo_priv *sdvo_priv = output->dev_priv;
 	uint8_t status;
 
-	intel_sdvo_set_target_output(output, sdvo_priv->controlled_output);
+	if (device == 0)
+		intel_sdvo_set_target_output(output, SDVO_OUTPUT_TMDS0);
+	else
+		intel_sdvo_set_target_output(output, SDVO_OUTPUT_TMDS1);
 
 	intel_sdvo_write_cmd(output, SDVO_CMD_GET_ENCODE, NULL, 0);
 	status = intel_sdvo_read_response(output, &sdvo_priv->is_hdmi, 1);
@@ -2214,15 +2086,13 @@ static struct intel_encoder *
 intel_sdvo_chan_to_intel_encoder(struct intel_i2c_chan *chan)
 {
 	struct drm_device *dev = chan->drm_dev;
-	struct drm_connector *connector;
+	struct drm_encoder *encoder;
 	struct intel_encoder *intel_encoder = NULL;
 
-	list_for_each_entry(connector,
-			&dev->mode_config.connector_list, head) {
-		if (to_intel_encoder(connector)->ddc_bus == &chan->adapter) {
-			intel_encoder = to_intel_encoder(connector);
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		intel_encoder = enc_to_intel_encoder(encoder);
+		if (intel_encoder->ddc_bus == &chan->adapter)
 			break;
-		}
 	}
 	return intel_encoder;
 }
@@ -2259,7 +2129,7 @@ intel_sdvo_get_slave_addr(struct drm_device *dev, int sdvo_reg)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct sdvo_device_mapping *my_mapping, *other_mapping;
 
-	if (sdvo_reg == SDVOB) {
+	if (IS_SDVOB(sdvo_reg)) {
 		my_mapping = &dev_priv->sdvo_mappings[0];
 		other_mapping = &dev_priv->sdvo_mappings[1];
 	} else {
@@ -2284,120 +2154,237 @@ intel_sdvo_get_slave_addr(struct drm_device *dev, int sdvo_reg)
 	/* No SDVO device info is found for another DVO port,
 	 * so use mapping assumption we had before BIOS parsing.
 	 */
-	if (sdvo_reg == SDVOB)
+	if (IS_SDVOB(sdvo_reg))
 		return 0x70;
 	else
 		return 0x72;
 }
 
-static int intel_sdvo_bad_tv_callback(const struct dmi_system_id *id)
+static bool
+intel_sdvo_connector_alloc(struct intel_connector **ret)
 {
-	DRM_DEBUG_KMS("Ignoring bad SDVO TV connector for %s\n", id->ident);
-	return 1;
+	struct intel_connector *intel_connector;
+	struct intel_sdvo_connector *sdvo_connector;
+
+	*ret = kzalloc(sizeof(*intel_connector) +
+			sizeof(*sdvo_connector), GFP_KERNEL);
+	if (!*ret)
+		return false;
+
+	intel_connector = *ret;
+	sdvo_connector = (struct intel_sdvo_connector *)(intel_connector + 1);
+	intel_connector->dev_priv = sdvo_connector;
+
+	return true;
 }
 
-static struct dmi_system_id intel_sdvo_bad_tv[] = {
-	{
-		.callback = intel_sdvo_bad_tv_callback,
-		.ident = "IntelG45/ICH10R/DME1737",
-		.matches = {
-			DMI_MATCH(DMI_SYS_VENDOR, "IBM CORPORATION"),
-			DMI_MATCH(DMI_PRODUCT_NAME, "4800784"),
-		},
-	},
+static void
+intel_sdvo_connector_create(struct drm_encoder *encoder,
+			     struct drm_connector *connector)
+{
+	drm_connector_init(encoder->dev, connector, &intel_sdvo_connector_funcs,
+			   connector->connector_type);
 
-	{ }	/* terminating entry */
-};
+	drm_connector_helper_add(connector, &intel_sdvo_connector_helper_funcs);
+
+	connector->interlace_allowed = 0;
+	connector->doublescan_allowed = 0;
+	connector->display_info.subpixel_order = SubPixelHorizontalRGB;
+
+	drm_mode_connector_attach_encoder(connector, encoder);
+	drm_sysfs_connector_add(connector);
+}
 
 static bool
-intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags)
+intel_sdvo_dvi_init(struct intel_encoder *intel_encoder, int device)
 {
-	struct drm_connector *connector = &intel_encoder->base;
 	struct drm_encoder *encoder = &intel_encoder->enc;
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
-	bool ret = true, registered = false;
+	struct drm_connector *connector;
+	struct intel_connector *intel_connector;
+	struct intel_sdvo_connector *sdvo_connector;
+
+	if (!intel_sdvo_connector_alloc(&intel_connector))
+		return false;
+
+	sdvo_connector = intel_connector->dev_priv;
+
+	if (device == 0) {
+		sdvo_priv->controlled_output |= SDVO_OUTPUT_TMDS0;
+		sdvo_connector->output_flag = SDVO_OUTPUT_TMDS0;
+	} else if (device == 1) {
+		sdvo_priv->controlled_output |= SDVO_OUTPUT_TMDS1;
+		sdvo_connector->output_flag = SDVO_OUTPUT_TMDS1;
+	}
+
+	connector = &intel_connector->base;
+	connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
+	encoder->encoder_type = DRM_MODE_ENCODER_TMDS;
+	connector->connector_type = DRM_MODE_CONNECTOR_DVID;
+
+	if (intel_sdvo_get_supp_encode(intel_encoder, &sdvo_priv->encode)
+		&& intel_sdvo_get_digital_encoding_mode(intel_encoder, device)
+		&& sdvo_priv->is_hdmi) {
+		/* enable hdmi encoding mode if supported */
+		intel_sdvo_set_encode(intel_encoder, SDVO_ENCODE_HDMI);
+		intel_sdvo_set_colorimetry(intel_encoder,
+					   SDVO_COLORIMETRY_RGB256);
+		connector->connector_type = DRM_MODE_CONNECTOR_HDMIA;
+	}
+	intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
+				    (1 << INTEL_ANALOG_CLONE_BIT);
+
+	intel_sdvo_connector_create(encoder, connector);
+
+	return true;
+}
+
+static bool
+intel_sdvo_tv_init(struct intel_encoder *intel_encoder, int type)
+{
+	struct drm_encoder *encoder = &intel_encoder->enc;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct drm_connector *connector;
+	struct intel_connector *intel_connector;
+	struct intel_sdvo_connector *sdvo_connector;
+
+	if (!intel_sdvo_connector_alloc(&intel_connector))
+		return false;
+
+	connector = &intel_connector->base;
+	encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
+	connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO;
+	sdvo_connector = intel_connector->dev_priv;
+
+	sdvo_priv->controlled_output |= type;
+	sdvo_connector->output_flag = type;
+
+	sdvo_priv->is_tv = true;
+	intel_encoder->needs_tv_clock = true;
+	intel_encoder->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT;
+
+	intel_sdvo_connector_create(encoder, connector);
+
+	intel_sdvo_tv_create_property(connector, type);
+
+	intel_sdvo_create_enhance_property(connector);
+
+	return true;
+}
+
+static bool
+intel_sdvo_analog_init(struct intel_encoder *intel_encoder, int device)
+{
+	struct drm_encoder *encoder = &intel_encoder->enc;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct drm_connector *connector;
+	struct intel_connector *intel_connector;
+	struct intel_sdvo_connector *sdvo_connector;
+
+	if (!intel_sdvo_connector_alloc(&intel_connector))
+		return false;
+
+	connector = &intel_connector->base;
+	connector->polled = DRM_CONNECTOR_POLL_CONNECT;
+	encoder->encoder_type = DRM_MODE_ENCODER_DAC;
+	connector->connector_type = DRM_MODE_CONNECTOR_VGA;
+	sdvo_connector = intel_connector->dev_priv;
+
+	if (device == 0) {
+		sdvo_priv->controlled_output |= SDVO_OUTPUT_RGB0;
+		sdvo_connector->output_flag = SDVO_OUTPUT_RGB0;
+	} else if (device == 1) {
+		sdvo_priv->controlled_output |= SDVO_OUTPUT_RGB1;
+		sdvo_connector->output_flag = SDVO_OUTPUT_RGB1;
+	}
+
+	intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
+				    (1 << INTEL_ANALOG_CLONE_BIT);
+
+	intel_sdvo_connector_create(encoder, connector);
+	return true;
+}
+
+static bool
+intel_sdvo_lvds_init(struct intel_encoder *intel_encoder, int device)
+{
+	struct drm_encoder *encoder = &intel_encoder->enc;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct drm_connector *connector;
+	struct intel_connector *intel_connector;
+	struct intel_sdvo_connector *sdvo_connector;
+
+	if (!intel_sdvo_connector_alloc(&intel_connector))
+		return false;
+
+	connector = &intel_connector->base;
+	encoder->encoder_type = DRM_MODE_ENCODER_LVDS;
+	connector->connector_type = DRM_MODE_CONNECTOR_LVDS;
+	sdvo_connector = intel_connector->dev_priv;
+
+	sdvo_priv->is_lvds = true;
+
+	if (device == 0) {
+		sdvo_priv->controlled_output |= SDVO_OUTPUT_LVDS0;
+		sdvo_connector->output_flag = SDVO_OUTPUT_LVDS0;
+	} else if (device == 1) {
+		sdvo_priv->controlled_output |= SDVO_OUTPUT_LVDS1;
+		sdvo_connector->output_flag = SDVO_OUTPUT_LVDS1;
+	}
+
+	intel_encoder->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) |
+				    (1 << INTEL_SDVO_LVDS_CLONE_BIT);
+
+	intel_sdvo_connector_create(encoder, connector);
+	intel_sdvo_create_enhance_property(connector);
+	return true;
+}
+
+static bool
+intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags)
+{
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
 
 	sdvo_priv->is_tv = false;
 	intel_encoder->needs_tv_clock = false;
 	sdvo_priv->is_lvds = false;
 
-	if (device_is_registered(&connector->kdev)) {
-		drm_sysfs_connector_remove(connector);
-		registered = true;
-	}
+	/* Per the SDVO spec, an XXX1 function block may only exist if the device also has the XXX0 block. */
 
-	if (flags &
-	    (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1)) {
-		if (sdvo_priv->caps.output_flags & SDVO_OUTPUT_TMDS0)
-			sdvo_priv->controlled_output = SDVO_OUTPUT_TMDS0;
-		else
-			sdvo_priv->controlled_output = SDVO_OUTPUT_TMDS1;
-
-		encoder->encoder_type = DRM_MODE_ENCODER_TMDS;
-		connector->connector_type = DRM_MODE_CONNECTOR_DVID;
-
-		if (intel_sdvo_get_supp_encode(intel_encoder,
-					       &sdvo_priv->encode) &&
-		    intel_sdvo_get_digital_encoding_mode(intel_encoder) &&
-		    sdvo_priv->is_hdmi) {
-			/* enable hdmi encoding mode if supported */
-			intel_sdvo_set_encode(intel_encoder, SDVO_ENCODE_HDMI);
-			intel_sdvo_set_colorimetry(intel_encoder,
-						   SDVO_COLORIMETRY_RGB256);
-			connector->connector_type = DRM_MODE_CONNECTOR_HDMIA;
-			intel_encoder->clone_mask =
-					(1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
-					(1 << INTEL_ANALOG_CLONE_BIT);
-		}
-	} else if ((flags & SDVO_OUTPUT_SVID0) &&
-		   !dmi_check_system(intel_sdvo_bad_tv)) {
-
-		sdvo_priv->controlled_output = SDVO_OUTPUT_SVID0;
-		encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
-		connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO;
-		sdvo_priv->is_tv = true;
-		intel_encoder->needs_tv_clock = true;
-		intel_encoder->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT;
-	} else if (flags & SDVO_OUTPUT_RGB0) {
-
-		sdvo_priv->controlled_output = SDVO_OUTPUT_RGB0;
-		encoder->encoder_type = DRM_MODE_ENCODER_DAC;
-		connector->connector_type = DRM_MODE_CONNECTOR_VGA;
-		intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
-					(1 << INTEL_ANALOG_CLONE_BIT);
-	} else if (flags & SDVO_OUTPUT_RGB1) {
-
-		sdvo_priv->controlled_output = SDVO_OUTPUT_RGB1;
-		encoder->encoder_type = DRM_MODE_ENCODER_DAC;
-		connector->connector_type = DRM_MODE_CONNECTOR_VGA;
-		intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
-					(1 << INTEL_ANALOG_CLONE_BIT);
-	} else if (flags & SDVO_OUTPUT_CVBS0) {
-
-		sdvo_priv->controlled_output = SDVO_OUTPUT_CVBS0;
-		encoder->encoder_type = DRM_MODE_ENCODER_TVDAC;
-		connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO;
-		sdvo_priv->is_tv = true;
-		intel_encoder->needs_tv_clock = true;
-		intel_encoder->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT;
-	} else if (flags & SDVO_OUTPUT_LVDS0) {
-
-		sdvo_priv->controlled_output = SDVO_OUTPUT_LVDS0;
-		encoder->encoder_type = DRM_MODE_ENCODER_LVDS;
-		connector->connector_type = DRM_MODE_CONNECTOR_LVDS;
-		sdvo_priv->is_lvds = true;
-		intel_encoder->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) |
-					(1 << INTEL_SDVO_LVDS_CLONE_BIT);
-	} else if (flags & SDVO_OUTPUT_LVDS1) {
-
-		sdvo_priv->controlled_output = SDVO_OUTPUT_LVDS1;
-		encoder->encoder_type = DRM_MODE_ENCODER_LVDS;
-		connector->connector_type = DRM_MODE_CONNECTOR_LVDS;
-		sdvo_priv->is_lvds = true;
-		intel_encoder->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) |
-					(1 << INTEL_SDVO_LVDS_CLONE_BIT);
-	} else {
+	if (flags & SDVO_OUTPUT_TMDS0)
+		if (!intel_sdvo_dvi_init(intel_encoder, 0))
+			return false;
+
+	if ((flags & SDVO_TMDS_MASK) == SDVO_TMDS_MASK)
+		if (!intel_sdvo_dvi_init(intel_encoder, 1))
+			return false;
+
+	/* TV has no XXX1 function block */
+	if (flags & SDVO_OUTPUT_SVID0)
+		if (!intel_sdvo_tv_init(intel_encoder, SDVO_OUTPUT_SVID0))
+			return false;
+
+	if (flags & SDVO_OUTPUT_CVBS0)
+		if (!intel_sdvo_tv_init(intel_encoder, SDVO_OUTPUT_CVBS0))
+			return false;
+
+	if (flags & SDVO_OUTPUT_RGB0)
+		if (!intel_sdvo_analog_init(intel_encoder, 0))
+			return false;
+
+	if ((flags & SDVO_RGB_MASK) == SDVO_RGB_MASK)
+		if (!intel_sdvo_analog_init(intel_encoder, 1))
+			return false;
+
+	if (flags & SDVO_OUTPUT_LVDS0)
+		if (!intel_sdvo_lvds_init(intel_encoder, 0))
+			return false;
 
+	if ((flags & SDVO_LVDS_MASK) == SDVO_LVDS_MASK)
+		if (!intel_sdvo_lvds_init(intel_encoder, 1))
+			return false;
+
+	if ((flags & SDVO_OUTPUT_MASK) == 0) {
 		unsigned char bytes[2];
 
 		sdvo_priv->controlled_output = 0;
@@ -2405,28 +2392,25 @@ intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags)
 		DRM_DEBUG_KMS("%s: Unknown SDVO output type (0x%02x%02x)\n",
 			      SDVO_NAME(sdvo_priv),
 			      bytes[0], bytes[1]);
-		ret = false;
+		return false;
 	}
 	intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
 
-	if (ret && registered)
-		ret = drm_sysfs_connector_add(connector) == 0 ? true : false;
-
-
-	return ret;
-
+	return true;
 }
 
-static void intel_sdvo_tv_create_property(struct drm_connector *connector)
+static void intel_sdvo_tv_create_property(struct drm_connector *connector, int type)
 {
-      struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_connector = intel_connector->dev_priv;
 	struct intel_sdvo_tv_format format;
 	uint32_t format_map, i;
 	uint8_t status;
 
-	intel_sdvo_set_target_output(intel_encoder,
-				     sdvo_priv->controlled_output);
+	intel_sdvo_set_target_output(intel_encoder, type);
 
 	intel_sdvo_write_cmd(intel_encoder,
 			     SDVO_CMD_GET_SUPPORTED_TV_FORMATS, NULL, 0);
@@ -2441,35 +2425,37 @@ static void intel_sdvo_tv_create_property(struct drm_connector *connector)
 	if (format_map == 0)
 		return;
 
-	sdvo_priv->format_supported_num = 0;
+	sdvo_connector->format_supported_num = 0;
 	for (i = 0 ; i < TV_FORMAT_NUM; i++)
 		if (format_map & (1 << i)) {
-			sdvo_priv->tv_format_supported
-			[sdvo_priv->format_supported_num++] =
+			sdvo_connector->tv_format_supported
+			[sdvo_connector->format_supported_num++] =
 			tv_format_names[i];
 		}
 
 
-	sdvo_priv->tv_format_property =
+	sdvo_connector->tv_format_property =
 			drm_property_create(
 				connector->dev, DRM_MODE_PROP_ENUM,
-				"mode", sdvo_priv->format_supported_num);
+				"mode", sdvo_connector->format_supported_num);
 
-	for (i = 0; i < sdvo_priv->format_supported_num; i++)
+	for (i = 0; i < sdvo_connector->format_supported_num; i++)
 		drm_property_add_enum(
-				sdvo_priv->tv_format_property, i,
-				i, sdvo_priv->tv_format_supported[i]);
+				sdvo_connector->tv_format_property, i,
+				i, sdvo_connector->tv_format_supported[i]);
 
-	sdvo_priv->tv_format_name = sdvo_priv->tv_format_supported[0];
+	sdvo_priv->tv_format_name = sdvo_connector->tv_format_supported[0];
 	drm_connector_attach_property(
-			connector, sdvo_priv->tv_format_property, 0);
+			connector, sdvo_connector->tv_format_property, 0);
 
 }
 
 static void intel_sdvo_create_enhance_property(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_connector *intel_connector = to_intel_connector(connector);
+	struct intel_sdvo_connector *sdvo_priv = intel_connector->dev_priv;
 	struct intel_sdvo_enhancements_reply sdvo_data;
 	struct drm_device *dev = connector->dev;
 	uint8_t status;
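
For reference, here is the enum-property pattern used above in isolation: size the property for N values, register each index/name pair, then attach it to the connector with an initial value. This is only a sketch against the drm_property_* API as used in this tree, with error handling omitted and a hypothetical helper name:

/* Sketch: expose a "mode" enum property listing the supported
 * TV formats, as intel_sdvo_tv_create_property() does above. */
static void attach_tv_format_property(struct drm_connector *connector,
				      const char **names, int count)
{
	struct drm_property *prop;
	int i;

	prop = drm_property_create(connector->dev, DRM_MODE_PROP_ENUM,
				   "mode", count);
	if (!prop)
		return;

	for (i = 0; i < count; i++)
		drm_property_add_enum(prop, i, i, names[i]);

	/* initial value 0 selects names[0] */
	drm_connector_attach_property(connector, prop, 0);
}
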
@@ -2488,7 +2474,7 @@ static void intel_sdvo_create_enhance_property(struct drm_connector *connector)
 		DRM_DEBUG_KMS("No enhancement is supported\n");
 		return;
 	}
-	if (sdvo_priv->is_tv) {
+	if (IS_TV(sdvo_priv)) {
 		/* when horizontal overscan is supported, add the left/right
 		 * property
 		 */
@@ -2636,8 +2622,6 @@ static void intel_sdvo_create_enhance_property(struct drm_connector *connector)
 					"default %d, current %d\n",
 					data_value[0], data_value[1], response);
 		}
-	}
-	if (sdvo_priv->is_tv) {
 		if (sdvo_data.saturation) {
 			intel_sdvo_write_cmd(intel_encoder,
 				SDVO_CMD_GET_MAX_SATURATION, NULL, 0);
@@ -2733,7 +2717,7 @@ static void intel_sdvo_create_enhance_property(struct drm_connector *connector)
 					data_value[0], data_value[1], response);
 		}
 	}
-	if (sdvo_priv->is_tv || sdvo_priv->is_lvds) {
+	if (IS_TV(sdvo_priv) || IS_LVDS(sdvo_priv)) {
 		if (sdvo_data.brightness) {
 			intel_sdvo_write_cmd(intel_encoder,
 				SDVO_CMD_GET_MAX_BRIGHTNESS, NULL, 0);
@@ -2773,12 +2757,11 @@ static void intel_sdvo_create_enhance_property(struct drm_connector *connector)
 bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct drm_connector *connector;
 	struct intel_encoder *intel_encoder;
 	struct intel_sdvo_priv *sdvo_priv;
-
 	u8 ch[0x40];
 	int i;
+	u32 i2c_reg, ddc_reg, analog_ddc_reg;
 
 	intel_encoder = kcalloc(sizeof(struct intel_encoder)+sizeof(struct intel_sdvo_priv), 1, GFP_KERNEL);
 	if (!intel_encoder) {
@@ -2791,11 +2774,21 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg)
 	intel_encoder->dev_priv = sdvo_priv;
 	intel_encoder->type = INTEL_OUTPUT_SDVO;
 
+	if (HAS_PCH_SPLIT(dev)) {
+		i2c_reg = PCH_GPIOE;
+		ddc_reg = PCH_GPIOE;
+		analog_ddc_reg = PCH_GPIOA;
+	} else {
+		i2c_reg = GPIOE;
+		ddc_reg = GPIOE;
+		analog_ddc_reg = GPIOA;
+	}
+
 	/* setup the DDC bus. */
-	if (sdvo_reg == SDVOB)
-		intel_encoder->i2c_bus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOB");
+	if (IS_SDVOB(sdvo_reg))
+		intel_encoder->i2c_bus = intel_i2c_create(dev, i2c_reg, "SDVOCTRL_E for SDVOB");
 	else
-		intel_encoder->i2c_bus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOC");
+		intel_encoder->i2c_bus = intel_i2c_create(dev, i2c_reg, "SDVOCTRL_E for SDVOC");
 
 	if (!intel_encoder->i2c_bus)
 		goto err_inteloutput;
@@ -2809,20 +2802,20 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg)
 	for (i = 0; i < 0x40; i++) {
 		if (!intel_sdvo_read_byte(intel_encoder, i, &ch[i])) {
 			DRM_DEBUG_KMS("No SDVO device found on SDVO%c\n",
-					sdvo_reg == SDVOB ? 'B' : 'C');
+				      IS_SDVOB(sdvo_reg) ? 'B' : 'C');
 			goto err_i2c;
 		}
 	}
 
 	/* setup the DDC bus. */
-	if (sdvo_reg == SDVOB) {
-		intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOE, "SDVOB DDC BUS");
-		sdvo_priv->analog_ddc_bus = intel_i2c_create(dev, GPIOA,
+	if (IS_SDVOB(sdvo_reg)) {
+		intel_encoder->ddc_bus = intel_i2c_create(dev, ddc_reg, "SDVOB DDC BUS");
+		sdvo_priv->analog_ddc_bus = intel_i2c_create(dev, analog_ddc_reg,
 						"SDVOB/VGA DDC BUS");
 		dev_priv->hotplug_supported_mask |= SDVOB_HOTPLUG_INT_STATUS;
 	} else {
-		intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOE, "SDVOC DDC BUS");
-		sdvo_priv->analog_ddc_bus = intel_i2c_create(dev, GPIOA,
+		intel_encoder->ddc_bus = intel_i2c_create(dev, ddc_reg, "SDVOC DDC BUS");
+		sdvo_priv->analog_ddc_bus = intel_i2c_create(dev, analog_ddc_reg,
 						"SDVOC/VGA DDC BUS");
 		dev_priv->hotplug_supported_mask |= SDVOC_HOTPLUG_INT_STATUS;
 	}
@@ -2833,41 +2826,21 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg)
 	/* Wrap with our custom algo which switches to DDC mode */
 	intel_encoder->ddc_bus->algo = &intel_sdvo_i2c_bit_algo;
 
+	/* encoder type will be decided later */
+	drm_encoder_init(dev, &intel_encoder->enc, &intel_sdvo_enc_funcs, 0);
+	drm_encoder_helper_add(&intel_encoder->enc, &intel_sdvo_helper_funcs);
+
 	/* By default, assume the SDVO output is not LVDS */
 	intel_sdvo_get_capabilities(intel_encoder, &sdvo_priv->caps);
 
 	if (intel_sdvo_output_setup(intel_encoder,
 				    sdvo_priv->caps.output_flags) != true) {
 		DRM_DEBUG_KMS("SDVO output failed to setup on SDVO%c\n",
-			  sdvo_reg == SDVOB ? 'B' : 'C');
+			      IS_SDVOB(sdvo_reg) ? 'B' : 'C');
 		goto err_i2c;
 	}
 
-
-	connector = &intel_encoder->base;
-	drm_connector_init(dev, connector, &intel_sdvo_connector_funcs,
-			   connector->connector_type);
-
-	drm_connector_helper_add(connector, &intel_sdvo_connector_helper_funcs);
-	connector->interlace_allowed = 0;
-	connector->doublescan_allowed = 0;
-	connector->display_info.subpixel_order = SubPixelHorizontalRGB;
-
-	drm_encoder_init(dev, &intel_encoder->enc,
-			&intel_sdvo_enc_funcs, intel_encoder->enc.encoder_type);
-
-	drm_encoder_helper_add(&intel_encoder->enc, &intel_sdvo_helper_funcs);
-
-	drm_mode_connector_attach_encoder(&intel_encoder->base, &intel_encoder->enc);
-	if (sdvo_priv->is_tv)
-		intel_sdvo_tv_create_property(connector);
-
-	if (sdvo_priv->is_tv || sdvo_priv->is_lvds)
-		intel_sdvo_create_enhance_property(connector);
-
-	drm_sysfs_connector_add(connector);
-
-	intel_sdvo_select_ddc_bus(sdvo_priv);
+	intel_sdvo_select_ddc_bus(dev_priv, sdvo_priv, sdvo_reg);
 
 	/* Set the input timing to the screen. Assume always input 0. */
 	intel_sdvo_set_target_input(intel_encoder, true, false);
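
The rewritten intel_sdvo_output_setup() above leans on paired output bits: every SDVO output type reserves an XXX0 and an XXX1 flag, and the second instance is initialised only when both bits survive the mask test. A minimal sketch of the idiom — the bit positions and mask values here are illustrative assumptions, not copied from the driver headers:

/* Sketch of the paired-output test; mask values are assumptions. */
#include <stdbool.h>
#include <stdint.h>

#define SDVO_OUTPUT_TMDS0 (1 << 0)
#define SDVO_OUTPUT_TMDS1 (1 << 8)
#define SDVO_TMDS_MASK    (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1)

static bool has_second_tmds(uint16_t flags)
{
	/* XXX1 only exists alongside XXX0, so requiring the whole
	 * mask to survive the AND selects exactly the dual case */
	return (flags & SDVO_TMDS_MASK) == SDVO_TMDS_MASK;
}
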
diff --git a/drivers/gpu/drm/i915/intel_tv.c b/drivers/gpu/drm/i915/intel_tv.c
index d7d39b2..6d553c2 100644
--- a/drivers/gpu/drm/i915/intel_tv.c
+++ b/drivers/gpu/drm/i915/intel_tv.c
@@ -916,143 +916,6 @@ intel_tv_dpms(struct drm_encoder *encoder, int mode)
 	}
 }
 
-static void
-intel_tv_save(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_tv_priv *tv_priv = intel_encoder->dev_priv;
-	int i;
-
-	tv_priv->save_TV_H_CTL_1 = I915_READ(TV_H_CTL_1);
-	tv_priv->save_TV_H_CTL_2 = I915_READ(TV_H_CTL_2);
-	tv_priv->save_TV_H_CTL_3 = I915_READ(TV_H_CTL_3);
-	tv_priv->save_TV_V_CTL_1 = I915_READ(TV_V_CTL_1);
-	tv_priv->save_TV_V_CTL_2 = I915_READ(TV_V_CTL_2);
-	tv_priv->save_TV_V_CTL_3 = I915_READ(TV_V_CTL_3);
-	tv_priv->save_TV_V_CTL_4 = I915_READ(TV_V_CTL_4);
-	tv_priv->save_TV_V_CTL_5 = I915_READ(TV_V_CTL_5);
-	tv_priv->save_TV_V_CTL_6 = I915_READ(TV_V_CTL_6);
-	tv_priv->save_TV_V_CTL_7 = I915_READ(TV_V_CTL_7);
-	tv_priv->save_TV_SC_CTL_1 = I915_READ(TV_SC_CTL_1);
-	tv_priv->save_TV_SC_CTL_2 = I915_READ(TV_SC_CTL_2);
-	tv_priv->save_TV_SC_CTL_3 = I915_READ(TV_SC_CTL_3);
-
-	tv_priv->save_TV_CSC_Y = I915_READ(TV_CSC_Y);
-	tv_priv->save_TV_CSC_Y2 = I915_READ(TV_CSC_Y2);
-	tv_priv->save_TV_CSC_U = I915_READ(TV_CSC_U);
-	tv_priv->save_TV_CSC_U2 = I915_READ(TV_CSC_U2);
-	tv_priv->save_TV_CSC_V = I915_READ(TV_CSC_V);
-	tv_priv->save_TV_CSC_V2 = I915_READ(TV_CSC_V2);
-	tv_priv->save_TV_CLR_KNOBS = I915_READ(TV_CLR_KNOBS);
-	tv_priv->save_TV_CLR_LEVEL = I915_READ(TV_CLR_LEVEL);
-	tv_priv->save_TV_WIN_POS = I915_READ(TV_WIN_POS);
-	tv_priv->save_TV_WIN_SIZE = I915_READ(TV_WIN_SIZE);
-	tv_priv->save_TV_FILTER_CTL_1 = I915_READ(TV_FILTER_CTL_1);
-	tv_priv->save_TV_FILTER_CTL_2 = I915_READ(TV_FILTER_CTL_2);
-	tv_priv->save_TV_FILTER_CTL_3 = I915_READ(TV_FILTER_CTL_3);
-
-	for (i = 0; i < 60; i++)
-		tv_priv->save_TV_H_LUMA[i] = I915_READ(TV_H_LUMA_0 + (i <<2));
-	for (i = 0; i < 60; i++)
-		tv_priv->save_TV_H_CHROMA[i] = I915_READ(TV_H_CHROMA_0 + (i <<2));
-	for (i = 0; i < 43; i++)
-		tv_priv->save_TV_V_LUMA[i] = I915_READ(TV_V_LUMA_0 + (i <<2));
-	for (i = 0; i < 43; i++)
-		tv_priv->save_TV_V_CHROMA[i] = I915_READ(TV_V_CHROMA_0 + (i <<2));
-
-	tv_priv->save_TV_DAC = I915_READ(TV_DAC);
-	tv_priv->save_TV_CTL = I915_READ(TV_CTL);
-}
-
-static void
-intel_tv_restore(struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-	struct intel_tv_priv *tv_priv = intel_encoder->dev_priv;
-	struct drm_crtc *crtc = connector->encoder->crtc;
-	struct intel_crtc *intel_crtc;
-	int i;
-
-	/* FIXME: No CRTC? */
-	if (!crtc)
-		return;
-
-	intel_crtc = to_intel_crtc(crtc);
-	I915_WRITE(TV_H_CTL_1, tv_priv->save_TV_H_CTL_1);
-	I915_WRITE(TV_H_CTL_2, tv_priv->save_TV_H_CTL_2);
-	I915_WRITE(TV_H_CTL_3, tv_priv->save_TV_H_CTL_3);
-	I915_WRITE(TV_V_CTL_1, tv_priv->save_TV_V_CTL_1);
-	I915_WRITE(TV_V_CTL_2, tv_priv->save_TV_V_CTL_2);
-	I915_WRITE(TV_V_CTL_3, tv_priv->save_TV_V_CTL_3);
-	I915_WRITE(TV_V_CTL_4, tv_priv->save_TV_V_CTL_4);
-	I915_WRITE(TV_V_CTL_5, tv_priv->save_TV_V_CTL_5);
-	I915_WRITE(TV_V_CTL_6, tv_priv->save_TV_V_CTL_6);
-	I915_WRITE(TV_V_CTL_7, tv_priv->save_TV_V_CTL_7);
-	I915_WRITE(TV_SC_CTL_1, tv_priv->save_TV_SC_CTL_1);
-	I915_WRITE(TV_SC_CTL_2, tv_priv->save_TV_SC_CTL_2);
-	I915_WRITE(TV_SC_CTL_3, tv_priv->save_TV_SC_CTL_3);
-
-	I915_WRITE(TV_CSC_Y, tv_priv->save_TV_CSC_Y);
-	I915_WRITE(TV_CSC_Y2, tv_priv->save_TV_CSC_Y2);
-	I915_WRITE(TV_CSC_U, tv_priv->save_TV_CSC_U);
-	I915_WRITE(TV_CSC_U2, tv_priv->save_TV_CSC_U2);
-	I915_WRITE(TV_CSC_V, tv_priv->save_TV_CSC_V);
-	I915_WRITE(TV_CSC_V2, tv_priv->save_TV_CSC_V2);
-	I915_WRITE(TV_CLR_KNOBS, tv_priv->save_TV_CLR_KNOBS);
-	I915_WRITE(TV_CLR_LEVEL, tv_priv->save_TV_CLR_LEVEL);
-
-	{
-		int pipeconf_reg = (intel_crtc->pipe == 0) ?
-			PIPEACONF : PIPEBCONF;
-		int dspcntr_reg = (intel_crtc->plane == 0) ?
-			DSPACNTR : DSPBCNTR;
-		int pipeconf = I915_READ(pipeconf_reg);
-		int dspcntr = I915_READ(dspcntr_reg);
-		int dspbase_reg = (intel_crtc->plane == 0) ?
-			DSPAADDR : DSPBADDR;
-		/* Pipe must be off here */
-		I915_WRITE(dspcntr_reg, dspcntr & ~DISPLAY_PLANE_ENABLE);
-		/* Flush the plane changes */
-		I915_WRITE(dspbase_reg, I915_READ(dspbase_reg));
-
-		if (!IS_I9XX(dev)) {
-			/* Wait for vblank for the disable to take effect */
-			intel_wait_for_vblank(dev);
-		}
-
-		I915_WRITE(pipeconf_reg, pipeconf & ~PIPEACONF_ENABLE);
-		/* Wait for vblank for the disable to take effect. */
-		intel_wait_for_vblank(dev);
-
-		/* Filter ctl must be set before TV_WIN_SIZE */
-		I915_WRITE(TV_FILTER_CTL_1, tv_priv->save_TV_FILTER_CTL_1);
-		I915_WRITE(TV_FILTER_CTL_2, tv_priv->save_TV_FILTER_CTL_2);
-		I915_WRITE(TV_FILTER_CTL_3, tv_priv->save_TV_FILTER_CTL_3);
-		I915_WRITE(TV_WIN_POS, tv_priv->save_TV_WIN_POS);
-		I915_WRITE(TV_WIN_SIZE, tv_priv->save_TV_WIN_SIZE);
-		I915_WRITE(pipeconf_reg, pipeconf);
-		I915_WRITE(dspcntr_reg, dspcntr);
-		/* Flush the plane changes */
-		I915_WRITE(dspbase_reg, I915_READ(dspbase_reg));
-	}
-
-	for (i = 0; i < 60; i++)
-		I915_WRITE(TV_H_LUMA_0 + (i <<2), tv_priv->save_TV_H_LUMA[i]);
-	for (i = 0; i < 60; i++)
-		I915_WRITE(TV_H_CHROMA_0 + (i <<2), tv_priv->save_TV_H_CHROMA[i]);
-	for (i = 0; i < 43; i++)
-		I915_WRITE(TV_V_LUMA_0 + (i <<2), tv_priv->save_TV_V_LUMA[i]);
-	for (i = 0; i < 43; i++)
-		I915_WRITE(TV_V_CHROMA_0 + (i <<2), tv_priv->save_TV_V_CHROMA[i]);
-
-	I915_WRITE(TV_DAC, tv_priv->save_TV_DAC);
-	I915_WRITE(TV_CTL, tv_priv->save_TV_CTL);
-}
-
 static const struct tv_mode *
 intel_tv_mode_lookup (char *tv_format)
 {
@@ -1078,7 +941,8 @@ intel_tv_mode_find (struct intel_encoder *intel_encoder)
 static enum drm_mode_status
 intel_tv_mode_valid(struct drm_connector *connector, struct drm_display_mode *mode)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder);
 
 	/* Ensure TV refresh is close to desired refresh */
@@ -1441,7 +1305,8 @@ intel_tv_detect_type (struct drm_crtc *crtc, struct intel_encoder *intel_encoder
  */
 static void intel_tv_find_better_format(struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_tv_priv *tv_priv = intel_encoder->dev_priv;
 	const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder);
 	int i;
@@ -1475,9 +1340,9 @@ intel_tv_detect(struct drm_connector *connector)
 {
 	struct drm_crtc *crtc;
 	struct drm_display_mode mode;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_tv_priv *tv_priv = intel_encoder->dev_priv;
-	struct drm_encoder *encoder = &intel_encoder->enc;
 	int dpms_mode;
 	int type = tv_priv->type;
 
@@ -1487,10 +1352,12 @@ intel_tv_detect(struct drm_connector *connector)
 	if (encoder->crtc && encoder->crtc->enabled) {
 		type = intel_tv_detect_type(encoder->crtc, intel_encoder);
 	} else {
-		crtc = intel_get_load_detect_pipe(intel_encoder, &mode, &dpms_mode);
+		crtc = intel_get_load_detect_pipe(intel_encoder, connector,
+						  &mode, &dpms_mode);
 		if (crtc) {
 			type = intel_tv_detect_type(crtc, intel_encoder);
-			intel_release_load_detect_pipe(intel_encoder, dpms_mode);
+			intel_release_load_detect_pipe(intel_encoder, connector,
+						       dpms_mode);
 		} else
 			type = -1;
 	}
@@ -1525,7 +1392,8 @@ static void
 intel_tv_chose_preferred_modes(struct drm_connector *connector,
 			       struct drm_display_mode *mode_ptr)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder);
 
 	if (tv_mode->nbr_end < 480 && mode_ptr->vdisplay == 480)
@@ -1550,7 +1418,8 @@ static int
 intel_tv_get_modes(struct drm_connector *connector)
 {
 	struct drm_display_mode *mode_ptr;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder);
 	int j, count = 0;
 	u64 tmp;
@@ -1604,11 +1473,9 @@ intel_tv_get_modes(struct drm_connector *connector)
 static void
 intel_tv_destroy (struct drm_connector *connector)
 {
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
-
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
-	kfree(intel_encoder);
+	kfree(connector);
 }
 
 
@@ -1617,9 +1484,9 @@ intel_tv_set_property(struct drm_connector *connector, struct drm_property *prop
 		      uint64_t val)
 {
 	struct drm_device *dev = connector->dev;
-	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
+	struct drm_encoder *encoder = intel_attached_encoder(connector);
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
 	struct intel_tv_priv *tv_priv = intel_encoder->dev_priv;
-	struct drm_encoder *encoder = &intel_encoder->enc;
 	struct drm_crtc *crtc = encoder->crtc;
 	int ret = 0;
 	bool changed = false;
@@ -1676,8 +1543,6 @@ static const struct drm_encoder_helper_funcs intel_tv_helper_funcs = {
 
 static const struct drm_connector_funcs intel_tv_connector_funcs = {
 	.dpms = drm_helper_connector_dpms,
-	.save = intel_tv_save,
-	.restore = intel_tv_restore,
 	.detect = intel_tv_detect,
 	.destroy = intel_tv_destroy,
 	.set_property = intel_tv_set_property,
@@ -1687,12 +1552,15 @@ static const struct drm_connector_funcs intel_tv_connector_funcs = {
 static const struct drm_connector_helper_funcs intel_tv_connector_helper_funcs = {
 	.mode_valid = intel_tv_mode_valid,
 	.get_modes = intel_tv_get_modes,
-	.best_encoder = intel_best_encoder,
+	.best_encoder = intel_attached_encoder,
 };
 
 static void intel_tv_enc_destroy(struct drm_encoder *encoder)
 {
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+
 	drm_encoder_cleanup(encoder);
+	kfree(intel_encoder);
 }
 
 static const struct drm_encoder_funcs intel_tv_enc_funcs = {
@@ -1741,6 +1609,7 @@ intel_tv_init(struct drm_device *dev)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct drm_connector *connector;
 	struct intel_encoder *intel_encoder;
+	struct intel_connector *intel_connector;
 	struct intel_tv_priv *tv_priv;
 	u32 tv_dac_on, tv_dac_off, save_tv_dac;
 	char **tv_format_names;
@@ -1786,7 +1655,13 @@ intel_tv_init(struct drm_device *dev)
 		return;
 	}
 
-	connector = &intel_encoder->base;
+	intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
+	if (!intel_connector) {
+		kfree(intel_encoder);
+		return;
+	}
+
+	connector = &intel_connector->base;
 
 	drm_connector_init(dev, connector, &intel_tv_connector_funcs,
 			   DRM_MODE_CONNECTOR_SVIDEO);
@@ -1794,7 +1669,7 @@ intel_tv_init(struct drm_device *dev)
 	drm_encoder_init(dev, &intel_encoder->enc, &intel_tv_enc_funcs,
 			 DRM_MODE_ENCODER_TVDAC);
 
-	drm_mode_connector_attach_encoder(&intel_encoder->base, &intel_encoder->enc);
+	drm_mode_connector_attach_encoder(&intel_connector->base, &intel_encoder->enc);
 	tv_priv = (struct intel_tv_priv *)(intel_encoder + 1);
 	intel_encoder->type = INTEL_OUTPUT_TVOUT;
 	intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
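
Note how intel_tv_init keeps the encoder and its private state in a single allocation — tv_priv is simply carved out of the bytes that follow intel_encoder — while the connector now gets its own kzalloc so it can be freed independently in intel_tv_destroy(). A minimal sketch of the trailing-private-data idiom, using a hypothetical helper name and mirroring the tv_priv = (struct intel_tv_priv *)(intel_encoder + 1) line above:

/* Sketch: allocate an encoder with priv_size bytes of private data
 * laid out immediately after it, as the TV and SDVO paths do. */
static struct intel_encoder *alloc_encoder_with_priv(size_t priv_size)
{
	struct intel_encoder *intel_encoder;

	intel_encoder = kzalloc(sizeof(*intel_encoder) + priv_size,
				GFP_KERNEL);
	if (!intel_encoder)
		return NULL;

	/* dev_priv points just past the encoder structure */
	intel_encoder->dev_priv = intel_encoder + 1;
	return intel_encoder;
}
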
diff --git a/drivers/gpu/drm/nouveau/Makefile b/drivers/gpu/drm/nouveau/Makefile
index 453df3f..acd31ed 100644
--- a/drivers/gpu/drm/nouveau/Makefile
+++ b/drivers/gpu/drm/nouveau/Makefile
@@ -22,7 +22,8 @@ nouveau-y := nouveau_drv.o nouveau_state.o nouveau_channel.o nouveau_mem.o \
              nv50_cursor.o nv50_display.o nv50_fbcon.o \
              nv04_dac.o nv04_dfp.o nv04_tv.o nv17_tv.o nv17_tv_modes.o \
              nv04_crtc.o nv04_display.o nv04_cursor.o nv04_fbcon.o \
-             nv17_gpio.o nv50_gpio.o
+             nv17_gpio.o nv50_gpio.o \
+	     nv50_calc.o
 
 nouveau-$(CONFIG_DRM_NOUVEAU_DEBUG) += nouveau_debugfs.o
 nouveau-$(CONFIG_COMPAT) += nouveau_ioc32.o
diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c
index e13f6af..d4bcca8 100644
--- a/drivers/gpu/drm/nouveau/nouveau_acpi.c
+++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c
@@ -34,7 +34,7 @@
 static struct nouveau_dsm_priv {
 	bool dsm_detected;
 	acpi_handle dhandle;
-	acpi_handle dsm_handle;
+	acpi_handle rom_handle;
 } nouveau_dsm_priv;
 
 static const char nouveau_dsm_muid[] = {
@@ -107,9 +107,9 @@ static int nouveau_dsm_set_discrete_state(acpi_handle handle, enum vga_switchero
 static int nouveau_dsm_switchto(enum vga_switcheroo_client_id id)
 {
 	if (id == VGA_SWITCHEROO_IGD)
-		return nouveau_dsm_switch_mux(nouveau_dsm_priv.dsm_handle, NOUVEAU_DSM_LED_STAMINA);
+		return nouveau_dsm_switch_mux(nouveau_dsm_priv.dhandle, NOUVEAU_DSM_LED_STAMINA);
 	else
-		return nouveau_dsm_switch_mux(nouveau_dsm_priv.dsm_handle, NOUVEAU_DSM_LED_SPEED);
+		return nouveau_dsm_switch_mux(nouveau_dsm_priv.dhandle, NOUVEAU_DSM_LED_SPEED);
 }
 
 static int nouveau_dsm_power_state(enum vga_switcheroo_client_id id,
@@ -118,7 +118,7 @@ static int nouveau_dsm_power_state(enum vga_switcheroo_client_id id,
 	if (id == VGA_SWITCHEROO_IGD)
 		return 0;
 
-	return nouveau_dsm_set_discrete_state(nouveau_dsm_priv.dsm_handle, state);
+	return nouveau_dsm_set_discrete_state(nouveau_dsm_priv.dhandle, state);
 }
 
 static int nouveau_dsm_init(void)
@@ -151,18 +151,18 @@ static bool nouveau_dsm_pci_probe(struct pci_dev *pdev)
 	dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
 	if (!dhandle)
 		return false;
+
 	status = acpi_get_handle(dhandle, "_DSM", &nvidia_handle);
 	if (ACPI_FAILURE(status)) {
 		return false;
 	}
 
-	ret= nouveau_dsm(nvidia_handle, NOUVEAU_DSM_SUPPORTED,
-			 NOUVEAU_DSM_SUPPORTED_FUNCTIONS, &result);
+	ret = nouveau_dsm(dhandle, NOUVEAU_DSM_SUPPORTED,
+			  NOUVEAU_DSM_SUPPORTED_FUNCTIONS, &result);
 	if (ret < 0)
 		return false;
 
 	nouveau_dsm_priv.dhandle = dhandle;
-	nouveau_dsm_priv.dsm_handle = nvidia_handle;
 	return true;
 }
 
@@ -173,6 +173,7 @@ static bool nouveau_dsm_detect(void)
 	struct pci_dev *pdev = NULL;
 	int has_dsm = 0;
 	int vga_count = 0;
+
 	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
 		vga_count++;
 
@@ -180,7 +181,7 @@ static bool nouveau_dsm_detect(void)
 	}
 
 	if (vga_count == 2 && has_dsm) {
-		acpi_get_name(nouveau_dsm_priv.dsm_handle, ACPI_FULL_PATHNAME, &buffer);
+		acpi_get_name(nouveau_dsm_priv.dhandle, ACPI_FULL_PATHNAME, &buffer);
 		printk(KERN_INFO "VGA switcheroo: detected DSM switching method %s handle\n",
 		       acpi_method_name);
 		nouveau_dsm_priv.dsm_detected = true;
@@ -204,3 +205,57 @@ void nouveau_unregister_dsm_handler(void)
 {
 	vga_switcheroo_unregister_handler();
 }
+
+/* retrieve the ROM in 4k blocks */
+static int nouveau_rom_call(acpi_handle rom_handle, uint8_t *bios,
+			    int offset, int len)
+{
+	acpi_status status;
+	union acpi_object rom_arg_elements[2], *obj;
+	struct acpi_object_list rom_arg;
+	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL};
+
+	rom_arg.count = 2;
+	rom_arg.pointer = &rom_arg_elements[0];
+
+	rom_arg_elements[0].type = ACPI_TYPE_INTEGER;
+	rom_arg_elements[0].integer.value = offset;
+
+	rom_arg_elements[1].type = ACPI_TYPE_INTEGER;
+	rom_arg_elements[1].integer.value = len;
+
+	status = acpi_evaluate_object(rom_handle, NULL, &rom_arg, &buffer);
+	if (ACPI_FAILURE(status)) {
+		printk(KERN_INFO "failed to evaluate ROM got %s\n", acpi_format_exception(status));
+		return -ENODEV;
+	}
+	obj = (union acpi_object *)buffer.pointer;
+	memcpy(bios+offset, obj->buffer.pointer, len);
+	kfree(buffer.pointer);
+	return len;
+}
+
+bool nouveau_acpi_rom_supported(struct pci_dev *pdev)
+{
+	acpi_status status;
+	acpi_handle dhandle, rom_handle;
+
+	if (!nouveau_dsm_priv.dsm_detected)
+		return false;
+
+	dhandle = DEVICE_ACPI_HANDLE(&pdev->dev);
+	if (!dhandle)
+		return false;
+
+	status = acpi_get_handle(dhandle, "_ROM", &rom_handle);
+	if (ACPI_FAILURE(status))
+		return false;
+
+	nouveau_dsm_priv.rom_handle = rom_handle;
+	return true;
+}
+
+int nouveau_acpi_get_bios_chunk(uint8_t *bios, int offset, int len)
+{
+	return nouveau_rom_call(nouveau_dsm_priv.rom_handle, bios, offset, len);
+}
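
These two entry points split discovery from transfer: nouveau_acpi_rom_supported() validates and caches the _ROM handle, after which callers fetch the image in fixed-size chunks. A sketch of the intended call sequence, assuming the 4 KiB page size implied by the "4k blocks" comment above (load_vbios_acpi() in nouveau_bios.c below does essentially this):

/* Sketch: shadow the first 64 KiB of the VBIOS via ACPI _ROM. */
#define ROM_BIOS_PAGE 4096	/* assumed from "4k blocks" above */

static int shadow_vbios_acpi(struct pci_dev *pdev, uint8_t *buf)
{
	int i, ret;

	if (!nouveau_acpi_rom_supported(pdev))
		return -ENODEV;	/* no DSM or no _ROM method */

	for (i = 0; i < (64 * 1024) / ROM_BIOS_PAGE; i++) {
		ret = nouveau_acpi_get_bios_chunk(buf, i * ROM_BIOS_PAGE,
						  ROM_BIOS_PAGE);
		if (ret <= 0)
			return ret ? ret : -EIO;
	}
	return 0;
}
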
diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.c b/drivers/gpu/drm/nouveau/nouveau_bios.c
index abc382a..fc924b6 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bios.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bios.c
@@ -26,6 +26,7 @@
 #define NV_DEBUG_NOTRACE
 #include "nouveau_drv.h"
 #include "nouveau_hw.h"
+#include "nouveau_encoder.h"
 
 /* these defines are made up */
 #define NV_CIO_CRE_44_HEADA 0x0
@@ -177,6 +178,25 @@ out:
 	pci_disable_rom(dev->pdev);
 }
 
+static void load_vbios_acpi(struct drm_device *dev, uint8_t *data)
+{
+	int i;
+	int ret;
+	int size = 64 * 1024;
+
+	if (!nouveau_acpi_rom_supported(dev->pdev))
+		return;
+
+	for (i = 0; i < (size / ROM_BIOS_PAGE); i++) {
+		ret = nouveau_acpi_get_bios_chunk(data,
+						  (i * ROM_BIOS_PAGE),
+						  ROM_BIOS_PAGE);
+		if (ret <= 0)
+			break;
+	}
+	return;
+}
+
 struct methods {
 	const char desc[8];
 	void (*loadbios)(struct drm_device *, uint8_t *);
@@ -190,6 +210,7 @@ static struct methods nv04_methods[] = {
 };
 
 static struct methods nv50_methods[] = {
+	{ "ACPI", load_vbios_acpi, true },
 	{ "PRAMIN", load_vbios_pramin, true },
 	{ "PROM", load_vbios_prom, false },
 	{ "PCIROM", load_vbios_pci, true },
@@ -256,6 +277,11 @@ static bool NVShadowVBIOS(struct drm_device *dev, uint8_t *data)
 struct init_tbl_entry {
 	char *name;
 	uint8_t id;
+	/* Return:
+	 *  > 0: success, length of opcode
+	 *    0: success, but abort further parsing of table (INIT_DONE etc)
+	 *  < 0: failure, table parsing will be aborted
+	 */
 	int (*handler)(struct nvbios *, uint16_t, struct init_exec *);
 };
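
The return-value contract documented above drives the parser's main loop: a positive result advances the offset by the opcode's length, zero stops parsing cleanly, and a negative value aborts with an error. Schematically — lookup_opcode() is a hypothetical stand-in for the linear itbl_entry[] scan, and this mirrors the parse_init_table() rework later in this patch:

/* Sketch of a table walker honouring the handler contract above. */
static int walk_init_table(struct nvbios *bios, unsigned int offset,
			   struct init_exec *iexec)
{
	int ret;

	while (offset < bios->length) {
		const struct init_tbl_entry *op =
			lookup_opcode(bios->data[offset]);

		if (!op)
			return -ENOENT;		/* unknown opcode */

		ret = op->handler(bios, offset, iexec);
		if (ret < 0)
			return ret;		/* handler failed */
		if (ret == 0)
			break;			/* INIT_DONE etc. */

		offset += ret;	/* skip past opcode + its data */
	}
	return 0;
}
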
 
@@ -709,6 +735,83 @@ static int dcb_entry_idx_from_crtchead(struct drm_device *dev)
 	return dcb_entry;
 }
 
+static int
+read_dcb_i2c_entry(struct drm_device *dev, int dcb_version, uint8_t *i2ctable, int index, struct dcb_i2c_entry *i2c)
+{
+	uint8_t dcb_i2c_ver = dcb_version, headerlen = 0, entry_len = 4;
+	int i2c_entries = DCB_MAX_NUM_I2C_ENTRIES;
+	int recordoffset = 0, rdofs = 1, wrofs = 0;
+	uint8_t port_type = 0;
+
+	if (!i2ctable)
+		return -EINVAL;
+
+	if (dcb_version >= 0x30) {
+		if (i2ctable[0] != dcb_version) /* necessary? */
+			NV_WARN(dev,
+				"DCB I2C table version mismatch (%02X vs %02X)\n",
+				i2ctable[0], dcb_version);
+		dcb_i2c_ver = i2ctable[0];
+		headerlen = i2ctable[1];
+		if (i2ctable[2] <= DCB_MAX_NUM_I2C_ENTRIES)
+			i2c_entries = i2ctable[2];
+		else
+			NV_WARN(dev,
+				"DCB I2C table has more entries than indexable "
+				"(%d entries, max %d)\n", i2ctable[2],
+				DCB_MAX_NUM_I2C_ENTRIES);
+		entry_len = i2ctable[3];
+		/* [4] is i2c_default_indices, read in parse_dcb_table() */
+	}
+	/*
+	 * It's your own fault if you call this function on a DCB 1.1 BIOS --
+	 * the test below is for DCB 1.2
+	 */
+	if (dcb_version < 0x14) {
+		recordoffset = 2;
+		rdofs = 0;
+		wrofs = 1;
+	}
+
+	if (index == 0xf)
+		return 0;
+	if (index >= i2c_entries) {
+		NV_ERROR(dev, "DCB I2C index too big (%d >= %d)\n",
+			 index, i2ctable[2]);
+		return -ENOENT;
+	}
+	if (i2ctable[headerlen + entry_len * index + 3] == 0xff) {
+		NV_ERROR(dev, "DCB I2C entry invalid\n");
+		return -EINVAL;
+	}
+
+	if (dcb_i2c_ver >= 0x30) {
+		port_type = i2ctable[headerlen + recordoffset + 3 + entry_len * index];
+
+		/*
+	 * Fixup for chips using the same address offset for read and
+		 * write.
+		 */
+		if (port_type == 4)	/* seen on C51 */
+			rdofs = wrofs = 1;
+		if (port_type >= 5)	/* G80+ */
+			rdofs = wrofs = 0;
+	}
+
+	if (dcb_i2c_ver >= 0x40) {
+		if (port_type != 5 && port_type != 6)
+			NV_WARN(dev, "DCB I2C table has port type %d\n", port_type);
+
+		i2c->entry = ROM32(i2ctable[headerlen + recordoffset + entry_len * index]);
+	}
+
+	i2c->port_type = port_type;
+	i2c->read = i2ctable[headerlen + recordoffset + rdofs + entry_len * index];
+	i2c->write = i2ctable[headerlen + recordoffset + wrofs + entry_len * index];
+
+	return 0;
+}
+
 static struct nouveau_i2c_chan *
 init_i2c_device_find(struct drm_device *dev, int i2c_index)
 {
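
read_dcb_i2c_entry() decodes a small versioned table; reading the code above, a DCB 3.0+ header is: byte 0 version, byte 1 header length, byte 2 entry count, byte 3 entry length, with each entry holding the write address at +0, the read address at +1 and a port type at +3 (0xff there marks an unusable entry). A self-contained sketch of the entry lookup, as one reading of that layout:

/* Sketch: locate entry 'index' in a DCB >= 3.0 I2C table, following
 * the header layout decoded above. Returns NULL for invalid entries. */
#include <stdint.h>
#include <stddef.h>

static const uint8_t *dcb_i2c_entry(const uint8_t *i2ctable, int index)
{
	uint8_t headerlen = i2ctable[1];	/* [0]=version, [2]=count */
	uint8_t entry_len = i2ctable[3];
	const uint8_t *entry = i2ctable + headerlen + entry_len * index;

	return entry[3] == 0xff ? NULL : entry;	/* 0xff: invalid */
}
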
@@ -727,6 +830,20 @@ init_i2c_device_find(struct drm_device *dev, int i2c_index)
 	}
 	if (i2c_index == 0x80)	/* g80+ */
 		i2c_index = dcb->i2c_default_indices & 0xf;
+	else
+	if (i2c_index == 0x81)
+		i2c_index = (dcb->i2c_default_indices & 0xf0) >> 4;
+
+	if (i2c_index >= DCB_MAX_NUM_I2C_ENTRIES) {
+		NV_ERROR(dev, "invalid i2c_index 0x%x\n", i2c_index);
+		return NULL;
+	}
+
+	/* Make sure the i2c table entry has been parsed; it may not
+	 * have been if this bus isn't referenced by any DCB encoder
+	 */
+	read_dcb_i2c_entry(dev, dcb->version, dcb->i2c_table,
+			   i2c_index, &dcb->i2c[i2c_index]);
 
 	return nouveau_i2c_find(dev, i2c_index);
 }
@@ -818,7 +935,7 @@ init_io_restrict_prog(struct nvbios *bios, uint16_t offset,
 		NV_ERROR(bios->dev,
 			 "0x%04X: Config 0x%02X exceeds maximal bound 0x%02X\n",
 			 offset, config, count);
-		return 0;
+		return -EINVAL;
 	}
 
 	configval = ROM32(bios->data[offset + 11 + config * 4]);
@@ -920,7 +1037,7 @@ init_io_restrict_pll(struct nvbios *bios, uint16_t offset,
 		NV_ERROR(bios->dev,
 			 "0x%04X: Config 0x%02X exceeds maximal bound 0x%02X\n",
 			 offset, config, count);
-		return 0;
+		return -EINVAL;
 	}
 
 	freq = ROM16(bios->data[offset + 12 + config * 2]);
@@ -1067,6 +1184,126 @@ init_io_flag_condition(struct nvbios *bios, uint16_t offset,
 }
 
 static int
+init_dp_condition(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
+{
+	/*
+	 * INIT_DP_CONDITION   opcode: 0x3A ('')
+	 *
+	 * offset      (8 bit): opcode
+	 * offset + 1  (8 bit): "sub" opcode
+	 * offset + 2  (8 bit): unknown
+	 *
+	 */
+
+	struct bit_displayport_encoder_table *dpe = NULL;
+	struct dcb_entry *dcb = bios->display.output;
+	struct drm_device *dev = bios->dev;
+	uint8_t cond = bios->data[offset + 1];
+	int dummy;
+
+	BIOSLOG(bios, "0x%04X: subop 0x%02X\n", offset, cond);
+
+	if (!iexec->execute)
+		return 3;
+
+	dpe = nouveau_bios_dp_table(dev, dcb, &dummy);
+	if (!dpe) {
+		NV_ERROR(dev, "0x%04X: INIT_3A: no encoder table!!\n", offset);
+		return -EINVAL;
+	}
+
+	switch (cond) {
+	case 0:
+	{
+		struct dcb_connector_table_entry *ent =
+			&bios->dcb.connector.entry[dcb->connector];
+
+		if (ent->type != DCB_CONNECTOR_eDP)
+			iexec->execute = false;
+	}
+		break;
+	case 1:
+	case 2:
+		if (!(dpe->unknown & cond))
+			iexec->execute = false;
+		break;
+	case 5:
+	{
+		struct nouveau_i2c_chan *auxch;
+		int ret;
+
+		auxch = nouveau_i2c_find(dev, bios->display.output->i2c_index);
+		if (!auxch)
+			return -ENODEV;
+
+		ret = nouveau_dp_auxch(auxch, 9, 0xd, &cond, 1);
+		if (ret)
+			return ret;
+
+		if (cond & 1)
+			iexec->execute = false;
+	}
+		break;
+	default:
+		NV_WARN(dev, "0x%04X: unknown INIT_3A op: %d\n", offset, cond);
+		break;
+	}
+
+	if (iexec->execute)
+		BIOSLOG(bios, "0x%04X: continuing to execute\n", offset);
+	else
+		BIOSLOG(bios, "0x%04X: skipping following commands\n", offset);
+
+	return 3;
+}
+
+static int
+init_op_3b(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
+{
+	/*
+	 * INIT_3B   opcode: 0x3B ('')
+	 *
+	 * offset      (8 bit): opcode
+	 * offset + 1  (8 bit): crtc index
+	 *
+	 */
+
+	uint8_t or = ffs(bios->display.output->or) - 1;
+	uint8_t index = bios->data[offset + 1];
+	uint8_t data;
+
+	if (!iexec->execute)
+		return 2;
+
+	data = bios_idxprt_rd(bios, 0x3d4, index);
+	bios_idxprt_wr(bios, 0x3d4, index, data & ~(1 << or));
+	return 2;
+}
+
+static int
+init_op_3c(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
+{
+	/*
+	 * INIT_3C   opcode: 0x3C ('')
+	 *
+	 * offset      (8 bit): opcode
+	 * offset + 1  (8 bit): crtc index
+	 *
+	 */
+
+	uint8_t or = ffs(bios->display.output->or) - 1;
+	uint8_t index = bios->data[offset + 1];
+	uint8_t data;
+
+	if (!iexec->execute)
+		return 2;
+
+	data = bios_idxprt_rd(bios, 0x3d4, index);
+	bios_idxprt_wr(bios, 0x3d4, index, data | (1 << or));
+	return 2;
+}
+
+static int
 init_idx_addr_latched(struct nvbios *bios, uint16_t offset,
 		      struct init_exec *iexec)
 {
@@ -1170,7 +1407,7 @@ init_io_restrict_pll2(struct nvbios *bios, uint16_t offset,
 		NV_ERROR(bios->dev,
 			 "0x%04X: Config 0x%02X exceeds maximal bound 0x%02X\n",
 			 offset, config, count);
-		return 0;
+		return -EINVAL;
 	}
 
 	freq = ROM32(bios->data[offset + 11 + config * 4]);
@@ -1231,12 +1468,11 @@ init_i2c_byte(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 	 */
 
 	uint8_t i2c_index = bios->data[offset + 1];
-	uint8_t i2c_address = bios->data[offset + 2];
+	uint8_t i2c_address = bios->data[offset + 2] >> 1;
 	uint8_t count = bios->data[offset + 3];
-	int len = 4 + count * 3;
 	struct nouveau_i2c_chan *chan;
-	struct i2c_msg msg;
-	int i;
+	int len = 4 + count * 3;
+	int ret, i;
 
 	if (!iexec->execute)
 		return len;
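
Two quiet fixes ride along with this conversion (and the matching ones in the following hunks): the VBIOS stores 8-bit bus addresses with the R/W bit in bit 0, while the Linux i2c core expects 7-bit addresses — hence the new '>> 1' — and the raw i2c_transfer() pair becomes an SMBus byte-data read-modify-write. Condensed, the RMW step looks like this (a sketch with a hypothetical helper name; addr7 must already be the shifted 7-bit address):

/* Sketch: SMBus byte-data read-modify-write as used above. */
static int smbus_rmw_byte(struct i2c_adapter *adap, u16 addr7,
			  u8 reg, u8 mask, u8 data)
{
	union i2c_smbus_data val;
	int ret;

	ret = i2c_smbus_xfer(adap, addr7, 0, I2C_SMBUS_READ, reg,
			     I2C_SMBUS_BYTE_DATA, &val);
	if (ret < 0)
		return ret;

	val.byte = (val.byte & mask) | data;	/* apply mask, then data */
	return i2c_smbus_xfer(adap, addr7, 0, I2C_SMBUS_WRITE, reg,
			      I2C_SMBUS_BYTE_DATA, &val);
}
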
@@ -1247,35 +1483,34 @@ init_i2c_byte(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 	chan = init_i2c_device_find(bios->dev, i2c_index);
 	if (!chan)
-		return 0;
+		return -ENODEV;
 
 	for (i = 0; i < count; i++) {
-		uint8_t i2c_reg = bios->data[offset + 4 + i * 3];
+		uint8_t reg = bios->data[offset + 4 + i * 3];
 		uint8_t mask = bios->data[offset + 5 + i * 3];
 		uint8_t data = bios->data[offset + 6 + i * 3];
-		uint8_t value;
+		union i2c_smbus_data val;
 
-		msg.addr = i2c_address;
-		msg.flags = I2C_M_RD;
-		msg.len = 1;
-		msg.buf = &value;
-		if (i2c_transfer(&chan->adapter, &msg, 1) != 1)
-			return 0;
+		ret = i2c_smbus_xfer(&chan->adapter, i2c_address, 0,
+				     I2C_SMBUS_READ, reg,
+				     I2C_SMBUS_BYTE_DATA, &val);
+		if (ret < 0)
+			return ret;
 
 		BIOSLOG(bios, "0x%04X: I2CReg: 0x%02X, Value: 0x%02X, "
 			      "Mask: 0x%02X, Data: 0x%02X\n",
-			offset, i2c_reg, value, mask, data);
+			offset, reg, val.byte, mask, data);
 
-		value = (value & mask) | data;
+		if (!bios->execute)
+			continue;
 
-		if (bios->execute) {
-			msg.addr = i2c_address;
-			msg.flags = 0;
-			msg.len = 1;
-			msg.buf = &value;
-			if (i2c_transfer(&chan->adapter, &msg, 1) != 1)
-				return 0;
-		}
+		val.byte &= mask;
+		val.byte |= data;
+		ret = i2c_smbus_xfer(&chan->adapter, i2c_address, 0,
+				     I2C_SMBUS_WRITE, reg,
+				     I2C_SMBUS_BYTE_DATA, &val);
+		if (ret < 0)
+			return ret;
 	}
 
 	return len;
@@ -1301,12 +1536,11 @@ init_zm_i2c_byte(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 	 */
 
 	uint8_t i2c_index = bios->data[offset + 1];
-	uint8_t i2c_address = bios->data[offset + 2];
+	uint8_t i2c_address = bios->data[offset + 2] >> 1;
 	uint8_t count = bios->data[offset + 3];
-	int len = 4 + count * 2;
 	struct nouveau_i2c_chan *chan;
-	struct i2c_msg msg;
-	int i;
+	int len = 4 + count * 2;
+	int ret, i;
 
 	if (!iexec->execute)
 		return len;
@@ -1317,23 +1551,25 @@ init_zm_i2c_byte(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 	chan = init_i2c_device_find(bios->dev, i2c_index);
 	if (!chan)
-		return 0;
+		return -ENODEV;
 
 	for (i = 0; i < count; i++) {
-		uint8_t i2c_reg = bios->data[offset + 4 + i * 2];
-		uint8_t data = bios->data[offset + 5 + i * 2];
+		uint8_t reg = bios->data[offset + 4 + i * 2];
+		union i2c_smbus_data val;
+
+		val.byte = bios->data[offset + 5 + i * 2];
 
 		BIOSLOG(bios, "0x%04X: I2CReg: 0x%02X, Data: 0x%02X\n",
-			offset, i2c_reg, data);
-
-		if (bios->execute) {
-			msg.addr = i2c_address;
-			msg.flags = 0;
-			msg.len = 1;
-			msg.buf = &data;
-			if (i2c_transfer(&chan->adapter, &msg, 1) != 1)
-				return 0;
-		}
+			offset, reg, val.byte);
+
+		if (!bios->execute)
+			continue;
+
+		ret = i2c_smbus_xfer(&chan->adapter, i2c_address, 0,
+				     I2C_SMBUS_WRITE, reg,
+				     I2C_SMBUS_BYTE_DATA, &val);
+		if (ret < 0)
+			return ret;
 	}
 
 	return len;
@@ -1357,7 +1593,7 @@ init_zm_i2c(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 	 */
 
 	uint8_t i2c_index = bios->data[offset + 1];
-	uint8_t i2c_address = bios->data[offset + 2];
+	uint8_t i2c_address = bios->data[offset + 2] >> 1;
 	uint8_t count = bios->data[offset + 3];
 	int len = 4 + count;
 	struct nouveau_i2c_chan *chan;
@@ -1374,7 +1610,7 @@ init_zm_i2c(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 	chan = init_i2c_device_find(bios->dev, i2c_index);
 	if (!chan)
-		return 0;
+		return -ENODEV;
 
 	for (i = 0; i < count; i++) {
 		data[i] = bios->data[offset + 4 + i];
@@ -1388,7 +1624,7 @@ init_zm_i2c(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 		msg.len = count;
 		msg.buf = data;
 		if (i2c_transfer(&chan->adapter, &msg, 1) != 1)
-			return 0;
+			return -EIO;
 	}
 
 	return len;
@@ -1427,7 +1663,7 @@ init_tmds(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 	reg = get_tmds_index_reg(bios->dev, mlv);
 	if (!reg)
-		return 0;
+		return -EINVAL;
 
 	bios_wr32(bios, reg,
 		  tmdsaddr | NV_PRAMDAC_FP_TMDS_CONTROL_WRITE_DISABLE);
@@ -1471,7 +1707,7 @@ init_zm_tmds_group(struct nvbios *bios, uint16_t offset,
 
 	reg = get_tmds_index_reg(bios->dev, mlv);
 	if (!reg)
-		return 0;
+		return -EINVAL;
 
 	for (i = 0; i < count; i++) {
 		uint8_t tmdsaddr = bios->data[offset + 3 + i * 2];
@@ -1946,7 +2182,7 @@ init_configure_mem(struct nvbios *bios, uint16_t offset,
 	uint32_t reg, data;
 
 	if (bios->major_version > 2)
-		return 0;
+		return -ENODEV;
 
 	bios_idxprt_wr(bios, NV_VIO_SRX, NV_VIO_SR_CLOCK_INDEX, bios_idxprt_rd(
 		       bios, NV_VIO_SRX, NV_VIO_SR_CLOCK_INDEX) | 0x20);
@@ -2001,7 +2237,7 @@ init_configure_clk(struct nvbios *bios, uint16_t offset,
 	int clock;
 
 	if (bios->major_version > 2)
-		return 0;
+		return -ENODEV;
 
 	clock = ROM16(bios->data[meminitoffs + 4]) * 10;
 	setPLL(bios, NV_PRAMDAC_NVPLL_COEFF, clock);
@@ -2034,7 +2270,7 @@ init_configure_preinit(struct nvbios *bios, uint16_t offset,
 	uint8_t cr3c = ((straps << 2) & 0xf0) | (straps & (1 << 6));
 
 	if (bios->major_version > 2)
-		return 0;
+		return -ENODEV;
 
 	bios_idxprt_wr(bios, NV_CIO_CRX__COLOR,
 			     NV_CIO_CRE_SCRATCH4__INDEX, cr3c);
@@ -2591,7 +2827,10 @@ init_gpio(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 		BIOSLOG(bios, "0x%04X: Entry: 0x%08X\n", offset, gpio->entry);
 
-		nv50_gpio_set(bios->dev, gpio->tag, gpio->state_default);
+		BIOSLOG(bios, "0x%04X: set gpio 0x%02x, state %d\n",
+			offset, gpio->tag, gpio->state_default);
+		if (bios->execute)
+			nv50_gpio_set(bios->dev, gpio->tag, gpio->state_default);
 
 		/* The NVIDIA binary driver doesn't appear to actually do
 		 * any of this, my VBIOS does however.
@@ -2656,7 +2895,7 @@ init_ram_restrict_zm_reg_group(struct nvbios *bios, uint16_t offset,
 		NV_ERROR(bios->dev,
 			 "0x%04X: Zero block length - has the M table "
 			 "been parsed?\n", offset);
-		return 0;
+		return -EINVAL;
 	}
 
 	strap_ramcfg = (bios_rd32(bios, NV_PEXTDEV_BOOT_0) >> 2) & 0xf;
@@ -2840,14 +3079,14 @@ init_auxch(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 	if (!bios->display.output) {
 		NV_ERROR(dev, "INIT_AUXCH: no active output\n");
-		return 0;
+		return -EINVAL;
 	}
 
 	auxch = init_i2c_device_find(dev, bios->display.output->i2c_index);
 	if (!auxch) {
 		NV_ERROR(dev, "INIT_AUXCH: couldn't get auxch %d\n",
 			 bios->display.output->i2c_index);
-		return 0;
+		return -ENODEV;
 	}
 
 	if (!iexec->execute)
@@ -2860,7 +3099,7 @@ init_auxch(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 		ret = nouveau_dp_auxch(auxch, 9, addr, &data, 1);
 		if (ret) {
 			NV_ERROR(dev, "INIT_AUXCH: rd auxch fail %d\n", ret);
-			return 0;
+			return ret;
 		}
 
 		data &= bios->data[offset + 0];
@@ -2869,7 +3108,7 @@ init_auxch(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 		ret = nouveau_dp_auxch(auxch, 8, addr, &data, 1);
 		if (ret) {
 			NV_ERROR(dev, "INIT_AUXCH: wr auxch fail %d\n", ret);
-			return 0;
+			return ret;
 		}
 	}
 
@@ -2899,14 +3138,14 @@ init_zm_auxch(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 
 	if (!bios->display.output) {
 		NV_ERROR(dev, "INIT_ZM_AUXCH: no active output\n");
-		return 0;
+		return -EINVAL;
 	}
 
 	auxch = init_i2c_device_find(dev, bios->display.output->i2c_index);
 	if (!auxch) {
 		NV_ERROR(dev, "INIT_ZM_AUXCH: couldn't get auxch %d\n",
 			 bios->display.output->i2c_index);
-		return 0;
+		return -ENODEV;
 	}
 
 	if (!iexec->execute)
@@ -2917,7 +3156,7 @@ init_zm_auxch(struct nvbios *bios, uint16_t offset, struct init_exec *iexec)
 		ret = nouveau_dp_auxch(auxch, 8, addr, &bios->data[offset], 1);
 		if (ret) {
 			NV_ERROR(dev, "INIT_ZM_AUXCH: wr auxch fail %d\n", ret);
-			return 0;
+			return ret;
 		}
 	}
 
@@ -2934,6 +3173,9 @@ static struct init_tbl_entry itbl_entry[] = {
 	{ "INIT_COPY"                         , 0x37, init_copy                       },
 	{ "INIT_NOT"                          , 0x38, init_not                        },
 	{ "INIT_IO_FLAG_CONDITION"            , 0x39, init_io_flag_condition          },
+	{ "INIT_DP_CONDITION"                 , 0x3A, init_dp_condition               },
+	{ "INIT_OP_3B"                        , 0x3B, init_op_3b                      },
+	{ "INIT_OP_3C"                        , 0x3C, init_op_3c                      },
 	{ "INIT_INDEX_ADDRESS_LATCHED"        , 0x49, init_idx_addr_latched           },
 	{ "INIT_IO_RESTRICT_PLL2"             , 0x4A, init_io_restrict_pll2           },
 	{ "INIT_PLL2"                         , 0x4B, init_pll2                       },
@@ -3001,7 +3243,7 @@ parse_init_table(struct nvbios *bios, unsigned int offset,
 	 * is changed back to EXECUTE.
 	 */
 
-	int count = 0, i, res;
+	int count = 0, i, ret;
 	uint8_t id;
 
 	/*
@@ -3016,26 +3258,33 @@ parse_init_table(struct nvbios *bios, unsigned int offset,
 		for (i = 0; itbl_entry[i].name && (itbl_entry[i].id != id); i++)
 			;
 
-		if (itbl_entry[i].name) {
-			BIOSLOG(bios, "0x%04X: [ (0x%02X) - %s ]\n",
-				offset, itbl_entry[i].id, itbl_entry[i].name);
-
-			/* execute eventual command handler */
-			res = (*itbl_entry[i].handler)(bios, offset, iexec);
-			if (!res)
-				break;
-			/*
-			 * Add the offset of the current command including all data
-			 * of that command. The offset will then be pointing on the
-			 * next op code.
-			 */
-			offset += res;
-		} else {
+		if (!itbl_entry[i].name) {
 			NV_ERROR(bios->dev,
 				 "0x%04X: Init table command not found: "
 				 "0x%02X\n", offset, id);
 			return -ENOENT;
 		}
+
+		BIOSLOG(bios, "0x%04X: [ (0x%02X) - %s ]\n", offset,
+			itbl_entry[i].id, itbl_entry[i].name);
+
+		/* execute eventual command handler */
+		ret = (*itbl_entry[i].handler)(bios, offset, iexec);
+		if (ret < 0) {
+			NV_ERROR(bios->dev, "0x%04X: Failed parsing init "
+				 "table opcode: %s %d\n", offset,
+				 itbl_entry[i].name, ret);
+		}
+
+		if (ret <= 0)
+			break;
+
+		/*
+		 * Add the offset of the current command including all data
+		 * of that command. The offset will then be pointing on the
+		 * next op code.
+		 */
+		offset += ret;
 	}
 
 	if (offset >= bios->length)
@@ -3671,7 +3920,8 @@ int nouveau_bios_parse_lvds_table(struct drm_device *dev, int pxclk, bool *dl, b
 
 static uint8_t *
 bios_output_config_match(struct drm_device *dev, struct dcb_entry *dcbent,
-			 uint16_t record, int record_len, int record_nr)
+			 uint16_t record, int record_len, int record_nr,
+			 bool match_link)
 {
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nvbios *bios = &dev_priv->vbios;
@@ -3679,12 +3929,28 @@ bios_output_config_match(struct drm_device *dev, struct dcb_entry *dcbent,
 	uint16_t table;
 	int i, v;
 
+	switch (dcbent->type) {
+	case OUTPUT_TMDS:
+	case OUTPUT_LVDS:
+	case OUTPUT_DP:
+		break;
+	default:
+		match_link = false;
+		break;
+	}
+
 	for (i = 0; i < record_nr; i++, record += record_len) {
 		table = ROM16(bios->data[record]);
 		if (!table)
 			continue;
 		entry = ROM32(bios->data[table]);
 
+		if (match_link) {
+			v = (entry & 0x00c00000) >> 22;
+			if (!(v & dcbent->sorconf.link))
+				continue;
+		}
+
 		v = (entry & 0x000f0000) >> 16;
 		if (!(v & dcbent->or))
 			continue;
@@ -3726,7 +3992,7 @@ nouveau_bios_dp_table(struct drm_device *dev, struct dcb_entry *dcbent,
 	*length = table[4];
 	return bios_output_config_match(dev, dcbent,
 					bios->display.dp_table_ptr + table[1],
-					table[2], table[3]);
+					table[2], table[3], table[0] >= 0x21);
 }
 
 int
@@ -3815,7 +4081,7 @@ nouveau_bios_run_display_table(struct drm_device *dev, struct dcb_entry *dcbent,
 			dcbent->type, dcbent->location, dcbent->or);
 	otable = bios_output_config_match(dev, dcbent, table[1] +
 					  bios->display.script_table_ptr,
-					  table[2], table[3]);
+					  table[2], table[3], table[0] >= 0x21);
 	if (!otable) {
 		NV_ERROR(dev, "Couldn't find matching output script table\n");
 		return 1;
@@ -4285,31 +4551,32 @@ int get_pll_limits(struct drm_device *dev, uint32_t limit_match, struct pll_lims
 			break;
 		}
 
-#if 0 /* for easy debugging */
-	ErrorF("pll.vco1.minfreq: %d\n", pll_lim->vco1.minfreq);
-	ErrorF("pll.vco1.maxfreq: %d\n", pll_lim->vco1.maxfreq);
-	ErrorF("pll.vco2.minfreq: %d\n", pll_lim->vco2.minfreq);
-	ErrorF("pll.vco2.maxfreq: %d\n", pll_lim->vco2.maxfreq);
-
-	ErrorF("pll.vco1.min_inputfreq: %d\n", pll_lim->vco1.min_inputfreq);
-	ErrorF("pll.vco1.max_inputfreq: %d\n", pll_lim->vco1.max_inputfreq);
-	ErrorF("pll.vco2.min_inputfreq: %d\n", pll_lim->vco2.min_inputfreq);
-	ErrorF("pll.vco2.max_inputfreq: %d\n", pll_lim->vco2.max_inputfreq);
-
-	ErrorF("pll.vco1.min_n: %d\n", pll_lim->vco1.min_n);
-	ErrorF("pll.vco1.max_n: %d\n", pll_lim->vco1.max_n);
-	ErrorF("pll.vco1.min_m: %d\n", pll_lim->vco1.min_m);
-	ErrorF("pll.vco1.max_m: %d\n", pll_lim->vco1.max_m);
-	ErrorF("pll.vco2.min_n: %d\n", pll_lim->vco2.min_n);
-	ErrorF("pll.vco2.max_n: %d\n", pll_lim->vco2.max_n);
-	ErrorF("pll.vco2.min_m: %d\n", pll_lim->vco2.min_m);
-	ErrorF("pll.vco2.max_m: %d\n", pll_lim->vco2.max_m);
-
-	ErrorF("pll.max_log2p: %d\n", pll_lim->max_log2p);
-	ErrorF("pll.log2p_bias: %d\n", pll_lim->log2p_bias);
-
-	ErrorF("pll.refclk: %d\n", pll_lim->refclk);
-#endif
+	NV_DEBUG(dev, "pll.vco1.minfreq: %d\n", pll_lim->vco1.minfreq);
+	NV_DEBUG(dev, "pll.vco1.maxfreq: %d\n", pll_lim->vco1.maxfreq);
+	NV_DEBUG(dev, "pll.vco1.min_inputfreq: %d\n", pll_lim->vco1.min_inputfreq);
+	NV_DEBUG(dev, "pll.vco1.max_inputfreq: %d\n", pll_lim->vco1.max_inputfreq);
+	NV_DEBUG(dev, "pll.vco1.min_n: %d\n", pll_lim->vco1.min_n);
+	NV_DEBUG(dev, "pll.vco1.max_n: %d\n", pll_lim->vco1.max_n);
+	NV_DEBUG(dev, "pll.vco1.min_m: %d\n", pll_lim->vco1.min_m);
+	NV_DEBUG(dev, "pll.vco1.max_m: %d\n", pll_lim->vco1.max_m);
+	if (pll_lim->vco2.maxfreq) {
+		NV_DEBUG(dev, "pll.vco2.minfreq: %d\n", pll_lim->vco2.minfreq);
+		NV_DEBUG(dev, "pll.vco2.maxfreq: %d\n", pll_lim->vco2.maxfreq);
+		NV_DEBUG(dev, "pll.vco2.min_inputfreq: %d\n", pll_lim->vco2.min_inputfreq);
+		NV_DEBUG(dev, "pll.vco2.max_inputfreq: %d\n", pll_lim->vco2.max_inputfreq);
+		NV_DEBUG(dev, "pll.vco2.min_n: %d\n", pll_lim->vco2.min_n);
+		NV_DEBUG(dev, "pll.vco2.max_n: %d\n", pll_lim->vco2.max_n);
+		NV_DEBUG(dev, "pll.vco2.min_m: %d\n", pll_lim->vco2.min_m);
+		NV_DEBUG(dev, "pll.vco2.max_m: %d\n", pll_lim->vco2.max_m);
+	}
+	if (!pll_lim->max_p) {
+		NV_DEBUG(dev, "pll.max_log2p: %d\n", pll_lim->max_log2p);
+		NV_DEBUG(dev, "pll.log2p_bias: %d\n", pll_lim->log2p_bias);
+	} else {
+		NV_DEBUG(dev, "pll.min_p: %d\n", pll_lim->min_p);
+		NV_DEBUG(dev, "pll.max_p: %d\n", pll_lim->max_p);
+	}
+	NV_DEBUG(dev, "pll.refclk: %d\n", pll_lim->refclk);
 
 	return 0;
 }
@@ -4953,79 +5220,6 @@ static uint16_t findstr(uint8_t *data, int n, const uint8_t *str, int len)
 	return 0;
 }
 
-static int
-read_dcb_i2c_entry(struct drm_device *dev, int dcb_version, uint8_t *i2ctable, int index, struct dcb_i2c_entry *i2c)
-{
-	uint8_t dcb_i2c_ver = dcb_version, headerlen = 0, entry_len = 4;
-	int i2c_entries = DCB_MAX_NUM_I2C_ENTRIES;
-	int recordoffset = 0, rdofs = 1, wrofs = 0;
-	uint8_t port_type = 0;
-
-	if (!i2ctable)
-		return -EINVAL;
-
-	if (dcb_version >= 0x30) {
-		if (i2ctable[0] != dcb_version) /* necessary? */
-			NV_WARN(dev,
-				"DCB I2C table version mismatch (%02X vs %02X)\n",
-				i2ctable[0], dcb_version);
-		dcb_i2c_ver = i2ctable[0];
-		headerlen = i2ctable[1];
-		if (i2ctable[2] <= DCB_MAX_NUM_I2C_ENTRIES)
-			i2c_entries = i2ctable[2];
-		else
-			NV_WARN(dev,
-				"DCB I2C table has more entries than indexable "
-				"(%d entries, max %d)\n", i2ctable[2],
-				DCB_MAX_NUM_I2C_ENTRIES);
-		entry_len = i2ctable[3];
-		/* [4] is i2c_default_indices, read in parse_dcb_table() */
-	}
-	/*
-	 * It's your own fault if you call this function on a DCB 1.1 BIOS --
-	 * the test below is for DCB 1.2
-	 */
-	if (dcb_version < 0x14) {
-		recordoffset = 2;
-		rdofs = 0;
-		wrofs = 1;
-	}
-
-	if (index == 0xf)
-		return 0;
-	if (index >= i2c_entries) {
-		NV_ERROR(dev, "DCB I2C index too big (%d >= %d)\n",
-			 index, i2ctable[2]);
-		return -ENOENT;
-	}
-	if (i2ctable[headerlen + entry_len * index + 3] == 0xff) {
-		NV_ERROR(dev, "DCB I2C entry invalid\n");
-		return -EINVAL;
-	}
-
-	if (dcb_i2c_ver >= 0x30) {
-		port_type = i2ctable[headerlen + recordoffset + 3 + entry_len * index];
-
-		/*
-		 * Fixup for chips using same address offset for read and
-		 * write.
-		 */
-		if (port_type == 4)	/* seen on C51 */
-			rdofs = wrofs = 1;
-		if (port_type >= 5)	/* G80+ */
-			rdofs = wrofs = 0;
-	}
-
-	if (dcb_i2c_ver >= 0x40 && port_type != 5 && port_type != 6)
-		NV_WARN(dev, "DCB I2C table has port type %d\n", port_type);
-
-	i2c->port_type = port_type;
-	i2c->read = i2ctable[headerlen + recordoffset + rdofs + entry_len * index];
-	i2c->write = i2ctable[headerlen + recordoffset + wrofs + entry_len * index];
-
-	return 0;
-}
-
 static struct dcb_gpio_entry *
 new_gpio_entry(struct nvbios *bios)
 {
@@ -5379,12 +5573,6 @@ parse_dcb20_entry(struct drm_device *dev, struct dcb_table *dcb,
 	entry->bus = (conn >> 16) & 0xf;
 	entry->location = (conn >> 20) & 0x3;
 	entry->or = (conn >> 24) & 0xf;
-	/*
-	 * Normal entries consist of a single bit, but dual link has the
-	 * next most significant bit set too
-	 */
-	entry->duallink_possible =
-			((1 << (ffs(entry->or) - 1)) * 3 == entry->or);
 
 	switch (entry->type) {
 	case OUTPUT_ANALOG:
@@ -5468,6 +5656,16 @@ parse_dcb20_entry(struct drm_device *dev, struct dcb_table *dcb,
 		break;
 	}
 
+	if (dcb->version < 0x40) {
+		/* Normal entries consist of a single bit, but dual link has
+		 * the next most significant bit set too
+		 */
+		entry->duallink_possible =
+			((1 << (ffs(entry->or) - 1)) * 3 == entry->or);
+	} else {
+		entry->duallink_possible = (entry->sorconf.link == 3);
+	}
+
 	/* unsure what DCB version introduces this, 3.0? */
 	if (conf & 0x100000)
 		entry->i2c_upper_default = true;
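
The pre-4.0 branch packs a neat bit trick: ffs(or) - 1 is the index of the lowest set bit, multiplying that bit by 3 sets it together with its upper neighbour, and the equality therefore holds exactly when 'or' consists of two adjacent bits starting at the lowest one. Worked through on a few values:

/* Worked examples of the dual-link test used above. */
#include <assert.h>
#include <strings.h>	/* ffs() */

static int duallink_possible(unsigned or)
{
	return (1 << (ffs(or) - 1)) * 3 == or;
}

int main(void)
{
	assert(duallink_possible(0x3));	 /* bits 0+1: dual link */
	assert(duallink_possible(0xc));	 /* bits 2+3: dual link */
	assert(!duallink_possible(0x1)); /* single bit */
	assert(!duallink_possible(0x5)); /* non-adjacent bits */
	return 0;
}
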
@@ -6051,6 +6249,30 @@ nouveau_bios_i2c_devices_takedown(struct drm_device *dev)
 		nouveau_i2c_fini(dev, entry);
 }
 
+static bool
+nouveau_bios_posted(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	bool was_locked;
+	unsigned htotal;
+
+	if (dev_priv->chipset >= NV_50) {
+		if (NVReadVgaCrtc(dev, 0, 0x00) == 0 &&
+		    NVReadVgaCrtc(dev, 0, 0x1a) == 0)
+			return false;
+		return true;
+	}
+
+	was_locked = NVLockVgaCrtcs(dev, false);
+	htotal  = NVReadVgaCrtc(dev, 0, 0x06);
+	htotal |= (NVReadVgaCrtc(dev, 0, 0x07) & 0x01) << 8;
+	htotal |= (NVReadVgaCrtc(dev, 0, 0x07) & 0x20) << 4;
+	htotal |= (NVReadVgaCrtc(dev, 0, 0x25) & 0x01) << 10;
+	htotal |= (NVReadVgaCrtc(dev, 0, 0x41) & 0x01) << 11;
+	NVLockVgaCrtcs(dev, was_locked);
+	return (htotal != 0);
+}
+
 int
 nouveau_bios_init(struct drm_device *dev)
 {
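
nouveau_bios_posted() decides whether the card was POSTed by checking whether CRTC timing was ever programmed; on pre-NV50 it reassembles the extended horizontal total from its scattered VGA CRTC bits (CR06 holds bits 7:0, CR07 bits 0 and 5 supply bits 8 and 9, CR25 bit 0 supplies bit 10, CR41 bit 0 supplies bit 11). A worked assembly of the same shifts with made-up register values:

/* Sketch: reassemble the extended htotal exactly as above. */
#include <assert.h>

static unsigned htotal_from_crtc(unsigned cr06, unsigned cr07,
				 unsigned cr25, unsigned cr41)
{
	unsigned htotal = cr06;		/* bits 7:0 */

	htotal |= (cr07 & 0x01) << 8;	/* bit 8 */
	htotal |= (cr07 & 0x20) << 4;	/* bit 5 -> bit 9 */
	htotal |= (cr25 & 0x01) << 10;	/* bit 10 */
	htotal |= (cr41 & 0x01) << 11;	/* bit 11 */
	return htotal;
}

int main(void)
{
	/* 0x21 in CR07 contributes bits 8 and 9: 0x95 | 0x300 = 0x395 */
	assert(htotal_from_crtc(0x95, 0x21, 0x00, 0x00) == 0x395);
	/* all-zero timing registers read back as "not POSTed" */
	assert(htotal_from_crtc(0, 0, 0, 0) == 0);
	return 0;
}
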
@@ -6085,11 +6307,9 @@ nouveau_bios_init(struct drm_device *dev)
 	bios->execute = false;
 
 	/* ... unless card isn't POSTed already */
-	if (dev_priv->card_type >= NV_10 &&
-	    NVReadVgaCrtc(dev, 0, 0x00) == 0 &&
-	    NVReadVgaCrtc(dev, 0, 0x1a) == 0) {
+	if (!nouveau_bios_posted(dev)) {
 		NV_INFO(dev, "Adaptor not initialised\n");
-		if (dev_priv->card_type < NV_50) {
+		if (dev_priv->card_type < NV_40) {
 			NV_ERROR(dev, "Unable to POST this chipset\n");
 			return -ENODEV;
 		}
diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.h b/drivers/gpu/drm/nouveau/nouveau_bios.h
index c0d7b0a..adf4ec2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bios.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bios.h
@@ -35,6 +35,7 @@
 #define DCB_LOC_ON_CHIP 0
 
 struct dcb_i2c_entry {
+	uint32_t entry;
 	uint8_t port_type;
 	uint8_t read, write;
 	struct nouveau_i2c_chan *chan;
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 957d176..6f3c195 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -160,11 +160,11 @@ nouveau_bo_new(struct drm_device *dev, struct nouveau_channel *chan,
 	ret = ttm_bo_init(&dev_priv->ttm.bdev, &nvbo->bo, size,
 			  ttm_bo_type_device, &nvbo->placement, align, 0,
 			  false, NULL, size, nouveau_bo_del_ttm);
-	nvbo->channel = NULL;
 	if (ret) {
 		/* ttm will call nouveau_bo_del_ttm if it fails.. */
 		return ret;
 	}
+	nvbo->channel = NULL;
 
 	spin_lock(&dev_priv->ttm.bo_list_lock);
 	list_add_tail(&nvbo->head, &dev_priv->ttm.bo_list);
@@ -225,7 +225,7 @@ nouveau_bo_pin(struct nouveau_bo *nvbo, uint32_t memtype)
 
 	nouveau_bo_placement_set(nvbo, memtype, 0);
 
-	ret = ttm_bo_validate(bo, &nvbo->placement, false, false);
+	ret = ttm_bo_validate(bo, &nvbo->placement, false, false, false);
 	if (ret == 0) {
 		switch (bo->mem.mem_type) {
 		case TTM_PL_VRAM:
@@ -261,7 +261,7 @@ nouveau_bo_unpin(struct nouveau_bo *nvbo)
 
 	nouveau_bo_placement_set(nvbo, bo->mem.placement, 0);
 
-	ret = ttm_bo_validate(bo, &nvbo->placement, false, false);
+	ret = ttm_bo_validate(bo, &nvbo->placement, false, false, false);
 	if (ret == 0) {
 		switch (bo->mem.mem_type) {
 		case TTM_PL_VRAM:
@@ -391,25 +391,16 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 		break;
 	case TTM_PL_VRAM:
 		man->flags = TTM_MEMTYPE_FLAG_FIXED |
-			     TTM_MEMTYPE_FLAG_MAPPABLE |
-			     TTM_MEMTYPE_FLAG_NEEDS_IOREMAP;
+			     TTM_MEMTYPE_FLAG_MAPPABLE;
 		man->available_caching = TTM_PL_FLAG_UNCACHED |
 					 TTM_PL_FLAG_WC;
 		man->default_caching = TTM_PL_FLAG_WC;
-
-		man->io_addr = NULL;
-		man->io_offset = drm_get_resource_start(dev, 1);
-		man->io_size = drm_get_resource_len(dev, 1);
-		if (man->io_size > dev_priv->vram_size)
-			man->io_size = dev_priv->vram_size;
-
 		man->gpu_offset = dev_priv->vm_vram_base;
 		break;
 	case TTM_PL_TT:
 		switch (dev_priv->gart_info.type) {
 		case NOUVEAU_GART_AGP:
-			man->flags = TTM_MEMTYPE_FLAG_MAPPABLE |
-				     TTM_MEMTYPE_FLAG_NEEDS_IOREMAP;
+			man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
 			man->available_caching = TTM_PL_FLAG_UNCACHED;
 			man->default_caching = TTM_PL_FLAG_UNCACHED;
 			break;
@@ -424,10 +415,6 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 				 dev_priv->gart_info.type);
 			return -EINVAL;
 		}
-
-		man->io_offset  = dev_priv->gart_info.aper_base;
-		man->io_size    = dev_priv->gart_info.aper_size;
-		man->io_addr   = NULL;
 		man->gpu_offset = dev_priv->vm_gart_base;
 		break;
 	default:
@@ -462,7 +449,8 @@ nouveau_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
 
 static int
 nouveau_bo_move_accel_cleanup(struct nouveau_channel *chan,
-			      struct nouveau_bo *nvbo, bool evict, bool no_wait,
+			      struct nouveau_bo *nvbo, bool evict,
+			      bool no_wait_reserve, bool no_wait_gpu,
 			      struct ttm_mem_reg *new_mem)
 {
 	struct nouveau_fence *fence = NULL;
@@ -473,7 +461,7 @@ nouveau_bo_move_accel_cleanup(struct nouveau_channel *chan,
 		return ret;
 
 	ret = ttm_bo_move_accel_cleanup(&nvbo->bo, fence, NULL,
-					evict, no_wait, new_mem);
+					evict, no_wait_reserve, no_wait_gpu, new_mem);
 	if (nvbo->channel && nvbo->channel != chan)
 		ret = nouveau_fence_wait(fence, NULL, false, false);
 	nouveau_fence_unref((void *)&fence);
@@ -497,7 +485,8 @@ nouveau_bo_mem_ctxdma(struct nouveau_bo *nvbo, struct nouveau_channel *chan,
 
 static int
 nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict, bool intr,
-		     int no_wait, struct ttm_mem_reg *new_mem)
+		     bool no_wait_reserve, bool no_wait_gpu,
+		     struct ttm_mem_reg *new_mem)
 {
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
 	struct drm_nouveau_private *dev_priv = nouveau_bdev(bo->bdev);
@@ -575,12 +564,13 @@ nouveau_bo_move_m2mf(struct ttm_buffer_object *bo, int evict, bool intr,
 		dst_offset += (PAGE_SIZE * line_count);
 	}
 
-	return nouveau_bo_move_accel_cleanup(chan, nvbo, evict, no_wait, new_mem);
+	return nouveau_bo_move_accel_cleanup(chan, nvbo, evict, no_wait_reserve, no_wait_gpu, new_mem);
 }
 
 static int
 nouveau_bo_move_flipd(struct ttm_buffer_object *bo, bool evict, bool intr,
-		      bool no_wait, struct ttm_mem_reg *new_mem)
+		      bool no_wait_reserve, bool no_wait_gpu,
+		      struct ttm_mem_reg *new_mem)
 {
 	u32 placement_memtype = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING;
 	struct ttm_placement placement;
@@ -593,7 +583,7 @@ nouveau_bo_move_flipd(struct ttm_buffer_object *bo, bool evict, bool intr,
 
 	tmp_mem = *new_mem;
 	tmp_mem.mm_node = NULL;
-	ret = ttm_bo_mem_space(bo, &placement, &tmp_mem, intr, no_wait);
+	ret = ttm_bo_mem_space(bo, &placement, &tmp_mem, intr, no_wait_reserve, no_wait_gpu);
 	if (ret)
 		return ret;
 
@@ -601,11 +591,11 @@ nouveau_bo_move_flipd(struct ttm_buffer_object *bo, bool evict, bool intr,
 	if (ret)
 		goto out;
 
-	ret = nouveau_bo_move_m2mf(bo, true, intr, no_wait, &tmp_mem);
+	ret = nouveau_bo_move_m2mf(bo, true, intr, no_wait_reserve, no_wait_gpu, &tmp_mem);
 	if (ret)
 		goto out;
 
-	ret = ttm_bo_move_ttm(bo, evict, no_wait, new_mem);
+	ret = ttm_bo_move_ttm(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
 out:
 	if (tmp_mem.mm_node) {
 		spin_lock(&bo->bdev->glob->lru_lock);
@@ -618,7 +608,8 @@ out:
 
 static int
 nouveau_bo_move_flips(struct ttm_buffer_object *bo, bool evict, bool intr,
-		      bool no_wait, struct ttm_mem_reg *new_mem)
+		      bool no_wait_reserve, bool no_wait_gpu,
+		      struct ttm_mem_reg *new_mem)
 {
 	u32 placement_memtype = TTM_PL_FLAG_TT | TTM_PL_MASK_CACHING;
 	struct ttm_placement placement;
@@ -631,15 +622,15 @@ nouveau_bo_move_flips(struct ttm_buffer_object *bo, bool evict, bool intr,
 
 	tmp_mem = *new_mem;
 	tmp_mem.mm_node = NULL;
-	ret = ttm_bo_mem_space(bo, &placement, &tmp_mem, intr, no_wait);
+	ret = ttm_bo_mem_space(bo, &placement, &tmp_mem, intr, no_wait_reserve, no_wait_gpu);
 	if (ret)
 		return ret;
 
-	ret = ttm_bo_move_ttm(bo, evict, no_wait, &tmp_mem);
+	ret = ttm_bo_move_ttm(bo, evict, no_wait_reserve, no_wait_gpu, &tmp_mem);
 	if (ret)
 		goto out;
 
-	ret = nouveau_bo_move_m2mf(bo, evict, intr, no_wait, new_mem);
+	ret = nouveau_bo_move_m2mf(bo, evict, intr, no_wait_reserve, no_wait_gpu, new_mem);
 	if (ret)
 		goto out;
 
@@ -706,7 +697,8 @@ nouveau_bo_vm_cleanup(struct ttm_buffer_object *bo,
 
 static int
 nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr,
-		bool no_wait, struct ttm_mem_reg *new_mem)
+		bool no_wait_reserve, bool no_wait_gpu,
+		struct ttm_mem_reg *new_mem)
 {
 	struct drm_nouveau_private *dev_priv = nouveau_bdev(bo->bdev);
 	struct nouveau_bo *nvbo = nouveau_bo(bo);
@@ -721,7 +713,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr,
 	/* Software copy if the card isn't up and running yet. */
 	if (dev_priv->init_state != NOUVEAU_CARD_INIT_DONE ||
 	    !dev_priv->channel) {
-		ret = ttm_bo_move_memcpy(bo, evict, no_wait, new_mem);
+		ret = ttm_bo_move_memcpy(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
 		goto out;
 	}
 
@@ -735,17 +727,17 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict, bool intr,
 
 	/* Hardware assisted copy. */
 	if (new_mem->mem_type == TTM_PL_SYSTEM)
-		ret = nouveau_bo_move_flipd(bo, evict, intr, no_wait, new_mem);
+		ret = nouveau_bo_move_flipd(bo, evict, intr, no_wait_reserve, no_wait_gpu, new_mem);
 	else if (old_mem->mem_type == TTM_PL_SYSTEM)
-		ret = nouveau_bo_move_flips(bo, evict, intr, no_wait, new_mem);
+		ret = nouveau_bo_move_flips(bo, evict, intr, no_wait_reserve, no_wait_gpu, new_mem);
 	else
-		ret = nouveau_bo_move_m2mf(bo, evict, intr, no_wait, new_mem);
+		ret = nouveau_bo_move_m2mf(bo, evict, intr, no_wait_reserve, no_wait_gpu, new_mem);
 
 	if (!ret)
 		goto out;
 
 	/* Fallback to software copy. */
-	ret = ttm_bo_move_memcpy(bo, evict, no_wait, new_mem);
+	ret = ttm_bo_move_memcpy(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
 
 out:
 	if (ret)
@@ -762,6 +754,55 @@ nouveau_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
 	return 0;
 }
 
+static int
+nouveau_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
+	struct drm_nouveau_private *dev_priv = nouveau_bdev(bdev);
+	struct drm_device *dev = dev_priv->dev;
+
+	mem->bus.addr = NULL;
+	mem->bus.offset = 0;
+	mem->bus.size = mem->num_pages << PAGE_SHIFT;
+	mem->bus.base = 0;
+	mem->bus.is_iomem = false;
+	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
+		return -EINVAL;
+	switch (mem->mem_type) {
+	case TTM_PL_SYSTEM:
+		/* System memory */
+		return 0;
+	case TTM_PL_TT:
+#if __OS_HAS_AGP
+		if (dev_priv->gart_info.type == NOUVEAU_GART_AGP) {
+			mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
+			mem->bus.base = dev_priv->gart_info.aper_base;
+			mem->bus.is_iomem = true;
+		}
+#endif
+		break;
+	case TTM_PL_VRAM:
+		mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
+		mem->bus.base = drm_get_resource_start(dev, 1);
+		mem->bus.is_iomem = true;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static void
+nouveau_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+}
+
+static int
+nouveau_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
+{
+	return 0;
+}
+
 struct ttm_bo_driver nouveau_bo_driver = {
 	.create_ttm_backend_entry = nouveau_bo_create_ttm_backend_entry,
 	.invalidate_caches = nouveau_bo_invalidate_caches,
@@ -774,5 +815,8 @@ struct ttm_bo_driver nouveau_bo_driver = {
 	.sync_obj_flush = nouveau_fence_flush,
 	.sync_obj_unref = nouveau_fence_unref,
 	.sync_obj_ref = nouveau_fence_ref,
+	.fault_reserve_notify = &nouveau_ttm_fault_reserve_notify,
+	.io_mem_reserve = &nouveau_ttm_io_mem_reserve,
+	.io_mem_free = &nouveau_ttm_io_mem_free,
 };
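
All of the changes to nouveau_bo.c above follow one TTM interface change: the single no_wait boolean becomes a no_wait_reserve/no_wait_gpu pair (hence the extra false in the ttm_bo_validate() calls), letting callers opt out of sleeping on buffer reservation and on GPU idling separately. A minimal sketch of the pattern under those assumptions; bo_is_reserved()/bo_is_busy() are hypothetical probes and do_move() stands in for the move helpers above:

#include <errno.h>
#include <stdbool.h>

/* Hypothetical state probes for the sketch. */
static bool bo_is_reserved(void) { return false; }
static bool bo_is_busy(void)     { return false; }

/* Sketch only: the two flags travel together through every move path. */
static int do_move(bool no_wait_reserve, bool no_wait_gpu)
{
	if (no_wait_reserve && bo_is_reserved())
		return -EBUSY;	/* don't sleep waiting for the reservation */
	if (no_wait_gpu && bo_is_busy())
		return -EBUSY;	/* don't sleep waiting for the GPU to idle */
	/* ... perform the actual move ... */
	return 0;
}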
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
index 14afe1e..149ed22 100644
--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
+++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
@@ -241,7 +241,8 @@ nouveau_connector_detect(struct drm_connector *connector)
 	if (nv_encoder && nv_connector->native_mode) {
 		unsigned status = connector_status_connected;
 
-#ifdef CONFIG_ACPI
+#if defined(CONFIG_ACPI_BUTTON) || \
+	(defined(CONFIG_ACPI_BUTTON_MODULE) && defined(MODULE))
 		if (!nouveau_ignorelid && !acpi_lid_open())
 			status = connector_status_unknown;
 #endif
@@ -431,24 +432,27 @@ nouveau_connector_set_property(struct drm_connector *connector,
 }
 
 static struct drm_display_mode *
-nouveau_connector_native_mode(struct nouveau_connector *connector)
+nouveau_connector_native_mode(struct drm_connector *connector)
 {
-	struct drm_device *dev = connector->base.dev;
+	struct drm_connector_helper_funcs *helper = connector->helper_private;
+	struct nouveau_connector *nv_connector = nouveau_connector(connector);
+	struct drm_device *dev = connector->dev;
 	struct drm_display_mode *mode, *largest = NULL;
 	int high_w = 0, high_h = 0, high_v = 0;
 
-	/* Use preferred mode if there is one.. */
-	list_for_each_entry(mode, &connector->base.probed_modes, head) {
+	list_for_each_entry(mode, &nv_connector->base.probed_modes, head) {
+		if (helper->mode_valid(connector, mode) != MODE_OK)
+			continue;
+
+		/* Use preferred mode if there is one.. */
 		if (mode->type & DRM_MODE_TYPE_PREFERRED) {
 			NV_DEBUG_KMS(dev, "native mode from preferred\n");
 			return drm_mode_duplicate(dev, mode);
 		}
-	}
 
-	/* Otherwise, take the resolution with the largest width, then height,
-	 * then vertical refresh
-	 */
-	list_for_each_entry(mode, &connector->base.probed_modes, head) {
+		/* Otherwise, take the resolution with the largest width, then
+		 * height, then vertical refresh
+		 */
 		if (mode->hdisplay < high_w)
 			continue;
 
@@ -552,7 +556,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
 	 */
 	if (!nv_connector->native_mode)
 		nv_connector->native_mode =
-			nouveau_connector_native_mode(nv_connector);
+			nouveau_connector_native_mode(connector);
 	if (ret == 0 && nv_connector->native_mode) {
 		struct drm_display_mode *mode;
 
@@ -583,9 +587,9 @@ nouveau_connector_mode_valid(struct drm_connector *connector,
 
 	switch (nv_encoder->dcb->type) {
 	case OUTPUT_LVDS:
-		BUG_ON(!nv_connector->native_mode);
-		if (mode->hdisplay > nv_connector->native_mode->hdisplay ||
-		    mode->vdisplay > nv_connector->native_mode->vdisplay)
+		if (nv_connector->native_mode &&
+		    (mode->hdisplay > nv_connector->native_mode->hdisplay ||
+		     mode->vdisplay > nv_connector->native_mode->vdisplay))
 			return MODE_PANEL;
 
 		min_clock = 0;
@@ -593,8 +597,7 @@ nouveau_connector_mode_valid(struct drm_connector *connector,
 		break;
 	case OUTPUT_TMDS:
 		if ((dev_priv->card_type >= NV_50 && !nouveau_duallink) ||
-		    (dev_priv->card_type < NV_50 &&
-		     !nv_encoder->dcb->duallink_possible))
+		    !nv_encoder->dcb->duallink_possible)
 			max_clock = 165000;
 		else
 			max_clock = 330000;
@@ -728,7 +731,7 @@ nouveau_connector_create_lvds(struct drm_device *dev,
 	if (ret == 0)
 		goto out;
 	nv_connector->detected_encoder = nv_encoder;
-	nv_connector->native_mode = nouveau_connector_native_mode(nv_connector);
+	nv_connector->native_mode = nouveau_connector_native_mode(connector);
 	list_for_each_entry_safe(mode, temp, &connector->probed_modes, head)
 		drm_mode_remove(connector, mode);
 
@@ -843,6 +846,7 @@ nouveau_connector_create(struct drm_device *dev,
 
 	switch (dcb->type) {
 	case DCB_CONNECTOR_VGA:
+		connector->polled = DRM_CONNECTOR_POLL_CONNECT;
 		if (dev_priv->card_type >= NV_50) {
 			drm_connector_attach_property(connector,
 					dev->mode_config.scaling_mode_property,
@@ -854,6 +858,17 @@ nouveau_connector_create(struct drm_device *dev,
 	case DCB_CONNECTOR_TV_3:
 		nv_connector->scaling_mode = DRM_MODE_SCALE_NONE;
 		break;
+	case DCB_CONNECTOR_DP:
+	case DCB_CONNECTOR_eDP:
+	case DCB_CONNECTOR_HDMI_0:
+	case DCB_CONNECTOR_HDMI_1:
+	case DCB_CONNECTOR_DVI_I:
+	case DCB_CONNECTOR_DVI_D:
+		if (dev_priv->card_type >= NV_50)
+			connector->polled = DRM_CONNECTOR_POLL_HPD;
+		else
+			connector->polled = DRM_CONNECTOR_POLL_CONNECT;
+		/* fall-through */
 	default:
 		nv_connector->scaling_mode = DRM_MODE_SCALE_FULLSCREEN;
 
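
The nouveau_connector_native_mode() rework above folds the two probed-mode walks into a single pass that also filters each mode through the helper's mode_valid() hook: a preferred mode wins immediately, otherwise the largest mode ordered by width, then height, then vertical refresh is kept. The same selection logic as a standalone sketch, using a hypothetical simple_mode struct in place of drm_display_mode:

#include <stddef.h>
#include <stdbool.h>

struct simple_mode {
	int w, h, vrefresh;
	bool preferred;
	bool valid;		/* result of the mode_valid() filter */
};

/* Sketch only: single-pass native-mode selection. */
static const struct simple_mode *
pick_native(const struct simple_mode *modes, size_t n)
{
	const struct simple_mode *best = NULL;
	size_t i;

	for (i = 0; i < n; i++) {
		const struct simple_mode *m = &modes[i];

		if (!m->valid)
			continue;
		if (m->preferred)
			return m;	/* preferred mode wins outright */
		if (!best ||
		    m->w > best->w ||
		    (m->w == best->w && m->h > best->h) ||
		    (m->w == best->w && m->h == best->h &&
		     m->vrefresh > best->vrefresh))
			best = m;
	}
	return best;
}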
diff --git a/drivers/gpu/drm/nouveau/nouveau_crtc.h b/drivers/gpu/drm/nouveau/nouveau_crtc.h
index 49fa7b2..cb1ce2a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_crtc.h
+++ b/drivers/gpu/drm/nouveau/nouveau_crtc.h
@@ -40,6 +40,8 @@ struct nouveau_crtc {
 	int sharpness;
 	int last_dpms;
 
+	int cursor_saved_x, cursor_saved_y;
+
 	struct {
 		int cpp;
 		bool blanked;
diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
index a251886..7933de4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
@@ -33,6 +33,8 @@
 #include "drmP.h"
 #include "nouveau_drv.h"
 
+#include <ttm/ttm_page_alloc.h>
+
 static int
 nouveau_debugfs_channel_info(struct seq_file *m, void *data)
 {
@@ -159,6 +161,7 @@ static struct drm_info_list nouveau_debugfs_list[] = {
 	{ "chipset", nouveau_debugfs_chipset_info, 0, NULL },
 	{ "memory", nouveau_debugfs_memory_info, 0, NULL },
 	{ "vbios.rom", nouveau_debugfs_vbios_image, 0, NULL },
+	{ "ttm_page_pool", ttm_page_alloc_debugfs, 0, NULL },
 };
 #define NOUVEAU_DEBUGFS_ENTRIES ARRAY_SIZE(nouveau_debugfs_list)
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index cf1c5c0..74e6b4e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -34,10 +34,6 @@ static void
 nouveau_user_framebuffer_destroy(struct drm_framebuffer *drm_fb)
 {
 	struct nouveau_framebuffer *fb = nouveau_framebuffer(drm_fb);
-	struct drm_device *dev = drm_fb->dev;
-
-	if (drm_fb->fbdev)
-		nouveau_fbcon_remove(dev, drm_fb);
 
 	if (fb->nvbo)
 		drm_gem_object_unreference_unlocked(fb->nvbo->gem);
@@ -61,27 +57,20 @@ static const struct drm_framebuffer_funcs nouveau_framebuffer_funcs = {
 	.create_handle = nouveau_user_framebuffer_create_handle,
 };
 
-struct drm_framebuffer *
-nouveau_framebuffer_create(struct drm_device *dev, struct nouveau_bo *nvbo,
-			   struct drm_mode_fb_cmd *mode_cmd)
+int
+nouveau_framebuffer_init(struct drm_device *dev, struct nouveau_framebuffer *nouveau_fb,
+			 struct drm_mode_fb_cmd *mode_cmd, struct nouveau_bo *nvbo)
 {
-	struct nouveau_framebuffer *fb;
 	int ret;
 
-	fb = kzalloc(sizeof(struct nouveau_framebuffer), GFP_KERNEL);
-	if (!fb)
-		return NULL;
-
-	ret = drm_framebuffer_init(dev, &fb->base, &nouveau_framebuffer_funcs);
+	ret = drm_framebuffer_init(dev, &nouveau_fb->base, &nouveau_framebuffer_funcs);
 	if (ret) {
-		kfree(fb);
-		return NULL;
+		return ret;
 	}
 
-	drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd);
-
-	fb->nvbo = nvbo;
-	return &fb->base;
+	drm_helper_mode_fill_fb_struct(&nouveau_fb->base, mode_cmd);
+	nouveau_fb->nvbo = nvbo;
+	return 0;
 }
 
 static struct drm_framebuffer *
@@ -89,24 +78,29 @@ nouveau_user_framebuffer_create(struct drm_device *dev,
 				struct drm_file *file_priv,
 				struct drm_mode_fb_cmd *mode_cmd)
 {
-	struct drm_framebuffer *fb;
+	struct nouveau_framebuffer *nouveau_fb;
 	struct drm_gem_object *gem;
+	int ret;
 
 	gem = drm_gem_object_lookup(dev, file_priv, mode_cmd->handle);
 	if (!gem)
 		return NULL;
 
-	fb = nouveau_framebuffer_create(dev, nouveau_gem_object(gem), mode_cmd);
-	if (!fb) {
+	nouveau_fb = kzalloc(sizeof(struct nouveau_framebuffer), GFP_KERNEL);
+	if (!nouveau_fb)
+		return NULL;
+
+	ret = nouveau_framebuffer_init(dev, nouveau_fb, mode_cmd, nouveau_gem_object(gem));
+	if (ret) {
 		drm_gem_object_unreference(gem);
 		return NULL;
 	}
 
-	return fb;
+	return &nouveau_fb->base;
 }
 
 const struct drm_mode_config_funcs nouveau_mode_config_funcs = {
 	.fb_create = nouveau_user_framebuffer_create,
-	.fb_changed = nouveau_fbcon_probe,
+	.output_poll_changed = nouveau_fbcon_output_poll_changed,
 };
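
The switch from nouveau_framebuffer_create() to nouveau_framebuffer_init() above is the usual create-to-init refactor: the caller now owns the allocation (kzalloc here, an embedded field in the fbdev code later in this patch) and the helper only initialises it, returning an errno instead of NULL. The shape of that pattern in a sketch with hypothetical names:

#include <string.h>

struct base_fb { int pitch, depth; };
struct wrapped_fb { struct base_fb base; void *bo; };

/* Sketch only: init-style helper.  The caller owns the storage, so the
 * same function serves a heap-allocated wrapper and a struct embedded
 * inside a larger object alike. */
static int wrapped_fb_init(struct wrapped_fb *fb, void *bo)
{
	memset(fb, 0, sizeof(*fb));
	fb->bo = bo;
	return 0;	/* errors propagate as codes instead of NULL */
}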
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.c b/drivers/gpu/drm/nouveau/nouveau_drv.c
index 1de974a..2737704 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.c
@@ -153,7 +153,6 @@ nouveau_pci_suspend(struct pci_dev *pdev, pm_message_t pm_state)
 	struct nouveau_fifo_engine *pfifo = &dev_priv->engine.fifo;
 	struct nouveau_channel *chan;
 	struct drm_crtc *crtc;
-	uint32_t fbdev_flags;
 	int ret, i;
 
 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
@@ -163,8 +162,7 @@ nouveau_pci_suspend(struct pci_dev *pdev, pm_message_t pm_state)
 		return 0;
 
 	NV_INFO(dev, "Disabling fbcon acceleration...\n");
-	fbdev_flags = dev_priv->fbdev_info->flags;
-	dev_priv->fbdev_info->flags |= FBINFO_HWACCEL_DISABLED;
+	nouveau_fbcon_save_disable_accel(dev);
 
 	NV_INFO(dev, "Unpinning framebuffer(s)...\n");
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
@@ -177,6 +175,13 @@ nouveau_pci_suspend(struct pci_dev *pdev, pm_message_t pm_state)
 		nouveau_bo_unpin(nouveau_fb->nvbo);
 	}
 
+	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+		struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
+
+		nouveau_bo_unmap(nv_crtc->cursor.nvbo);
+		nouveau_bo_unpin(nv_crtc->cursor.nvbo);
+	}
+
 	NV_INFO(dev, "Evicting buffers...\n");
 	ttm_bo_evict_mm(&dev_priv->ttm.bdev, TTM_PL_VRAM);
 
@@ -230,9 +235,9 @@ nouveau_pci_suspend(struct pci_dev *pdev, pm_message_t pm_state)
 	}
 
 	acquire_console_sem();
-	fb_set_suspend(dev_priv->fbdev_info, 1);
+	nouveau_fbcon_set_suspend(dev, 1);
 	release_console_sem();
-	dev_priv->fbdev_info->flags = fbdev_flags;
+	nouveau_fbcon_restore_accel(dev);
 	return 0;
 
 out_abort:
@@ -250,14 +255,12 @@ nouveau_pci_resume(struct pci_dev *pdev)
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_engine *engine = &dev_priv->engine;
 	struct drm_crtc *crtc;
-	uint32_t fbdev_flags;
 	int ret, i;
 
 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
 		return -ENODEV;
 
-	fbdev_flags = dev_priv->fbdev_info->flags;
-	dev_priv->fbdev_info->flags |= FBINFO_HWACCEL_DISABLED;
+	nouveau_fbcon_save_disable_accel(dev);
 
 	NV_INFO(dev, "We're back, enabling device...\n");
 	pci_set_power_state(pdev, PCI_D0);
@@ -318,12 +321,34 @@ nouveau_pci_resume(struct pci_dev *pdev)
 		nouveau_bo_pin(nouveau_fb->nvbo, TTM_PL_FLAG_VRAM);
 	}
 
+	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+		struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
+		int ret;
+
+		ret = nouveau_bo_pin(nv_crtc->cursor.nvbo, TTM_PL_FLAG_VRAM);
+		if (!ret)
+			ret = nouveau_bo_map(nv_crtc->cursor.nvbo);
+		if (ret)
+			NV_ERROR(dev, "Could not pin/map cursor.\n");
+	}
+
 	if (dev_priv->card_type < NV_50) {
 		nv04_display_restore(dev);
 		NVLockVgaCrtcs(dev, false);
 	} else
 		nv50_display_init(dev);
 
+	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+		struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
+
+		nv_crtc->cursor.set_offset(nv_crtc,
+					nv_crtc->cursor.nvbo->bo.offset -
+					dev_priv->vm_vram_base);
+
+		nv_crtc->cursor.set_pos(nv_crtc, nv_crtc->cursor_saved_x,
+			nv_crtc->cursor_saved_y);
+	}
+
 	/* Force CLUT to get re-loaded during modeset */
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
 		struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
@@ -332,13 +357,14 @@ nouveau_pci_resume(struct pci_dev *pdev)
 	}
 
 	acquire_console_sem();
-	fb_set_suspend(dev_priv->fbdev_info, 0);
+	nouveau_fbcon_set_suspend(dev, 0);
 	release_console_sem();
 
-	nouveau_fbcon_zfill(dev);
+	nouveau_fbcon_zfill_all(dev);
 
 	drm_helper_resume_force_mode(dev);
-	dev_priv->fbdev_info->flags = fbdev_flags;
+
+	nouveau_fbcon_restore_accel(dev);
 	return 0;
 }
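
The suspend path now unmaps and unpins each CRTC's cursor buffer, and the resume path re-pins it, re-maps it, and reprograms offset and position; the position can be replayed because nv04_cursor.c (later in this patch) makes set_pos() record its arguments in cursor_saved_x/y. A compressed sketch of that save-and-replay idea, with a hypothetical cursor struct:

/* Hypothetical cursor state for the sketch. */
struct cursor {
	int saved_x, saved_y;
};

/* Every position write records itself ... */
static void cursor_set_pos(struct cursor *c, int x, int y)
{
	c->saved_x = x;
	c->saved_y = y;
	/* ... then programs the hardware, as nv04_cursor_set_pos() does. */
}

/* ... so the resume path can replay the last position after the
 * hardware state was lost, as the loop above does via set_pos(). */
static void cursor_restore(struct cursor *c)
{
	cursor_set_pos(c, c->saved_x, c->saved_y);
}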
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
index ace630a..c697191 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -535,6 +535,7 @@ struct drm_nouveau_private {
 
 	struct fb_info *fbdev_info;
 
+	int fifo_alloc_count;
 	struct nouveau_channel *fifos[NOUVEAU_MAX_CHANNEL_NR];
 
 	struct nouveau_engine engine;
@@ -621,6 +622,9 @@ struct drm_nouveau_private {
 	struct {
 		struct dentry *channel_root;
 	} debugfs;
+
+	struct nouveau_fbdev *nfbdev;
+	struct apertures_struct *apertures;
 };
 
 static inline struct drm_nouveau_private *
@@ -847,12 +851,17 @@ extern int  nouveau_dma_init(struct nouveau_channel *);
 extern int  nouveau_dma_wait(struct nouveau_channel *, int slots, int size);
 
 /* nouveau_acpi.c */
+#define ROM_BIOS_PAGE 4096
 #if defined(CONFIG_ACPI)
 void nouveau_register_dsm_handler(void);
 void nouveau_unregister_dsm_handler(void);
+int nouveau_acpi_get_bios_chunk(uint8_t *bios, int offset, int len);
+bool nouveau_acpi_rom_supported(struct pci_dev *pdev);
 #else
 static inline void nouveau_register_dsm_handler(void) {}
 static inline void nouveau_unregister_dsm_handler(void) {}
+static inline bool nouveau_acpi_rom_supported(struct pci_dev *pdev) { return false; }
+static inline int nouveau_acpi_get_bios_chunk(uint8_t *bios, int offset, int len) { return -EINVAL; }
 #endif
 
 /* nouveau_backlight.c */
@@ -1166,6 +1175,12 @@ int nv17_gpio_set(struct drm_device *dev, enum dcb_gpio_tag tag, int state);
 int nv50_gpio_get(struct drm_device *dev, enum dcb_gpio_tag tag);
 int nv50_gpio_set(struct drm_device *dev, enum dcb_gpio_tag tag, int state);
 
+/* nv50_calc.c */
+int nv50_calc_pll(struct drm_device *, struct pll_lims *, int clk,
+		  int *N1, int *M1, int *N2, int *M2, int *P);
+int nv50_calc_pll2(struct drm_device *, struct pll_lims *,
+		   int clk, int *N, int *fN, int *M, int *P);
+
 #ifndef ioread32_native
 #ifdef __BIG_ENDIAN
 #define ioread16_native ioread16be
diff --git a/drivers/gpu/drm/nouveau/nouveau_encoder.h b/drivers/gpu/drm/nouveau/nouveau_encoder.h
index 9f28b94..e1df820 100644
--- a/drivers/gpu/drm/nouveau/nouveau_encoder.h
+++ b/drivers/gpu/drm/nouveau/nouveau_encoder.h
@@ -48,6 +48,8 @@ struct nouveau_encoder {
 	union {
 		struct {
 			int mc_unknown;
+			uint32_t unk0;
+			uint32_t unk1;
 			int dpcd_version;
 			int link_nr;
 			int link_bw;
diff --git a/drivers/gpu/drm/nouveau/nouveau_fb.h b/drivers/gpu/drm/nouveau/nouveau_fb.h
index 4a3f31a..d432134 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fb.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fb.h
@@ -40,8 +40,6 @@ nouveau_framebuffer(struct drm_framebuffer *fb)
 
 extern const struct drm_mode_config_funcs nouveau_mode_config_funcs;
 
-struct drm_framebuffer *
-nouveau_framebuffer_create(struct drm_device *, struct nouveau_bo *,
-			   struct drm_mode_fb_cmd *);
-
+int nouveau_framebuffer_init(struct drm_device *dev, struct nouveau_framebuffer *nouveau_fb,
+			     struct drm_mode_fb_cmd *mode_cmd, struct nouveau_bo *nvbo);
 #endif /* __NOUVEAU_FB_H__ */
diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
index 8e7dc1d..c9a4a0d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
@@ -52,8 +52,8 @@
 static int
 nouveau_fbcon_sync(struct fb_info *info)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 	int ret, i;
@@ -97,7 +97,6 @@ static struct fb_ops nouveau_fbcon_ops = {
 	.owner = THIS_MODULE,
 	.fb_check_var = drm_fb_helper_check_var,
 	.fb_set_par = drm_fb_helper_set_par,
-	.fb_setcolreg = drm_fb_helper_setcolreg,
 	.fb_fillrect = cfb_fillrect,
 	.fb_copyarea = cfb_copyarea,
 	.fb_imageblit = cfb_imageblit,
@@ -111,7 +110,6 @@ static struct fb_ops nv04_fbcon_ops = {
 	.owner = THIS_MODULE,
 	.fb_check_var = drm_fb_helper_check_var,
 	.fb_set_par = drm_fb_helper_set_par,
-	.fb_setcolreg = drm_fb_helper_setcolreg,
 	.fb_fillrect = nv04_fbcon_fillrect,
 	.fb_copyarea = nv04_fbcon_copyarea,
 	.fb_imageblit = nv04_fbcon_imageblit,
@@ -125,7 +123,6 @@ static struct fb_ops nv50_fbcon_ops = {
 	.owner = THIS_MODULE,
 	.fb_check_var = drm_fb_helper_check_var,
 	.fb_set_par = drm_fb_helper_set_par,
-	.fb_setcolreg = drm_fb_helper_setcolreg,
 	.fb_fillrect = nv50_fbcon_fillrect,
 	.fb_copyarea = nv50_fbcon_copyarea,
 	.fb_imageblit = nv50_fbcon_imageblit,
@@ -155,54 +152,10 @@ static void nouveau_fbcon_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
 	*blue = nv_crtc->lut.b[regno];
 }
 
-static struct drm_fb_helper_funcs nouveau_fbcon_helper_funcs = {
-	.gamma_set = nouveau_fbcon_gamma_set,
-	.gamma_get = nouveau_fbcon_gamma_get
-};
-
-#if defined(__i386__) || defined(__x86_64__)
-static bool
-nouveau_fbcon_has_vesafb_or_efifb(struct drm_device *dev)
+static void
+nouveau_fbcon_zfill(struct drm_device *dev, struct nouveau_fbdev *nfbdev)
 {
-	struct pci_dev *pdev = dev->pdev;
-	int ramin;
-
-	if (screen_info.orig_video_isVGA != VIDEO_TYPE_VLFB &&
-	    screen_info.orig_video_isVGA != VIDEO_TYPE_EFI)
-		return false;
-
-	if (screen_info.lfb_base < pci_resource_start(pdev, 1))
-		goto not_fb;
-
-	if (screen_info.lfb_base + screen_info.lfb_size >=
-	    pci_resource_start(pdev, 1) + pci_resource_len(pdev, 1))
-		goto not_fb;
-
-	return true;
-not_fb:
-	ramin = 2;
-	if (pci_resource_len(pdev, ramin) == 0) {
-		ramin = 3;
-		if (pci_resource_len(pdev, ramin) == 0)
-			return false;
-	}
-
-	if (screen_info.lfb_base < pci_resource_start(pdev, ramin))
-		return false;
-
-	if (screen_info.lfb_base + screen_info.lfb_size >=
-	    pci_resource_start(pdev, ramin) + pci_resource_len(pdev, ramin))
-		return false;
-
-	return true;
-}
-#endif
-
-void
-nouveau_fbcon_zfill(struct drm_device *dev)
-{
-	struct drm_nouveau_private *dev_priv = dev->dev_private;
-	struct fb_info *info = dev_priv->fbdev_info;
+	struct fb_info *info = nfbdev->helper.fbdev;
 	struct fb_fillrect rect;
 
 	/* Clear the entire fbcon.  The drm will program every connector
@@ -218,28 +171,27 @@ nouveau_fbcon_zfill(struct drm_device *dev)
 }
 
 static int
-nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
-		     uint32_t fb_height, uint32_t surface_width,
-		     uint32_t surface_height, uint32_t surface_depth,
-		     uint32_t surface_bpp, struct drm_framebuffer **pfb)
+nouveau_fbcon_create(struct nouveau_fbdev *nfbdev,
+		     struct drm_fb_helper_surface_size *sizes)
 {
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct fb_info *info;
-	struct nouveau_fbcon_par *par;
 	struct drm_framebuffer *fb;
 	struct nouveau_framebuffer *nouveau_fb;
 	struct nouveau_bo *nvbo;
 	struct drm_mode_fb_cmd mode_cmd;
-	struct device *device = &dev->pdev->dev;
+	struct pci_dev *pdev = dev->pdev;
+	struct device *device = &pdev->dev;
 	int size, ret;
 
-	mode_cmd.width = surface_width;
-	mode_cmd.height = surface_height;
+	mode_cmd.width = sizes->surface_width;
+	mode_cmd.height = sizes->surface_height;
 
-	mode_cmd.bpp = surface_bpp;
+	mode_cmd.bpp = sizes->surface_bpp;
 	mode_cmd.pitch = mode_cmd.width * (mode_cmd.bpp >> 3);
 	mode_cmd.pitch = roundup(mode_cmd.pitch, 256);
-	mode_cmd.depth = surface_depth;
+	mode_cmd.depth = sizes->surface_depth;
 
 	size = mode_cmd.pitch * mode_cmd.height;
 	size = roundup(size, PAGE_SIZE);
@@ -268,31 +220,28 @@ nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
 
 	mutex_lock(&dev->struct_mutex);
 
-	fb = nouveau_framebuffer_create(dev, nvbo, &mode_cmd);
-	if (!fb) {
+	info = framebuffer_alloc(0, device);
+	if (!info) {
 		ret = -ENOMEM;
-		NV_ERROR(dev, "failed to allocate fb.\n");
 		goto out_unref;
 	}
 
-	list_add(&fb->filp_head, &dev->mode_config.fb_kernel_list);
-
-	nouveau_fb = nouveau_framebuffer(fb);
-	*pfb = fb;
-
-	info = framebuffer_alloc(sizeof(struct nouveau_fbcon_par), device);
-	if (!info) {
+	ret = fb_alloc_cmap(&info->cmap, 256, 0);
+	if (ret) {
 		ret = -ENOMEM;
 		goto out_unref;
 	}
 
-	par = info->par;
-	par->helper.funcs = &nouveau_fbcon_helper_funcs;
-	par->helper.dev = dev;
-	ret = drm_fb_helper_init_crtc_count(&par->helper, 2, 4);
-	if (ret)
-		goto out_unref;
-	dev_priv->fbdev_info = info;
+	info->par = nfbdev;
+
+	nouveau_framebuffer_init(dev, &nfbdev->nouveau_fb, &mode_cmd, nvbo);
+
+	nouveau_fb = &nfbdev->nouveau_fb;
+	fb = &nouveau_fb->base;
+
+	/* setup helper */
+	nfbdev->helper.fb = fb;
+	nfbdev->helper.fbdev = info;
 
 	strcpy(info->fix.id, "nouveaufb");
 	if (nouveau_nofbaccel)
@@ -310,31 +259,17 @@ nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
 	info->screen_size = size;
 
 	drm_fb_helper_fill_fix(info, fb->pitch, fb->depth);
-	drm_fb_helper_fill_var(info, fb, fb_width, fb_height);
+	drm_fb_helper_fill_var(info, &nfbdev->helper, sizes->fb_width, sizes->fb_height);
 
 	/* FIXME: we really shouldn't expose mmio space at all */
-	info->fix.mmio_start = pci_resource_start(dev->pdev, 1);
-	info->fix.mmio_len = pci_resource_len(dev->pdev, 1);
+	info->fix.mmio_start = pci_resource_start(pdev, 1);
+	info->fix.mmio_len = pci_resource_len(pdev, 1);
 
 	/* Set aperture base/size for vesafb takeover */
-#if defined(__i386__) || defined(__x86_64__)
-	if (nouveau_fbcon_has_vesafb_or_efifb(dev)) {
-		/* Some NVIDIA VBIOS' are stupid and decide to put the
-		 * framebuffer in the middle of the PRAMIN BAR for
-		 * whatever reason.  We need to know the exact lfb_base
-		 * to get vesafb kicked off, and the only reliable way
-		 * we have left is to find out lfb_base the same way
-		 * vesafb did.
-		 */
-		info->aperture_base = screen_info.lfb_base;
-		info->aperture_size = screen_info.lfb_size;
-		if (screen_info.orig_video_isVGA == VIDEO_TYPE_VLFB)
-			info->aperture_size *= 65536;
-	} else
-#endif
-	{
-		info->aperture_base = info->fix.mmio_start;
-		info->aperture_size = info->fix.mmio_len;
+	info->apertures = dev_priv->apertures;
+	if (!info->apertures) {
+		ret = -ENOMEM;
+		goto out_unref;
 	}
 
 	info->pixmap.size = 64*1024;
@@ -343,11 +278,6 @@ nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
 	info->pixmap.flags = FB_PIXMAP_SYSTEM;
 	info->pixmap.scan_align = 1;
 
-	fb->fbdev = info;
-
-	par->nouveau_fb = nouveau_fb;
-	par->dev = dev;
-
 	if (dev_priv->channel && !nouveau_nofbaccel) {
 		switch (dev_priv->card_type) {
 		case NV_50:
@@ -361,7 +291,7 @@ nouveau_fbcon_create(struct drm_device *dev, uint32_t fb_width,
 		};
 	}
 
-	nouveau_fbcon_zfill(dev);
+	nouveau_fbcon_zfill(dev, nfbdev);
 
 	/* To allow resizing without swapping buffers */
 	NV_INFO(dev, "allocated %dx%d fb: 0x%lx, bo %p\n",
@@ -379,44 +309,129 @@ out:
 	return ret;
 }
 
-int
-nouveau_fbcon_probe(struct drm_device *dev)
+static int
+nouveau_fbcon_find_or_create_single(struct drm_fb_helper *helper,
+				    struct drm_fb_helper_surface_size *sizes)
 {
-	NV_DEBUG_KMS(dev, "\n");
+	struct nouveau_fbdev *nfbdev = (struct nouveau_fbdev *)helper;
+	int new_fb = 0;
+	int ret;
+
+	if (!helper->fb) {
+		ret = nouveau_fbcon_create(nfbdev, sizes);
+		if (ret)
+			return ret;
+		new_fb = 1;
+	}
+	return new_fb;
+}
 
-	return drm_fb_helper_single_fb_probe(dev, 32, nouveau_fbcon_create);
+void
+nouveau_fbcon_output_poll_changed(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	drm_fb_helper_hotplug_event(&dev_priv->nfbdev->helper);
 }
 
 int
-nouveau_fbcon_remove(struct drm_device *dev, struct drm_framebuffer *fb)
+nouveau_fbcon_destroy(struct drm_device *dev, struct nouveau_fbdev *nfbdev)
 {
-	struct nouveau_framebuffer *nouveau_fb = nouveau_framebuffer(fb);
+	struct nouveau_framebuffer *nouveau_fb = &nfbdev->nouveau_fb;
 	struct fb_info *info;
 
-	if (!fb)
-		return -EINVAL;
-
-	info = fb->fbdev;
-	if (info) {
-		struct nouveau_fbcon_par *par = info->par;
-
+	if (nfbdev->helper.fbdev) {
+		info = nfbdev->helper.fbdev;
 		unregister_framebuffer(info);
+		if (info->cmap.len)
+			fb_dealloc_cmap(&info->cmap);
+		framebuffer_release(info);
+	}
+
+	if (nouveau_fb->nvbo) {
 		nouveau_bo_unmap(nouveau_fb->nvbo);
 		drm_gem_object_unreference_unlocked(nouveau_fb->nvbo->gem);
 		nouveau_fb->nvbo = NULL;
-		if (par)
-			drm_fb_helper_free(&par->helper);
-		framebuffer_release(info);
 	}
-
+	drm_fb_helper_fini(&nfbdev->helper);
+	drm_framebuffer_cleanup(&nouveau_fb->base);
 	return 0;
 }
 
 void nouveau_fbcon_gpu_lockup(struct fb_info *info)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 
 	NV_ERROR(dev, "GPU lockup - switching to software fbcon\n");
 	info->flags |= FBINFO_HWACCEL_DISABLED;
 }
+
+static struct drm_fb_helper_funcs nouveau_fbcon_helper_funcs = {
+	.gamma_set = nouveau_fbcon_gamma_set,
+	.gamma_get = nouveau_fbcon_gamma_get,
+	.fb_probe = nouveau_fbcon_find_or_create_single,
+};
+
+
+int nouveau_fbcon_init(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	struct nouveau_fbdev *nfbdev;
+	int ret;
+
+	nfbdev = kzalloc(sizeof(struct nouveau_fbdev), GFP_KERNEL);
+	if (!nfbdev)
+		return -ENOMEM;
+
+	nfbdev->dev = dev;
+	dev_priv->nfbdev = nfbdev;
+	nfbdev->helper.funcs = &nouveau_fbcon_helper_funcs;
+
+	ret = drm_fb_helper_init(dev, &nfbdev->helper, 2, 4);
+	if (ret) {
+		kfree(nfbdev);
+		return ret;
+	}
+
+	drm_fb_helper_single_add_all_connectors(&nfbdev->helper);
+	drm_fb_helper_initial_config(&nfbdev->helper, 32);
+	return 0;
+}
+
+void nouveau_fbcon_fini(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+
+	if (!dev_priv->nfbdev)
+		return;
+
+	nouveau_fbcon_destroy(dev, dev_priv->nfbdev);
+	kfree(dev_priv->nfbdev);
+	dev_priv->nfbdev = NULL;
+}
+
+void nouveau_fbcon_save_disable_accel(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+
+	dev_priv->nfbdev->saved_flags = dev_priv->nfbdev->helper.fbdev->flags;
+	dev_priv->nfbdev->helper.fbdev->flags |= FBINFO_HWACCEL_DISABLED;
+}
+
+void nouveau_fbcon_restore_accel(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	dev_priv->nfbdev->helper.fbdev->flags = dev_priv->nfbdev->saved_flags;
+}
+
+void nouveau_fbcon_set_suspend(struct drm_device *dev, int state)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	fb_set_suspend(dev_priv->nfbdev->helper.fbdev, state);
+}
+
+void nouveau_fbcon_zfill_all(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	nouveau_fbcon_zfill(dev, dev_priv->nfbdev);
+}
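
One subtlety in the new helper-based flow: the .fb_probe hook, nouveau_fbcon_find_or_create_single() above, returns a tristate that the fb helper core interprets as a negative errno on failure, 0 when the existing framebuffer was reused, and 1 when a new one was created and needs registering. A minimal sketch of that contract with hypothetical names:

#include <errno.h>

struct fb_helper { void *fb; };

static char fb_storage[64];
static void *create_fb(void) { return fb_storage; }	/* hypothetical stand-in */

/* Sketch only: tristate probe contract.
 * <0 = error, 0 = existing fb reused, 1 = new fb created. */
static int probe_fb(struct fb_helper *h)
{
	if (!h->fb) {
		h->fb = create_fb();
		if (!h->fb)
			return -ENOMEM;
		return 1;	/* new fb: the core registers an fbdev for it */
	}
	return 0;
}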
diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.h b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
index f9c34e1..e7e1268 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.h
+++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.h
@@ -29,16 +29,16 @@
 
 #include "drm_fb_helper.h"
 
-struct nouveau_fbcon_par {
+#include "nouveau_fb.h"
+struct nouveau_fbdev {
 	struct drm_fb_helper helper;
+	struct nouveau_framebuffer nouveau_fb;
+	struct list_head fbdev_list;
 	struct drm_device *dev;
-	struct nouveau_framebuffer *nouveau_fb;
+	unsigned int saved_flags;
 };
 
-int nouveau_fbcon_probe(struct drm_device *dev);
-int nouveau_fbcon_remove(struct drm_device *dev, struct drm_framebuffer *fb);
 void nouveau_fbcon_restore(void);
-void nouveau_fbcon_zfill(struct drm_device *dev);
 
 void nv04_fbcon_copyarea(struct fb_info *info, const struct fb_copyarea *region);
 void nv04_fbcon_fillrect(struct fb_info *info, const struct fb_fillrect *rect);
@@ -50,5 +50,14 @@ void nv50_fbcon_imageblit(struct fb_info *info, const struct fb_image *image);
 int nv50_fbcon_accel_init(struct fb_info *info);
 
 void nouveau_fbcon_gpu_lockup(struct fb_info *info);
+
+int nouveau_fbcon_init(struct drm_device *dev);
+void nouveau_fbcon_fini(struct drm_device *dev);
+void nouveau_fbcon_set_suspend(struct drm_device *dev, int state);
+void nouveau_fbcon_zfill_all(struct drm_device *dev);
+void nouveau_fbcon_save_disable_accel(struct drm_device *dev);
+void nouveau_fbcon_restore_accel(struct drm_device *dev);
+
+void nouveau_fbcon_output_poll_changed(struct drm_device *dev);
 #endif /* __NV50_FBCON_H__ */
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 1bc0b38..69c76cf 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -57,6 +57,9 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
 	}
 
 	ttm_bo_unref(&bo);
+
+	drm_gem_object_release(gem);
+	kfree(gem);
 }
 
 int
@@ -382,7 +385,7 @@ validate_list(struct nouveau_channel *chan, struct list_head *list,
 
 		nvbo->channel = chan;
 		ret = ttm_bo_validate(&nvbo->bo, &nvbo->placement,
-				      false, false);
+				      false, false, false);
 		nvbo->channel = NULL;
 		if (unlikely(ret)) {
 			NV_ERROR(dev, "fail ttm_validate\n");
diff --git a/drivers/gpu/drm/nouveau/nouveau_grctx.c b/drivers/gpu/drm/nouveau/nouveau_grctx.c
index 32f0e49..f731c5f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_grctx.c
+++ b/drivers/gpu/drm/nouveau/nouveau_grctx.c
@@ -68,13 +68,12 @@ nouveau_grctx_prog_load(struct drm_device *dev)
 			return ret;
 		}
 
-		pgraph->ctxprog = kmalloc(fw->size, GFP_KERNEL);
+		pgraph->ctxprog = kmemdup(fw->data, fw->size, GFP_KERNEL);
 		if (!pgraph->ctxprog) {
 			NV_ERROR(dev, "OOM copying ctxprog\n");
 			release_firmware(fw);
 			return -ENOMEM;
 		}
-		memcpy(pgraph->ctxprog, fw->data, fw->size);
 
 		cp = pgraph->ctxprog;
 		if (le32_to_cpu(cp->signature) != 0x5043564e ||
@@ -97,14 +96,13 @@ nouveau_grctx_prog_load(struct drm_device *dev)
 			return ret;
 		}
 
-		pgraph->ctxvals = kmalloc(fw->size, GFP_KERNEL);
+		pgraph->ctxvals = kmemdup(fw->data, fw->size, GFP_KERNEL);
 		if (!pgraph->ctxvals) {
 			NV_ERROR(dev, "OOM copying ctxvals\n");
 			release_firmware(fw);
 			nouveau_grctx_fini(dev);
 			return -ENOMEM;
 		}
-		memcpy(pgraph->ctxvals, fw->data, fw->size);
 
 		cv = (void *)pgraph->ctxvals;
 		if (le32_to_cpu(cv->signature) != 0x5643564e ||
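
Both hunks above are the same mechanical conversion: a kmalloc()+memcpy() pair collapses into kmemdup(), which allocates and copies in one call and fails the same way (NULL on allocation failure). The shape of the change, kernel context assumed:

/* before: allocate, then copy */
buf = kmalloc(fw->size, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
memcpy(buf, fw->data, fw->size);

/* after: one call does both */
buf = kmemdup(fw->data, fw->size, GFP_KERNEL);
if (!buf)
	return -ENOMEM;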
diff --git a/drivers/gpu/drm/nouveau/nouveau_i2c.c b/drivers/gpu/drm/nouveau/nouveau_i2c.c
index 88583e7..316a3c7 100644
--- a/drivers/gpu/drm/nouveau/nouveau_i2c.c
+++ b/drivers/gpu/drm/nouveau/nouveau_i2c.c
@@ -254,16 +254,27 @@ struct nouveau_i2c_chan *
 nouveau_i2c_find(struct drm_device *dev, int index)
 {
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
-	struct nvbios *bios = &dev_priv->vbios;
+	struct dcb_i2c_entry *i2c = &dev_priv->vbios.dcb.i2c[index];
 
 	if (index >= DCB_MAX_NUM_I2C_ENTRIES)
 		return NULL;
 
-	if (!bios->dcb.i2c[index].chan) {
-		if (nouveau_i2c_init(dev, &bios->dcb.i2c[index], index))
-			return NULL;
+	if (dev_priv->chipset >= NV_50 && (i2c->entry & 0x00000100)) {
+		uint32_t reg = 0xe500, val;
+
+		if (i2c->port_type == 6) {
+			reg += i2c->read * 0x50;
+			val  = 0x2002;
+		} else {
+			reg += ((i2c->entry & 0x1e00) >> 9) * 0x50;
+			val  = 0xe001;
+		}
+
+		nv_wr32(dev, reg, (nv_rd32(dev, reg) & ~0xf003) | val);
 	}
 
-	return bios->dcb.i2c[index].chan;
+	if (!i2c->chan && nouveau_i2c_init(dev, i2c, index))
+		return NULL;
+	return i2c->chan;
 }
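
The nv50 branch added to nouveau_i2c_find() routes a shared pad to either AUX (port_type 6, value 0x2002) or plain i2c (value 0xe001) by a read-modify-write of a per-port control register at 0xe500 + port * 0x50, clearing the 0xf003 mode bits first. The RMW step in isolation, with rd32()/wr32() as hypothetical stand-ins for nv_rd32()/nv_wr32():

#include <stdint.h>

/* Hypothetical 32-bit MMIO accessors for the sketch. */
extern uint32_t rd32(uint32_t reg);
extern void wr32(uint32_t reg, uint32_t val);

/* Sketch only: clear the mode field, then set the desired mode,
 * leaving all other bits of the port control register untouched. */
static void set_port_mode(uint32_t reg, uint32_t mask, uint32_t mode)
{
	wr32(reg, (rd32(reg) & ~mask) | mode);
}

/* e.g. set_port_mode(0xe500 + port * 0x50, 0xf003, 0x2002); */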
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_irq.c b/drivers/gpu/drm/nouveau/nouveau_irq.c
index 13e73ce..53360f1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_irq.c
+++ b/drivers/gpu/drm/nouveau/nouveau_irq.c
@@ -1204,7 +1204,7 @@ nouveau_irq_handler(DRM_IRQ_ARGS)
 {
 	struct drm_device *dev = (struct drm_device *)arg;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
-	uint32_t status, fbdev_flags = 0;
+	uint32_t status;
 	unsigned long flags;
 
 	status = nv_rd32(dev, NV03_PMC_INTR_0);
@@ -1213,11 +1213,6 @@ nouveau_irq_handler(DRM_IRQ_ARGS)
 
 	spin_lock_irqsave(&dev_priv->context_switch_lock, flags);
 
-	if (dev_priv->fbdev_info) {
-		fbdev_flags = dev_priv->fbdev_info->flags;
-		dev_priv->fbdev_info->flags |= FBINFO_HWACCEL_DISABLED;
-	}
-
 	if (status & NV_PMC_INTR_0_PFIFO_PENDING) {
 		nouveau_fifo_irq_handler(dev);
 		status &= ~NV_PMC_INTR_0_PFIFO_PENDING;
@@ -1247,9 +1242,6 @@ nouveau_irq_handler(DRM_IRQ_ARGS)
 	if (status)
 		NV_ERROR(dev, "Unhandled PMC INTR status bits 0x%08x\n", status);
 
-	if (dev_priv->fbdev_info)
-		dev_priv->fbdev_info->flags = fbdev_flags;
-
 	spin_unlock_irqrestore(&dev_priv->context_switch_lock, flags);
 
 	return IRQ_HANDLED;
diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 775a701..c1fd42b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -540,7 +540,9 @@ nouveau_mem_detect(struct drm_device *dev)
 		dev_priv->vram_size  = nv_rd32(dev, NV04_FIFO_DATA);
 		dev_priv->vram_size &= NV10_FIFO_DATA_RAM_AMOUNT_MB_MASK;
-		if (dev_priv->chipset == 0xaa || dev_priv->chipset == 0xac)
-			dev_priv->vram_sys_base = nv_rd32(dev, 0x100e10) << 12;
+		if (dev_priv->chipset == 0xaa || dev_priv->chipset == 0xac) {
+			dev_priv->vram_sys_base = nv_rd32(dev, 0x100e10);
+			dev_priv->vram_sys_base <<= 12;
+		}
 	}
 
 	NV_INFO(dev, "Detected %dMiB VRAM\n", (int)(dev_priv->vram_size >> 20));
diff --git a/drivers/gpu/drm/nouveau/nouveau_reg.h b/drivers/gpu/drm/nouveau/nouveau_reg.h
index aa9b310..6ca80a3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_reg.h
+++ b/drivers/gpu/drm/nouveau/nouveau_reg.h
@@ -826,6 +826,7 @@
 #define NV50_SOR_DP_CTRL_TRAINING_PATTERN_2                          0x02000000
 #define NV50_SOR_DP_UNK118(i,l)          (0x0061c118 + (i) * 0x800 + (l) * 0x80)
 #define NV50_SOR_DP_UNK120(i,l)          (0x0061c120 + (i) * 0x800 + (l) * 0x80)
+#define NV50_SOR_DP_UNK128(i,l)          (0x0061c128 + (i) * 0x800 + (l) * 0x80)
 #define NV50_SOR_DP_UNK130(i,l)          (0x0061c130 + (i) * 0x800 + (l) * 0x80)
 
 #define NV50_PDISPLAY_USER(i)                        ((i) * 0x1000 + 0x00640000)
diff --git a/drivers/gpu/drm/nouveau/nouveau_state.c b/drivers/gpu/drm/nouveau/nouveau_state.c
index e171064..b02a231 100644
--- a/drivers/gpu/drm/nouveau/nouveau_state.c
+++ b/drivers/gpu/drm/nouveau/nouveau_state.c
@@ -34,6 +34,7 @@
 
 #include "nouveau_drv.h"
 #include "nouveau_drm.h"
+#include "nouveau_fbcon.h"
 #include "nv50_display.h"
 
 static void nouveau_stub_takedown(struct drm_device *dev) {}
@@ -375,12 +376,15 @@ out_err:
 static void nouveau_switcheroo_set_state(struct pci_dev *pdev,
 					 enum vga_switcheroo_state state)
 {
+	struct drm_device *dev = pci_get_drvdata(pdev);
 	pm_message_t pmm = { .event = PM_EVENT_SUSPEND };
 	if (state == VGA_SWITCHEROO_ON) {
 		printk(KERN_ERR "VGA switcheroo: switched nouveau on\n");
 		nouveau_pci_resume(pdev);
+		drm_kms_helper_poll_enable(dev);
 	} else {
 		printk(KERN_ERR "VGA switcheroo: switched nouveau off\n");
+		drm_kms_helper_poll_disable(dev);
 		nouveau_pci_suspend(pdev, pmm);
 	}
 }
@@ -515,8 +519,10 @@ nouveau_card_init(struct drm_device *dev)
 
 	dev_priv->init_state = NOUVEAU_CARD_INIT_DONE;
 
-	if (drm_core_check_feature(dev, DRIVER_MODESET))
-		drm_helper_initial_config(dev);
+	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		nouveau_fbcon_init(dev);
+		drm_kms_helper_poll_init(dev);
+	}
 
 	return 0;
 
@@ -563,6 +569,7 @@ static void nouveau_card_takedown(struct drm_device *dev)
 	NV_DEBUG(dev, "prev state = %d\n", dev_priv->init_state);
 
 	if (dev_priv->init_state != NOUVEAU_CARD_INIT_DOWN) {
+
 		nouveau_backlight_exit(dev);
 
 		if (dev_priv->channel) {
@@ -637,6 +644,48 @@ static void nouveau_OF_copy_vbios_to_ramin(struct drm_device *dev)
 #endif
 }
 
+static struct apertures_struct *nouveau_get_apertures(struct drm_device *dev)
+{
+	struct pci_dev *pdev = dev->pdev;
+	struct apertures_struct *aper = alloc_apertures(3);
+	if (!aper)
+		return NULL;
+
+	aper->ranges[0].base = pci_resource_start(pdev, 1);
+	aper->ranges[0].size = pci_resource_len(pdev, 1);
+	aper->count = 1;
+
+	if (pci_resource_len(pdev, 2)) {
+		aper->ranges[aper->count].base = pci_resource_start(pdev, 2);
+		aper->ranges[aper->count].size = pci_resource_len(pdev, 2);
+		aper->count++;
+	}
+
+	if (pci_resource_len(pdev, 3)) {
+		aper->ranges[aper->count].base = pci_resource_start(pdev, 3);
+		aper->ranges[aper->count].size = pci_resource_len(pdev, 3);
+		aper->count++;
+	}
+
+	return aper;
+}
+
+static int nouveau_remove_conflicting_drivers(struct drm_device *dev)
+{
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+	bool primary = false;
+	dev_priv->apertures = nouveau_get_apertures(dev);
+	if (!dev_priv->apertures)
+		return -ENOMEM;
+
+#ifdef CONFIG_X86
+	primary = dev->pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
+#endif
+
+	remove_conflicting_framebuffers(dev_priv->apertures, "nouveaufb", primary);
+	return 0;
+}
+
 int nouveau_load(struct drm_device *dev, unsigned long flags)
 {
 	struct drm_nouveau_private *dev_priv;
@@ -724,29 +773,30 @@ int nouveau_load(struct drm_device *dev, unsigned long flags)
 	NV_INFO(dev, "Detected an NV%2x generation card (0x%08x)\n",
 		dev_priv->card_type, reg0);
 
-	/* map larger RAMIN aperture on NV40 cards */
-	dev_priv->ramin  = NULL;
+	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		int ret = nouveau_remove_conflicting_drivers(dev);
+		if (ret)
+			return ret;
+	}
+
+	/* Map PRAMIN BAR, or on older cards, the aperture within BAR0 */
 	if (dev_priv->card_type >= NV_40) {
 		int ramin_bar = 2;
 		if (pci_resource_len(dev->pdev, ramin_bar) == 0)
 			ramin_bar = 3;
 
 		dev_priv->ramin_size = pci_resource_len(dev->pdev, ramin_bar);
-		dev_priv->ramin = ioremap(
-				pci_resource_start(dev->pdev, ramin_bar),
+		dev_priv->ramin =
+			ioremap(pci_resource_start(dev->pdev, ramin_bar),
 				dev_priv->ramin_size);
 		if (!dev_priv->ramin) {
-			NV_ERROR(dev, "Failed to init RAMIN mapping, "
-				      "limited instance memory available\n");
+			NV_ERROR(dev, "Failed to map PRAMIN BAR\n");
+			return -ENOMEM;
 		}
-	}
-
-	/* On older cards (or if the above failed), create a map covering
-	 * the BAR0 PRAMIN aperture */
-	if (!dev_priv->ramin) {
+	} else {
 		dev_priv->ramin_size = 1 * 1024 * 1024;
 		dev_priv->ramin = ioremap(mmio_start_offs + NV_RAMIN,
-							dev_priv->ramin_size);
+					  dev_priv->ramin_size);
 		if (!dev_priv->ramin) {
 			NV_ERROR(dev, "Failed to map BAR0 PRAMIN.\n");
 			return -ENOMEM;
@@ -794,6 +844,8 @@ int nouveau_unload(struct drm_device *dev)
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 
 	if (drm_core_check_feature(dev, DRIVER_MODESET)) {
+		drm_kms_helper_poll_fini(dev);
+		nouveau_fbcon_fini(dev);
 		if (dev_priv->card_type >= NV_50)
 			nv50_display_destroy(dev);
 		else
@@ -859,6 +911,9 @@ int nouveau_ioctl_getparam(struct drm_device *dev, void *data,
 	case NOUVEAU_GETPARAM_VM_VRAM_BASE:
 		getparam->value = dev_priv->vm_vram_base;
 		break;
+	case NOUVEAU_GETPARAM_PTIMER_TIME:
+		getparam->value = dev_priv->engine.timer.read(dev);
+		break;
 	case NOUVEAU_GETPARAM_GRAPH_UNITS:
 		/* NV40 and NV50 versions are quite different, but register
 		 * address is the same. User is supposed to know the card
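
nouveau_get_apertures() above collects BAR1 (the framebuffer) plus whichever of BAR2/BAR3 is populated (RAMIN), and nouveau_remove_conflicting_drivers() feeds the list to remove_conflicting_framebuffers() so vesafb/efifb are evicted from those ranges before nouveau binds; the same list is reused for info->apertures in nouveau_fbcon_create() earlier in this patch. The collection loop as a standalone sketch, with bar_start()/bar_len() as hypothetical stand-ins for pci_resource_start()/pci_resource_len():

#include <stdint.h>
#include <stddef.h>

struct range { uint64_t base, size; };

/* Hypothetical PCI BAR accessors for the sketch. */
extern uint64_t bar_start(int bar);
extern uint64_t bar_len(int bar);

/* Sketch only: collect BAR1 and any populated RAMIN BAR (2 or 3).
 * r must have room for three entries. */
static size_t collect_apertures(struct range *r)
{
	size_t n = 0;
	int bar;

	r[n].base = bar_start(1);
	r[n].size = bar_len(1);
	n++;

	for (bar = 2; bar <= 3; bar++) {
		if (!bar_len(bar))
			continue;
		r[n].base = bar_start(bar);
		r[n].size = bar_len(bar);
		n++;
	}
	return n;
}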
diff --git a/drivers/gpu/drm/nouveau/nv04_cursor.c b/drivers/gpu/drm/nouveau/nv04_cursor.c
index 89a91b9..aaf3de3 100644
--- a/drivers/gpu/drm/nouveau/nv04_cursor.c
+++ b/drivers/gpu/drm/nouveau/nv04_cursor.c
@@ -20,6 +20,7 @@ nv04_cursor_hide(struct nouveau_crtc *nv_crtc, bool update)
 static void
 nv04_cursor_set_pos(struct nouveau_crtc *nv_crtc, int x, int y)
 {
+	nv_crtc->cursor_saved_x = x; nv_crtc->cursor_saved_y = y;
 	NVWriteRAMDAC(nv_crtc->base.dev, nv_crtc->index,
 		      NV_PRAMDAC_CU_START_POS,
 		      XLATE(y, 0, NV_PRAMDAC_CU_START_POS_Y) |
diff --git a/drivers/gpu/drm/nouveau/nv04_fbcon.c b/drivers/gpu/drm/nouveau/nv04_fbcon.c
index 813b25c..1eeac4f 100644
--- a/drivers/gpu/drm/nouveau/nv04_fbcon.c
+++ b/drivers/gpu/drm/nouveau/nv04_fbcon.c
@@ -30,8 +30,8 @@
 void
 nv04_fbcon_copyarea(struct fb_info *info, const struct fb_copyarea *region)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 
@@ -57,8 +57,8 @@ nv04_fbcon_copyarea(struct fb_info *info, const struct fb_copyarea *region)
 void
 nv04_fbcon_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 
@@ -91,8 +91,8 @@ nv04_fbcon_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
 void
 nv04_fbcon_imageblit(struct fb_info *info, const struct fb_image *image)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 	uint32_t fg;
@@ -179,8 +179,8 @@ nv04_fbcon_grobj_new(struct drm_device *dev, int class, uint32_t handle)
 int
 nv04_fbcon_accel_init(struct fb_info *info)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 	const int sub = NvSubCtxSurf2D;
@@ -236,7 +236,7 @@ nv04_fbcon_accel_init(struct fb_info *info)
 	if (ret)
 		return ret;
 
-	ret = nv04_fbcon_grobj_new(dev, dev_priv->card_type >= NV_10 ?
+	ret = nv04_fbcon_grobj_new(dev, dev_priv->chipset >= 0x11 ?
 				   0x009f : 0x005f, NvImageBlit);
 	if (ret)
 		return ret;
diff --git a/drivers/gpu/drm/nouveau/nv04_graph.c b/drivers/gpu/drm/nouveau/nv04_graph.c
index e260986..618355e 100644
--- a/drivers/gpu/drm/nouveau/nv04_graph.c
+++ b/drivers/gpu/drm/nouveau/nv04_graph.c
@@ -532,9 +532,82 @@ nv04_graph_mthd_set_ref(struct nouveau_channel *chan, int grclass,
 	return 0;
 }
 
-static int
-nv04_graph_mthd_set_operation(struct nouveau_channel *chan, int grclass,
-			      int mthd, uint32_t data)
+/*
+ * Software methods, why they are needed, and how they all work:
+ *
+ * NV04 and NV05 keep most of the state in PGRAPH context itself, but some
+ * 2d engine settings are kept inside the grobjs themselves. The grobjs are
+ * 3 words long on both. grobj format on NV04 is:
+ *
+ * word 0:
+ *  - bits 0-7: class
+ *  - bit 12: color key active
+ *  - bit 13: clip rect active
+ *  - bit 14: if set, destination surface is swizzled and taken from buffer 5
+ *            [set by NV04_SWIZZLED_SURFACE], otherwise it's linear and taken
+ *            from buffer 0 [set by NV04_CONTEXT_SURFACES_2D or
+ *            NV03_CONTEXT_SURFACE_DST].
+ *  - bits 15-17: 2d operation [aka patch config]
+ *  - bit 24: patch valid [enables rendering using this object]
+ *  - bit 25: surf3d valid [for tex_tri and multitex_tri only]
+ * word 1:
+ *  - bits 0-1: mono format
+ *  - bits 8-13: color format
+ *  - bits 16-31: DMA_NOTIFY instance
+ * word 2:
+ *  - bits 0-15: DMA_A instance
+ *  - bits 16-31: DMA_B instance
+ *
+ * On NV05 it's:
+ *
+ * word 0:
+ *  - bits 0-7: class
+ *  - bit 12: color key active
+ *  - bit 13: clip rect active
+ *  - bit 14: if set, destination surface is swizzled and taken from buffer 5
+ *            [set by NV04_SWIZZLED_SURFACE], otherwise it's linear and taken
+ *            from buffer 0 [set by NV04_CONTEXT_SURFACES_2D or
+ *            NV03_CONTEXT_SURFACE_DST].
+ *  - bits 15-17: 2d operation [aka patch config]
+ *  - bits 20-22: dither mode
+ *  - bit 24: patch valid [enables rendering using this object]
+ *  - bit 25: surface_dst/surface_color/surf2d/surf3d valid
+ *  - bit 26: surface_src/surface_zeta valid
+ *  - bit 27: pattern valid
+ *  - bit 28: rop valid
+ *  - bit 29: beta1 valid
+ *  - bit 30: beta4 valid
+ * word 1:
+ *  - bits 0-1: mono format
+ *  - bits 8-13: color format
+ *  - bits 16-31: DMA_NOTIFY instance
+ * word 2:
+ *  - bits 0-15: DMA_A instance
+ *  - bits 16-31: DMA_B instance
+ *
+ * NV05 will set/unset the relevant valid bits when you poke the relevant
+ * object-binding methods with object of the proper type, or with the NULL
+ * type. It'll only allow rendering using the grobj if all needed objects
+ * are bound. The needed set of objects depends on selected operation: for
+ * example rop object is needed by ROP_AND, but not by SRCCOPY_AND.
+ *
+ * NV04 doesn't have these methods implemented at all, and doesn't have the
+ * relevant bits in grobj. Instead, it'll allow rendering whenever bit 24
+ * is set. So we have to emulate them in software, internally keeping the
+ * same bits as NV05 does. Since grobjs are aligned to 16 bytes on nv04
+ * and the last word isn't actually used for anything, we abuse that word
+ * for this purpose.
+ *
+ * Actually, NV05 can optionally check bit 24 too, but we disable this since
+ * there's no use for it.
+ *
+ * For unknown reasons, NV04 implements surf3d binding in hardware as an
+ * exception. Also for unknown reasons, NV04 doesn't implement the clipping
+ * methods on the surf3d object, so we have to emulate them too.
+ */
+
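
As a compact cross-check of the layout described above, field decoders for grobj word 0, written against the comment rather than any driver header (sketch only):

#include <stdint.h>
#include <stdbool.h>

/* Sketch only: grobj word 0 fields as documented in the comment. */
static inline uint32_t grobj_class(uint32_t w0)     { return w0 & 0xff; }
static inline bool grobj_swzsurf(uint32_t w0)       { return w0 & (1 << 14); }
static inline uint32_t grobj_operation(uint32_t w0) { return (w0 >> 15) & 7; }
static inline bool grobj_patch_valid(uint32_t w0)   { return w0 & (1 << 24); }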
+static void
+nv04_graph_set_ctx1(struct nouveau_channel *chan, uint32_t mask, uint32_t value)
 {
 	struct drm_device *dev = chan->dev;
 	uint32_t instance = (nv_rd32(dev, NV04_PGRAPH_CTX_SWITCH4) & 0xffff) << 4;
@@ -542,42 +615,509 @@ nv04_graph_mthd_set_operation(struct nouveau_channel *chan, int grclass,
 	uint32_t tmp;
 
 	tmp  = nv_ri32(dev, instance);
-	tmp &= ~0x00038000;
-	tmp |= ((data & 7) << 15);
+	tmp &= ~mask;
+	tmp |= value;
 
 	nv_wi32(dev, instance, tmp);
 	nv_wr32(dev, NV04_PGRAPH_CTX_SWITCH1, tmp);
 	nv_wr32(dev, NV04_PGRAPH_CTX_CACHE1 + (subc<<2), tmp);
+}
+
+static void
+nv04_graph_set_ctx_val(struct nouveau_channel *chan, uint32_t mask, uint32_t value)
+{
+	struct drm_device *dev = chan->dev;
+	uint32_t instance = (nv_rd32(dev, NV04_PGRAPH_CTX_SWITCH4) & 0xffff) << 4;
+	uint32_t tmp, ctx1;
+	int class, op, valid = 1;
+
+	ctx1 = nv_ri32(dev, instance);
+	class = ctx1 & 0xff;
+	op = (ctx1 >> 15) & 7;
+	tmp  = nv_ri32(dev, instance + 0xc);
+	tmp &= ~mask;
+	tmp |= value;
+	nv_wi32(dev, instance + 0xc, tmp);
+
+	/* check for valid surf2d/surf_dst/surf_color */
+	if (!(tmp & 0x02000000))
+		valid = 0;
+	/* check for valid surf_src/surf_zeta */
+	if ((class == 0x1f || class == 0x48) && !(tmp & 0x04000000))
+		valid = 0;
+
+	switch (op) {
+	/* SRCCOPY_AND, SRCCOPY: no extra objects required */
+	case 0:
+	case 3:
+		break;
+	/* ROP_AND: requires pattern and rop */
+	case 1:
+		if (!(tmp & 0x18000000))
+			valid = 0;
+		break;
+	/* BLEND_AND: requires beta1 */
+	case 2:
+		if (!(tmp & 0x20000000))
+			valid = 0;
+		break;
+	/* SRCCOPY_PREMULT, BLEND_PREMULT: beta4 required */
+	case 4:
+	case 5:
+		if (!(tmp & 0x40000000))
+			valid = 0;
+		break;
+	}
+
+	nv04_graph_set_ctx1(chan, 0x01000000, valid << 24);
+}
+
+static int
+nv04_graph_mthd_set_operation(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	if (data > 5)
+		return 1;
+	/* Old versions of the objects only accept the first three operations. */
+	if (data > 2 && grclass < 0x40)
+		return 1;
+	nv04_graph_set_ctx1(chan, 0x00038000, data << 15);
+	/* changing operation changes set of objects needed for validation */
+	nv04_graph_set_ctx_val(chan, 0, 0);
+	return 0;
+}
+
+static int
+nv04_graph_mthd_surf3d_clip_h(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	uint32_t min = data & 0xffff, max;
+	uint32_t w = data >> 16;
+	if (min & 0x8000)
+		/* too large */
+		return 1;
+	if (w & 0x8000)
+		/* yes, it accepts negative for some reason. */
+		w |= 0xffff0000;
+	max = min + w;
+	max &= 0x3ffff;
+	nv_wr32(chan->dev, 0x40053c, min);
+	nv_wr32(chan->dev, 0x400544, max);
+	return 0;
+}
+
+static int
+nv04_graph_mthd_surf3d_clip_v(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	uint32_t min = data & 0xffff, max;
+	uint32_t w = data >> 16;
+	if (min & 0x8000)
+		/* too large */
+		return 1;
+	if (w & 0x8000)
+		/* yes, it accepts negative for some reason. */
+		w |= 0xffff0000;
+	max = min + w;
+	max &= 0x3ffff;
+	nv_wr32(chan->dev, 0x400540, min);
+	nv_wr32(chan->dev, 0x400548, max);
 	return 0;
 }
 
+static int
+nv04_graph_mthd_bind_surf2d(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx1(chan, 0x00004000, 0);
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0);
+		return 0;
+	case 0x42:
+		nv04_graph_set_ctx1(chan, 0x00004000, 0);
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0x02000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_surf2d_swzsurf(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx1(chan, 0x00004000, 0);
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0);
+		return 0;
+	case 0x42:
+		nv04_graph_set_ctx1(chan, 0x00004000, 0);
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0x02000000);
+		return 0;
+	case 0x52:
+		nv04_graph_set_ctx1(chan, 0x00004000, 0x00004000);
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0x02000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_nv01_patt(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x08000000, 0);
+		return 0;
+	case 0x18:
+		nv04_graph_set_ctx_val(chan, 0x08000000, 0x08000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_nv04_patt(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x08000000, 0);
+		return 0;
+	case 0x44:
+		nv04_graph_set_ctx_val(chan, 0x08000000, 0x08000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_rop(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x10000000, 0);
+		return 0;
+	case 0x43:
+		nv04_graph_set_ctx_val(chan, 0x10000000, 0x10000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_beta1(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x20000000, 0);
+		return 0;
+	case 0x12:
+		nv04_graph_set_ctx_val(chan, 0x20000000, 0x20000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_beta4(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x40000000, 0);
+		return 0;
+	case 0x72:
+		nv04_graph_set_ctx_val(chan, 0x40000000, 0x40000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_surf_dst(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0);
+		return 0;
+	case 0x58:
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0x02000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_surf_src(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x04000000, 0);
+		return 0;
+	case 0x59:
+		nv04_graph_set_ctx_val(chan, 0x04000000, 0x04000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_surf_color(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0);
+		return 0;
+	case 0x5a:
+		nv04_graph_set_ctx_val(chan, 0x02000000, 0x02000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_surf_zeta(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx_val(chan, 0x04000000, 0);
+		return 0;
+	case 0x5b:
+		nv04_graph_set_ctx_val(chan, 0x04000000, 0x04000000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_clip(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx1(chan, 0x2000, 0);
+		return 0;
+	case 0x19:
+		nv04_graph_set_ctx1(chan, 0x2000, 0x2000);
+		return 0;
+	}
+	return 1;
+}
+
+static int
+nv04_graph_mthd_bind_chroma(struct nouveau_channel *chan, int grclass,
+			      int mthd, uint32_t data)
+{
+	switch (nv_ri32(chan->dev, data << 4) & 0xff) {
+	case 0x30:
+		nv04_graph_set_ctx1(chan, 0x1000, 0);
+		return 0;
+	/* Yes, for some reason even the old versions of objects
+	 * accept 0x57 and not 0x17. Consistency be damned.
+	 */
+	case 0x57:
+		nv04_graph_set_ctx1(chan, 0x1000, 0x1000);
+		return 0;
+	}
+	return 1;
+}
+
 static struct nouveau_pgraph_object_method nv04_graph_mthds_sw[] = {
 	{ 0x0150, nv04_graph_mthd_set_ref },
 	{}
 };
 
-static struct nouveau_pgraph_object_method nv04_graph_mthds_set_operation[] = {
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv03_gdirect[] = {
+	{ 0x0184, nv04_graph_mthd_bind_nv01_patt },
+	{ 0x0188, nv04_graph_mthd_bind_rop },
+	{ 0x018c, nv04_graph_mthd_bind_beta1 },
+	{ 0x0190, nv04_graph_mthd_bind_surf_dst },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv04_gdirect[] = {
+	{ 0x0188, nv04_graph_mthd_bind_nv04_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_beta4 },
+	{ 0x0198, nv04_graph_mthd_bind_surf2d },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv01_imageblit[] = {
+	{ 0x0184, nv04_graph_mthd_bind_chroma },
+	{ 0x0188, nv04_graph_mthd_bind_clip },
+	{ 0x018c, nv04_graph_mthd_bind_nv01_patt },
+	{ 0x0190, nv04_graph_mthd_bind_rop },
+	{ 0x0194, nv04_graph_mthd_bind_beta1 },
+	{ 0x0198, nv04_graph_mthd_bind_surf_dst },
+	{ 0x019c, nv04_graph_mthd_bind_surf_src },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv04_imageblit_ifc[] = {
+	{ 0x0184, nv04_graph_mthd_bind_chroma },
+	{ 0x0188, nv04_graph_mthd_bind_clip },
+	{ 0x018c, nv04_graph_mthd_bind_nv04_patt },
+	{ 0x0190, nv04_graph_mthd_bind_rop },
+	{ 0x0194, nv04_graph_mthd_bind_beta1 },
+	{ 0x0198, nv04_graph_mthd_bind_beta4 },
+	{ 0x019c, nv04_graph_mthd_bind_surf2d },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv04_iifc[] = {
+	{ 0x0188, nv04_graph_mthd_bind_chroma },
+	{ 0x018c, nv04_graph_mthd_bind_clip },
+	{ 0x0190, nv04_graph_mthd_bind_nv04_patt },
+	{ 0x0194, nv04_graph_mthd_bind_rop },
+	{ 0x0198, nv04_graph_mthd_bind_beta1 },
+	{ 0x019c, nv04_graph_mthd_bind_beta4 },
+	{ 0x01a0, nv04_graph_mthd_bind_surf2d_swzsurf },
+	{ 0x03e4, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv01_ifc[] = {
+	{ 0x0184, nv04_graph_mthd_bind_chroma },
+	{ 0x0188, nv04_graph_mthd_bind_clip },
+	{ 0x018c, nv04_graph_mthd_bind_nv01_patt },
+	{ 0x0190, nv04_graph_mthd_bind_rop },
+	{ 0x0194, nv04_graph_mthd_bind_beta1 },
+	{ 0x0198, nv04_graph_mthd_bind_surf_dst },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv03_sifc[] = {
+	{ 0x0184, nv04_graph_mthd_bind_chroma },
+	{ 0x0188, nv04_graph_mthd_bind_nv01_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_surf_dst },
 	{ 0x02fc, nv04_graph_mthd_set_operation },
 	{},
 };
 
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv04_sifc[] = {
+	{ 0x0184, nv04_graph_mthd_bind_chroma },
+	{ 0x0188, nv04_graph_mthd_bind_nv04_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_beta4 },
+	{ 0x0198, nv04_graph_mthd_bind_surf2d },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv03_sifm[] = {
+	{ 0x0188, nv04_graph_mthd_bind_nv01_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_surf_dst },
+	{ 0x0304, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv04_sifm[] = {
+	{ 0x0188, nv04_graph_mthd_bind_nv04_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_beta4 },
+	{ 0x0198, nv04_graph_mthd_bind_surf2d_swzsurf },
+	{ 0x0304, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv01_shape[] = {
+	{ 0x0184, nv04_graph_mthd_bind_clip },
+	{ 0x0188, nv04_graph_mthd_bind_nv01_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_surf_dst },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv04_shape[] = {
+	{ 0x0184, nv04_graph_mthd_bind_clip },
+	{ 0x0188, nv04_graph_mthd_bind_nv04_patt },
+	{ 0x018c, nv04_graph_mthd_bind_rop },
+	{ 0x0190, nv04_graph_mthd_bind_beta1 },
+	{ 0x0194, nv04_graph_mthd_bind_beta4 },
+	{ 0x0198, nv04_graph_mthd_bind_surf2d },
+	{ 0x02fc, nv04_graph_mthd_set_operation },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_nv03_tex_tri[] = {
+	{ 0x0188, nv04_graph_mthd_bind_clip },
+	{ 0x018c, nv04_graph_mthd_bind_surf_color },
+	{ 0x0190, nv04_graph_mthd_bind_surf_zeta },
+	{},
+};
+
+static struct nouveau_pgraph_object_method nv04_graph_mthds_surf3d[] = {
+	{ 0x02f8, nv04_graph_mthd_surf3d_clip_h },
+	{ 0x02fc, nv04_graph_mthd_surf3d_clip_v },
+	{},
+};
+
 struct nouveau_pgraph_object_class nv04_graph_grclass[] = {
-	{ 0x0039, false, NULL },
-	{ 0x004a, false, nv04_graph_mthds_set_operation }, /* gdirect */
-	{ 0x005f, false, nv04_graph_mthds_set_operation }, /* imageblit */
-	{ 0x0061, false, nv04_graph_mthds_set_operation }, /* ifc */
-	{ 0x0077, false, nv04_graph_mthds_set_operation }, /* sifm */
+	{ 0x0038, false, NULL }, /* dvd subpicture */
+	{ 0x0039, false, NULL }, /* m2mf */
+	{ 0x004b, false, nv04_graph_mthds_nv03_gdirect }, /* nv03 gdirect */
+	{ 0x004a, false, nv04_graph_mthds_nv04_gdirect }, /* nv04 gdirect */
+	{ 0x001f, false, nv04_graph_mthds_nv01_imageblit }, /* nv01 imageblit */
+	{ 0x005f, false, nv04_graph_mthds_nv04_imageblit_ifc }, /* nv04 imageblit */
+	{ 0x0060, false, nv04_graph_mthds_nv04_iifc }, /* nv04 iifc */
+	{ 0x0064, false, NULL }, /* nv05 iifc */
+	{ 0x0021, false, nv04_graph_mthds_nv01_ifc }, /* nv01 ifc */
+	{ 0x0061, false, nv04_graph_mthds_nv04_imageblit_ifc }, /* nv04 ifc */
+	{ 0x0065, false, NULL }, /* nv05 ifc */
+	{ 0x0036, false, nv04_graph_mthds_nv03_sifc }, /* nv03 sifc */
+	{ 0x0076, false, nv04_graph_mthds_nv04_sifc }, /* nv04 sifc */
+	{ 0x0066, false, NULL }, /* nv05 sifc */
+	{ 0x0037, false, nv04_graph_mthds_nv03_sifm }, /* nv03 sifm */
+	{ 0x0077, false, nv04_graph_mthds_nv04_sifm }, /* nv04 sifm */
 	{ 0x0030, false, NULL }, /* null */
 	{ 0x0042, false, NULL }, /* surf2d */
 	{ 0x0043, false, NULL }, /* rop */
 	{ 0x0012, false, NULL }, /* beta1 */
 	{ 0x0072, false, NULL }, /* beta4 */
 	{ 0x0019, false, NULL }, /* cliprect */
-	{ 0x0044, false, NULL }, /* pattern */
+	{ 0x0018, false, NULL }, /* nv01 pattern */
+	{ 0x0044, false, NULL }, /* nv04 pattern */
 	{ 0x0052, false, NULL }, /* swzsurf */
-	{ 0x0053, false, NULL }, /* surf3d */
+	{ 0x0053, false, nv04_graph_mthds_surf3d }, /* surf3d */
+	{ 0x0048, false, nv04_graph_mthds_nv03_tex_tri }, /* nv03 tex_tri */
 	{ 0x0054, false, NULL }, /* tex_tri */
 	{ 0x0055, false, NULL }, /* multitex_tri */
+	{ 0x0017, false, NULL }, /* nv01 chroma */
+	{ 0x0057, false, NULL }, /* nv04 chroma */
+	{ 0x0058, false, NULL }, /* surf_dst */
+	{ 0x0059, false, NULL }, /* surf_src */
+	{ 0x005a, false, NULL }, /* surf_color */
+	{ 0x005b, false, NULL }, /* surf_zeta */
+	{ 0x001c, false, nv04_graph_mthds_nv01_shape }, /* nv01 line */
+	{ 0x005c, false, nv04_graph_mthds_nv04_shape }, /* nv04 line */
+	{ 0x001d, false, nv04_graph_mthds_nv01_shape }, /* nv01 tri */
+	{ 0x005d, false, nv04_graph_mthds_nv04_shape }, /* nv04 tri */
+	{ 0x001e, false, nv04_graph_mthds_nv01_shape }, /* nv01 rect */
+	{ 0x005e, false, nv04_graph_mthds_nv04_shape }, /* nv04 rect */
 	{ 0x506e, true, nv04_graph_mthds_sw },
 	{}
 };
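
Each grclass entry above pairs an object class with a NULL-terminated
{ mthd, handler } table, plus a flag marking pure software classes such as
0x506e. A minimal sketch of how such a table would be walked when a method
fires, assuming the struct layout the initializers above imply (a method
offset and a handler pointer); the real dispatch lives elsewhere in nouveau
and may differ in detail:

	static int
	nv04_graph_call_mthd(struct nouveau_pgraph_object_method *tbl,
			     struct nouveau_channel *chan, int grclass,
			     int mthd, uint32_t data)
	{
		for (; tbl && tbl->handler; tbl++) {
			if (tbl->mthd == mthd)
				return tbl->handler(chan, grclass, mthd, data);
		}
		return 1; /* not handled in software */
	}
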
diff --git a/drivers/gpu/drm/nouveau/nv40_graph.c b/drivers/gpu/drm/nouveau/nv40_graph.c
index 0616c96..704a25d 100644
--- a/drivers/gpu/drm/nouveau/nv40_graph.c
+++ b/drivers/gpu/drm/nouveau/nv40_graph.c
@@ -253,7 +253,11 @@ nv40_graph_init(struct drm_device *dev)
 
 	if (!dev_priv->engine.graph.ctxprog) {
 		struct nouveau_grctx ctx = {};
-		uint32_t cp[256];
+		uint32_t *cp;
+
+		cp = kmalloc(sizeof(*cp) * 256, GFP_KERNEL);
+		if (!cp)
+			return -ENOMEM;
 
 		ctx.dev = dev;
 		ctx.mode = NOUVEAU_GRCTX_PROG;
@@ -265,6 +269,8 @@ nv40_graph_init(struct drm_device *dev)
 		nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_INDEX, 0);
 		for (i = 0; i < ctx.ctxprog_len; i++)
 			nv_wr32(dev, NV40_PGRAPH_CTXCTL_UCODE_DATA, cp[i]);
+
+		kfree(cp);
 	}
 
 	/* No context present currently */
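
The change above replaces a 1 KiB uint32_t array on the kernel stack with a
heap allocation; with 8 KiB (or smaller) kernel stacks, a buffer that large in
an init path is a real overflow risk. A kcalloc variant would zero the buffer
as well (a sketch; the patch itself uses plain kmalloc, since the ctxprog
generator writes every entry it later reads):

	cp = kcalloc(256, sizeof(*cp), GFP_KERNEL);
	if (!cp)
		return -ENOMEM;
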
diff --git a/drivers/gpu/drm/nouveau/nv40_grctx.c b/drivers/gpu/drm/nouveau/nv40_grctx.c
index 11b11c3..9b5c974 100644
--- a/drivers/gpu/drm/nouveau/nv40_grctx.c
+++ b/drivers/gpu/drm/nouveau/nv40_grctx.c
@@ -115,11 +115,6 @@
 
 /* TODO:
  *  - get vs count from 0x1540
- *  - document unimplemented bits compared to nvidia
- *    - nsource handling
- *    - R0 & 0x0200 handling
- *    - single-vs handling
- *    - 400314 bit 0
  */
 
 static int
diff --git a/drivers/gpu/drm/nouveau/nv50_calc.c b/drivers/gpu/drm/nouveau/nv50_calc.c
new file mode 100644
index 0000000..2cdc2bf
--- /dev/null
+++ b/drivers/gpu/drm/nouveau/nv50_calc.c
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2010 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Ben Skeggs
+ */
+
+#include "drmP.h"
+#include "drm_fixed.h"
+#include "nouveau_drv.h"
+#include "nouveau_hw.h"
+
+int
+nv50_calc_pll(struct drm_device *dev, struct pll_lims *pll, int clk,
+	      int *N1, int *M1, int *N2, int *M2, int *P)
+{
+	struct nouveau_pll_vals pll_vals;
+	int ret;
+
+	ret = nouveau_calc_pll_mnp(dev, pll, clk, &pll_vals);
+	if (ret <= 0)
+		return ret;
+
+	*N1 = pll_vals.N1;
+	*M1 = pll_vals.M1;
+	*N2 = pll_vals.N2;
+	*M2 = pll_vals.M2;
+	*P = pll_vals.log2P;
+	return ret;
+}
+
+int
+nv50_calc_pll2(struct drm_device *dev, struct pll_lims *pll, int clk,
+	       int *N, int *fN, int *M, int *P)
+{
+	fixed20_12 fb_div, a, b;
+
+	*P = pll->vco1.maxfreq / clk;
+	if (*P > pll->max_p)
+		*P = pll->max_p;
+	if (*P < pll->min_p)
+		*P = pll->min_p;
+
+	/* *M = ceil(refclk / pll->vco1.max_inputfreq); */
+	a.full = dfixed_const(pll->refclk);
+	b.full = dfixed_const(pll->vco1.max_inputfreq);
+	a.full = dfixed_div(a, b);
+	a.full = dfixed_ceil(a);
+	*M = dfixed_trunc(a);
+
+	/* fb_div = (vco * *M) / refclk; */
+	fb_div.full = dfixed_const(clk * *P);
+	fb_div.full = dfixed_mul(fb_div, a);
+	a.full = dfixed_const(pll->refclk);
+	fb_div.full = dfixed_div(fb_div, a);
+
+	/* *N = floor(fb_div); */
+	a.full = dfixed_floor(fb_div);
+	*N = dfixed_trunc(fb_div);
+
+	/* *fN = (fmod(fb_div, 1.0) * 8192) - 4096; */
+	b.full = dfixed_const(8192);
+	a.full = dfixed_mul(a, b);
+	fb_div.full = dfixed_mul(fb_div, b);
+	fb_div.full = fb_div.full - a.full;
+	*fN = dfixed_trunc(fb_div) - 4096;
+	*fN &= 0xffff;
+
+	return clk;
+}
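
nv50_calc_pll2() computes a fractional-N divider entirely in the 20.12
fixed-point format from drm_fixed.h, where a value v is stored as v * 4096 in
the .full field; dfixed_trunc() of a non-negative value is therefore the same
as floor, which is why *N can be taken directly from fb_div once the
dfixed_floor() result has been saved into "a" for the fractional computation.
A standalone sketch of the same math using plain 64-bit integers (parameter
clamping omitted; clocks in kHz, as above):

	#include <stdint.h>

	static void calc_frac_n(int clk, int refclk, int P, int M,
				int *N, int *fN)
	{
		/* fb_div = clk * P * M / refclk, held as 20.12 fixed point */
		uint64_t fb = (((uint64_t)clk * P * M) << 12) / refclk;

		*N = (int)(fb >> 12);		/* floor(fb_div) */
		/* frac(fb_div) * 8192 - 4096: the low 12 bits are 4096ths,
		 * so scaling to 8192ths is a shift left by one */
		*fN = (((int)(fb & 0xfff)) << 1) - 4096;
		*fN &= 0xffff;
	}

Note also that nv50_calc_pll2() returns clk, the requested frequency, whereas
nv50_calc_pll() returns whatever nouveau_calc_pll_mnp() actually achieved;
callers only test the return value for <= 0.
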
diff --git a/drivers/gpu/drm/nouveau/nv50_crtc.c b/drivers/gpu/drm/nouveau/nv50_crtc.c
index cfabeb9..b4e4a3b 100644
--- a/drivers/gpu/drm/nouveau/nv50_crtc.c
+++ b/drivers/gpu/drm/nouveau/nv50_crtc.c
@@ -264,32 +264,40 @@ nv50_crtc_set_scale(struct nouveau_crtc *nv_crtc, int scaling_mode, bool update)
 int
 nv50_crtc_set_clock(struct drm_device *dev, int head, int pclk)
 {
-	uint32_t pll_reg = NV50_PDISPLAY_CRTC_CLK_CTRL1(head);
-	struct nouveau_pll_vals pll;
-	struct pll_lims limits;
+	uint32_t reg = NV50_PDISPLAY_CRTC_CLK_CTRL1(head);
+	struct pll_lims pll;
 	uint32_t reg1, reg2;
-	int ret;
+	int ret, N1, M1, N2, M2, P;
 
-	ret = get_pll_limits(dev, pll_reg, &limits);
+	ret = get_pll_limits(dev, reg, &pll);
 	if (ret)
 		return ret;
 
-	ret = nouveau_calc_pll_mnp(dev, &limits, pclk, &pll);
-	if (ret <= 0)
-		return ret;
+	if (pll.vco2.maxfreq) {
+		ret = nv50_calc_pll(dev, &pll, pclk, &N1, &M1, &N2, &M2, &P);
+		if (ret <= 0)
+			return 0;
+
+		NV_DEBUG(dev, "pclk %d out %d NM1 %d %d NM2 %d %d P %d\n",
+			 pclk, ret, N1, M1, N2, M2, P);
 
-	if (limits.vco2.maxfreq) {
-		reg1 = nv_rd32(dev, pll_reg + 4) & 0xff00ff00;
-		reg2 = nv_rd32(dev, pll_reg + 8) & 0x8000ff00;
-		nv_wr32(dev, pll_reg, 0x10000611);
-		nv_wr32(dev, pll_reg + 4, reg1 | (pll.M1 << 16) | pll.N1);
-		nv_wr32(dev, pll_reg + 8,
-			reg2 | (pll.log2P << 28) | (pll.M2 << 16) | pll.N2);
+		reg1 = nv_rd32(dev, reg + 4) & 0xff00ff00;
+		reg2 = nv_rd32(dev, reg + 8) & 0x8000ff00;
+		nv_wr32(dev, reg, 0x10000611);
+		nv_wr32(dev, reg + 4, reg1 | (M1 << 16) | N1);
+		nv_wr32(dev, reg + 8, reg2 | (P << 28) | (M2 << 16) | N2);
 	} else {
-		reg1 = nv_rd32(dev, pll_reg + 4) & 0xffc00000;
-		nv_wr32(dev, pll_reg, 0x50000610);
-		nv_wr32(dev, pll_reg + 4, reg1 |
-			(pll.log2P << 16) | (pll.M1 << 8) | pll.N1);
+		ret = nv50_calc_pll2(dev, &pll, pclk, &N1, &N2, &M1, &P);
+		if (ret <= 0)
+			return 0;
+
+		NV_DEBUG(dev, "pclk %d out %d N %d fN 0x%04x M %d P %d\n",
+			 pclk, ret, N1, N2, M1, P);
+
+		reg1 = nv_rd32(dev, reg + 4) & 0xffc00000;
+		nv_wr32(dev, reg, 0x50000610);
+		nv_wr32(dev, reg + 4, reg1 | (P << 16) | (M1 << 8) | N1);
+		nv_wr32(dev, reg + 8, N2);
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/nouveau/nv50_cursor.c b/drivers/gpu/drm/nouveau/nv50_cursor.c
index 753e723..03ad7ab 100644
--- a/drivers/gpu/drm/nouveau/nv50_cursor.c
+++ b/drivers/gpu/drm/nouveau/nv50_cursor.c
@@ -107,6 +107,7 @@ nv50_cursor_set_pos(struct nouveau_crtc *nv_crtc, int x, int y)
 {
 	struct drm_device *dev = nv_crtc->base.dev;
 
+	nv_crtc->cursor_saved_x = x; nv_crtc->cursor_saved_y = y;
 	nv_wr32(dev, NV50_PDISPLAY_CURSOR_USER_POS(nv_crtc->index),
 		((y & 0xFFFF) << 16) | (x & 0xFFFF));
 	/* Needed to make the cursor move. */
diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c
index 649db4c..580a5d1 100644
--- a/drivers/gpu/drm/nouveau/nv50_display.c
+++ b/drivers/gpu/drm/nouveau/nv50_display.c
@@ -29,6 +29,7 @@
 #include "nouveau_encoder.h"
 #include "nouveau_connector.h"
 #include "nouveau_fb.h"
+#include "nouveau_fbcon.h"
 #include "drm_crtc_helper.h"
 
 static void
@@ -783,6 +784,37 @@ ack:
 }
 
 static void
+nv50_display_unk20_dp_hack(struct drm_device *dev, struct dcb_entry *dcb)
+{
+	int or = ffs(dcb->or) - 1, link = !(dcb->dpconf.sor.link & 1);
+	struct drm_encoder *encoder;
+	uint32_t tmp, unk0 = 0, unk1 = 0;
+
+	if (dcb->type != OUTPUT_DP)
+		return;
+
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
+
+		if (nv_encoder->dcb == dcb) {
+			unk0 = nv_encoder->dp.unk0;
+			unk1 = nv_encoder->dp.unk1;
+			break;
+		}
+	}
+
+	if (unk0 || unk1) {
+		tmp  = nv_rd32(dev, NV50_SOR_DP_CTRL(or, link));
+		tmp &= 0xfffffe03;
+		nv_wr32(dev, NV50_SOR_DP_CTRL(or, link), tmp | unk0);
+
+		tmp  = nv_rd32(dev, NV50_SOR_DP_UNK128(or, link));
+		tmp &= 0xfef080c0;
+		nv_wr32(dev, NV50_SOR_DP_UNK128(or, link), tmp | unk1);
+	}
+}
+
+static void
 nv50_display_unk20_handler(struct drm_device *dev)
 {
 	struct dcb_entry *dcbent;
@@ -805,6 +837,8 @@ nv50_display_unk20_handler(struct drm_device *dev)
 
 	nouveau_bios_run_display_table(dev, dcbent, script, pclk);
 
+	nv50_display_unk20_dp_hack(dev, dcbent);
+
 	tmp = nv_rd32(dev, NV50_PDISPLAY_CRTC_CLK_CTRL2(head));
 	tmp &= ~0x000000f;
 	nv_wr32(dev, NV50_PDISPLAY_CRTC_CLK_CTRL2(head), tmp);
@@ -945,6 +979,8 @@ nv50_display_irq_hotplug_bh(struct work_struct *work)
 	nv_wr32(dev, 0xe054, nv_rd32(dev, 0xe054));
 	if (dev_priv->chipset >= 0x90)
 		nv_wr32(dev, 0xe074, nv_rd32(dev, 0xe074));
+
+	drm_helper_hpd_irq_event(dev);
 }
 
 void
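
Two things happen in this hotplug bottom half: the pending bits are
acknowledged by writing the just-read value back (the 0xe054/0xe074 status
registers are write-1-to-clear), and the newly added drm_helper_hpd_irq_event()
call forwards the event to the DRM core so userspace receives a uevent instead
of having to poll connectors. The acknowledge idiom, as an illustrative helper
(not part of the patch):

	static inline void nv50_w1c_ack(struct drm_device *dev, uint32_t reg)
	{
		nv_wr32(dev, reg, nv_rd32(dev, reg));
	}
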
diff --git a/drivers/gpu/drm/nouveau/nv50_fb.c b/drivers/gpu/drm/nouveau/nv50_fb.c
index a95e694..32611bd 100644
--- a/drivers/gpu/drm/nouveau/nv50_fb.c
+++ b/drivers/gpu/drm/nouveau/nv50_fb.c
@@ -6,10 +6,16 @@
 int
 nv50_fb_init(struct drm_device *dev)
 {
-	/* This is needed to get meaningful information from 100c90
-	 * on traps. No idea what these values mean exactly. */
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 
+	/* Not a clue what this is exactly.  Without pointing it at a
+	 * scratch page, VRAM->GART blits with M2MF (as in DDX DFS)
+	 * cause IOMMU "read from address 0" errors (rh#561267)
+	 */
+	nv_wr32(dev, 0x100c08, dev_priv->gart_info.sg_dummy_bus >> 8);
+
+	/* This is needed to get meaningful information from 100c90
+	 * on traps. No idea what these values mean exactly. */
 	switch (dev_priv->chipset) {
 	case 0x50:
 		nv_wr32(dev, 0x100c90, 0x0707ff);
diff --git a/drivers/gpu/drm/nouveau/nv50_fbcon.c b/drivers/gpu/drm/nouveau/nv50_fbcon.c
index a8c70e7..6bf025c 100644
--- a/drivers/gpu/drm/nouveau/nv50_fbcon.c
+++ b/drivers/gpu/drm/nouveau/nv50_fbcon.c
@@ -6,8 +6,8 @@
 void
 nv50_fbcon_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 
@@ -49,8 +49,8 @@ nv50_fbcon_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
 void
 nv50_fbcon_copyarea(struct fb_info *info, const struct fb_copyarea *region)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 
@@ -84,8 +84,8 @@ nv50_fbcon_copyarea(struct fb_info *info, const struct fb_copyarea *region)
 void
 nv50_fbcon_imageblit(struct fb_info *info, const struct fb_image *image)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 	uint32_t width, dwords, *data = (uint32_t *)image->data;
@@ -152,8 +152,8 @@ nv50_fbcon_imageblit(struct fb_info *info, const struct fb_image *image)
 int
 nv50_fbcon_accel_init(struct fb_info *info)
 {
-	struct nouveau_fbcon_par *par = info->par;
-	struct drm_device *dev = par->dev;
+	struct nouveau_fbdev *nfbdev = info->par;
+	struct drm_device *dev = nfbdev->dev;
 	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_channel *chan = dev_priv->channel;
 	struct nouveau_gpuobj *eng2d = NULL;
diff --git a/drivers/gpu/drm/nouveau/nv50_gpio.c b/drivers/gpu/drm/nouveau/nv50_gpio.c
index c61782b..bb47ad7 100644
--- a/drivers/gpu/drm/nouveau/nv50_gpio.c
+++ b/drivers/gpu/drm/nouveau/nv50_gpio.c
@@ -31,7 +31,7 @@ nv50_gpio_location(struct dcb_gpio_entry *gpio, uint32_t *reg, uint32_t *shift)
 {
 	const uint32_t nv50_gpio_reg[4] = { 0xe104, 0xe108, 0xe280, 0xe284 };
 
-	if (gpio->line > 32)
+	if (gpio->line >= 32)
 		return -EINVAL;
 
 	*reg = nv50_gpio_reg[gpio->line >> 3];
diff --git a/drivers/gpu/drm/nouveau/nv50_sor.c b/drivers/gpu/drm/nouveau/nv50_sor.c
index 0c68698..812778d 100644
--- a/drivers/gpu/drm/nouveau/nv50_sor.c
+++ b/drivers/gpu/drm/nouveau/nv50_sor.c
@@ -274,7 +274,6 @@ static const struct drm_encoder_funcs nv50_sor_encoder_funcs = {
 int
 nv50_sor_create(struct drm_device *dev, struct dcb_entry *entry)
 {
-	struct drm_nouveau_private *dev_priv = dev->dev_private;
 	struct nouveau_encoder *nv_encoder = NULL;
 	struct drm_encoder *encoder;
 	bool dum;
@@ -321,18 +320,19 @@ nv50_sor_create(struct drm_device *dev, struct dcb_entry *entry)
 	encoder->possible_clones = 0;
 
 	if (nv_encoder->dcb->type == OUTPUT_DP) {
-		uint32_t mc, or = nv_encoder->or;
+		int or = nv_encoder->or, link = !(entry->dpconf.sor.link & 1);
+		uint32_t tmp;
 
-		if (dev_priv->chipset < 0x90 ||
-		    dev_priv->chipset == 0x92 || dev_priv->chipset == 0xa0)
-			mc = nv_rd32(dev, NV50_PDISPLAY_SOR_MODE_CTRL_C(or));
-		else
-			mc = nv_rd32(dev, NV90_PDISPLAY_SOR_MODE_CTRL_C(or));
+		tmp = nv_rd32(dev, 0x61c700 + (or * 0x800));
 
-		switch ((mc & 0x00000f00) >> 8) {
+		switch ((tmp & 0x00000f00) >> 8) {
 		case 8:
 		case 9:
-			nv_encoder->dp.mc_unknown = (mc & 0x000f0000) >> 16;
+			nv_encoder->dp.mc_unknown = (tmp & 0x000f0000) >> 16;
+			tmp = nv_rd32(dev, NV50_SOR_DP_CTRL(or, link));
+			nv_encoder->dp.unk0 = tmp & 0x000001fc;
+			tmp = nv_rd32(dev, NV50_SOR_DP_UNK128(or, link));
+			nv_encoder->dp.unk1 = tmp & 0x010f7f3f;
 			break;
 		default:
 			break;
diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile
index 3c91312..84b1f27 100644
--- a/drivers/gpu/drm/radeon/Makefile
+++ b/drivers/gpu/drm/radeon/Makefile
@@ -33,6 +33,9 @@ $(obj)/rs600_reg_safe.h: $(src)/reg_srcs/rs600 $(obj)/mkregtable
 $(obj)/r600_reg_safe.h: $(src)/reg_srcs/r600 $(obj)/mkregtable
 	$(call if_changed,mkregtable)
 
+$(obj)/evergreen_reg_safe.h: $(src)/reg_srcs/evergreen $(obj)/mkregtable
+	$(call if_changed,mkregtable)
+
 $(obj)/r100.o: $(obj)/r100_reg_safe.h $(obj)/rn50_reg_safe.h
 
 $(obj)/r200.o: $(obj)/r200_reg_safe.h
@@ -47,6 +50,8 @@ $(obj)/rs600.o: $(obj)/rs600_reg_safe.h
 
 $(obj)/r600_cs.o: $(obj)/r600_reg_safe.h
 
+$(obj)/evergreen_cs.o: $(obj)/evergreen_reg_safe.h
+
 radeon-y := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o \
 	radeon_irq.o r300_cmdbuf.o r600_cp.o
 # add KMS driver
@@ -60,7 +65,7 @@ radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \
 	rs400.o rs600.o rs690.o rv515.o r520.o r600.o rv770.o radeon_test.o \
 	r200.o radeon_legacy_tv.o r600_cs.o r600_blit.o r600_blit_shaders.o \
 	r600_blit_kms.o radeon_pm.o atombios_dp.o r600_audio.o r600_hdmi.o \
-	evergreen.o
+	evergreen.o evergreen_cs.o
 
 radeon-$(CONFIG_COMPAT) += radeon_ioc32.o
 radeon-$(CONFIG_VGA_SWITCHEROO) += radeon_atpx_handler.o
diff --git a/drivers/gpu/drm/radeon/atombios.h b/drivers/gpu/drm/radeon/atombios.h
index 27e2c71..2ebcb97 100644
--- a/drivers/gpu/drm/radeon/atombios.h
+++ b/drivers/gpu/drm/radeon/atombios.h
@@ -5742,6 +5742,9 @@ typedef struct _ATOM_PPLIB_THERMALCONTROLLER
 #define ATOM_PP_THERMALCONTROLLER_RV6xx     7
 #define ATOM_PP_THERMALCONTROLLER_RV770     8
 #define ATOM_PP_THERMALCONTROLLER_ADT7473   9
+#define ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO     11
+#define ATOM_PP_THERMALCONTROLLER_EVERGREEN 12
+#define ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL   0x89    // ADT7473 Fan Control + Internal Thermal Controller
 
 typedef struct _ATOM_PPLIB_STATE
 {
@@ -5749,6 +5752,26 @@ typedef struct _ATOM_PPLIB_STATE
     UCHAR ucClockStateIndices[1]; // variable-sized
 } ATOM_PPLIB_STATE;
 
+typedef struct _ATOM_PPLIB_FANTABLE
+{
+    UCHAR   ucFanTableFormat;                // Change this if the table format changes or version changes so that the other fields are not the same.
+    UCHAR   ucTHyst;                         // Temperature hysteresis. Integer.
+    USHORT  usTMin;                          // The temperature, in units of 0.01 degrees centigrade, below which we just run at a minimal PWM.
+    USHORT  usTMed;                          // The middle temperature where we change slopes.
+    USHORT  usTHigh;                         // The high point above TMed for adjusting the second slope.
+    USHORT  usPWMMin;                        // The minimum PWM value in percent (0.01% increments).
+    USHORT  usPWMMed;                        // The PWM value (in percent) at TMed.
+    USHORT  usPWMHigh;                       // The PWM value at THigh.
+} ATOM_PPLIB_FANTABLE;
+
+typedef struct _ATOM_PPLIB_EXTENDEDHEADER
+{
+    USHORT  usSize;
+    ULONG   ulMaxEngineClock;   // For Overdrive.
+    ULONG   ulMaxMemoryClock;   // For Overdrive.
+    // Add extra system parameters here, always adjust size to include all fields.
+} ATOM_PPLIB_EXTENDEDHEADER;
+
 //// ATOM_PPLIB_POWERPLAYTABLE::ulPlatformCaps
 #define ATOM_PP_PLATFORM_CAP_BACKBIAS 1
 #define ATOM_PP_PLATFORM_CAP_POWERPLAY 2
@@ -5762,6 +5785,12 @@ typedef struct _ATOM_PPLIB_STATE
 #define ATOM_PP_PLATFORM_CAP_SIDEPORTCONTROL 512
 #define ATOM_PP_PLATFORM_CAP_TURNOFFPLL_ASPML1 1024
 #define ATOM_PP_PLATFORM_CAP_HTLINKCONTROL 2048
+#define ATOM_PP_PLATFORM_CAP_MVDDCONTROL 4096
+#define ATOM_PP_PLATFORM_CAP_GOTO_BOOT_ON_ALERT 0x2000              // Go to boot state on alerts, e.g. on an AC->DC transition.
+#define ATOM_PP_PLATFORM_CAP_DONT_WAIT_FOR_VBLANK_ON_ALERT 0x4000   // Do NOT wait for VBLANK during an alert (e.g. AC->DC transition).
+#define ATOM_PP_PLATFORM_CAP_VDDCI_CONTROL 0x8000                   // Does the driver control VDDCI independently from VDDC.
+#define ATOM_PP_PLATFORM_CAP_REGULATOR_HOT 0x00010000               // Enable the 'regulator hot' feature.
+#define ATOM_PP_PLATFORM_CAP_BACO          0x00020000               // Does the driver supports BACO state.
 
 typedef struct _ATOM_PPLIB_POWERPLAYTABLE
 {
@@ -5797,6 +5826,21 @@ typedef struct _ATOM_PPLIB_POWERPLAYTABLE
 
 } ATOM_PPLIB_POWERPLAYTABLE;
 
+typedef struct _ATOM_PPLIB_POWERPLAYTABLE2
+{
+    ATOM_PPLIB_POWERPLAYTABLE basicTable;
+    UCHAR   ucNumCustomThermalPolicy;
+    USHORT  usCustomThermalPolicyArrayOffset;
+}ATOM_PPLIB_POWERPLAYTABLE2, *LPATOM_PPLIB_POWERPLAYTABLE2;
+
+typedef struct _ATOM_PPLIB_POWERPLAYTABLE3
+{
+    ATOM_PPLIB_POWERPLAYTABLE2 basicTable2;
+    USHORT                     usFormatID;                      // To be used ONLY by PPGen.
+    USHORT                     usFanTableOffset;
+    USHORT                     usExtendendedHeaderOffset;
+} ATOM_PPLIB_POWERPLAYTABLE3, *LPATOM_PPLIB_POWERPLAYTABLE3;
+
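The version-3 table embeds the version-2 table, which in turn embeds the base
table, so a parser can cast one base pointer and then follow the 16-bit
offsets. A minimal sketch, assuming (as elsewhere in the atombios parsing
code) that the offsets are little-endian and relative to the start of the
PowerPlay data block; a real parser would also check the table revision before
trusting usFanTableOffset:

	static const ATOM_PPLIB_FANTABLE *
	pplib_fan_table(const u8 *base)	/* base: start of PowerPlay block */
	{
		const ATOM_PPLIB_POWERPLAYTABLE3 *t3 =
			(const ATOM_PPLIB_POWERPLAYTABLE3 *)base;

		if (!t3->usFanTableOffset)
			return NULL;
		return (const ATOM_PPLIB_FANTABLE *)
			(base + le16_to_cpu(t3->usFanTableOffset));
	}
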
 //// ATOM_PPLIB_NONCLOCK_INFO::usClassification
 #define ATOM_PPLIB_CLASSIFICATION_UI_MASK          0x0007
 #define ATOM_PPLIB_CLASSIFICATION_UI_SHIFT         0
@@ -5816,7 +5860,9 @@ typedef struct _ATOM_PPLIB_POWERPLAYTABLE
 #define ATOM_PPLIB_CLASSIFICATION_UVDSTATE               0x0400
 #define ATOM_PPLIB_CLASSIFICATION_3DLOW                  0x0800
 #define ATOM_PPLIB_CLASSIFICATION_ACPI                   0x1000
-// remaining 3 bits are reserved
+#define ATOM_PPLIB_CLASSIFICATION_HD2STATE               0x2000
+#define ATOM_PPLIB_CLASSIFICATION_HDSTATE                0x4000
+#define ATOM_PPLIB_CLASSIFICATION_SDSTATE                0x8000
 
 //// ATOM_PPLIB_NONCLOCK_INFO::ulCapsAndSettings
 #define ATOM_PPLIB_SINGLE_DISPLAY_ONLY           0x00000001
@@ -5840,9 +5886,15 @@ typedef struct _ATOM_PPLIB_POWERPLAYTABLE
 
 #define ATOM_PPLIB_SOFTWARE_DISABLE_LOADBALANCING        0x00001000
 #define ATOM_PPLIB_SOFTWARE_ENABLE_SLEEP_FOR_TIMESTAMPS  0x00002000
+#define ATOM_PPLIB_DISALLOW_ON_DC                        0x00004000
 #define ATOM_PPLIB_ENABLE_VARIBRIGHT                     0x00008000
 
-#define ATOM_PPLIB_DISALLOW_ON_DC                       0x00004000
+//memory related flags
+#define ATOM_PPLIB_SWSTATE_MEMORY_DLL_OFF               0x00010000
+
+//M3 Arb    //2bits, current 3 sets of parameters in total
+#define ATOM_PPLIB_M3ARB_MASK                       0x00060000
+#define ATOM_PPLIB_M3ARB_SHIFT                      17
 
 // Contained in an array starting at the offset
 // in ATOM_PPLIB_POWERPLAYTABLE::usNonClockInfoArrayOffset.
@@ -5860,6 +5912,9 @@ typedef struct _ATOM_PPLIB_NONCLOCK_INFO
 // Contained in an array starting at the offset
 // in ATOM_PPLIB_POWERPLAYTABLE::usClockInfoArrayOffset.
 // referenced from ATOM_PPLIB_STATE::ucClockStateIndices
+#define ATOM_PPLIB_NONCLOCKINFO_VER1      12
+#define ATOM_PPLIB_NONCLOCKINFO_VER2      24
+
 typedef struct _ATOM_PPLIB_R600_CLOCK_INFO
 {
       USHORT usEngineClockLow;
@@ -5882,6 +5937,23 @@ typedef struct _ATOM_PPLIB_R600_CLOCK_INFO
 #define ATOM_PPLIB_R600_FLAGS_BACKBIASENABLE    4
 #define ATOM_PPLIB_R600_FLAGS_MEMORY_ODT_OFF    8
 #define ATOM_PPLIB_R600_FLAGS_MEMORY_DLL_OFF    16
+#define ATOM_PPLIB_R600_FLAGS_LOWPOWER         32   // On the RV770 use 'low power' setting (sequencer S0).
+
+typedef struct _ATOM_PPLIB_EVERGREEN_CLOCK_INFO
+{
+      USHORT usEngineClockLow;
+      UCHAR  ucEngineClockHigh;
+
+      USHORT usMemoryClockLow;
+      UCHAR  ucMemoryClockHigh;
+
+      USHORT usVDDC;
+      USHORT usVDDCI;
+      USHORT usUnused;
+
+      ULONG ulFlags; // ATOM_PPLIB_R600_FLAGS_*
+
+} ATOM_PPLIB_EVERGREEN_CLOCK_INFO;
 
 typedef struct _ATOM_PPLIB_RS780_CLOCK_INFO
 
diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
index a87990b..f3f2827 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -26,7 +26,7 @@
 #include <drm/drmP.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/radeon_drm.h>
-#include "radeon_fixed.h"
+#include <drm/drm_fixed.h>
 #include "radeon.h"
 #include "atom.h"
 #include "atom-bits.h"
@@ -245,25 +245,27 @@ void atombios_crtc_dpms(struct drm_crtc *crtc, int mode)
 
 	switch (mode) {
 	case DRM_MODE_DPMS_ON:
+		radeon_crtc->enabled = true;
+		/* adjust pm to dpms changes BEFORE enabling crtcs */
+		radeon_pm_compute_clocks(rdev);
 		atombios_enable_crtc(crtc, ATOM_ENABLE);
 		if (ASIC_IS_DCE3(rdev))
 			atombios_enable_crtc_memreq(crtc, ATOM_ENABLE);
 		atombios_blank_crtc(crtc, ATOM_DISABLE);
-		/* XXX re-enable when interrupt support is added */
-		if (!ASIC_IS_DCE4(rdev))
-			drm_vblank_post_modeset(dev, radeon_crtc->crtc_id);
+		drm_vblank_post_modeset(dev, radeon_crtc->crtc_id);
 		radeon_crtc_load_lut(crtc);
 		break;
 	case DRM_MODE_DPMS_STANDBY:
 	case DRM_MODE_DPMS_SUSPEND:
 	case DRM_MODE_DPMS_OFF:
-		/* XXX re-enable when interrupt support is added */
-		if (!ASIC_IS_DCE4(rdev))
-			drm_vblank_pre_modeset(dev, radeon_crtc->crtc_id);
+		drm_vblank_pre_modeset(dev, radeon_crtc->crtc_id);
 		atombios_blank_crtc(crtc, ATOM_ENABLE);
 		if (ASIC_IS_DCE3(rdev))
 			atombios_enable_crtc_memreq(crtc, ATOM_DISABLE);
 		atombios_enable_crtc(crtc, ATOM_DISABLE);
+		radeon_crtc->enabled = false;
+		/* adjust pm to dpms changes AFTER disabling crtcs */
+		radeon_pm_compute_clocks(rdev);
 		break;
 	}
 }
@@ -705,6 +707,7 @@ static void atombios_crtc_set_pll(struct drm_crtc *crtc, struct drm_display_mode
 		break;
 	case ATOM_DCPLL:
 	case ATOM_PPLL_INVALID:
+	default:
 		pll = &rdev->clock.dcpll;
 		break;
 	}
@@ -1160,6 +1163,12 @@ static bool atombios_crtc_mode_fixup(struct drm_crtc *crtc,
 				     struct drm_display_mode *mode,
 				     struct drm_display_mode *adjusted_mode)
 {
+	struct drm_device *dev = crtc->dev;
+	struct radeon_device *rdev = dev->dev_private;
+
+	/* adjust pm to upcoming mode change */
+	radeon_pm_compute_clocks(rdev);
+
 	if (!radeon_crtc_scaling_mode_fixup(crtc, mode, adjusted_mode))
 		return false;
 	return true;
diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
index 28b31c6..abffb14 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -351,7 +351,7 @@ retry:
 	args.v1.ucChannelID = chan->rec.i2c_id;
 	args.v1.ucDelay = delay / 10;
 	if (ASIC_IS_DCE4(rdev))
-		args.v2.ucHPD_ID = chan->rec.hpd_id;
+		args.v2.ucHPD_ID = chan->rec.hpd;
 
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
 
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index e8f447e..4b6623d 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -28,39 +28,246 @@
 #include "radeon.h"
 #include "radeon_asic.h"
 #include "radeon_drm.h"
-#include "rv770d.h"
+#include "evergreend.h"
 #include "atom.h"
 #include "avivod.h"
 #include "evergreen_reg.h"
 
+#define EVERGREEN_PFP_UCODE_SIZE 1120
+#define EVERGREEN_PM4_UCODE_SIZE 1376
+
 static void evergreen_gpu_init(struct radeon_device *rdev);
 void evergreen_fini(struct radeon_device *rdev);
 
+void evergreen_pm_misc(struct radeon_device *rdev)
+{
+	int req_ps_idx = rdev->pm.requested_power_state_index;
+	int req_cm_idx = rdev->pm.requested_clock_mode_index;
+	struct radeon_power_state *ps = &rdev->pm.power_state[req_ps_idx];
+	struct radeon_voltage *voltage = &ps->clock_info[req_cm_idx].voltage;
+
+	if ((voltage->type == VOLTAGE_SW) && voltage->voltage) {
+		if (voltage->voltage != rdev->pm.current_vddc) {
+			radeon_atom_set_voltage(rdev, voltage->voltage);
+			rdev->pm.current_vddc = voltage->voltage;
+			DRM_DEBUG("Setting: v: %d\n", voltage->voltage);
+		}
+	}
+}
+
+void evergreen_pm_prepare(struct radeon_device *rdev)
+{
+	struct drm_device *ddev = rdev->ddev;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	u32 tmp;
+
+	/* disable any active CRTCs */
+	list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
+			tmp = RREG32(EVERGREEN_CRTC_CONTROL + radeon_crtc->crtc_offset);
+			tmp |= EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+			WREG32(EVERGREEN_CRTC_CONTROL + radeon_crtc->crtc_offset, tmp);
+		}
+	}
+}
+
+void evergreen_pm_finish(struct radeon_device *rdev)
+{
+	struct drm_device *ddev = rdev->ddev;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	u32 tmp;
+
+	/* enable any active CRTCs */
+	list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
+			tmp = RREG32(EVERGREEN_CRTC_CONTROL + radeon_crtc->crtc_offset);
+			tmp &= ~EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+			WREG32(EVERGREEN_CRTC_CONTROL + radeon_crtc->crtc_offset, tmp);
+		}
+	}
+}
+
 bool evergreen_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd)
 {
 	bool connected = false;
-	/* XXX */
+
+	switch (hpd) {
+	case RADEON_HPD_1:
+		if (RREG32(DC_HPD1_INT_STATUS) & DC_HPDx_SENSE)
+			connected = true;
+		break;
+	case RADEON_HPD_2:
+		if (RREG32(DC_HPD2_INT_STATUS) & DC_HPDx_SENSE)
+			connected = true;
+		break;
+	case RADEON_HPD_3:
+		if (RREG32(DC_HPD3_INT_STATUS) & DC_HPDx_SENSE)
+			connected = true;
+		break;
+	case RADEON_HPD_4:
+		if (RREG32(DC_HPD4_INT_STATUS) & DC_HPDx_SENSE)
+			connected = true;
+		break;
+	case RADEON_HPD_5:
+		if (RREG32(DC_HPD5_INT_STATUS) & DC_HPDx_SENSE)
+			connected = true;
+		break;
+	case RADEON_HPD_6:
+		if (RREG32(DC_HPD6_INT_STATUS) & DC_HPDx_SENSE)
+			connected = true;
+		break;
+	default:
+		break;
+	}
+
 	return connected;
 }
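
The six per-pad cases differ only in which status register they read, so the
same sense check can be written table-driven; a sketch, assuming RADEON_HPD_1
is 0 (which the rdev->irq.hpd[] indexing in evergreen_hpd_init below
suggests):

	static const u32 evergreen_hpd_status_reg[] = {
		DC_HPD1_INT_STATUS, DC_HPD2_INT_STATUS, DC_HPD3_INT_STATUS,
		DC_HPD4_INT_STATUS, DC_HPD5_INT_STATUS, DC_HPD6_INT_STATUS,
	};

	static bool evergreen_hpd_sense_tbl(struct radeon_device *rdev,
					    enum radeon_hpd_id hpd)
	{
		if (hpd >= ARRAY_SIZE(evergreen_hpd_status_reg))
			return false;
		return !!(RREG32(evergreen_hpd_status_reg[hpd]) & DC_HPDx_SENSE);
	}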
 
 void evergreen_hpd_set_polarity(struct radeon_device *rdev,
 				enum radeon_hpd_id hpd)
 {
-	/* XXX */
+	u32 tmp;
+	bool connected = evergreen_hpd_sense(rdev, hpd);
+
+	switch (hpd) {
+	case RADEON_HPD_1:
+		tmp = RREG32(DC_HPD1_INT_CONTROL);
+		if (connected)
+			tmp &= ~DC_HPDx_INT_POLARITY;
+		else
+			tmp |= DC_HPDx_INT_POLARITY;
+		WREG32(DC_HPD1_INT_CONTROL, tmp);
+		break;
+	case RADEON_HPD_2:
+		tmp = RREG32(DC_HPD2_INT_CONTROL);
+		if (connected)
+			tmp &= ~DC_HPDx_INT_POLARITY;
+		else
+			tmp |= DC_HPDx_INT_POLARITY;
+		WREG32(DC_HPD2_INT_CONTROL, tmp);
+		break;
+	case RADEON_HPD_3:
+		tmp = RREG32(DC_HPD3_INT_CONTROL);
+		if (connected)
+			tmp &= ~DC_HPDx_INT_POLARITY;
+		else
+			tmp |= DC_HPDx_INT_POLARITY;
+		WREG32(DC_HPD3_INT_CONTROL, tmp);
+		break;
+	case RADEON_HPD_4:
+		tmp = RREG32(DC_HPD4_INT_CONTROL);
+		if (connected)
+			tmp &= ~DC_HPDx_INT_POLARITY;
+		else
+			tmp |= DC_HPDx_INT_POLARITY;
+		WREG32(DC_HPD4_INT_CONTROL, tmp);
+		break;
+	case RADEON_HPD_5:
+		tmp = RREG32(DC_HPD5_INT_CONTROL);
+		if (connected)
+			tmp &= ~DC_HPDx_INT_POLARITY;
+		else
+			tmp |= DC_HPDx_INT_POLARITY;
+		WREG32(DC_HPD5_INT_CONTROL, tmp);
+		break;
+	case RADEON_HPD_6:
+		tmp = RREG32(DC_HPD6_INT_CONTROL);
+		if (connected)
+			tmp &= ~DC_HPDx_INT_POLARITY;
+		else
+			tmp |= DC_HPDx_INT_POLARITY;
+		WREG32(DC_HPD6_INT_CONTROL, tmp);
+		break;
+	default:
+		break;
+	}
 }
 
 void evergreen_hpd_init(struct radeon_device *rdev)
 {
-	/* XXX */
+	struct drm_device *dev = rdev->ddev;
+	struct drm_connector *connector;
+	u32 tmp = DC_HPDx_CONNECTION_TIMER(0x9c4) |
+		DC_HPDx_RX_INT_TIMER(0xfa) | DC_HPDx_EN;
+
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+		switch (radeon_connector->hpd.hpd) {
+		case RADEON_HPD_1:
+			WREG32(DC_HPD1_CONTROL, tmp);
+			rdev->irq.hpd[0] = true;
+			break;
+		case RADEON_HPD_2:
+			WREG32(DC_HPD2_CONTROL, tmp);
+			rdev->irq.hpd[1] = true;
+			break;
+		case RADEON_HPD_3:
+			WREG32(DC_HPD3_CONTROL, tmp);
+			rdev->irq.hpd[2] = true;
+			break;
+		case RADEON_HPD_4:
+			WREG32(DC_HPD4_CONTROL, tmp);
+			rdev->irq.hpd[3] = true;
+			break;
+		case RADEON_HPD_5:
+			WREG32(DC_HPD5_CONTROL, tmp);
+			rdev->irq.hpd[4] = true;
+			break;
+		case RADEON_HPD_6:
+			WREG32(DC_HPD6_CONTROL, tmp);
+			rdev->irq.hpd[5] = true;
+			break;
+		default:
+			break;
+		}
+	}
+	if (rdev->irq.installed)
+		evergreen_irq_set(rdev);
 }
 
-
-void evergreen_bandwidth_update(struct radeon_device *rdev)
+void evergreen_hpd_fini(struct radeon_device *rdev)
 {
-	/* XXX */
+	struct drm_device *dev = rdev->ddev;
+	struct drm_connector *connector;
+
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+		struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+		switch (radeon_connector->hpd.hpd) {
+		case RADEON_HPD_1:
+			WREG32(DC_HPD1_CONTROL, 0);
+			rdev->irq.hpd[0] = false;
+			break;
+		case RADEON_HPD_2:
+			WREG32(DC_HPD2_CONTROL, 0);
+			rdev->irq.hpd[1] = false;
+			break;
+		case RADEON_HPD_3:
+			WREG32(DC_HPD3_CONTROL, 0);
+			rdev->irq.hpd[2] = false;
+			break;
+		case RADEON_HPD_4:
+			WREG32(DC_HPD4_CONTROL, 0);
+			rdev->irq.hpd[3] = false;
+			break;
+		case RADEON_HPD_5:
+			WREG32(DC_HPD5_CONTROL, 0);
+			rdev->irq.hpd[4] = false;
+			break;
+		case RADEON_HPD_6:
+			WREG32(DC_HPD6_CONTROL, 0);
+			rdev->irq.hpd[5] = false;
+			break;
+		default:
+			break;
+		}
+	}
 }
 
-void evergreen_hpd_fini(struct radeon_device *rdev)
+void evergreen_bandwidth_update(struct radeon_device *rdev)
 {
 	/* XXX */
 }
@@ -83,10 +290,31 @@ static int evergreen_mc_wait_for_idle(struct radeon_device *rdev)
 /*
  * GART
  */
+void evergreen_pcie_gart_tlb_flush(struct radeon_device *rdev)
+{
+	unsigned i;
+	u32 tmp;
+
+	WREG32(VM_CONTEXT0_REQUEST_RESPONSE, REQUEST_TYPE(1));
+	for (i = 0; i < rdev->usec_timeout; i++) {
+		/* read MC_STATUS */
+		tmp = RREG32(VM_CONTEXT0_REQUEST_RESPONSE);
+		tmp = (tmp & RESPONSE_TYPE_MASK) >> RESPONSE_TYPE_SHIFT;
+		if (tmp == 2) {
+			printk(KERN_WARNING "[drm] evergreen flush TLB failed\n");
+			return;
+		}
+		if (tmp) {
+			return;
+		}
+		udelay(1);
+	}
+}
+
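Writing REQUEST_TYPE(1) kicks a flush of the context-0 TLB; the loop then
polls RESPONSE_TYPE until it goes non-zero, where 2 signals failure and any
other non-zero value success, giving up after rdev->usec_timeout
microseconds. The underlying idiom is the usual bounded register poll; as a
generic sketch (illustrative helper, not part of the patch):

	static int evergreen_poll_reg(struct radeon_device *rdev, u32 reg,
				      u32 mask, u32 val)
	{
		unsigned i;

		for (i = 0; i < rdev->usec_timeout; i++) {
			if ((RREG32(reg) & mask) == val)
				return 0;
			udelay(1);
		}
		return -ETIMEDOUT;
	}
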
 int evergreen_pcie_gart_enable(struct radeon_device *rdev)
 {
 	u32 tmp;
-	int r, i;
+	int r;
 
 	if (rdev->gart.table.vram.robj == NULL) {
 		dev_err(rdev->dev, "No VRAM object for PCIE GART.\n");
@@ -121,10 +349,9 @@ int evergreen_pcie_gart_enable(struct radeon_device *rdev)
 				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);
 	WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR,
 			(u32)(rdev->dummy_page.addr >> 12));
-	for (i = 1; i < 7; i++)
-		WREG32(VM_CONTEXT0_CNTL + (i * 4), 0);
+	WREG32(VM_CONTEXT1_CNTL, 0);
 
-	r600_pcie_gart_tlb_flush(rdev);
+	evergreen_pcie_gart_tlb_flush(rdev);
 	rdev->gart.ready = true;
 	return 0;
 }
@@ -132,11 +359,11 @@ int evergreen_pcie_gart_enable(struct radeon_device *rdev)
 void evergreen_pcie_gart_disable(struct radeon_device *rdev)
 {
 	u32 tmp;
-	int i, r;
+	int r;
 
 	/* Disable all tables */
-	for (i = 0; i < 7; i++)
-		WREG32(VM_CONTEXT0_CNTL + (i * 4), 0);
+	WREG32(VM_CONTEXT0_CNTL, 0);
+	WREG32(VM_CONTEXT1_CNTL, 0);
 
 	/* Setup L2 cache */
 	WREG32(VM_L2_CNTL, ENABLE_L2_FRAGMENT_PROCESSING |
@@ -173,7 +400,6 @@ void evergreen_pcie_gart_fini(struct radeon_device *rdev)
 void evergreen_agp_enable(struct radeon_device *rdev)
 {
 	u32 tmp;
-	int i;
 
 	/* Setup L2 cache */
 	WREG32(VM_L2_CNTL, ENABLE_L2_CACHE | ENABLE_L2_FRAGMENT_PROCESSING |
@@ -193,8 +419,8 @@ void evergreen_agp_enable(struct radeon_device *rdev)
 	WREG32(MC_VM_MB_L1_TLB1_CNTL, tmp);
 	WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp);
 	WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp);
-	for (i = 0; i < 7; i++)
-		WREG32(VM_CONTEXT0_CNTL + (i * 4), 0);
+	WREG32(VM_CONTEXT0_CNTL, 0);
+	WREG32(VM_CONTEXT1_CNTL, 0);
 }
 
 static void evergreen_mc_stop(struct radeon_device *rdev, struct evergreen_mc_save *save)
@@ -400,40 +626,656 @@ static void evergreen_mc_program(struct radeon_device *rdev)
 	rv515_vga_render_disable(rdev);
 }
 
-#if 0
 /*
  * CP.
  */
-static void evergreen_cp_stop(struct radeon_device *rdev)
-{
-	/* XXX */
-}
-
 
 static int evergreen_cp_load_microcode(struct radeon_device *rdev)
 {
-	/* XXX */
+	const __be32 *fw_data;
+	int i;
 
+	if (!rdev->me_fw || !rdev->pfp_fw)
+		return -EINVAL;
+
+	r700_cp_stop(rdev);
+	WREG32(CP_RB_CNTL, RB_NO_UPDATE | (15 << 8) | (3 << 0));
+
+	fw_data = (const __be32 *)rdev->pfp_fw->data;
+	WREG32(CP_PFP_UCODE_ADDR, 0);
+	for (i = 0; i < EVERGREEN_PFP_UCODE_SIZE; i++)
+		WREG32(CP_PFP_UCODE_DATA, be32_to_cpup(fw_data++));
+	WREG32(CP_PFP_UCODE_ADDR, 0);
+
+	fw_data = (const __be32 *)rdev->me_fw->data;
+	WREG32(CP_ME_RAM_WADDR, 0);
+	for (i = 0; i < EVERGREEN_PM4_UCODE_SIZE; i++)
+		WREG32(CP_ME_RAM_DATA, be32_to_cpup(fw_data++));
+
+	WREG32(CP_PFP_UCODE_ADDR, 0);
+	WREG32(CP_ME_RAM_WADDR, 0);
+	WREG32(CP_ME_RAM_RADDR, 0);
 	return 0;
 }
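
Both microcode uploads use the same auto-increment scheme: writing 0 to the
address register resets the internal index, each write to the data register
stores one big-endian word and advances the index, and the address registers
are cleared again afterwards so the CP starts fetching from word 0. The
shared pattern as a sketch (hypothetical helper; the patch open-codes it for
PFP and ME):

	static void evergreen_upload_ucode(struct radeon_device *rdev,
					   u32 addr_reg, u32 data_reg,
					   const __be32 *fw, int words)
	{
		int i;

		WREG32(addr_reg, 0);
		for (i = 0; i < words; i++)
			WREG32(data_reg, be32_to_cpup(fw++));
		WREG32(addr_reg, 0);
	}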
 
+int evergreen_cp_resume(struct radeon_device *rdev)
+{
+	u32 tmp;
+	u32 rb_bufsz;
+	int r;
+
+	/* Reset cp; if cp is reset, then PA, SH, VGT also need to be reset */
+	WREG32(GRBM_SOFT_RESET, (SOFT_RESET_CP |
+				 SOFT_RESET_PA |
+				 SOFT_RESET_SH |
+				 SOFT_RESET_VGT |
+				 SOFT_RESET_SX));
+	RREG32(GRBM_SOFT_RESET);
+	mdelay(15);
+	WREG32(GRBM_SOFT_RESET, 0);
+	RREG32(GRBM_SOFT_RESET);
+
+	/* Set ring buffer size */
+	rb_bufsz = drm_order(rdev->cp.ring_size / 8);
+	tmp = RB_NO_UPDATE | (drm_order(RADEON_GPU_PAGE_SIZE/8) << 8) | rb_bufsz;
+#ifdef __BIG_ENDIAN
+	tmp |= BUF_SWAP_32BIT;
+#endif
+	WREG32(CP_RB_CNTL, tmp);
+	WREG32(CP_SEM_WAIT_TIMER, 0x4);
+
+	/* Set the write pointer delay */
+	WREG32(CP_RB_WPTR_DELAY, 0);
+
+	/* Initialize the ring buffer's read and write pointers */
+	WREG32(CP_RB_CNTL, tmp | RB_RPTR_WR_ENA);
+	WREG32(CP_RB_RPTR_WR, 0);
+	WREG32(CP_RB_WPTR, 0);
+	WREG32(CP_RB_RPTR_ADDR, rdev->cp.gpu_addr & 0xFFFFFFFF);
+	WREG32(CP_RB_RPTR_ADDR_HI, upper_32_bits(rdev->cp.gpu_addr));
+	mdelay(1);
+	WREG32(CP_RB_CNTL, tmp);
+
+	WREG32(CP_RB_BASE, rdev->cp.gpu_addr >> 8);
+	WREG32(CP_DEBUG, (1 << 27) | (1 << 28));
+
+	rdev->cp.rptr = RREG32(CP_RB_RPTR);
+	rdev->cp.wptr = RREG32(CP_RB_WPTR);
+
+	r600_cp_start(rdev);
+	rdev->cp.ready = true;
+	r = radeon_ring_test(rdev);
+	if (r) {
+		rdev->cp.ready = false;
+		return r;
+	}
+	return 0;
+}
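
The ring-buffer sizes in evergreen_cp_resume() are programmed as log2 values:
with a typical 1 MiB ring, rb_bufsz = drm_order(1048576 / 8) = 17, and with
RADEON_GPU_PAGE_SIZE = 4096 the update-block field gets
drm_order(4096 / 8) = 9, shifted to bit 8 of CP_RB_CNTL (the exact ring size
depends on how the driver was configured). Setting RB_RPTR_WR_ENA while
writing CP_RB_RPTR_WR and CP_RB_WPTR forces the hardware read pointer to
zero, and rewriting CP_RB_CNTL without that bit re-arms normal operation.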
 
 /*
  * Core functions
  */
-static u32 evergreen_get_tile_pipe_to_backend_map(u32 num_tile_pipes,
+static u32 evergreen_get_tile_pipe_to_backend_map(struct radeon_device *rdev,
+						  u32 num_tile_pipes,
 						  u32 num_backends,
 						  u32 backend_disable_mask)
 {
 	u32 backend_map = 0;
+	u32 enabled_backends_mask = 0;
+	u32 enabled_backends_count = 0;
+	u32 cur_pipe;
+	u32 swizzle_pipe[EVERGREEN_MAX_PIPES];
+	u32 cur_backend = 0;
+	u32 i;
+	bool force_no_swizzle;
+
+	if (num_tile_pipes > EVERGREEN_MAX_PIPES)
+		num_tile_pipes = EVERGREEN_MAX_PIPES;
+	if (num_tile_pipes < 1)
+		num_tile_pipes = 1;
+	if (num_backends > EVERGREEN_MAX_BACKENDS)
+		num_backends = EVERGREEN_MAX_BACKENDS;
+	if (num_backends < 1)
+		num_backends = 1;
+
+	for (i = 0; i < EVERGREEN_MAX_BACKENDS; ++i) {
+		if (((backend_disable_mask >> i) & 1) == 0) {
+			enabled_backends_mask |= (1 << i);
+			++enabled_backends_count;
+		}
+		if (enabled_backends_count == num_backends)
+			break;
+	}
+
+	if (enabled_backends_count == 0) {
+		enabled_backends_mask = 1;
+		enabled_backends_count = 1;
+	}
+
+	if (enabled_backends_count != num_backends)
+		num_backends = enabled_backends_count;
+
+	memset((uint8_t *)&swizzle_pipe[0], 0, sizeof(u32) * EVERGREEN_MAX_PIPES);
+	switch (rdev->family) {
+	case CHIP_CEDAR:
+	case CHIP_REDWOOD:
+		force_no_swizzle = false;
+		break;
+	case CHIP_CYPRESS:
+	case CHIP_HEMLOCK:
+	case CHIP_JUNIPER:
+	default:
+		force_no_swizzle = true;
+		break;
+	}
+	if (force_no_swizzle) {
+		bool last_backend_enabled = false;
+
+		force_no_swizzle = false;
+		for (i = 0; i < EVERGREEN_MAX_BACKENDS; ++i) {
+			if (((enabled_backends_mask >> i) & 1) == 1) {
+				if (last_backend_enabled)
+					force_no_swizzle = true;
+				last_backend_enabled = true;
+			} else
+				last_backend_enabled = false;
+		}
+	}
+
+	switch (num_tile_pipes) {
+	case 1:
+	case 3:
+	case 5:
+	case 7:
+		DRM_ERROR("odd number of pipes!\n");
+		break;
+	case 2:
+		swizzle_pipe[0] = 0;
+		swizzle_pipe[1] = 1;
+		break;
+	case 4:
+		if (force_no_swizzle) {
+			swizzle_pipe[0] = 0;
+			swizzle_pipe[1] = 1;
+			swizzle_pipe[2] = 2;
+			swizzle_pipe[3] = 3;
+		} else {
+			swizzle_pipe[0] = 0;
+			swizzle_pipe[1] = 2;
+			swizzle_pipe[2] = 1;
+			swizzle_pipe[3] = 3;
+		}
+		break;
+	case 6:
+		if (force_no_swizzle) {
+			swizzle_pipe[0] = 0;
+			swizzle_pipe[1] = 1;
+			swizzle_pipe[2] = 2;
+			swizzle_pipe[3] = 3;
+			swizzle_pipe[4] = 4;
+			swizzle_pipe[5] = 5;
+		} else {
+			swizzle_pipe[0] = 0;
+			swizzle_pipe[1] = 2;
+			swizzle_pipe[2] = 4;
+			swizzle_pipe[3] = 1;
+			swizzle_pipe[4] = 3;
+			swizzle_pipe[5] = 5;
+		}
+		break;
+	case 8:
+		if (force_no_swizzle) {
+			swizzle_pipe[0] = 0;
+			swizzle_pipe[1] = 1;
+			swizzle_pipe[2] = 2;
+			swizzle_pipe[3] = 3;
+			swizzle_pipe[4] = 4;
+			swizzle_pipe[5] = 5;
+			swizzle_pipe[6] = 6;
+			swizzle_pipe[7] = 7;
+		} else {
+			swizzle_pipe[0] = 0;
+			swizzle_pipe[1] = 2;
+			swizzle_pipe[2] = 4;
+			swizzle_pipe[3] = 6;
+			swizzle_pipe[4] = 1;
+			swizzle_pipe[5] = 3;
+			swizzle_pipe[6] = 5;
+			swizzle_pipe[7] = 7;
+		}
+		break;
+	}
+
+	for (cur_pipe = 0; cur_pipe < num_tile_pipes; ++cur_pipe) {
+		while (((1 << cur_backend) & enabled_backends_mask) == 0)
+			cur_backend = (cur_backend + 1) % EVERGREEN_MAX_BACKENDS;
+
+		backend_map |= (((cur_backend & 0xf) << (swizzle_pipe[cur_pipe] * 4)));
+
+		cur_backend = (cur_backend + 1) % EVERGREEN_MAX_BACKENDS;
+	}
 
 	return backend_map;
 }
-#endif
 
 static void evergreen_gpu_init(struct radeon_device *rdev)
 {
-	/* XXX */
+	u32 cc_rb_backend_disable = 0;
+	u32 cc_gc_shader_pipe_config;
+	u32 gb_addr_config = 0;
+	u32 mc_shared_chmap, mc_arb_ramcfg;
+	u32 gb_backend_map;
+	u32 grbm_gfx_index;
+	u32 sx_debug_1;
+	u32 smx_dc_ctl0;
+	u32 sq_config;
+	u32 sq_lds_resource_mgmt;
+	u32 sq_gpr_resource_mgmt_1;
+	u32 sq_gpr_resource_mgmt_2;
+	u32 sq_gpr_resource_mgmt_3;
+	u32 sq_thread_resource_mgmt;
+	u32 sq_thread_resource_mgmt_2;
+	u32 sq_stack_resource_mgmt_1;
+	u32 sq_stack_resource_mgmt_2;
+	u32 sq_stack_resource_mgmt_3;
+	u32 vgt_cache_invalidation;
+	u32 hdp_host_path_cntl;
+	int i, j, num_shader_engines, ps_thread_count;
+
+	switch (rdev->family) {
+	case CHIP_CYPRESS:
+	case CHIP_HEMLOCK:
+		rdev->config.evergreen.num_ses = 2;
+		rdev->config.evergreen.max_pipes = 4;
+		rdev->config.evergreen.max_tile_pipes = 8;
+		rdev->config.evergreen.max_simds = 10;
+		rdev->config.evergreen.max_backends = 4 * rdev->config.evergreen.num_ses;
+		rdev->config.evergreen.max_gprs = 256;
+		rdev->config.evergreen.max_threads = 248;
+		rdev->config.evergreen.max_gs_threads = 32;
+		rdev->config.evergreen.max_stack_entries = 512;
+		rdev->config.evergreen.sx_num_of_sets = 4;
+		rdev->config.evergreen.sx_max_export_size = 256;
+		rdev->config.evergreen.sx_max_export_pos_size = 64;
+		rdev->config.evergreen.sx_max_export_smx_size = 192;
+		rdev->config.evergreen.max_hw_contexts = 8;
+		rdev->config.evergreen.sq_num_cf_insts = 2;
+
+		rdev->config.evergreen.sc_prim_fifo_size = 0x100;
+		rdev->config.evergreen.sc_hiz_tile_fifo_size = 0x30;
+		rdev->config.evergreen.sc_earlyz_tile_fifo_size = 0x130;
+		break;
+	case CHIP_JUNIPER:
+		rdev->config.evergreen.num_ses = 1;
+		rdev->config.evergreen.max_pipes = 4;
+		rdev->config.evergreen.max_tile_pipes = 4;
+		rdev->config.evergreen.max_simds = 10;
+		rdev->config.evergreen.max_backends = 4 * rdev->config.evergreen.num_ses;
+		rdev->config.evergreen.max_gprs = 256;
+		rdev->config.evergreen.max_threads = 248;
+		rdev->config.evergreen.max_gs_threads = 32;
+		rdev->config.evergreen.max_stack_entries = 512;
+		rdev->config.evergreen.sx_num_of_sets = 4;
+		rdev->config.evergreen.sx_max_export_size = 256;
+		rdev->config.evergreen.sx_max_export_pos_size = 64;
+		rdev->config.evergreen.sx_max_export_smx_size = 192;
+		rdev->config.evergreen.max_hw_contexts = 8;
+		rdev->config.evergreen.sq_num_cf_insts = 2;
+
+		rdev->config.evergreen.sc_prim_fifo_size = 0x100;
+		rdev->config.evergreen.sc_hiz_tile_fifo_size = 0x30;
+		rdev->config.evergreen.sc_earlyz_tile_fifo_size = 0x130;
+		break;
+	case CHIP_REDWOOD:
+		rdev->config.evergreen.num_ses = 1;
+		rdev->config.evergreen.max_pipes = 4;
+		rdev->config.evergreen.max_tile_pipes = 4;
+		rdev->config.evergreen.max_simds = 5;
+		rdev->config.evergreen.max_backends = 2 * rdev->config.evergreen.num_ses;
+		rdev->config.evergreen.max_gprs = 256;
+		rdev->config.evergreen.max_threads = 248;
+		rdev->config.evergreen.max_gs_threads = 32;
+		rdev->config.evergreen.max_stack_entries = 256;
+		rdev->config.evergreen.sx_num_of_sets = 4;
+		rdev->config.evergreen.sx_max_export_size = 256;
+		rdev->config.evergreen.sx_max_export_pos_size = 64;
+		rdev->config.evergreen.sx_max_export_smx_size = 192;
+		rdev->config.evergreen.max_hw_contexts = 8;
+		rdev->config.evergreen.sq_num_cf_insts = 2;
+
+		rdev->config.evergreen.sc_prim_fifo_size = 0x100;
+		rdev->config.evergreen.sc_hiz_tile_fifo_size = 0x30;
+		rdev->config.evergreen.sc_earlyz_tile_fifo_size = 0x130;
+		break;
+	case CHIP_CEDAR:
+	default:
+		rdev->config.evergreen.num_ses = 1;
+		rdev->config.evergreen.max_pipes = 2;
+		rdev->config.evergreen.max_tile_pipes = 2;
+		rdev->config.evergreen.max_simds = 2;
+		rdev->config.evergreen.max_backends = 1 * rdev->config.evergreen.num_ses;
+		rdev->config.evergreen.max_gprs = 256;
+		rdev->config.evergreen.max_threads = 192;
+		rdev->config.evergreen.max_gs_threads = 16;
+		rdev->config.evergreen.max_stack_entries = 256;
+		rdev->config.evergreen.sx_num_of_sets = 4;
+		rdev->config.evergreen.sx_max_export_size = 128;
+		rdev->config.evergreen.sx_max_export_pos_size = 32;
+		rdev->config.evergreen.sx_max_export_smx_size = 96;
+		rdev->config.evergreen.max_hw_contexts = 4;
+		rdev->config.evergreen.sq_num_cf_insts = 1;
+
+		rdev->config.evergreen.sc_prim_fifo_size = 0x40;
+		rdev->config.evergreen.sc_hiz_tile_fifo_size = 0x30;
+		rdev->config.evergreen.sc_earlyz_tile_fifo_size = 0x130;
+		break;
+	}
+
+	/* Initialize HDP */
+	for (i = 0, j = 0; i < 32; i++, j += 0x18) {
+		WREG32((0x2c14 + j), 0x00000000);
+		WREG32((0x2c18 + j), 0x00000000);
+		WREG32((0x2c1c + j), 0x00000000);
+		WREG32((0x2c20 + j), 0x00000000);
+		WREG32((0x2c24 + j), 0x00000000);
+	}
+
+	WREG32(GRBM_CNTL, GRBM_READ_TIMEOUT(0xff));
+
+	cc_gc_shader_pipe_config = RREG32(CC_GC_SHADER_PIPE_CONFIG) & ~2;
+
+	cc_gc_shader_pipe_config |=
+		INACTIVE_QD_PIPES((EVERGREEN_MAX_PIPES_MASK << rdev->config.evergreen.max_pipes)
+				  & EVERGREEN_MAX_PIPES_MASK);
+	cc_gc_shader_pipe_config |=
+		INACTIVE_SIMDS((EVERGREEN_MAX_SIMDS_MASK << rdev->config.evergreen.max_simds)
+			       & EVERGREEN_MAX_SIMDS_MASK);
+
+	cc_rb_backend_disable =
+		BACKEND_DISABLE((EVERGREEN_MAX_BACKENDS_MASK << rdev->config.evergreen.max_backends)
+				& EVERGREEN_MAX_BACKENDS_MASK);
+
+
+	mc_shared_chmap = RREG32(MC_SHARED_CHMAP);
+	mc_arb_ramcfg = RREG32(MC_ARB_RAMCFG);
+
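+	/* Editor's note: the NUM_PIPES field below encodes log2 of the tile
+	 * pipe count: 1 -> 0, 2 -> 1, 4 -> 2, 8 -> 3.
+	 */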
+	switch (rdev->config.evergreen.max_tile_pipes) {
+	case 1:
+	default:
+		gb_addr_config |= NUM_PIPES(0);
+		break;
+	case 2:
+		gb_addr_config |= NUM_PIPES(1);
+		break;
+	case 4:
+		gb_addr_config |= NUM_PIPES(2);
+		break;
+	case 8:
+		gb_addr_config |= NUM_PIPES(3);
+		break;
+	}
+
+	gb_addr_config |= PIPE_INTERLEAVE_SIZE((mc_arb_ramcfg & BURSTLENGTH_MASK) >> BURSTLENGTH_SHIFT);
+	gb_addr_config |= BANK_INTERLEAVE_SIZE(0);
+	gb_addr_config |= NUM_SHADER_ENGINES(rdev->config.evergreen.num_ses - 1);
+	gb_addr_config |= SHADER_ENGINE_TILE_SIZE(1);
+	gb_addr_config |= NUM_GPUS(0); /* Hemlock? */
+	gb_addr_config |= MULTI_GPU_TILE_SIZE(2);
+
+	if (((mc_arb_ramcfg & NOOFCOLS_MASK) >> NOOFCOLS_SHIFT) > 2)
+		gb_addr_config |= ROW_SIZE(2);
+	else
+		gb_addr_config |= ROW_SIZE((mc_arb_ramcfg & NOOFCOLS_MASK) >> NOOFCOLS_SHIFT);
+
+	if (rdev->ddev->pdev->device == 0x689e) {
+		u32 efuse_straps_4;
+		u32 efuse_straps_3;
+		u8 efuse_box_bit_131_124;
+
+		WREG32(RCU_IND_INDEX, 0x204);
+		efuse_straps_4 = RREG32(RCU_IND_DATA);
+		WREG32(RCU_IND_INDEX, 0x203);
+		efuse_straps_3 = RREG32(RCU_IND_DATA);
+		efuse_box_bit_131_124 = (u8)(((efuse_straps_4 & 0xf) << 4) | ((efuse_straps_3 & 0xf0000000) >> 28));
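+		/* Editor's worked example: with efuse_straps_4 = 0x00000005 and
+		 * efuse_straps_3 = 0x90000000, efuse bits 131:124 assemble to
+		 * ((0x5 << 4) | 0x9) = 0x59, selecting gb_backend_map
+		 * 0x77552211 below.
+		 */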
+
+		switch(efuse_box_bit_131_124) {
+		case 0x00:
+			gb_backend_map = 0x76543210;
+			break;
+		case 0x55:
+			gb_backend_map = 0x77553311;
+			break;
+		case 0x56:
+			gb_backend_map = 0x77553300;
+			break;
+		case 0x59:
+			gb_backend_map = 0x77552211;
+			break;
+		case 0x66:
+			gb_backend_map = 0x77443300;
+			break;
+		case 0x99:
+			gb_backend_map = 0x66552211;
+			break;
+		case 0x5a:
+			gb_backend_map = 0x77552200;
+			break;
+		case 0xaa:
+			gb_backend_map = 0x66442200;
+			break;
+		case 0x95:
+			gb_backend_map = 0x66553311;
+			break;
+		default:
+			DRM_ERROR("bad backend map, using default\n");
+			gb_backend_map =
+				evergreen_get_tile_pipe_to_backend_map(rdev,
+								       rdev->config.evergreen.max_tile_pipes,
+								       rdev->config.evergreen.max_backends,
+								       ((EVERGREEN_MAX_BACKENDS_MASK <<
+								   rdev->config.evergreen.max_backends) &
+									EVERGREEN_MAX_BACKENDS_MASK));
+			break;
+		}
+	} else if (rdev->ddev->pdev->device == 0x68b9) {
+		u32 efuse_straps_3;
+		u8 efuse_box_bit_127_124;
+
+		WREG32(RCU_IND_INDEX, 0x203);
+		efuse_straps_3 = RREG32(RCU_IND_DATA);
+		efuse_box_bit_127_124 = (u8)((efuse_straps_3 & 0xf0000000) >> 28);
+
+		switch(efuse_box_bit_127_124) {
+		case 0x0:
+			gb_backend_map = 0x00003210;
+			break;
+		case 0x5:
+		case 0x6:
+		case 0x9:
+		case 0xa:
+			gb_backend_map = 0x00003311;
+			break;
+		default:
+			DRM_ERROR("bad backend map, using default\n");
+			gb_backend_map =
+				evergreen_get_tile_pipe_to_backend_map(rdev,
+								       rdev->config.evergreen.max_tile_pipes,
+								       rdev->config.evergreen.max_backends,
+								       ((EVERGREEN_MAX_BACKENDS_MASK <<
+								   rdev->config.evergreen.max_backends) &
+									EVERGREEN_MAX_BACKENDS_MASK));
+			break;
+		}
+	} else
+		gb_backend_map =
+			evergreen_get_tile_pipe_to_backend_map(rdev,
+							       rdev->config.evergreen.max_tile_pipes,
+							       rdev->config.evergreen.max_backends,
+							       ((EVERGREEN_MAX_BACKENDS_MASK <<
+								 rdev->config.evergreen.max_backends) &
+								EVERGREEN_MAX_BACKENDS_MASK));
+
+	WREG32(GB_BACKEND_MAP, gb_backend_map);
+	WREG32(GB_ADDR_CONFIG, gb_addr_config);
+	WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
+	WREG32(HDP_ADDR_CONFIG, gb_addr_config);
+
+	num_shader_engines = ((RREG32(GB_ADDR_CONFIG) & NUM_SHADER_ENGINES(3)) >> 12) + 1;
+	grbm_gfx_index = INSTANCE_BROADCAST_WRITES;
+
+	for (i = 0; i < rdev->config.evergreen.num_ses; i++) {
+		u32 rb = cc_rb_backend_disable | (0xf0 << 16);
+		u32 sp = cc_gc_shader_pipe_config;
+		u32 gfx = grbm_gfx_index | SE_INDEX(i);
+
+		if (i == num_shader_engines) {
+			rb |= BACKEND_DISABLE(EVERGREEN_MAX_BACKENDS_MASK);
+			sp |= INACTIVE_SIMDS(EVERGREEN_MAX_SIMDS_MASK);
+		}
+
+		WREG32(GRBM_GFX_INDEX, gfx);
+		WREG32(RLC_GFX_INDEX, gfx);
+
+		WREG32(CC_RB_BACKEND_DISABLE, rb);
+		WREG32(CC_SYS_RB_BACKEND_DISABLE, rb);
+		WREG32(GC_USER_RB_BACKEND_DISABLE, rb);
+		WREG32(CC_GC_SHADER_PIPE_CONFIG, sp);
+	}
+
+	grbm_gfx_index |= SE_BROADCAST_WRITES;
+	WREG32(GRBM_GFX_INDEX, grbm_gfx_index);
+	WREG32(RLC_GFX_INDEX, grbm_gfx_index);
+
+	WREG32(CGTS_SYS_TCC_DISABLE, 0);
+	WREG32(CGTS_TCC_DISABLE, 0);
+	WREG32(CGTS_USER_SYS_TCC_DISABLE, 0);
+	WREG32(CGTS_USER_TCC_DISABLE, 0);
+
+	/* set HW defaults for 3D engine */
+	WREG32(CP_QUEUE_THRESHOLDS, (ROQ_IB1_START(0x16) |
+				     ROQ_IB2_START(0x2b)));
+
+	WREG32(CP_MEQ_THRESHOLDS, STQ_SPLIT(0x30));
+
+	WREG32(TA_CNTL_AUX, (DISABLE_CUBE_ANISO |
+			     SYNC_GRADIENT |
+			     SYNC_WALKER |
+			     SYNC_ALIGNER));
+
+	sx_debug_1 = RREG32(SX_DEBUG_1);
+	sx_debug_1 |= ENABLE_NEW_SMX_ADDRESS;
+	WREG32(SX_DEBUG_1, sx_debug_1);
+
+
+	smx_dc_ctl0 = RREG32(SMX_DC_CTL0);
+	smx_dc_ctl0 &= ~NUMBER_OF_SETS(0x1ff);
+	smx_dc_ctl0 |= NUMBER_OF_SETS(rdev->config.evergreen.sx_num_of_sets);
+	WREG32(SMX_DC_CTL0, smx_dc_ctl0);
+
+	WREG32(SX_EXPORT_BUFFER_SIZES, (COLOR_BUFFER_SIZE((rdev->config.evergreen.sx_max_export_size / 4) - 1) |
+					POSITION_BUFFER_SIZE((rdev->config.evergreen.sx_max_export_pos_size / 4) - 1) |
+					SMX_BUFFER_SIZE((rdev->config.evergreen.sx_max_export_smx_size / 4) - 1)));
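+	/* Editor's note: the three SX_EXPORT_BUFFER_SIZES fields above are
+	 * programmed as (size / 4) - 1, e.g. 256/4 - 1 = 63, 64/4 - 1 = 15
+	 * and 192/4 - 1 = 47 on Cypress/Redwood.
+	 */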
+
+	WREG32(PA_SC_FIFO_SIZE, (SC_PRIM_FIFO_SIZE(rdev->config.evergreen.sc_prim_fifo_size) |
+				 SC_HIZ_TILE_FIFO_SIZE(rdev->config.evergreen.sc_hiz_tile_fifo_size) |
+				 SC_EARLYZ_TILE_FIFO_SIZE(rdev->config.evergreen.sc_earlyz_tile_fifo_size)));
+
+	WREG32(VGT_NUM_INSTANCES, 1);
+	WREG32(SPI_CONFIG_CNTL, 0);
+	WREG32(SPI_CONFIG_CNTL_1, VTX_DONE_DELAY(4));
+	WREG32(CP_PERFMON_CNTL, 0);
+
+	WREG32(SQ_MS_FIFO_SIZES, (CACHE_FIFO_SIZE(16 * rdev->config.evergreen.sq_num_cf_insts) |
+				  FETCH_FIFO_HIWATER(0x4) |
+				  DONE_FIFO_HIWATER(0xe0) |
+				  ALU_UPDATE_FIFO_HIWATER(0x8)));
+
+	sq_config = RREG32(SQ_CONFIG);
+	sq_config &= ~(PS_PRIO(3) |
+		       VS_PRIO(3) |
+		       GS_PRIO(3) |
+		       ES_PRIO(3));
+	sq_config |= (VC_ENABLE |
+		      EXPORT_SRC_C |
+		      PS_PRIO(0) |
+		      VS_PRIO(1) |
+		      GS_PRIO(2) |
+		      ES_PRIO(3));
+
+	if (rdev->family == CHIP_CEDAR)
+		/* no vertex cache */
+		sq_config &= ~VC_ENABLE;
+
+	sq_lds_resource_mgmt = RREG32(SQ_LDS_RESOURCE_MGMT);
+
+	sq_gpr_resource_mgmt_1 = NUM_PS_GPRS((rdev->config.evergreen.max_gprs - (4 * 2))* 12 / 32);
+	sq_gpr_resource_mgmt_1 |= NUM_VS_GPRS((rdev->config.evergreen.max_gprs - (4 * 2)) * 6 / 32);
+	sq_gpr_resource_mgmt_1 |= NUM_CLAUSE_TEMP_GPRS(4);
+	sq_gpr_resource_mgmt_2 = NUM_GS_GPRS((rdev->config.evergreen.max_gprs - (4 * 2)) * 4 / 32);
+	sq_gpr_resource_mgmt_2 |= NUM_ES_GPRS((rdev->config.evergreen.max_gprs - (4 * 2)) * 4 / 32);
+	sq_gpr_resource_mgmt_3 = NUM_HS_GPRS((rdev->config.evergreen.max_gprs - (4 * 2)) * 3 / 32);
+	sq_gpr_resource_mgmt_3 |= NUM_LS_GPRS((rdev->config.evergreen.max_gprs - (4 * 2)) * 3 / 32);
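+	/* Editor's worked example: with max_gprs = 256, the shared pool is
+	 * 256 - (4 * 2) = 248 GPRs, split as PS 248*12/32 = 93,
+	 * VS 248*6/32 = 46, GS/ES 248*4/32 = 31, HS/LS 248*3/32 = 23,
+	 * plus the 4 clause-temporary GPRs reserved above.
+	 */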
+
+	if (rdev->family == CHIP_CEDAR)
+		ps_thread_count = 96;
+	else
+		ps_thread_count = 128;
+
+	sq_thread_resource_mgmt = NUM_PS_THREADS(ps_thread_count);
+	sq_thread_resource_mgmt |= NUM_VS_THREADS((((rdev->config.evergreen.max_threads - ps_thread_count) / 6) / 8) * 8);
+	sq_thread_resource_mgmt |= NUM_GS_THREADS((((rdev->config.evergreen.max_threads - ps_thread_count) / 6) / 8) * 8);
+	sq_thread_resource_mgmt |= NUM_ES_THREADS((((rdev->config.evergreen.max_threads - ps_thread_count) / 6) / 8) * 8);
+	sq_thread_resource_mgmt_2 = NUM_HS_THREADS((((rdev->config.evergreen.max_threads - ps_thread_count) / 6) / 8) * 8);
+	sq_thread_resource_mgmt_2 |= NUM_LS_THREADS((((rdev->config.evergreen.max_threads - ps_thread_count) / 6) / 8) * 8);
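+	/* Editor's worked example: with max_threads = 248 and
+	 * ps_thread_count = 128, each remaining stage gets
+	 * ((248 - 128) / 6 / 8) * 8 = 16 threads, i.e. the per-stage share
+	 * is rounded down to a multiple of 8.
+	 */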
+
+	sq_stack_resource_mgmt_1 = NUM_PS_STACK_ENTRIES((rdev->config.evergreen.max_stack_entries * 1) / 6);
+	sq_stack_resource_mgmt_1 |= NUM_VS_STACK_ENTRIES((rdev->config.evergreen.max_stack_entries * 1) / 6);
+	sq_stack_resource_mgmt_2 = NUM_GS_STACK_ENTRIES((rdev->config.evergreen.max_stack_entries * 1) / 6);
+	sq_stack_resource_mgmt_2 |= NUM_ES_STACK_ENTRIES((rdev->config.evergreen.max_stack_entries * 1) / 6);
+	sq_stack_resource_mgmt_3 = NUM_HS_STACK_ENTRIES((rdev->config.evergreen.max_stack_entries * 1) / 6);
+	sq_stack_resource_mgmt_3 |= NUM_LS_STACK_ENTRIES((rdev->config.evergreen.max_stack_entries * 1) / 6);
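+	/* Editor's note: the stack pool is split evenly across the six
+	 * stages: 512 / 6 = 85 entries per stage on Cypress,
+	 * 256 / 6 = 42 on Redwood and Cedar.
+	 */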
+
+	WREG32(SQ_CONFIG, sq_config);
+	WREG32(SQ_GPR_RESOURCE_MGMT_1, sq_gpr_resource_mgmt_1);
+	WREG32(SQ_GPR_RESOURCE_MGMT_2, sq_gpr_resource_mgmt_2);
+	WREG32(SQ_GPR_RESOURCE_MGMT_3, sq_gpr_resource_mgmt_3);
+	WREG32(SQ_THREAD_RESOURCE_MGMT, sq_thread_resource_mgmt);
+	WREG32(SQ_THREAD_RESOURCE_MGMT_2, sq_thread_resource_mgmt_2);
+	WREG32(SQ_STACK_RESOURCE_MGMT_1, sq_stack_resource_mgmt_1);
+	WREG32(SQ_STACK_RESOURCE_MGMT_2, sq_stack_resource_mgmt_2);
+	WREG32(SQ_STACK_RESOURCE_MGMT_3, sq_stack_resource_mgmt_3);
+	WREG32(SQ_DYN_GPR_CNTL_PS_FLUSH_REQ, 0);
+	WREG32(SQ_LDS_RESOURCE_MGMT, sq_lds_resource_mgmt);
+
+	WREG32(PA_SC_FORCE_EOV_MAX_CNTS, (FORCE_EOV_MAX_CLK_CNT(4095) |
+					  FORCE_EOV_MAX_REZ_CNT(255)));
+
+	if (rdev->family == CHIP_CEDAR)
+		vgt_cache_invalidation = CACHE_INVALIDATION(TC_ONLY);
+	else
+		vgt_cache_invalidation = CACHE_INVALIDATION(VC_AND_TC);
+	vgt_cache_invalidation |= AUTO_INVLD_EN(ES_AND_GS_AUTO);
+	WREG32(VGT_CACHE_INVALIDATION, vgt_cache_invalidation);
+
+	WREG32(VGT_GS_VERTEX_REUSE, 16);
+	WREG32(PA_SC_LINE_STIPPLE_STATE, 0);
+
+	WREG32(CB_PERF_CTR0_SEL_0, 0);
+	WREG32(CB_PERF_CTR0_SEL_1, 0);
+	WREG32(CB_PERF_CTR1_SEL_0, 0);
+	WREG32(CB_PERF_CTR1_SEL_1, 0);
+	WREG32(CB_PERF_CTR2_SEL_0, 0);
+	WREG32(CB_PERF_CTR2_SEL_1, 0);
+	WREG32(CB_PERF_CTR3_SEL_0, 0);
+	WREG32(CB_PERF_CTR3_SEL_1, 0);
+
+	hdp_host_path_cntl = RREG32(HDP_HOST_PATH_CNTL);
+	WREG32(HDP_HOST_PATH_CNTL, hdp_host_path_cntl);
+
+	WREG32(PA_CL_ENHANCE, CLIP_VTX_REORDER_ENA | NUM_CLIP_SEQ(3));
+
+	udelay(50);
+
 }
 
 int evergreen_mc_init(struct radeon_device *rdev)
@@ -476,26 +1318,627 @@ int evergreen_mc_init(struct radeon_device *rdev)
 	rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
 	rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
 	rdev->mc.visible_vram_size = rdev->mc.aper_size;
-	/* FIXME remove this once we support unmappable VRAM */
-	if (rdev->mc.mc_vram_size > rdev->mc.aper_size) {
-		rdev->mc.mc_vram_size = rdev->mc.aper_size;
-		rdev->mc.real_vram_size = rdev->mc.aper_size;
-	}
 	r600_vram_gtt_location(rdev, &rdev->mc);
 	radeon_update_bandwidth_info(rdev);
 
 	return 0;
 }
 
-int evergreen_gpu_reset(struct radeon_device *rdev)
+bool evergreen_gpu_is_lockup(struct radeon_device *rdev)
 {
 	/* FIXME: implement for evergreen */
+	return false;
+}
+
+static int evergreen_gpu_soft_reset(struct radeon_device *rdev)
+{
+	struct evergreen_mc_save save;
+	u32 srbm_reset = 0;
+	u32 grbm_reset = 0;
+
+	dev_info(rdev->dev, "GPU softreset\n");
+	dev_info(rdev->dev, "  GRBM_STATUS=0x%08X\n",
+		RREG32(GRBM_STATUS));
+	dev_info(rdev->dev, "  GRBM_STATUS_SE0=0x%08X\n",
+		RREG32(GRBM_STATUS_SE0));
+	dev_info(rdev->dev, "  GRBM_STATUS_SE1=0x%08X\n",
+		RREG32(GRBM_STATUS_SE1));
+	dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
+		RREG32(SRBM_STATUS));
+	evergreen_mc_stop(rdev, &save);
+	if (evergreen_mc_wait_for_idle(rdev)) {
+		dev_warn(rdev->dev, "Wait for MC idle timed out!\n");
+	}
+	/* Disable CP parsing/prefetching */
+	WREG32(CP_ME_CNTL, CP_ME_HALT | CP_PFP_HALT);
+
+	/* reset all the gfx blocks */
+	grbm_reset = (SOFT_RESET_CP |
+		      SOFT_RESET_CB |
+		      SOFT_RESET_DB |
+		      SOFT_RESET_PA |
+		      SOFT_RESET_SC |
+		      SOFT_RESET_SPI |
+		      SOFT_RESET_SH |
+		      SOFT_RESET_SX |
+		      SOFT_RESET_TC |
+		      SOFT_RESET_TA |
+		      SOFT_RESET_VC |
+		      SOFT_RESET_VGT);
+
+	dev_info(rdev->dev, "  GRBM_SOFT_RESET=0x%08X\n", grbm_reset);
+	WREG32(GRBM_SOFT_RESET, grbm_reset);
+	(void)RREG32(GRBM_SOFT_RESET);
+	udelay(50);
+	WREG32(GRBM_SOFT_RESET, 0);
+	(void)RREG32(GRBM_SOFT_RESET);
+
+	/* reset all the system blocks */
+	srbm_reset = SRBM_SOFT_RESET_ALL_MASK;
+
+	dev_info(rdev->dev, "  SRBM_SOFT_RESET=0x%08X\n", srbm_reset);
+	WREG32(SRBM_SOFT_RESET, srbm_reset);
+	(void)RREG32(SRBM_SOFT_RESET);
+	udelay(50);
+	WREG32(SRBM_SOFT_RESET, 0);
+	(void)RREG32(SRBM_SOFT_RESET);
+	/* Wait a little for things to settle down */
+	udelay(50);
+	dev_info(rdev->dev, "  GRBM_STATUS=0x%08X\n",
+		RREG32(GRBM_STATUS));
+	dev_info(rdev->dev, "  GRBM_STATUS_SE0=0x%08X\n",
+		RREG32(GRBM_STATUS_SE0));
+	dev_info(rdev->dev, "  GRBM_STATUS_SE1=0x%08X\n",
+		RREG32(GRBM_STATUS_SE1));
+	dev_info(rdev->dev, "  SRBM_STATUS=0x%08X\n",
+		RREG32(SRBM_STATUS));
+	/* After reset we need to reinit the asic as the GPU often ends up in
+	 * an incoherent state.
+	 */
+	atom_asic_init(rdev->mode_info.atom_context);
+	evergreen_mc_resume(rdev, &save);
+	return 0;
+}
+
+int evergreen_asic_reset(struct radeon_device *rdev)
+{
+	return evergreen_gpu_soft_reset(rdev);
+}
+
+/* Interrupts */
+
+u32 evergreen_get_vblank_counter(struct radeon_device *rdev, int crtc)
+{
+	switch (crtc) {
+	case 0:
+		return RREG32(CRTC_STATUS_FRAME_COUNT + EVERGREEN_CRTC0_REGISTER_OFFSET);
+	case 1:
+		return RREG32(CRTC_STATUS_FRAME_COUNT + EVERGREEN_CRTC1_REGISTER_OFFSET);
+	case 2:
+		return RREG32(CRTC_STATUS_FRAME_COUNT + EVERGREEN_CRTC2_REGISTER_OFFSET);
+	case 3:
+		return RREG32(CRTC_STATUS_FRAME_COUNT + EVERGREEN_CRTC3_REGISTER_OFFSET);
+	case 4:
+		return RREG32(CRTC_STATUS_FRAME_COUNT + EVERGREEN_CRTC4_REGISTER_OFFSET);
+	case 5:
+		return RREG32(CRTC_STATUS_FRAME_COUNT + EVERGREEN_CRTC5_REGISTER_OFFSET);
+	default:
+		return 0;
+	}
+}
+
+void evergreen_disable_interrupt_state(struct radeon_device *rdev)
+{
+	u32 tmp;
+
+	WREG32(CP_INT_CNTL, 0);
+	WREG32(GRBM_INT_CNTL, 0);
+	WREG32(INT_MASK + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
+	WREG32(INT_MASK + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
+	WREG32(INT_MASK + EVERGREEN_CRTC2_REGISTER_OFFSET, 0);
+	WREG32(INT_MASK + EVERGREEN_CRTC3_REGISTER_OFFSET, 0);
+	WREG32(INT_MASK + EVERGREEN_CRTC4_REGISTER_OFFSET, 0);
+	WREG32(INT_MASK + EVERGREEN_CRTC5_REGISTER_OFFSET, 0);
+
+	WREG32(GRPH_INT_CONTROL + EVERGREEN_CRTC0_REGISTER_OFFSET, 0);
+	WREG32(GRPH_INT_CONTROL + EVERGREEN_CRTC1_REGISTER_OFFSET, 0);
+	WREG32(GRPH_INT_CONTROL + EVERGREEN_CRTC2_REGISTER_OFFSET, 0);
+	WREG32(GRPH_INT_CONTROL + EVERGREEN_CRTC3_REGISTER_OFFSET, 0);
+	WREG32(GRPH_INT_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET, 0);
+	WREG32(GRPH_INT_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET, 0);
+
+	WREG32(DACA_AUTODETECT_INT_CONTROL, 0);
+	WREG32(DACB_AUTODETECT_INT_CONTROL, 0);
+
+	tmp = RREG32(DC_HPD1_INT_CONTROL) & DC_HPDx_INT_POLARITY;
+	WREG32(DC_HPD1_INT_CONTROL, tmp);
+	tmp = RREG32(DC_HPD2_INT_CONTROL) & DC_HPDx_INT_POLARITY;
+	WREG32(DC_HPD2_INT_CONTROL, tmp);
+	tmp = RREG32(DC_HPD3_INT_CONTROL) & DC_HPDx_INT_POLARITY;
+	WREG32(DC_HPD3_INT_CONTROL, tmp);
+	tmp = RREG32(DC_HPD4_INT_CONTROL) & DC_HPDx_INT_POLARITY;
+	WREG32(DC_HPD4_INT_CONTROL, tmp);
+	tmp = RREG32(DC_HPD5_INT_CONTROL) & DC_HPDx_INT_POLARITY;
+	WREG32(DC_HPD5_INT_CONTROL, tmp);
+	tmp = RREG32(DC_HPD6_INT_CONTROL) & DC_HPDx_INT_POLARITY;
+	WREG32(DC_HPD6_INT_CONTROL, tmp);
+
+}
+
+int evergreen_irq_set(struct radeon_device *rdev)
+{
+	u32 cp_int_cntl = CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE;
+	u32 crtc1 = 0, crtc2 = 0, crtc3 = 0, crtc4 = 0, crtc5 = 0, crtc6 = 0;
+	u32 hpd1, hpd2, hpd3, hpd4, hpd5, hpd6;
+	u32 grbm_int_cntl = 0;
+
+	if (!rdev->irq.installed) {
+		WARN(1, "Can't enable IRQ/MSI because no handler is installed.\n");
+		return -EINVAL;
+	}
+	/* don't enable anything if the ih is disabled */
+	if (!rdev->ih.enabled) {
+		r600_disable_interrupts(rdev);
+		/* force the active interrupt state to all disabled */
+		evergreen_disable_interrupt_state(rdev);
+		return 0;
+	}
+
+	hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
+	hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
+	hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
+	hpd4 = RREG32(DC_HPD4_INT_CONTROL) & ~DC_HPDx_INT_EN;
+	hpd5 = RREG32(DC_HPD5_INT_CONTROL) & ~DC_HPDx_INT_EN;
+	hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
+
+	if (rdev->irq.sw_int) {
+		DRM_DEBUG("evergreen_irq_set: sw int\n");
+		cp_int_cntl |= RB_INT_ENABLE;
+	}
+	if (rdev->irq.crtc_vblank_int[0]) {
+		DRM_DEBUG("evergreen_irq_set: vblank 0\n");
+		crtc1 |= VBLANK_INT_MASK;
+	}
+	if (rdev->irq.crtc_vblank_int[1]) {
+		DRM_DEBUG("evergreen_irq_set: vblank 1\n");
+		crtc2 |= VBLANK_INT_MASK;
+	}
+	if (rdev->irq.crtc_vblank_int[2]) {
+		DRM_DEBUG("evergreen_irq_set: vblank 2\n");
+		crtc3 |= VBLANK_INT_MASK;
+	}
+	if (rdev->irq.crtc_vblank_int[3]) {
+		DRM_DEBUG("evergreen_irq_set: vblank 3\n");
+		crtc4 |= VBLANK_INT_MASK;
+	}
+	if (rdev->irq.crtc_vblank_int[4]) {
+		DRM_DEBUG("evergreen_irq_set: vblank 4\n");
+		crtc5 |= VBLANK_INT_MASK;
+	}
+	if (rdev->irq.crtc_vblank_int[5]) {
+		DRM_DEBUG("evergreen_irq_set: vblank 5\n");
+		crtc6 |= VBLANK_INT_MASK;
+	}
+	if (rdev->irq.hpd[0]) {
+		DRM_DEBUG("evergreen_irq_set: hpd 1\n");
+		hpd1 |= DC_HPDx_INT_EN;
+	}
+	if (rdev->irq.hpd[1]) {
+		DRM_DEBUG("evergreen_irq_set: hpd 2\n");
+		hpd2 |= DC_HPDx_INT_EN;
+	}
+	if (rdev->irq.hpd[2]) {
+		DRM_DEBUG("evergreen_irq_set: hpd 3\n");
+		hpd3 |= DC_HPDx_INT_EN;
+	}
+	if (rdev->irq.hpd[3]) {
+		DRM_DEBUG("evergreen_irq_set: hpd 4\n");
+		hpd4 |= DC_HPDx_INT_EN;
+	}
+	if (rdev->irq.hpd[4]) {
+		DRM_DEBUG("evergreen_irq_set: hpd 5\n");
+		hpd5 |= DC_HPDx_INT_EN;
+	}
+	if (rdev->irq.hpd[5]) {
+		DRM_DEBUG("evergreen_irq_set: hpd 6\n");
+		hpd6 |= DC_HPDx_INT_EN;
+	}
+	if (rdev->irq.gui_idle) {
+		DRM_DEBUG("gui idle\n");
+		grbm_int_cntl |= GUI_IDLE_INT_ENABLE;
+	}
+
+	WREG32(CP_INT_CNTL, cp_int_cntl);
+	WREG32(GRBM_INT_CNTL, grbm_int_cntl);
+
+	WREG32(INT_MASK + EVERGREEN_CRTC0_REGISTER_OFFSET, crtc1);
+	WREG32(INT_MASK + EVERGREEN_CRTC1_REGISTER_OFFSET, crtc2);
+	WREG32(INT_MASK + EVERGREEN_CRTC2_REGISTER_OFFSET, crtc3);
+	WREG32(INT_MASK + EVERGREEN_CRTC3_REGISTER_OFFSET, crtc4);
+	WREG32(INT_MASK + EVERGREEN_CRTC4_REGISTER_OFFSET, crtc5);
+	WREG32(INT_MASK + EVERGREEN_CRTC5_REGISTER_OFFSET, crtc6);
+
+	WREG32(DC_HPD1_INT_CONTROL, hpd1);
+	WREG32(DC_HPD2_INT_CONTROL, hpd2);
+	WREG32(DC_HPD3_INT_CONTROL, hpd3);
+	WREG32(DC_HPD4_INT_CONTROL, hpd4);
+	WREG32(DC_HPD5_INT_CONTROL, hpd5);
+	WREG32(DC_HPD6_INT_CONTROL, hpd6);
+
 	return 0;
 }
 
+static inline void evergreen_irq_ack(struct radeon_device *rdev,
+				     u32 *disp_int,
+				     u32 *disp_int_cont,
+				     u32 *disp_int_cont2,
+				     u32 *disp_int_cont3,
+				     u32 *disp_int_cont4,
+				     u32 *disp_int_cont5)
+{
+	u32 tmp;
+
+	*disp_int = RREG32(DISP_INTERRUPT_STATUS);
+	*disp_int_cont = RREG32(DISP_INTERRUPT_STATUS_CONTINUE);
+	*disp_int_cont2 = RREG32(DISP_INTERRUPT_STATUS_CONTINUE2);
+	*disp_int_cont3 = RREG32(DISP_INTERRUPT_STATUS_CONTINUE3);
+	*disp_int_cont4 = RREG32(DISP_INTERRUPT_STATUS_CONTINUE4);
+	*disp_int_cont5 = RREG32(DISP_INTERRUPT_STATUS_CONTINUE5);
+
+	if (*disp_int & LB_D1_VBLANK_INTERRUPT)
+		WREG32(VBLANK_STATUS + EVERGREEN_CRTC0_REGISTER_OFFSET, VBLANK_ACK);
+	if (*disp_int & LB_D1_VLINE_INTERRUPT)
+		WREG32(VLINE_STATUS + EVERGREEN_CRTC0_REGISTER_OFFSET, VLINE_ACK);
+
+	if (*disp_int_cont & LB_D2_VBLANK_INTERRUPT)
+		WREG32(VBLANK_STATUS + EVERGREEN_CRTC1_REGISTER_OFFSET, VBLANK_ACK);
+	if (*disp_int_cont & LB_D2_VLINE_INTERRUPT)
+		WREG32(VLINE_STATUS + EVERGREEN_CRTC1_REGISTER_OFFSET, VLINE_ACK);
+
+	if (*disp_int_cont2 & LB_D3_VBLANK_INTERRUPT)
+		WREG32(VBLANK_STATUS + EVERGREEN_CRTC2_REGISTER_OFFSET, VBLANK_ACK);
+	if (*disp_int_cont2 & LB_D3_VLINE_INTERRUPT)
+		WREG32(VLINE_STATUS + EVERGREEN_CRTC2_REGISTER_OFFSET, VLINE_ACK);
+
+	if (*disp_int_cont3 & LB_D4_VBLANK_INTERRUPT)
+		WREG32(VBLANK_STATUS + EVERGREEN_CRTC3_REGISTER_OFFSET, VBLANK_ACK);
+	if (*disp_int_cont3 & LB_D4_VLINE_INTERRUPT)
+		WREG32(VLINE_STATUS + EVERGREEN_CRTC3_REGISTER_OFFSET, VLINE_ACK);
+
+	if (*disp_int_cont4 & LB_D5_VBLANK_INTERRUPT)
+		WREG32(VBLANK_STATUS + EVERGREEN_CRTC4_REGISTER_OFFSET, VBLANK_ACK);
+	if (*disp_int_cont4 & LB_D5_VLINE_INTERRUPT)
+		WREG32(VLINE_STATUS + EVERGREEN_CRTC4_REGISTER_OFFSET, VLINE_ACK);
+
+	if (*disp_int_cont5 & LB_D6_VBLANK_INTERRUPT)
+		WREG32(VBLANK_STATUS + EVERGREEN_CRTC5_REGISTER_OFFSET, VBLANK_ACK);
+	if (*disp_int_cont5 & LB_D6_VLINE_INTERRUPT)
+		WREG32(VLINE_STATUS + EVERGREEN_CRTC5_REGISTER_OFFSET, VLINE_ACK);
+
+	if (*disp_int & DC_HPD1_INTERRUPT) {
+		tmp = RREG32(DC_HPD1_INT_CONTROL);
+		tmp |= DC_HPDx_INT_ACK;
+		WREG32(DC_HPD1_INT_CONTROL, tmp);
+	}
+	if (*disp_int_cont & DC_HPD2_INTERRUPT) {
+		tmp = RREG32(DC_HPD2_INT_CONTROL);
+		tmp |= DC_HPDx_INT_ACK;
+		WREG32(DC_HPD2_INT_CONTROL, tmp);
+	}
+	if (*disp_int_cont2 & DC_HPD3_INTERRUPT) {
+		tmp = RREG32(DC_HPD3_INT_CONTROL);
+		tmp |= DC_HPDx_INT_ACK;
+		WREG32(DC_HPD3_INT_CONTROL, tmp);
+	}
+	if (*disp_int_cont3 & DC_HPD4_INTERRUPT) {
+		tmp = RREG32(DC_HPD4_INT_CONTROL);
+		tmp |= DC_HPDx_INT_ACK;
+		WREG32(DC_HPD4_INT_CONTROL, tmp);
+	}
+	if (*disp_int_cont4 & DC_HPD5_INTERRUPT) {
+		tmp = RREG32(DC_HPD5_INT_CONTROL);
+		tmp |= DC_HPDx_INT_ACK;
+		WREG32(DC_HPD5_INT_CONTROL, tmp);
+	}
+	if (*disp_int_cont5 & DC_HPD6_INTERRUPT) {
+		tmp = RREG32(DC_HPD6_INT_CONTROL);
+		tmp |= DC_HPDx_INT_ACK;
+		WREG32(DC_HPD6_INT_CONTROL, tmp);
+	}
+}
+
+void evergreen_irq_disable(struct radeon_device *rdev)
+{
+	u32 disp_int, disp_int_cont, disp_int_cont2;
+	u32 disp_int_cont3, disp_int_cont4, disp_int_cont5;
+
+	r600_disable_interrupts(rdev);
+	/* Wait and acknowledge irq */
+	mdelay(1);
+	evergreen_irq_ack(rdev, &disp_int, &disp_int_cont, &disp_int_cont2,
+			  &disp_int_cont3, &disp_int_cont4, &disp_int_cont5);
+	evergreen_disable_interrupt_state(rdev);
+}
+
+static void evergreen_irq_suspend(struct radeon_device *rdev)
+{
+	evergreen_irq_disable(rdev);
+	r600_rlc_stop(rdev);
+}
+
+static inline u32 evergreen_get_ih_wptr(struct radeon_device *rdev)
+{
+	u32 wptr, tmp;
+
+	/* XXX use writeback */
+	wptr = RREG32(IH_RB_WPTR);
+
+	if (wptr & RB_OVERFLOW) {
+		/* When a ring buffer overflow happens, start parsing interrupts
+		 * from the last not-overwritten vector (wptr + 16). Hopefully
+		 * this should allow us to catch up.
+		 */
+		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n",
+			wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask);
+		rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask;
+		tmp = RREG32(IH_RB_CNTL);
+		tmp |= IH_WPTR_OVERFLOW_CLEAR;
+		WREG32(IH_RB_CNTL, tmp);
+	}
+	return (wptr & rdev->ih.ptr_mask);
+}
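+/*
+ * Editor's worked example for the overflow path above: assuming, for
+ * illustration, a 64 KiB IH ring (ptr_mask = 0xffff), an overflow with a
+ * raw write offset of 0x1230 restarts parsing at
+ * rptr = (0x1230 + 16) & 0xffff = 0x1240, skipping the oldest vector,
+ * which the hardware may already have overwritten.
+ */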
+
+int evergreen_irq_process(struct radeon_device *rdev)
+{
+	u32 wptr = evergreen_get_ih_wptr(rdev);
+	u32 rptr = rdev->ih.rptr;
+	u32 src_id, src_data;
+	u32 ring_index;
+	u32 disp_int, disp_int_cont, disp_int_cont2;
+	u32 disp_int_cont3, disp_int_cont4, disp_int_cont5;
+	unsigned long flags;
+	bool queue_hotplug = false;
+
+	DRM_DEBUG("evergreen_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
+	if (!rdev->ih.enabled)
+		return IRQ_NONE;
+
+	spin_lock_irqsave(&rdev->ih.lock, flags);
+
+	if (rptr == wptr) {
+		spin_unlock_irqrestore(&rdev->ih.lock, flags);
+		return IRQ_NONE;
+	}
+	if (rdev->shutdown) {
+		spin_unlock_irqrestore(&rdev->ih.lock, flags);
+		return IRQ_NONE;
+	}
+
+restart_ih:
+	/* display interrupts */
+	evergreen_irq_ack(rdev, &disp_int, &disp_int_cont, &disp_int_cont2,
+			  &disp_int_cont3, &disp_int_cont4, &disp_int_cont5);
+
+	rdev->ih.wptr = wptr;
+	while (rptr != wptr) {
+		/* wptr/rptr are in bytes! */
+		ring_index = rptr / 4;
+		src_id = rdev->ih.ring[ring_index] & 0xff;
+		src_data = rdev->ih.ring[ring_index + 1] & 0xfffffff;
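+		/* Editor's note: each IH vector is 16 bytes (4 dwords); dword 0
+		 * carries the source id in bits 7:0 and dword 1 the source data
+		 * in bits 27:0, which is why rptr advances by 16 at the bottom
+		 * of this loop.
+		 */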
+
+		switch (src_id) {
+		case 1: /* D1 vblank/vline */
+			switch (src_data) {
+			case 0: /* D1 vblank */
+				if (disp_int & LB_D1_VBLANK_INTERRUPT) {
+					drm_handle_vblank(rdev->ddev, 0);
+					wake_up(&rdev->irq.vblank_queue);
+					disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+					DRM_DEBUG("IH: D1 vblank\n");
+				}
+				break;
+			case 1: /* D1 vline */
+				if (disp_int & LB_D1_VLINE_INTERRUPT) {
+					disp_int &= ~LB_D1_VLINE_INTERRUPT;
+					DRM_DEBUG("IH: D1 vline\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 2: /* D2 vblank/vline */
+			switch (src_data) {
+			case 0: /* D2 vblank */
+				if (disp_int_cont & LB_D2_VBLANK_INTERRUPT) {
+					drm_handle_vblank(rdev->ddev, 1);
+					wake_up(&rdev->irq.vblank_queue);
+					disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
+					DRM_DEBUG("IH: D2 vblank\n");
+				}
+				break;
+			case 1: /* D2 vline */
+				if (disp_int_cont & LB_D2_VLINE_INTERRUPT) {
+					disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
+					DRM_DEBUG("IH: D2 vline\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 3: /* D3 vblank/vline */
+			switch (src_data) {
+			case 0: /* D3 vblank */
+				if (disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) {
+					drm_handle_vblank(rdev->ddev, 2);
+					wake_up(&rdev->irq.vblank_queue);
+					disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
+					DRM_DEBUG("IH: D3 vblank\n");
+				}
+				break;
+			case 1: /* D3 vline */
+				if (disp_int_cont2 & LB_D3_VLINE_INTERRUPT) {
+					disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
+					DRM_DEBUG("IH: D3 vline\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 4: /* D4 vblank/vline */
+			switch (src_data) {
+			case 0: /* D4 vblank */
+				if (disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) {
+					drm_handle_vblank(rdev->ddev, 3);
+					wake_up(&rdev->irq.vblank_queue);
+					disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
+					DRM_DEBUG("IH: D4 vblank\n");
+				}
+				break;
+			case 1: /* D4 vline */
+				if (disp_int_cont3 & LB_D4_VLINE_INTERRUPT) {
+					disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
+					DRM_DEBUG("IH: D4 vline\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 5: /* D5 vblank/vline */
+			switch (src_data) {
+			case 0: /* D5 vblank */
+				if (disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) {
+					drm_handle_vblank(rdev->ddev, 4);
+					wake_up(&rdev->irq.vblank_queue);
+					disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
+					DRM_DEBUG("IH: D5 vblank\n");
+				}
+				break;
+			case 1: /* D5 vline */
+				if (disp_int_cont4 & LB_D5_VLINE_INTERRUPT) {
+					disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
+					DRM_DEBUG("IH: D5 vline\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 6: /* D6 vblank/vline */
+			switch (src_data) {
+			case 0: /* D6 vblank */
+				if (disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) {
+					drm_handle_vblank(rdev->ddev, 5);
+					wake_up(&rdev->irq.vblank_queue);
+					disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
+					DRM_DEBUG("IH: D6 vblank\n");
+				}
+				break;
+			case 1: /* D6 vline */
+				if (disp_int_cont5 & LB_D6_VLINE_INTERRUPT) {
+					disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
+					DRM_DEBUG("IH: D6 vline\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 42: /* HPD hotplug */
+			switch (src_data) {
+			case 0:
+				if (disp_int & DC_HPD1_INTERRUPT) {
+					disp_int &= ~DC_HPD1_INTERRUPT;
+					queue_hotplug = true;
+					DRM_DEBUG("IH: HPD1\n");
+				}
+				break;
+			case 1:
+				if (disp_int_cont & DC_HPD2_INTERRUPT) {
+					disp_int_cont &= ~DC_HPD2_INTERRUPT;
+					queue_hotplug = true;
+					DRM_DEBUG("IH: HPD2\n");
+				}
+				break;
+			case 2:
+				if (disp_int_cont2 & DC_HPD3_INTERRUPT) {
+					disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
+					queue_hotplug = true;
+					DRM_DEBUG("IH: HPD3\n");
+				}
+				break;
+			case 3:
+				if (disp_int_cont3 & DC_HPD4_INTERRUPT) {
+					disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
+					queue_hotplug = true;
+					DRM_DEBUG("IH: HPD4\n");
+				}
+				break;
+			case 4:
+				if (disp_int_cont4 & DC_HPD5_INTERRUPT) {
+					disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
+					queue_hotplug = true;
+					DRM_DEBUG("IH: HPD5\n");
+				}
+				break;
+			case 5:
+				if (disp_int_cont5 & DC_HPD6_INTERRUPT) {
+					disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
+					queue_hotplug = true;
+					DRM_DEBUG("IH: HPD6\n");
+				}
+				break;
+			default:
+				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+				break;
+			}
+			break;
+		case 176: /* CP_INT in ring buffer */
+		case 177: /* CP_INT in IB1 */
+		case 178: /* CP_INT in IB2 */
+			DRM_DEBUG("IH: CP int: 0x%08x\n", src_data);
+			radeon_fence_process(rdev);
+			break;
+		case 181: /* CP EOP event */
+			DRM_DEBUG("IH: CP EOP\n");
+			break;
+		case 233: /* GUI IDLE */
+			DRM_DEBUG("IH: GUI idle\n");
+			rdev->pm.gui_idle = true;
+			wake_up(&rdev->irq.idle_queue);
+			break;
+		default:
+			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+			break;
+		}
+
+		/* wptr/rptr are in bytes! */
+		rptr += 16;
+		rptr &= rdev->ih.ptr_mask;
+	}
+	/* make sure wptr hasn't changed while processing */
+	wptr = evergreen_get_ih_wptr(rdev);
+	if (wptr != rdev->ih.wptr)
+		goto restart_ih;
+	if (queue_hotplug)
+		queue_work(rdev->wq, &rdev->hotplug_work);
+	rdev->ih.rptr = rptr;
+	WREG32(IH_RB_RPTR, rdev->ih.rptr);
+	spin_unlock_irqrestore(&rdev->ih.lock, flags);
+	return IRQ_HANDLED;
+}
+
 static int evergreen_startup(struct radeon_device *rdev)
 {
-#if 0
 	int r;
 
 	if (!rdev->me_fw || !rdev->pfp_fw || !rdev->rlc_fw) {
@@ -505,17 +1948,15 @@ static int evergreen_startup(struct radeon_device *rdev)
 			return r;
 		}
 	}
-#endif
+
 	evergreen_mc_program(rdev);
-#if 0
 	if (rdev->flags & RADEON_IS_AGP) {
-		evergreem_agp_enable(rdev);
+		evergreen_agp_enable(rdev);
 	} else {
 		r = evergreen_pcie_gart_enable(rdev);
 		if (r)
 			return r;
 	}
-#endif
 	evergreen_gpu_init(rdev);
 #if 0
 	if (!rdev->r600_blit.shader_obj) {
@@ -536,6 +1977,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 		DRM_ERROR("failed to pin blit object %d\n", r);
 		return r;
 	}
+#endif
 
 	/* Enable IRQ */
 	r = r600_irq_init(rdev);
@@ -544,7 +1986,7 @@ static int evergreen_startup(struct radeon_device *rdev)
 		radeon_irq_kms_fini(rdev);
 		return r;
 	}
-	r600_irq_set(rdev);
+	evergreen_irq_set(rdev);
 
 	r = radeon_ring_init(rdev, rdev->cp.ring_size);
 	if (r)
@@ -552,12 +1994,12 @@ static int evergreen_startup(struct radeon_device *rdev)
 	r = evergreen_cp_load_microcode(rdev);
 	if (r)
 		return r;
-	r = r600_cp_resume(rdev);
+	r = evergreen_cp_resume(rdev);
 	if (r)
 		return r;
 	/* write back buffer are not vital so don't worry about failure */
 	r600_wb_enable(rdev);
-#endif
+
 	return 0;
 }
 
@@ -582,13 +2024,13 @@ int evergreen_resume(struct radeon_device *rdev)
 		DRM_ERROR("r600 startup failed on resume\n");
 		return r;
 	}
-#if 0
+
 	r = r600_ib_test(rdev);
 	if (r) {
 		DRM_ERROR("radeon: failed testing IB (%d).\n", r);
 		return r;
 	}
-#endif
+
 	return r;
 
 }
@@ -597,12 +2039,14 @@ int evergreen_suspend(struct radeon_device *rdev)
 {
 #if 0
 	int r;
-
+#endif
 	/* FIXME: we should wait for ring to be empty */
 	r700_cp_stop(rdev);
 	rdev->cp.ready = false;
+	evergreen_irq_suspend(rdev);
 	r600_wb_disable(rdev);
 	evergreen_pcie_gart_disable(rdev);
+#if 0
 	/* unpin shaders bo */
 	r = radeon_bo_reserve(rdev->r600_blit.shader_obj, false);
 	if (likely(r == 0)) {
@@ -682,8 +2126,6 @@ int evergreen_init(struct radeon_device *rdev)
 	r = radeon_clocks_init(rdev);
 	if (r)
 		return r;
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* Fence driver */
 	r = radeon_fence_driver_init(rdev);
 	if (r)
@@ -702,7 +2144,7 @@ int evergreen_init(struct radeon_device *rdev)
 	r = radeon_bo_init(rdev);
 	if (r)
 		return r;
-#if 0
+
 	r = radeon_irq_kms_init(rdev);
 	if (r)
 		return r;
@@ -716,14 +2158,16 @@ int evergreen_init(struct radeon_device *rdev)
 	r = r600_pcie_gart_init(rdev);
 	if (r)
 		return r;
-#endif
-	rdev->accel_working = false;
+
+	rdev->accel_working = true;
 	r = evergreen_startup(rdev);
 	if (r) {
-		evergreen_suspend(rdev);
-		/*r600_wb_fini(rdev);*/
-		/*radeon_ring_fini(rdev);*/
-		/*evergreen_pcie_gart_fini(rdev);*/
+		dev_err(rdev->dev, "disabling GPU acceleration\n");
+		r700_cp_fini(rdev);
+		r600_wb_fini(rdev);
+		r600_irq_fini(rdev);
+		radeon_irq_kms_fini(rdev);
+		evergreen_pcie_gart_fini(rdev);
 		rdev->accel_working = false;
 	}
 	if (rdev->accel_working) {
@@ -743,16 +2187,12 @@ int evergreen_init(struct radeon_device *rdev)
 
 void evergreen_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
-	evergreen_suspend(rdev);
-#if 0
-	r600_blit_fini(rdev);
+	/*r600_blit_fini(rdev);*/
+	r700_cp_fini(rdev);
+	r600_wb_fini(rdev);
 	r600_irq_fini(rdev);
 	radeon_irq_kms_fini(rdev);
-	radeon_ring_fini(rdev);
-	r600_wb_fini(rdev);
 	evergreen_pcie_gart_fini(rdev);
-#endif
 	radeon_gem_fini(rdev);
 	radeon_fence_driver_fini(rdev);
 	radeon_clocks_fini(rdev);
diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
new file mode 100644
index 0000000..64516b9
--- /dev/null
+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
@@ -0,0 +1,1356 @@
+/*
+ * Copyright 2010 Advanced Micro Devices, Inc.
+ * Copyright 2008 Red Hat Inc.
+ * Copyright 2009 Jerome Glisse.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Dave Airlie
+ *          Alex Deucher
+ *          Jerome Glisse
+ */
+#include "drmP.h"
+#include "radeon.h"
+#include "evergreend.h"
+#include "evergreen_reg_safe.h"
+
+static int evergreen_cs_packet_next_reloc(struct radeon_cs_parser *p,
+					  struct radeon_cs_reloc **cs_reloc);
+
+struct evergreen_cs_track {
+	u32			group_size;
+	u32			nbanks;
+	u32			npipes;
+	/* value we track */
+	u32			nsamples;
+	u32			cb_color_base_last[12];
+	struct radeon_bo	*cb_color_bo[12];
+	u32			cb_color_bo_offset[12];
+	struct radeon_bo	*cb_color_fmask_bo[8];
+	struct radeon_bo	*cb_color_cmask_bo[8];
+	u32			cb_color_info[12];
+	u32			cb_color_view[12];
+	u32			cb_color_pitch_idx[12];
+	u32			cb_color_slice_idx[12];
+	u32			cb_color_dim_idx[12];
+	u32			cb_color_dim[12];
+	u32			cb_color_pitch[12];
+	u32			cb_color_slice[12];
+	u32			cb_color_cmask_slice[8];
+	u32			cb_color_fmask_slice[8];
+	u32			cb_target_mask;
+	u32			cb_shader_mask;
+	u32			vgt_strmout_config;
+	u32			vgt_strmout_buffer_config;
+	u32			db_depth_control;
+	u32			db_depth_view;
+	u32			db_depth_size;
+	u32			db_depth_size_idx;
+	u32			db_z_info;
+	u32			db_z_idx;
+	u32			db_z_read_offset;
+	u32			db_z_write_offset;
+	struct radeon_bo	*db_z_read_bo;
+	struct radeon_bo	*db_z_write_bo;
+	u32			db_s_info;
+	u32			db_s_idx;
+	u32			db_s_read_offset;
+	u32			db_s_write_offset;
+	struct radeon_bo	*db_s_read_bo;
+	struct radeon_bo	*db_s_write_bo;
+};
+
+static void evergreen_cs_track_init(struct evergreen_cs_track *track)
+{
+	int i;
+
+	for (i = 0; i < 8; i++) {
+		track->cb_color_fmask_bo[i] = NULL;
+		track->cb_color_cmask_bo[i] = NULL;
+		track->cb_color_cmask_slice[i] = 0;
+		track->cb_color_fmask_slice[i] = 0;
+	}
+
+	for (i = 0; i < 12; i++) {
+		track->cb_color_base_last[i] = 0;
+		track->cb_color_bo[i] = NULL;
+		track->cb_color_bo_offset[i] = 0xFFFFFFFF;
+		track->cb_color_info[i] = 0;
+		track->cb_color_view[i] = 0;
+		track->cb_color_pitch_idx[i] = 0;
+		track->cb_color_slice_idx[i] = 0;
+		track->cb_color_dim[i] = 0;
+		track->cb_color_pitch[i] = 0;
+		track->cb_color_slice[i] = 0;
+	}
+	track->cb_target_mask = 0xFFFFFFFF;
+	track->cb_shader_mask = 0xFFFFFFFF;
+
+	track->db_depth_view = 0xFFFFC000;
+	track->db_depth_size = 0xFFFFFFFF;
+	track->db_depth_size_idx = 0;
+	track->db_depth_control = 0xFFFFFFFF;
+	track->db_z_info = 0xFFFFFFFF;
+	track->db_z_idx = 0xFFFFFFFF;
+	track->db_z_read_offset = 0xFFFFFFFF;
+	track->db_z_write_offset = 0xFFFFFFFF;
+	track->db_z_read_bo = NULL;
+	track->db_z_write_bo = NULL;
+	track->db_s_info = 0xFFFFFFFF;
+	track->db_s_idx = 0xFFFFFFFF;
+	track->db_s_read_offset = 0xFFFFFFFF;
+	track->db_s_write_offset = 0xFFFFFFFF;
+	track->db_s_read_bo = NULL;
+	track->db_s_write_bo = NULL;
+}
+
+static inline int evergreen_cs_track_validate_cb(struct radeon_cs_parser *p, int i)
+{
+	/* XXX fill in */
+	return 0;
+}
+
+static int evergreen_cs_track_check(struct radeon_cs_parser *p)
+{
+	struct evergreen_cs_track *track = p->track;
+
+	/* we don't support stream out buffer yet */
+	if (track->vgt_strmout_config || track->vgt_strmout_buffer_config) {
+		dev_warn(p->dev, "this kernel doesn't support SMX output buffer\n");
+		return -EINVAL;
+	}
+
+	/* XXX fill in */
+	return 0;
+}
+
+/**
+ * evergreen_cs_packet_parse() - parse cp packet and point ib index to next packet
+ * @parser:	parser structure holding parsing context.
+ * @pkt:	where to store packet information
+ *
+ * Assumes that chunk_ib_index is properly set. Returns -EINVAL if the
+ * packet is bigger than the remaining ib size, or if the packet type is
+ * unknown.
+ **/
+int evergreen_cs_packet_parse(struct radeon_cs_parser *p,
+			      struct radeon_cs_packet *pkt,
+			      unsigned idx)
+{
+	struct radeon_cs_chunk *ib_chunk = &p->chunks[p->chunk_ib_idx];
+	uint32_t header;
+
+	if (idx >= ib_chunk->length_dw) {
+		DRM_ERROR("Can not parse packet at %d after CS end %d !\n",
+			  idx, ib_chunk->length_dw);
+		return -EINVAL;
+	}
+	header = radeon_get_ib_value(p, idx);
+	pkt->idx = idx;
+	pkt->type = CP_PACKET_GET_TYPE(header);
+	pkt->count = CP_PACKET_GET_COUNT(header);
+	pkt->one_reg_wr = 0;
+	switch (pkt->type) {
+	case PACKET_TYPE0:
+		pkt->reg = CP_PACKET0_GET_REG(header);
+		break;
+	case PACKET_TYPE3:
+		pkt->opcode = CP_PACKET3_GET_OPCODE(header);
+		break;
+	case PACKET_TYPE2:
+		pkt->count = -1;
+		break;
+	default:
+		DRM_ERROR("Unknown packet type %d at %d !\n", pkt->type, idx);
+		return -EINVAL;
+	}
+	if ((pkt->count + 1 + pkt->idx) >= ib_chunk->length_dw) {
+		DRM_ERROR("Packet (%d:%d:%d) end after CS buffer (%d) !\n",
+			  pkt->idx, pkt->type, pkt->count, ib_chunk->length_dw);
+		return -EINVAL;
+	}
+	return 0;
+}
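+/*
+ * Editor's sketch, assuming the usual radeon CP header layout behind the
+ * CP_PACKET_GET_* macros (type in bits 31:30, count in bits 29:16, type-3
+ * opcode in bits 15:8): a header of 0xc0001000 would parse as a PACKET3
+ * NOP with count = 0, i.e. the packet spans count + 2 = 2 dwords in the IB.
+ */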
+
+/**
+ * evergreen_cs_packet_next_reloc() - parse next packet which should be reloc packet3
+ * @parser:		parser structure holding parsing context.
+ * @cs_reloc:		where to store the relocation information
+ *
+ * Check that the next packet is a relocation packet3 and return the
+ * matching relocation entry so the caller can do bo validation and
+ * compute the GPU offset.
+ **/
+static int evergreen_cs_packet_next_reloc(struct radeon_cs_parser *p,
+					  struct radeon_cs_reloc **cs_reloc)
+{
+	struct radeon_cs_chunk *relocs_chunk;
+	struct radeon_cs_packet p3reloc;
+	unsigned idx;
+	int r;
+
+	if (p->chunk_relocs_idx == -1) {
+		DRM_ERROR("No relocation chunk !\n");
+		return -EINVAL;
+	}
+	*cs_reloc = NULL;
+	relocs_chunk = &p->chunks[p->chunk_relocs_idx];
+	r = evergreen_cs_packet_parse(p, &p3reloc, p->idx);
+	if (r) {
+		return r;
+	}
+	p->idx += p3reloc.count + 2;
+	if (p3reloc.type != PACKET_TYPE3 || p3reloc.opcode != PACKET3_NOP) {
+		DRM_ERROR("No packet3 for relocation for packet at %d.\n",
+			  p3reloc.idx);
+		return -EINVAL;
+	}
+	idx = radeon_get_ib_value(p, p3reloc.idx + 1);
+	if (idx >= relocs_chunk->length_dw) {
+		DRM_ERROR("Relocs at %d after relocations chunk end %d !\n",
+			  idx, relocs_chunk->length_dw);
+		return -EINVAL;
+	}
+	/* FIXME: we assume reloc size is 4 dwords */
+	*cs_reloc = p->relocs_ptr[(idx / 4)];
+	return 0;
+}
+
+/**
+ * evergreen_cs_packet_next_is_pkt3_nop() - test if next packet is packet3 nop for reloc
+ * @parser:		parser structure holding parsing context.
+ *
+ * Returns 1 if the next packet is a relocation packet3 (a NOP carrying
+ * the reloc index), 0 otherwise.
+ **/
+static inline int evergreen_cs_packet_next_is_pkt3_nop(struct radeon_cs_parser *p)
+{
+	struct radeon_cs_packet p3reloc;
+	int r;
+
+	r = evergreen_cs_packet_parse(p, &p3reloc, p->idx);
+	if (r) {
+		return 0;
+	}
+	if (p3reloc.type != PACKET_TYPE3 || p3reloc.opcode != PACKET3_NOP) {
+		return 0;
+	}
+	return 1;
+}
+
+/**
+ * evergreen_cs_packet_next_vline() - parse userspace VLINE packet
+ * @parser:		parser structure holding parsing context.
+ *
+ * Userspace sends a special sequence for VLINE waits.
+ * PACKET0 - VLINE_START_END + value
+ * PACKET3 - WAIT_REG_MEM poll vline status reg
+ * RELOC (P3) - crtc_id in reloc.
+ *
+ * This function parses this and relocates the VLINE START END
+ * and WAIT_REG_MEM packets to the correct crtc.
+ * It also detects a switched off crtc and nulls out the
+ * wait in that case.
+ */
+static int evergreen_cs_packet_parse_vline(struct radeon_cs_parser *p)
+{
+	struct drm_mode_object *obj;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	struct radeon_cs_packet p3reloc, wait_reg_mem;
+	int crtc_id;
+	int r;
+	uint32_t header, h_idx, reg, wait_reg_mem_info;
+	volatile uint32_t *ib;
+
+	ib = p->ib->ptr;
+
+	/* parse the WAIT_REG_MEM */
+	r = evergreen_cs_packet_parse(p, &wait_reg_mem, p->idx);
+	if (r)
+		return r;
+
+	/* check it's a WAIT_REG_MEM */
+	if (wait_reg_mem.type != PACKET_TYPE3 ||
+	    wait_reg_mem.opcode != PACKET3_WAIT_REG_MEM) {
+		DRM_ERROR("vline wait missing WAIT_REG_MEM segment\n");
+		r = -EINVAL;
+		return r;
+	}
+
+	wait_reg_mem_info = radeon_get_ib_value(p, wait_reg_mem.idx + 1);
+	/* bit 4 is reg (0) or mem (1) */
+	if (wait_reg_mem_info & 0x10) {
+		DRM_ERROR("vline WAIT_REG_MEM waiting on MEM rather than REG\n");
+		r = -EINVAL;
+		return r;
+	}
+	/* waiting for value to be equal */
+	if ((wait_reg_mem_info & 0x7) != 0x3) {
+		DRM_ERROR("vline WAIT_REG_MEM function not equal\n");
+		r = -EINVAL;
+		return r;
+	}
+	if ((radeon_get_ib_value(p, wait_reg_mem.idx + 2) << 2) != EVERGREEN_VLINE_STATUS) {
+		DRM_ERROR("vline WAIT_REG_MEM bad reg\n");
+		r = -EINVAL;
+		return r;
+	}
+
+	if (radeon_get_ib_value(p, wait_reg_mem.idx + 5) != EVERGREEN_VLINE_STAT) {
+		DRM_ERROR("vline WAIT_REG_MEM bad bit mask\n");
+		r = -EINVAL;
+		return r;
+	}
+
+	/* jump over the NOP */
+	r = evergreen_cs_packet_parse(p, &p3reloc, p->idx + wait_reg_mem.count + 2);
+	if (r)
+		return r;
+
+	h_idx = p->idx - 2;
+	p->idx += wait_reg_mem.count + 2;
+	p->idx += p3reloc.count + 2;
+
+	header = radeon_get_ib_value(p, h_idx);
+	crtc_id = radeon_get_ib_value(p, h_idx + 2 + 7 + 1);
+	reg = CP_PACKET0_GET_REG(header);
+	mutex_lock(&p->rdev->ddev->mode_config.mutex);
+	obj = drm_mode_object_find(p->rdev->ddev, crtc_id, DRM_MODE_OBJECT_CRTC);
+	if (!obj) {
+		DRM_ERROR("cannot find crtc %d\n", crtc_id);
+		r = -EINVAL;
+		goto out;
+	}
+	crtc = obj_to_crtc(obj);
+	radeon_crtc = to_radeon_crtc(crtc);
+	crtc_id = radeon_crtc->crtc_id;
+
+	if (!crtc->enabled) {
+		/* if the CRTC isn't enabled - we need to nop out the WAIT_REG_MEM */
+		ib[h_idx + 2] = PACKET2(0);
+		ib[h_idx + 3] = PACKET2(0);
+		ib[h_idx + 4] = PACKET2(0);
+		ib[h_idx + 5] = PACKET2(0);
+		ib[h_idx + 6] = PACKET2(0);
+		ib[h_idx + 7] = PACKET2(0);
+		ib[h_idx + 8] = PACKET2(0);
+	} else {
+		switch (reg) {
+		case EVERGREEN_VLINE_START_END:
+			header &= ~R600_CP_PACKET0_REG_MASK;
+			header |= (EVERGREEN_VLINE_START_END + radeon_crtc->crtc_offset) >> 2;
+			ib[h_idx] = header;
+			ib[h_idx + 4] = (EVERGREEN_VLINE_STATUS + radeon_crtc->crtc_offset) >> 2;
+			break;
+		default:
+			DRM_ERROR("unknown crtc reloc\n");
+			r = -EINVAL;
+			goto out;
+		}
+	}
+out:
+	mutex_unlock(&p->rdev->ddev->mode_config.mutex);
+	return r;
+}
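+/*
+ * Editor's sketch of the IB layout the function above assumes, derived
+ * from the offsets it checks and patches (dword indices relative to
+ * h_idx):
+ *
+ *   h_idx + 0:  PACKET0 header (register = VLINE_START_END)
+ *   h_idx + 1:  VLINE_START_END value
+ *   h_idx + 2:  PACKET3 WAIT_REG_MEM header (7 dwords including header)
+ *   h_idx + 4:  register to poll (patched to the crtc's VLINE_STATUS)
+ *   h_idx + 9:  PACKET3 NOP (relocation) header
+ *   h_idx + 10: crtc_id
+ *
+ * For a disabled crtc, dwords h_idx+2 .. h_idx+8 (the whole WAIT_REG_MEM)
+ * are replaced with PACKET2 no-ops.
+ */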
+
+static int evergreen_packet0_check(struct radeon_cs_parser *p,
+				   struct radeon_cs_packet *pkt,
+				   unsigned idx, unsigned reg)
+{
+	int r;
+
+	switch (reg) {
+	case EVERGREEN_VLINE_START_END:
+		r = evergreen_cs_packet_parse_vline(p);
+		if (r) {
+			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
+					idx, reg);
+			return r;
+		}
+		break;
+	default:
+		printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n",
+		       reg, idx);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int evergreen_cs_parse_packet0(struct radeon_cs_parser *p,
+				      struct radeon_cs_packet *pkt)
+{
+	unsigned reg, i;
+	unsigned idx;
+	int r;
+
+	idx = pkt->idx + 1;
+	reg = pkt->reg;
+	for (i = 0; i <= pkt->count; i++, idx++, reg += 4) {
+		r = evergreen_packet0_check(p, pkt, idx, reg);
+		if (r) {
+			return r;
+		}
+	}
+	return 0;
+}
+
+/**
+ * evergreen_cs_check_reg() - check if register is authorized or not
+ * @parser: parser structure holding parsing context
+ * @reg: register we are testing
+ * @idx: index into the cs buffer
+ *
+ * This function will test against evergreen_reg_safe_bm and return 0
+ * if the register is safe. If the register is not flagged as safe this
+ * function will test it against a list of registers needing special
+ * handling.
+ */
+static inline int evergreen_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+{
+	struct evergreen_cs_track *track = (struct evergreen_cs_track *)p->track;
+	struct radeon_cs_reloc *reloc;
+	u32 last_reg = ARRAY_SIZE(evergreen_reg_safe_bm);
+	u32 m, i, tmp, *ib;
+	int r;
+
+	i = (reg >> 7);
+	if (i >= last_reg) {
+		dev_warn(p->dev, "forbidden register 0x%08x at %d\n", reg, idx);
+		return -EINVAL;
+	}
+	m = 1 << ((reg >> 2) & 31);
+	if (!(evergreen_reg_safe_bm[i] & m))
+		return 0;
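+	/* Editor's note: each u32 of evergreen_reg_safe_bm covers 32
+	 * registers of 4 bytes (128 bytes of register space), hence
+	 * i = reg >> 7 and m = 1 << ((reg >> 2) & 31); a clear bit means
+	 * the register is always safe and needs no further checking.
+	 */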
+	ib = p->ib->ptr;
+	switch (reg) {
+	/* force the following regs to 0 in an attempt to disable the out
+	 * buffer; we need to better understand how it works before we can
+	 * perform security checks on it (Jerome)
+	 */
+	case SQ_ESGS_RING_SIZE:
+	case SQ_GSVS_RING_SIZE:
+	case SQ_ESTMP_RING_SIZE:
+	case SQ_GSTMP_RING_SIZE:
+	case SQ_HSTMP_RING_SIZE:
+	case SQ_LSTMP_RING_SIZE:
+	case SQ_PSTMP_RING_SIZE:
+	case SQ_VSTMP_RING_SIZE:
+	case SQ_ESGS_RING_ITEMSIZE:
+	case SQ_ESTMP_RING_ITEMSIZE:
+	case SQ_GSTMP_RING_ITEMSIZE:
+	case SQ_GSVS_RING_ITEMSIZE:
+	case SQ_GS_VERT_ITEMSIZE:
+	case SQ_GS_VERT_ITEMSIZE_1:
+	case SQ_GS_VERT_ITEMSIZE_2:
+	case SQ_GS_VERT_ITEMSIZE_3:
+	case SQ_GSVS_RING_OFFSET_1:
+	case SQ_GSVS_RING_OFFSET_2:
+	case SQ_GSVS_RING_OFFSET_3:
+	case SQ_HSTMP_RING_ITEMSIZE:
+	case SQ_LSTMP_RING_ITEMSIZE:
+	case SQ_PSTMP_RING_ITEMSIZE:
+	case SQ_VSTMP_RING_ITEMSIZE:
+	case VGT_TF_RING_SIZE:
+		/* get value to populate the IB, don't remove */
+		tmp = radeon_get_ib_value(p, idx);
+		ib[idx] = 0;
+		break;
+	case DB_DEPTH_CONTROL:
+		track->db_depth_control = radeon_get_ib_value(p, idx);
+		break;
+	case DB_Z_INFO:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		track->db_z_info = radeon_get_ib_value(p, idx);
+		ib[idx] &= ~Z_ARRAY_MODE(0xf);
+		track->db_z_info &= ~Z_ARRAY_MODE(0xf);
+		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) {
+			ib[idx] |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+			track->db_z_info |= Z_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+		} else {
+			ib[idx] |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+			track->db_z_info |= Z_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+		}
+		break;
+	case DB_STENCIL_INFO:
+		track->db_s_info = radeon_get_ib_value(p, idx);
+		break;
+	case DB_DEPTH_VIEW:
+		track->db_depth_view = radeon_get_ib_value(p, idx);
+		break;
+	case DB_DEPTH_SIZE:
+		track->db_depth_size = radeon_get_ib_value(p, idx);
+		track->db_depth_size_idx = idx;
+		break;
+	case DB_Z_READ_BASE:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		track->db_z_read_offset = radeon_get_ib_value(p, idx);
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->db_z_read_bo = reloc->robj;
+		break;
+	case DB_Z_WRITE_BASE:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		track->db_z_write_offset = radeon_get_ib_value(p, idx);
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->db_z_write_bo = reloc->robj;
+		break;
+	case DB_STENCIL_READ_BASE:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		track->db_s_read_offset = radeon_get_ib_value(p, idx);
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->db_s_read_bo = reloc->robj;
+		break;
+	case DB_STENCIL_WRITE_BASE:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		track->db_s_write_offset = radeon_get_ib_value(p, idx);
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->db_s_write_bo = reloc->robj;
+		break;
+	case VGT_STRMOUT_CONFIG:
+		track->vgt_strmout_config = radeon_get_ib_value(p, idx);
+		break;
+	case VGT_STRMOUT_BUFFER_CONFIG:
+		track->vgt_strmout_buffer_config = radeon_get_ib_value(p, idx);
+		break;
+	case CB_TARGET_MASK:
+		track->cb_target_mask = radeon_get_ib_value(p, idx);
+		break;
+	case CB_SHADER_MASK:
+		track->cb_shader_mask = radeon_get_ib_value(p, idx);
+		break;
+	case PA_SC_AA_CONFIG:
+		tmp = radeon_get_ib_value(p, idx) & MSAA_NUM_SAMPLES_MASK;
+		track->nsamples = 1 << tmp;
+		break;
+	case CB_COLOR0_VIEW:
+	case CB_COLOR1_VIEW:
+	case CB_COLOR2_VIEW:
+	case CB_COLOR3_VIEW:
+	case CB_COLOR4_VIEW:
+	case CB_COLOR5_VIEW:
+	case CB_COLOR6_VIEW:
+	case CB_COLOR7_VIEW:
+		tmp = (reg - CB_COLOR0_VIEW) / 0x3c;
+		track->cb_color_view[tmp] = radeon_get_ib_value(p, idx);
+		break;
+	case CB_COLOR8_VIEW:
+	case CB_COLOR9_VIEW:
+	case CB_COLOR10_VIEW:
+	case CB_COLOR11_VIEW:
+		tmp = ((reg - CB_COLOR8_VIEW) / 0x1c) + 8;
+		track->cb_color_view[tmp] = radeon_get_ib_value(p, idx);
+		break;
+	case CB_COLOR0_INFO:
+	case CB_COLOR1_INFO:
+	case CB_COLOR2_INFO:
+	case CB_COLOR3_INFO:
+	case CB_COLOR4_INFO:
+	case CB_COLOR5_INFO:
+	case CB_COLOR6_INFO:
+	case CB_COLOR7_INFO:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		tmp = (reg - CB_COLOR0_INFO) / 0x3c;
+		track->cb_color_info[tmp] = radeon_get_ib_value(p, idx);
+		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) {
+			ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+			track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+		} else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) {
+			ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+			track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+		}
+		break;
+	case CB_COLOR8_INFO:
+	case CB_COLOR9_INFO:
+	case CB_COLOR10_INFO:
+	case CB_COLOR11_INFO:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		tmp = ((reg - CB_COLOR8_INFO) / 0x1c) + 8;
+		track->cb_color_info[tmp] = radeon_get_ib_value(p, idx);
+		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO) {
+			ib[idx] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+			track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+		} else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO) {
+			ib[idx] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+			track->cb_color_info[tmp] |= CB_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+		}
+		break;
+	case CB_COLOR0_PITCH:
+	case CB_COLOR1_PITCH:
+	case CB_COLOR2_PITCH:
+	case CB_COLOR3_PITCH:
+	case CB_COLOR4_PITCH:
+	case CB_COLOR5_PITCH:
+	case CB_COLOR6_PITCH:
+	case CB_COLOR7_PITCH:
+		tmp = (reg - CB_COLOR0_PITCH) / 0x3c;
+		track->cb_color_pitch[tmp] = radeon_get_ib_value(p, idx);
+		track->cb_color_pitch_idx[tmp] = idx;
+		break;
+	case CB_COLOR8_PITCH:
+	case CB_COLOR9_PITCH:
+	case CB_COLOR10_PITCH:
+	case CB_COLOR11_PITCH:
+		tmp = ((reg - CB_COLOR8_PITCH) / 0x1c) + 8;
+		track->cb_color_pitch[tmp] = radeon_get_ib_value(p, idx);
+		track->cb_color_pitch_idx[tmp] = idx;
+		break;
+	case CB_COLOR0_SLICE:
+	case CB_COLOR1_SLICE:
+	case CB_COLOR2_SLICE:
+	case CB_COLOR3_SLICE:
+	case CB_COLOR4_SLICE:
+	case CB_COLOR5_SLICE:
+	case CB_COLOR6_SLICE:
+	case CB_COLOR7_SLICE:
+		tmp = (reg - CB_COLOR0_SLICE) / 0x3c;
+		track->cb_color_slice[tmp] = radeon_get_ib_value(p, idx);
+		track->cb_color_slice_idx[tmp] = idx;
+		break;
+	case CB_COLOR8_SLICE:
+	case CB_COLOR9_SLICE:
+	case CB_COLOR10_SLICE:
+	case CB_COLOR11_SLICE:
+		tmp = ((reg - CB_COLOR8_SLICE) / 0x1c) + 8;
+		track->cb_color_slice[tmp] = radeon_get_ib_value(p, idx);
+		track->cb_color_slice_idx[tmp] = idx;
+		break;
+	case CB_COLOR0_ATTRIB:
+	case CB_COLOR1_ATTRIB:
+	case CB_COLOR2_ATTRIB:
+	case CB_COLOR3_ATTRIB:
+	case CB_COLOR4_ATTRIB:
+	case CB_COLOR5_ATTRIB:
+	case CB_COLOR6_ATTRIB:
+	case CB_COLOR7_ATTRIB:
+	case CB_COLOR8_ATTRIB:
+	case CB_COLOR9_ATTRIB:
+	case CB_COLOR10_ATTRIB:
+	case CB_COLOR11_ATTRIB:
+		break;
+	case CB_COLOR0_DIM:
+	case CB_COLOR1_DIM:
+	case CB_COLOR2_DIM:
+	case CB_COLOR3_DIM:
+	case CB_COLOR4_DIM:
+	case CB_COLOR5_DIM:
+	case CB_COLOR6_DIM:
+	case CB_COLOR7_DIM:
+		tmp = (reg - CB_COLOR0_DIM) / 0x3c;
+		track->cb_color_dim[tmp] = radeon_get_ib_value(p, idx);
+		track->cb_color_dim_idx[tmp] = idx;
+		break;
+	case CB_COLOR8_DIM:
+	case CB_COLOR9_DIM:
+	case CB_COLOR10_DIM:
+	case CB_COLOR11_DIM:
+		tmp = ((reg - CB_COLOR8_DIM) / 0x1c) + 8;
+		track->cb_color_dim[tmp] = radeon_get_ib_value(p, idx);
+		track->cb_color_dim_idx[tmp] = idx;
+		break;
+	case CB_COLOR0_FMASK:
+	case CB_COLOR1_FMASK:
+	case CB_COLOR2_FMASK:
+	case CB_COLOR3_FMASK:
+	case CB_COLOR4_FMASK:
+	case CB_COLOR5_FMASK:
+	case CB_COLOR6_FMASK:
+	case CB_COLOR7_FMASK:
+		tmp = (reg - CB_COLOR0_FMASK) / 0x3c;
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_err(p->dev, "bad SET_CONTEXT_REG 0x%04X\n", reg);
+			return -EINVAL;
+		}
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->cb_color_fmask_bo[tmp] = reloc->robj;
+		break;
+	case CB_COLOR0_CMASK:
+	case CB_COLOR1_CMASK:
+	case CB_COLOR2_CMASK:
+	case CB_COLOR3_CMASK:
+	case CB_COLOR4_CMASK:
+	case CB_COLOR5_CMASK:
+	case CB_COLOR6_CMASK:
+	case CB_COLOR7_CMASK:
+		tmp = (reg - CB_COLOR0_CMASK) / 0x3c;
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_err(p->dev, "bad SET_CONTEXT_REG 0x%04X\n", reg);
+			return -EINVAL;
+		}
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->cb_color_cmask_bo[tmp] = reloc->robj;
+		break;
+	case CB_COLOR0_FMASK_SLICE:
+	case CB_COLOR1_FMASK_SLICE:
+	case CB_COLOR2_FMASK_SLICE:
+	case CB_COLOR3_FMASK_SLICE:
+	case CB_COLOR4_FMASK_SLICE:
+	case CB_COLOR5_FMASK_SLICE:
+	case CB_COLOR6_FMASK_SLICE:
+	case CB_COLOR7_FMASK_SLICE:
+		tmp = (reg - CB_COLOR0_FMASK_SLICE) / 0x3c;
+		track->cb_color_fmask_slice[tmp] = radeon_get_ib_value(p, idx);
+		break;
+	case CB_COLOR0_CMASK_SLICE:
+	case CB_COLOR1_CMASK_SLICE:
+	case CB_COLOR2_CMASK_SLICE:
+	case CB_COLOR3_CMASK_SLICE:
+	case CB_COLOR4_CMASK_SLICE:
+	case CB_COLOR5_CMASK_SLICE:
+	case CB_COLOR6_CMASK_SLICE:
+	case CB_COLOR7_CMASK_SLICE:
+		tmp = (reg - CB_COLOR0_CMASK_SLICE) / 0x3c;
+		track->cb_color_cmask_slice[tmp] = radeon_get_ib_value(p, idx);
+		break;
+	case CB_COLOR0_BASE:
+	case CB_COLOR1_BASE:
+	case CB_COLOR2_BASE:
+	case CB_COLOR3_BASE:
+	case CB_COLOR4_BASE:
+	case CB_COLOR5_BASE:
+	case CB_COLOR6_BASE:
+	case CB_COLOR7_BASE:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		tmp = (reg - CB_COLOR0_BASE) / 0x3c;
+		track->cb_color_bo_offset[tmp] = radeon_get_ib_value(p, idx);
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->cb_color_base_last[tmp] = ib[idx];
+		track->cb_color_bo[tmp] = reloc->robj;
+		break;
+	case CB_COLOR8_BASE:
+	case CB_COLOR9_BASE:
+	case CB_COLOR10_BASE:
+	case CB_COLOR11_BASE:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		tmp = ((reg - CB_COLOR8_BASE) / 0x1c) + 8;
+		track->cb_color_bo_offset[tmp] = radeon_get_ib_value(p, idx);
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		track->cb_color_base_last[tmp] = ib[idx];
+		track->cb_color_bo[tmp] = reloc->robj;
+		break;
+	case CB_IMMED0_BASE:
+	case CB_IMMED1_BASE:
+	case CB_IMMED2_BASE:
+	case CB_IMMED3_BASE:
+	case CB_IMMED4_BASE:
+	case CB_IMMED5_BASE:
+	case CB_IMMED6_BASE:
+	case CB_IMMED7_BASE:
+	case CB_IMMED8_BASE:
+	case CB_IMMED9_BASE:
+	case CB_IMMED10_BASE:
+	case CB_IMMED11_BASE:
+	case DB_HTILE_DATA_BASE:
+	case SQ_PGM_START_FS:
+	case SQ_PGM_START_ES:
+	case SQ_PGM_START_VS:
+	case SQ_PGM_START_GS:
+	case SQ_PGM_START_PS:
+	case SQ_PGM_START_HS:
+	case SQ_PGM_START_LS:
+	case GDS_ADDR_BASE:
+	case SQ_CONST_MEM_BASE:
+	case SQ_ALU_CONST_CACHE_GS_0:
+	case SQ_ALU_CONST_CACHE_GS_1:
+	case SQ_ALU_CONST_CACHE_GS_2:
+	case SQ_ALU_CONST_CACHE_GS_3:
+	case SQ_ALU_CONST_CACHE_GS_4:
+	case SQ_ALU_CONST_CACHE_GS_5:
+	case SQ_ALU_CONST_CACHE_GS_6:
+	case SQ_ALU_CONST_CACHE_GS_7:
+	case SQ_ALU_CONST_CACHE_GS_8:
+	case SQ_ALU_CONST_CACHE_GS_9:
+	case SQ_ALU_CONST_CACHE_GS_10:
+	case SQ_ALU_CONST_CACHE_GS_11:
+	case SQ_ALU_CONST_CACHE_GS_12:
+	case SQ_ALU_CONST_CACHE_GS_13:
+	case SQ_ALU_CONST_CACHE_GS_14:
+	case SQ_ALU_CONST_CACHE_GS_15:
+	case SQ_ALU_CONST_CACHE_PS_0:
+	case SQ_ALU_CONST_CACHE_PS_1:
+	case SQ_ALU_CONST_CACHE_PS_2:
+	case SQ_ALU_CONST_CACHE_PS_3:
+	case SQ_ALU_CONST_CACHE_PS_4:
+	case SQ_ALU_CONST_CACHE_PS_5:
+	case SQ_ALU_CONST_CACHE_PS_6:
+	case SQ_ALU_CONST_CACHE_PS_7:
+	case SQ_ALU_CONST_CACHE_PS_8:
+	case SQ_ALU_CONST_CACHE_PS_9:
+	case SQ_ALU_CONST_CACHE_PS_10:
+	case SQ_ALU_CONST_CACHE_PS_11:
+	case SQ_ALU_CONST_CACHE_PS_12:
+	case SQ_ALU_CONST_CACHE_PS_13:
+	case SQ_ALU_CONST_CACHE_PS_14:
+	case SQ_ALU_CONST_CACHE_PS_15:
+	case SQ_ALU_CONST_CACHE_VS_0:
+	case SQ_ALU_CONST_CACHE_VS_1:
+	case SQ_ALU_CONST_CACHE_VS_2:
+	case SQ_ALU_CONST_CACHE_VS_3:
+	case SQ_ALU_CONST_CACHE_VS_4:
+	case SQ_ALU_CONST_CACHE_VS_5:
+	case SQ_ALU_CONST_CACHE_VS_6:
+	case SQ_ALU_CONST_CACHE_VS_7:
+	case SQ_ALU_CONST_CACHE_VS_8:
+	case SQ_ALU_CONST_CACHE_VS_9:
+	case SQ_ALU_CONST_CACHE_VS_10:
+	case SQ_ALU_CONST_CACHE_VS_11:
+	case SQ_ALU_CONST_CACHE_VS_12:
+	case SQ_ALU_CONST_CACHE_VS_13:
+	case SQ_ALU_CONST_CACHE_VS_14:
+	case SQ_ALU_CONST_CACHE_VS_15:
+	case SQ_ALU_CONST_CACHE_HS_0:
+	case SQ_ALU_CONST_CACHE_HS_1:
+	case SQ_ALU_CONST_CACHE_HS_2:
+	case SQ_ALU_CONST_CACHE_HS_3:
+	case SQ_ALU_CONST_CACHE_HS_4:
+	case SQ_ALU_CONST_CACHE_HS_5:
+	case SQ_ALU_CONST_CACHE_HS_6:
+	case SQ_ALU_CONST_CACHE_HS_7:
+	case SQ_ALU_CONST_CACHE_HS_8:
+	case SQ_ALU_CONST_CACHE_HS_9:
+	case SQ_ALU_CONST_CACHE_HS_10:
+	case SQ_ALU_CONST_CACHE_HS_11:
+	case SQ_ALU_CONST_CACHE_HS_12:
+	case SQ_ALU_CONST_CACHE_HS_13:
+	case SQ_ALU_CONST_CACHE_HS_14:
+	case SQ_ALU_CONST_CACHE_HS_15:
+	case SQ_ALU_CONST_CACHE_LS_0:
+	case SQ_ALU_CONST_CACHE_LS_1:
+	case SQ_ALU_CONST_CACHE_LS_2:
+	case SQ_ALU_CONST_CACHE_LS_3:
+	case SQ_ALU_CONST_CACHE_LS_4:
+	case SQ_ALU_CONST_CACHE_LS_5:
+	case SQ_ALU_CONST_CACHE_LS_6:
+	case SQ_ALU_CONST_CACHE_LS_7:
+	case SQ_ALU_CONST_CACHE_LS_8:
+	case SQ_ALU_CONST_CACHE_LS_9:
+	case SQ_ALU_CONST_CACHE_LS_10:
+	case SQ_ALU_CONST_CACHE_LS_11:
+	case SQ_ALU_CONST_CACHE_LS_12:
+	case SQ_ALU_CONST_CACHE_LS_13:
+	case SQ_ALU_CONST_CACHE_LS_14:
+	case SQ_ALU_CONST_CACHE_LS_15:
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			dev_warn(p->dev, "bad SET_CONTEXT_REG "
+					"0x%04X\n", reg);
+			return -EINVAL;
+		}
+		ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		break;
+	default:
+		dev_warn(p->dev, "forbidden register 0x%08x at %d\n", reg, idx);
+		return -EINVAL;
+	}
+	return 0;
+}
+
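+/*
+ * Illustrative sketch, not part of the original patch: the /0x3c and
+ * /0x1c divisors used in evergreen_cs_check_reg() above fall out of the
+ * CB register layout in evergreend.h below -- CB0-CB7 blocks are 0x3c
+ * bytes apart (they carry the extra CMASK/FMASK/CLEAR_WORD registers)
+ * while CB8-CB11 are packed 0x1c apart.  cb_block_index() is a
+ * hypothetical helper showing the same arithmetic in isolation for the
+ * _BASE register family.
+ */
+#if 0	/* example only */
+static int cb_block_index(unsigned reg)
+{
+	if (reg < CB_COLOR8_BASE)
+		return (reg - CB_COLOR0_BASE) / 0x3c;	/* CB0-CB7 */
+	return ((reg - CB_COLOR8_BASE) / 0x1c) + 8;	/* CB8-CB11 */
+}
+#endif
+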
+/**
+ * evergreen_check_texture_resource() - check texture resource validity
+ * @p: parser structure holding parsing context
+ * @idx: index into the cs buffer
+ * @texture: texture's bo structure
+ * @mipmap: mipmap's bo structure
+ *
+ * This function will check that the resource has valid fields and that
+ * the texture and mipmap bo objects are big enough to cover this resource.
+ */
+static inline int evergreen_check_texture_resource(struct radeon_cs_parser *p, u32 idx,
+						   struct radeon_bo *texture,
+						   struct radeon_bo *mipmap)
+{
+	/* XXX fill in */
+	return 0;
+}
+
+static int evergreen_packet3_check(struct radeon_cs_parser *p,
+				   struct radeon_cs_packet *pkt)
+{
+	struct radeon_cs_reloc *reloc;
+	struct evergreen_cs_track *track;
+	volatile u32 *ib;
+	unsigned idx;
+	unsigned i;
+	unsigned start_reg, end_reg, reg;
+	int r;
+	u32 idx_value;
+
+	track = (struct evergreen_cs_track *)p->track;
+	ib = p->ib->ptr;
+	idx = pkt->idx + 1;
+	idx_value = radeon_get_ib_value(p, idx);
+
+	switch (pkt->opcode) {
+	case PACKET3_CONTEXT_CONTROL:
+		if (pkt->count != 1) {
+			DRM_ERROR("bad CONTEXT_CONTROL\n");
+			return -EINVAL;
+		}
+		break;
+	case PACKET3_INDEX_TYPE:
+	case PACKET3_NUM_INSTANCES:
+	case PACKET3_CLEAR_STATE:
+		if (pkt->count) {
+			DRM_ERROR("bad INDEX_TYPE/NUM_INSTANCES/CLEAR_STATE\n");
+			return -EINVAL;
+		}
+		break;
+	case PACKET3_INDEX_BASE:
+		if (pkt->count != 1) {
+			DRM_ERROR("bad INDEX_BASE\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			DRM_ERROR("bad INDEX_BASE\n");
+			return -EINVAL;
+		}
+		ib[idx+0] = idx_value + (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+		ib[idx+1] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream\n", __func__, __LINE__);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX:
+		if (pkt->count != 3) {
+			DRM_ERROR("bad DRAW_INDEX\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			DRM_ERROR("bad DRAW_INDEX\n");
+			return -EINVAL;
+		}
+		ib[idx+0] = idx_value + (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+		ib[idx+1] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream\n", __func__, __LINE__);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX_2:
+		if (pkt->count != 4) {
+			DRM_ERROR("bad DRAW_INDEX_2\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			DRM_ERROR("bad DRAW_INDEX_2\n");
+			return -EINVAL;
+		}
+		ib[idx+1] = idx_value + (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+		ib[idx+2] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream\n", __func__, __LINE__);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX_AUTO:
+		if (pkt->count != 1) {
+			DRM_ERROR("bad DRAW_INDEX_AUTO\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream %d\n", __func__, __LINE__, idx);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX_MULTI_AUTO:
+		if (pkt->count != 2) {
+			DRM_ERROR("bad DRAW_INDEX_MULTI_AUTO\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream %d\n", __func__, __LINE__, idx);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX_IMMD:
+		if (pkt->count < 2) {
+			DRM_ERROR("bad DRAW_INDEX_IMMD\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream\n", __func__, __LINE__);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX_OFFSET:
+		if (pkt->count != 2) {
+			DRM_ERROR("bad DRAW_INDEX_OFFSET\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream\n", __func__, __LINE__);
+			return r;
+		}
+		break;
+	case PACKET3_DRAW_INDEX_OFFSET_2:
+		if (pkt->count != 3) {
+			DRM_ERROR("bad DRAW_INDEX_OFFSET_2\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_track_check(p);
+		if (r) {
+			dev_warn(p->dev, "%s:%d invalid cmd stream\n", __func__, __LINE__);
+			return r;
+		}
+		break;
+	case PACKET3_WAIT_REG_MEM:
+		if (pkt->count != 5) {
+			DRM_ERROR("bad WAIT_REG_MEM\n");
+			return -EINVAL;
+		}
+		/* bit 4 is reg (0) or mem (1) */
+		if (idx_value & 0x10) {
+			r = evergreen_cs_packet_next_reloc(p, &reloc);
+			if (r) {
+				DRM_ERROR("bad WAIT_REG_MEM\n");
+				return -EINVAL;
+			}
+			ib[idx+1] += (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+			ib[idx+2] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		}
+		break;
+	case PACKET3_SURFACE_SYNC:
+		if (pkt->count != 3) {
+			DRM_ERROR("bad SURFACE_SYNC\n");
+			return -EINVAL;
+		}
+		/* 0xffffffff/0x0 is the flush-all-caches flag */
+		if (radeon_get_ib_value(p, idx + 1) != 0xffffffff ||
+		    radeon_get_ib_value(p, idx + 2) != 0) {
+			r = evergreen_cs_packet_next_reloc(p, &reloc);
+			if (r) {
+				DRM_ERROR("bad SURFACE_SYNC\n");
+				return -EINVAL;
+			}
+			ib[idx+2] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+		}
+		break;
+	case PACKET3_EVENT_WRITE:
+		if (pkt->count != 2 && pkt->count != 0) {
+			DRM_ERROR("bad EVENT_WRITE\n");
+			return -EINVAL;
+		}
+		if (pkt->count) {
+			r = evergreen_cs_packet_next_reloc(p, &reloc);
+			if (r) {
+				DRM_ERROR("bad EVENT_WRITE\n");
+				return -EINVAL;
+			}
+			ib[idx+1] += (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+			ib[idx+2] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		}
+		break;
+	case PACKET3_EVENT_WRITE_EOP:
+		if (pkt->count != 4) {
+			DRM_ERROR("bad EVENT_WRITE_EOP\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			DRM_ERROR("bad EVENT_WRITE_EOP\n");
+			return -EINVAL;
+		}
+		ib[idx+1] += (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+		ib[idx+2] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		break;
+	case PACKET3_EVENT_WRITE_EOS:
+		if (pkt->count != 3) {
+			DRM_ERROR("bad EVENT_WRITE_EOS\n");
+			return -EINVAL;
+		}
+		r = evergreen_cs_packet_next_reloc(p, &reloc);
+		if (r) {
+			DRM_ERROR("bad EVENT_WRITE_EOS\n");
+			return -EINVAL;
+		}
+		ib[idx+1] += (u32)(reloc->lobj.gpu_offset & 0xffffffff);
+		ib[idx+2] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+		break;
+	case PACKET3_SET_CONFIG_REG:
+		start_reg = (idx_value << 2) + PACKET3_SET_CONFIG_REG_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_CONFIG_REG_START) ||
+		    (start_reg >= PACKET3_SET_CONFIG_REG_END) ||
+		    (end_reg >= PACKET3_SET_CONFIG_REG_END)) {
+			DRM_ERROR("bad PACKET3_SET_CONFIG_REG\n");
+			return -EINVAL;
+		}
+		for (i = 0; i < pkt->count; i++) {
+			reg = start_reg + (4 * i);
+			r = evergreen_cs_check_reg(p, reg, idx+1+i);
+			if (r)
+				return r;
+		}
+		break;
+	case PACKET3_SET_CONTEXT_REG:
+		start_reg = (idx_value << 2) + PACKET3_SET_CONTEXT_REG_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_CONTEXT_REG_START) ||
+		    (start_reg >= PACKET3_SET_CONTEXT_REG_END) ||
+		    (end_reg >= PACKET3_SET_CONTEXT_REG_END)) {
+			DRM_ERROR("bad PACKET3_SET_CONTEXT_REG\n");
+			return -EINVAL;
+		}
+		for (i = 0; i < pkt->count; i++) {
+			reg = start_reg + (4 * i);
+			r = evergreen_cs_check_reg(p, reg, idx+1+i);
+			if (r)
+				return r;
+		}
+		break;
+	case PACKET3_SET_RESOURCE:
+		if (pkt->count % 8) {
+			DRM_ERROR("bad SET_RESOURCE\n");
+			return -EINVAL;
+		}
+		start_reg = (idx_value << 2) + PACKET3_SET_RESOURCE_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_RESOURCE_START) ||
+		    (start_reg >= PACKET3_SET_RESOURCE_END) ||
+		    (end_reg >= PACKET3_SET_RESOURCE_END)) {
+			DRM_ERROR("bad SET_RESOURCE\n");
+			return -EINVAL;
+		}
+		for (i = 0; i < (pkt->count / 8); i++) {
+			struct radeon_bo *texture, *mipmap;
+			u32 size, offset;
+
+			switch (G__SQ_CONSTANT_TYPE(radeon_get_ib_value(p, idx+1+(i*8)+7))) {
+			case SQ_TEX_VTX_VALID_TEXTURE:
+				/* tex base */
+				r = evergreen_cs_packet_next_reloc(p, &reloc);
+				if (r) {
+					DRM_ERROR("bad SET_RESOURCE (tex)\n");
+					return -EINVAL;
+				}
+				ib[idx+1+(i*8)+3] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+				if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO)
+					ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_2D_TILED_THIN1);
+				else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO)
+					ib[idx+1+(i*8)+1] |= TEX_ARRAY_MODE(ARRAY_1D_TILED_THIN1);
+				texture = reloc->robj;
+				/* tex mip base */
+				r = evergreen_cs_packet_next_reloc(p, &reloc);
+				if (r) {
+					DRM_ERROR("bad SET_RESOURCE (tex)\n");
+					return -EINVAL;
+				}
+				ib[idx+1+(i*8)+4] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
+				mipmap = reloc->robj;
+				r = evergreen_check_texture_resource(p, idx+1+(i*8),
+						texture, mipmap);
+				if (r)
+					return r;
+				break;
+			case SQ_TEX_VTX_VALID_BUFFER:
+				/* vtx base */
+				r = evergreen_cs_packet_next_reloc(p, &reloc);
+				if (r) {
+					DRM_ERROR("bad SET_RESOURCE (vtx)\n");
+					return -EINVAL;
+				}
+				offset = radeon_get_ib_value(p, idx+1+(i*8)+0);
+				size = radeon_get_ib_value(p, idx+1+(i*8)+1);
+				if (p->rdev && (size + offset) > radeon_bo_size(reloc->robj)) {
+					/* force size to size of the buffer */
+					dev_warn(p->dev, "vbo resource seems too big for the bo\n");
+					ib[idx+1+(i*8)+1] = radeon_bo_size(reloc->robj);
+				}
+				ib[idx+1+(i*8)+0] += (u32)((reloc->lobj.gpu_offset) & 0xffffffff);
+				ib[idx+1+(i*8)+2] += upper_32_bits(reloc->lobj.gpu_offset) & 0xff;
+				break;
+			case SQ_TEX_VTX_INVALID_TEXTURE:
+			case SQ_TEX_VTX_INVALID_BUFFER:
+			default:
+				DRM_ERROR("bad SET_RESOURCE\n");
+				return -EINVAL;
+			}
+		}
+		break;
+	case PACKET3_SET_ALU_CONST:
+		/* XXX fix me ALU const buffers only */
+		break;
+	case PACKET3_SET_BOOL_CONST:
+		start_reg = (idx_value << 2) + PACKET3_SET_BOOL_CONST_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_BOOL_CONST_START) ||
+		    (start_reg >= PACKET3_SET_BOOL_CONST_END) ||
+		    (end_reg >= PACKET3_SET_BOOL_CONST_END)) {
+			DRM_ERROR("bad SET_BOOL_CONST\n");
+			return -EINVAL;
+		}
+		break;
+	case PACKET3_SET_LOOP_CONST:
+		start_reg = (idx_value << 2) + PACKET3_SET_LOOP_CONST_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_LOOP_CONST_START) ||
+		    (start_reg >= PACKET3_SET_LOOP_CONST_END) ||
+		    (end_reg >= PACKET3_SET_LOOP_CONST_END)) {
+			DRM_ERROR("bad SET_LOOP_CONST\n");
+			return -EINVAL;
+		}
+		break;
+	case PACKET3_SET_CTL_CONST:
+		start_reg = (idx_value << 2) + PACKET3_SET_CTL_CONST_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_CTL_CONST_START) ||
+		    (start_reg >= PACKET3_SET_CTL_CONST_END) ||
+		    (end_reg >= PACKET3_SET_CTL_CONST_END)) {
+			DRM_ERROR("bad SET_CTL_CONST\n");
+			return -EINVAL;
+		}
+		break;
+	case PACKET3_SET_SAMPLER:
+		if (pkt->count % 3) {
+			DRM_ERROR("bad SET_SAMPLER\n");
+			return -EINVAL;
+		}
+		start_reg = (idx_value << 2) + PACKET3_SET_SAMPLER_START;
+		end_reg = 4 * pkt->count + start_reg - 4;
+		if ((start_reg < PACKET3_SET_SAMPLER_START) ||
+		    (start_reg >= PACKET3_SET_SAMPLER_END) ||
+		    (end_reg >= PACKET3_SET_SAMPLER_END)) {
+			DRM_ERROR("bad SET_SAMPLER\n");
+			return -EINVAL;
+		}
+		break;
+	case PACKET3_NOP:
+		break;
+	default:
+		DRM_ERROR("Packet3 opcode %x not supported\n", pkt->opcode);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int evergreen_cs_parse(struct radeon_cs_parser *p)
+{
+	struct radeon_cs_packet pkt;
+	struct evergreen_cs_track *track;
+	int r;
+
+	if (p->track == NULL) {
+		/* initialize tracker, we are in kms */
+		track = kzalloc(sizeof(*track), GFP_KERNEL);
+		if (track == NULL)
+			return -ENOMEM;
+		evergreen_cs_track_init(track);
+		track->npipes = p->rdev->config.evergreen.tiling_npipes;
+		track->nbanks = p->rdev->config.evergreen.tiling_nbanks;
+		track->group_size = p->rdev->config.evergreen.tiling_group_size;
+		p->track = track;
+	}
+	do {
+		r = evergreen_cs_packet_parse(p, &pkt, p->idx);
+		if (r) {
+			kfree(p->track);
+			p->track = NULL;
+			return r;
+		}
+		p->idx += pkt.count + 2;
+		switch (pkt.type) {
+		case PACKET_TYPE0:
+			r = evergreen_cs_parse_packet0(p, &pkt);
+			break;
+		case PACKET_TYPE2:
+			break;
+		case PACKET_TYPE3:
+			r = evergreen_packet3_check(p, &pkt);
+			break;
+		default:
+			DRM_ERROR("Unknown packet type %d !\n", pkt.type);
+			kfree(p->track);
+			p->track = NULL;
+			return -EINVAL;
+		}
+		if (r) {
+			kfree(p->track);
+			p->track = NULL;
+			return r;
+		}
+	} while (p->idx < p->chunks[p->chunk_ib_idx].length_dw);
+#if 0
+	for (r = 0; r < p->ib->length_dw; r++) {
+		printk(KERN_INFO "%05d  0x%08X\n", r, p->ib->ptr[r]);
+		mdelay(1);
+	}
+#endif
+	kfree(p->track);
+	p->track = NULL;
+	return 0;
+}
+
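+/*
+ * Illustrative sketch, not part of the original patch: the parse loop
+ * above advances by pkt.count + 2 dwords per packet because a PM4 packet
+ * is one header dword plus (count + 1) payload dwords.  dump_pm4() is a
+ * hypothetical helper that walks a stream the same way, using the header
+ * macros defined in evergreend.h.
+ */
+#if 0	/* example only */
+static void dump_pm4(const u32 *ib, unsigned length_dw)
+{
+	unsigned idx = 0;
+
+	while (idx < length_dw) {
+		u32 header = ib[idx];
+
+		pr_info("type %u count %u\n",
+			CP_PACKET_GET_TYPE(header),
+			CP_PACKET_GET_COUNT(header));
+		idx += CP_PACKET_GET_COUNT(header) + 2;
+	}
+}
+#endif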
diff --git a/drivers/gpu/drm/radeon/evergreen_reg.h b/drivers/gpu/drm/radeon/evergreen_reg.h
index f7c7c96..e028c1c 100644
--- a/drivers/gpu/drm/radeon/evergreen_reg.h
+++ b/drivers/gpu/drm/radeon/evergreen_reg.h
@@ -151,6 +151,9 @@
 #define EVERGREEN_DATA_FORMAT                           0x6b00
 #       define EVERGREEN_INTERLEAVE_EN                  (1 << 0)
 #define EVERGREEN_DESKTOP_HEIGHT                        0x6b04
+#define EVERGREEN_VLINE_START_END                       0x6b08
+#define EVERGREEN_VLINE_STATUS                          0x6bb8
+#       define EVERGREEN_VLINE_STAT                     (1 << 12)
 
 #define EVERGREEN_VIEWPORT_START                        0x6d70
 #define EVERGREEN_VIEWPORT_SIZE                         0x6d74
@@ -164,8 +167,12 @@
 #define EVERGREEN_CRTC5_REGISTER_OFFSET                 (0x129f0 - 0x6df0)
 
 /* CRTC blocks at 0x6df0, 0x79f0, 0x105f0, 0x111f0, 0x11df0, 0x129f0 */
+#define EVERGREEN_CRTC_V_BLANK_START_END                0x6e34
 #define EVERGREEN_CRTC_CONTROL                          0x6e70
 #       define EVERGREEN_CRTC_MASTER_EN                 (1 << 0)
+#       define EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE (1 << 24)
+#define EVERGREEN_CRTC_STATUS                           0x6e8c
+#define EVERGREEN_CRTC_STATUS_POSITION                  0x6e90
 #define EVERGREEN_CRTC_UPDATE_LOCK                      0x6ed4
 
 #define EVERGREEN_DC_GPIO_HPD_MASK                      0x64b0
diff --git a/drivers/gpu/drm/radeon/evergreend.h b/drivers/gpu/drm/radeon/evergreend.h
new file mode 100644
index 0000000..79683f6
--- /dev/null
+++ b/drivers/gpu/drm/radeon/evergreend.h
@@ -0,0 +1,1020 @@
+/*
+ * Copyright 2010 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Alex Deucher
+ */
+#ifndef EVERGREEND_H
+#define EVERGREEND_H
+
+#define EVERGREEN_MAX_SH_GPRS           256
+#define EVERGREEN_MAX_TEMP_GPRS         16
+#define EVERGREEN_MAX_SH_THREADS        256
+#define EVERGREEN_MAX_SH_STACK_ENTRIES  4096
+#define EVERGREEN_MAX_FRC_EOV_CNT       16384
+#define EVERGREEN_MAX_BACKENDS          8
+#define EVERGREEN_MAX_BACKENDS_MASK     0xFF
+#define EVERGREEN_MAX_SIMDS             16
+#define EVERGREEN_MAX_SIMDS_MASK        0xFFFF
+#define EVERGREEN_MAX_PIPES             8
+#define EVERGREEN_MAX_PIPES_MASK        0xFF
+#define EVERGREEN_MAX_LDS_NUM           0xFFFF
+
+/* Registers */
+
+#define RCU_IND_INDEX           			0x100
+#define RCU_IND_DATA            			0x104
+
+#define GRBM_GFX_INDEX          			0x802C
+#define		INSTANCE_INDEX(x)			((x) << 0)
+#define		SE_INDEX(x)     			((x) << 16)
+#define		INSTANCE_BROADCAST_WRITES      		(1 << 30)
+#define		SE_BROADCAST_WRITES      		(1 << 31)
+#define RLC_GFX_INDEX           			0x3fC4
+#define CC_GC_SHADER_PIPE_CONFIG			0x8950
+#define		WRITE_DIS      				(1 << 0)
+#define CC_RB_BACKEND_DISABLE				0x98F4
+#define		BACKEND_DISABLE(x)     			((x) << 16)
+#define GB_ADDR_CONFIG  				0x98F8
+#define		NUM_PIPES(x)				((x) << 0)
+#define		PIPE_INTERLEAVE_SIZE(x)			((x) << 4)
+#define		BANK_INTERLEAVE_SIZE(x)			((x) << 8)
+#define		NUM_SHADER_ENGINES(x)			((x) << 12)
+#define		SHADER_ENGINE_TILE_SIZE(x)     		((x) << 16)
+#define		NUM_GPUS(x)     			((x) << 20)
+#define		MULTI_GPU_TILE_SIZE(x)     		((x) << 24)
+#define		ROW_SIZE(x)             		((x) << 28)
+#define GB_BACKEND_MAP  				0x98FC
+#define DMIF_ADDR_CONFIG  				0xBD4
+#define HDP_ADDR_CONFIG  				0x2F48
+
+#define	CC_SYS_RB_BACKEND_DISABLE			0x3F88
+#define	GC_USER_RB_BACKEND_DISABLE			0x9B7C
+
+#define	CGTS_SYS_TCC_DISABLE				0x3F90
+#define	CGTS_TCC_DISABLE				0x9148
+#define	CGTS_USER_SYS_TCC_DISABLE			0x3F94
+#define	CGTS_USER_TCC_DISABLE				0x914C
+
+#define	CONFIG_MEMSIZE					0x5428
+
+#define CP_ME_CNTL					0x86D8
+#define		CP_ME_HALT					(1 << 28)
+#define		CP_PFP_HALT					(1 << 26)
+#define	CP_ME_RAM_DATA					0xC160
+#define	CP_ME_RAM_RADDR					0xC158
+#define	CP_ME_RAM_WADDR					0xC15C
+#define CP_MEQ_THRESHOLDS				0x8764
+#define		STQ_SPLIT(x)					((x) << 0)
+#define	CP_PERFMON_CNTL					0x87FC
+#define	CP_PFP_UCODE_ADDR				0xC150
+#define	CP_PFP_UCODE_DATA				0xC154
+#define	CP_QUEUE_THRESHOLDS				0x8760
+#define		ROQ_IB1_START(x)				((x) << 0)
+#define		ROQ_IB2_START(x)				((x) << 8)
+#define	CP_RB_BASE					0xC100
+#define	CP_RB_CNTL					0xC104
+#define		RB_BUFSZ(x)					((x) << 0)
+#define		RB_BLKSZ(x)					((x) << 8)
+#define		RB_NO_UPDATE					(1 << 27)
+#define		RB_RPTR_WR_ENA					(1 << 31)
+#define		BUF_SWAP_32BIT					(2 << 16)
+#define	CP_RB_RPTR					0x8700
+#define	CP_RB_RPTR_ADDR					0xC10C
+#define	CP_RB_RPTR_ADDR_HI				0xC110
+#define	CP_RB_RPTR_WR					0xC108
+#define	CP_RB_WPTR					0xC114
+#define	CP_RB_WPTR_ADDR					0xC118
+#define	CP_RB_WPTR_ADDR_HI				0xC11C
+#define	CP_RB_WPTR_DELAY				0x8704
+#define	CP_SEM_WAIT_TIMER				0x85BC
+#define	CP_DEBUG					0xC1FC
+
+
+#define	GC_USER_SHADER_PIPE_CONFIG			0x8954
+#define		INACTIVE_QD_PIPES(x)				((x) << 8)
+#define		INACTIVE_QD_PIPES_MASK				0x0000FF00
+#define		INACTIVE_SIMDS(x)				((x) << 16)
+#define		INACTIVE_SIMDS_MASK				0x00FF0000
+
+#define	GRBM_CNTL					0x8000
+#define		GRBM_READ_TIMEOUT(x)				((x) << 0)
+#define	GRBM_SOFT_RESET					0x8020
+#define		SOFT_RESET_CP					(1 << 0)
+#define		SOFT_RESET_CB					(1 << 1)
+#define		SOFT_RESET_DB					(1 << 3)
+#define		SOFT_RESET_PA					(1 << 5)
+#define		SOFT_RESET_SC					(1 << 6)
+#define		SOFT_RESET_SPI					(1 << 8)
+#define		SOFT_RESET_SH					(1 << 9)
+#define		SOFT_RESET_SX					(1 << 10)
+#define		SOFT_RESET_TC					(1 << 11)
+#define		SOFT_RESET_TA					(1 << 12)
+#define		SOFT_RESET_VC					(1 << 13)
+#define		SOFT_RESET_VGT					(1 << 14)
+
+#define	GRBM_STATUS					0x8010
+#define		CMDFIFO_AVAIL_MASK				0x0000000F
+#define		SRBM_RQ_PENDING					(1 << 5)
+#define		CF_RQ_PENDING					(1 << 7)
+#define		PF_RQ_PENDING					(1 << 8)
+#define		GRBM_EE_BUSY					(1 << 10)
+#define		SX_CLEAN					(1 << 11)
+#define		DB_CLEAN					(1 << 12)
+#define		CB_CLEAN					(1 << 13)
+#define		TA_BUSY 					(1 << 14)
+#define		VGT_BUSY_NO_DMA					(1 << 16)
+#define		VGT_BUSY					(1 << 17)
+#define		SX_BUSY 					(1 << 20)
+#define		SH_BUSY 					(1 << 21)
+#define		SPI_BUSY					(1 << 22)
+#define		SC_BUSY 					(1 << 24)
+#define		PA_BUSY 					(1 << 25)
+#define		DB_BUSY 					(1 << 26)
+#define		CP_COHERENCY_BUSY      				(1 << 28)
+#define		CP_BUSY 					(1 << 29)
+#define		CB_BUSY 					(1 << 30)
+#define		GUI_ACTIVE					(1 << 31)
+#define	GRBM_STATUS_SE0					0x8014
+#define	GRBM_STATUS_SE1					0x8018
+#define		SE_SX_CLEAN					(1 << 0)
+#define		SE_DB_CLEAN					(1 << 1)
+#define		SE_CB_CLEAN					(1 << 2)
+#define		SE_TA_BUSY					(1 << 25)
+#define		SE_SX_BUSY					(1 << 26)
+#define		SE_SPI_BUSY					(1 << 27)
+#define		SE_SH_BUSY					(1 << 28)
+#define		SE_SC_BUSY					(1 << 29)
+#define		SE_DB_BUSY					(1 << 30)
+#define		SE_CB_BUSY					(1 << 31)
+
+#define	HDP_HOST_PATH_CNTL				0x2C00
+#define	HDP_NONSURFACE_BASE				0x2C04
+#define	HDP_NONSURFACE_INFO				0x2C08
+#define	HDP_NONSURFACE_SIZE				0x2C0C
+#define HDP_REG_COHERENCY_FLUSH_CNTL			0x54A0
+#define	HDP_TILING_CONFIG				0x2F3C
+
+#define MC_SHARED_CHMAP						0x2004
+#define		NOOFCHAN_SHIFT					12
+#define		NOOFCHAN_MASK					0x00003000
+
+#define	MC_ARB_RAMCFG					0x2760
+#define		NOOFBANK_SHIFT					0
+#define		NOOFBANK_MASK					0x00000003
+#define		NOOFRANK_SHIFT					2
+#define		NOOFRANK_MASK					0x00000004
+#define		NOOFROWS_SHIFT					3
+#define		NOOFROWS_MASK					0x00000038
+#define		NOOFCOLS_SHIFT					6
+#define		NOOFCOLS_MASK					0x000000C0
+#define		CHANSIZE_SHIFT					8
+#define		CHANSIZE_MASK					0x00000100
+#define		BURSTLENGTH_SHIFT				9
+#define		BURSTLENGTH_MASK				0x00000200
+#define		CHANSIZE_OVERRIDE				(1 << 11)
+#define	MC_VM_AGP_TOP					0x2028
+#define	MC_VM_AGP_BOT					0x202C
+#define	MC_VM_AGP_BASE					0x2030
+#define	MC_VM_FB_LOCATION				0x2024
+#define	MC_VM_MB_L1_TLB0_CNTL				0x2234
+#define	MC_VM_MB_L1_TLB1_CNTL				0x2238
+#define	MC_VM_MB_L1_TLB2_CNTL				0x223C
+#define	MC_VM_MB_L1_TLB3_CNTL				0x2240
+#define		ENABLE_L1_TLB					(1 << 0)
+#define		ENABLE_L1_FRAGMENT_PROCESSING			(1 << 1)
+#define		SYSTEM_ACCESS_MODE_PA_ONLY			(0 << 3)
+#define		SYSTEM_ACCESS_MODE_USE_SYS_MAP			(1 << 3)
+#define		SYSTEM_ACCESS_MODE_IN_SYS			(2 << 3)
+#define		SYSTEM_ACCESS_MODE_NOT_IN_SYS			(3 << 3)
+#define		SYSTEM_APERTURE_UNMAPPED_ACCESS_PASS_THRU	(0 << 5)
+#define		EFFECTIVE_L1_TLB_SIZE(x)			((x)<<15)
+#define		EFFECTIVE_L1_QUEUE_SIZE(x)			((x)<<18)
+#define	MC_VM_MD_L1_TLB0_CNTL				0x2654
+#define	MC_VM_MD_L1_TLB1_CNTL				0x2658
+#define	MC_VM_MD_L1_TLB2_CNTL				0x265C
+#define	MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR		0x203C
+#define	MC_VM_SYSTEM_APERTURE_HIGH_ADDR			0x2038
+#define	MC_VM_SYSTEM_APERTURE_LOW_ADDR			0x2034
+
+#define	PA_CL_ENHANCE					0x8A14
+#define		CLIP_VTX_REORDER_ENA				(1 << 0)
+#define		NUM_CLIP_SEQ(x)					((x) << 1)
+#define PA_SC_AA_CONFIG					0x28C04
+#define         MSAA_NUM_SAMPLES_SHIFT                  0
+#define         MSAA_NUM_SAMPLES_MASK                   0x3
+#define PA_SC_CLIPRECT_RULE				0x2820C
+#define	PA_SC_EDGERULE					0x28230
+#define	PA_SC_FIFO_SIZE					0x8BCC
+#define		SC_PRIM_FIFO_SIZE(x)				((x) << 0)
+#define		SC_HIZ_TILE_FIFO_SIZE(x)			((x) << 12)
+#define		SC_EARLYZ_TILE_FIFO_SIZE(x)			((x) << 20)
+#define	PA_SC_FORCE_EOV_MAX_CNTS			0x8B24
+#define		FORCE_EOV_MAX_CLK_CNT(x)			((x) << 0)
+#define		FORCE_EOV_MAX_REZ_CNT(x)			((x) << 16)
+#define PA_SC_LINE_STIPPLE				0x28A0C
+#define	PA_SC_LINE_STIPPLE_STATE			0x8B10
+
+#define	SCRATCH_REG0					0x8500
+#define	SCRATCH_REG1					0x8504
+#define	SCRATCH_REG2					0x8508
+#define	SCRATCH_REG3					0x850C
+#define	SCRATCH_REG4					0x8510
+#define	SCRATCH_REG5					0x8514
+#define	SCRATCH_REG6					0x8518
+#define	SCRATCH_REG7					0x851C
+#define	SCRATCH_UMSK					0x8540
+#define	SCRATCH_ADDR					0x8544
+
+#define	SMX_DC_CTL0					0xA020
+#define		USE_HASH_FUNCTION				(1 << 0)
+#define		NUMBER_OF_SETS(x)				((x) << 1)
+#define		FLUSH_ALL_ON_EVENT				(1 << 10)
+#define		STALL_ON_EVENT					(1 << 11)
+#define	SMX_EVENT_CTL					0xA02C
+#define		ES_FLUSH_CTL(x)					((x) << 0)
+#define		GS_FLUSH_CTL(x)					((x) << 3)
+#define		ACK_FLUSH_CTL(x)				((x) << 6)
+#define		SYNC_FLUSH_CTL					(1 << 8)
+
+#define	SPI_CONFIG_CNTL					0x9100
+#define		GPR_WRITE_PRIORITY(x)				((x) << 0)
+#define	SPI_CONFIG_CNTL_1				0x913C
+#define		VTX_DONE_DELAY(x)				((x) << 0)
+#define		INTERP_ONE_PRIM_PER_ROW				(1 << 4)
+#define	SPI_INPUT_Z					0x286D8
+#define	SPI_PS_IN_CONTROL_0				0x286CC
+#define		NUM_INTERP(x)					((x)<<0)
+#define		POSITION_ENA					(1<<8)
+#define		POSITION_CENTROID				(1<<9)
+#define		POSITION_ADDR(x)				((x)<<10)
+#define		PARAM_GEN(x)					((x)<<15)
+#define		PARAM_GEN_ADDR(x)				((x)<<19)
+#define		BARYC_SAMPLE_CNTL(x)				((x)<<26)
+#define		PERSP_GRADIENT_ENA				(1<<28)
+#define		LINEAR_GRADIENT_ENA				(1<<29)
+#define		POSITION_SAMPLE					(1<<30)
+#define		BARYC_AT_SAMPLE_ENA				(1<<31)
+
+#define	SQ_CONFIG					0x8C00
+#define		VC_ENABLE					(1 << 0)
+#define		EXPORT_SRC_C					(1 << 1)
+#define		CS_PRIO(x)					((x) << 18)
+#define		LS_PRIO(x)					((x) << 20)
+#define		HS_PRIO(x)					((x) << 22)
+#define		PS_PRIO(x)					((x) << 24)
+#define		VS_PRIO(x)					((x) << 26)
+#define		GS_PRIO(x)					((x) << 28)
+#define		ES_PRIO(x)					((x) << 30)
+#define	SQ_GPR_RESOURCE_MGMT_1				0x8C04
+#define		NUM_PS_GPRS(x)					((x) << 0)
+#define		NUM_VS_GPRS(x)					((x) << 16)
+#define		NUM_CLAUSE_TEMP_GPRS(x)				((x) << 28)
+#define	SQ_GPR_RESOURCE_MGMT_2				0x8C08
+#define		NUM_GS_GPRS(x)					((x) << 0)
+#define		NUM_ES_GPRS(x)					((x) << 16)
+#define	SQ_GPR_RESOURCE_MGMT_3				0x8C0C
+#define		NUM_HS_GPRS(x)					((x) << 0)
+#define		NUM_LS_GPRS(x)					((x) << 16)
+#define	SQ_THREAD_RESOURCE_MGMT				0x8C18
+#define		NUM_PS_THREADS(x)				((x) << 0)
+#define		NUM_VS_THREADS(x)				((x) << 8)
+#define		NUM_GS_THREADS(x)				((x) << 16)
+#define		NUM_ES_THREADS(x)				((x) << 24)
+#define	SQ_THREAD_RESOURCE_MGMT_2			0x8C1C
+#define		NUM_HS_THREADS(x)				((x) << 0)
+#define		NUM_LS_THREADS(x)				((x) << 8)
+#define	SQ_STACK_RESOURCE_MGMT_1			0x8C20
+#define		NUM_PS_STACK_ENTRIES(x)				((x) << 0)
+#define		NUM_VS_STACK_ENTRIES(x)				((x) << 16)
+#define	SQ_STACK_RESOURCE_MGMT_2			0x8C24
+#define		NUM_GS_STACK_ENTRIES(x)				((x) << 0)
+#define		NUM_ES_STACK_ENTRIES(x)				((x) << 16)
+#define	SQ_STACK_RESOURCE_MGMT_3			0x8C28
+#define		NUM_HS_STACK_ENTRIES(x)				((x) << 0)
+#define		NUM_LS_STACK_ENTRIES(x)				((x) << 16)
+#define	SQ_DYN_GPR_CNTL_PS_FLUSH_REQ    		0x8D8C
+#define	SQ_LDS_RESOURCE_MGMT    			0x8E2C
+
+#define	SQ_MS_FIFO_SIZES				0x8CF0
+#define		CACHE_FIFO_SIZE(x)				((x) << 0)
+#define		FETCH_FIFO_HIWATER(x)				((x) << 8)
+#define		DONE_FIFO_HIWATER(x)				((x) << 16)
+#define		ALU_UPDATE_FIFO_HIWATER(x)			((x) << 24)
+
+#define	SX_DEBUG_1					0x9058
+#define		ENABLE_NEW_SMX_ADDRESS				(1 << 16)
+#define	SX_EXPORT_BUFFER_SIZES				0x900C
+#define		COLOR_BUFFER_SIZE(x)				((x) << 0)
+#define		POSITION_BUFFER_SIZE(x)				((x) << 8)
+#define		SMX_BUFFER_SIZE(x)				((x) << 16)
+#define	SX_MISC						0x28350
+
+#define CB_PERF_CTR0_SEL_0				0x9A20
+#define CB_PERF_CTR0_SEL_1				0x9A24
+#define CB_PERF_CTR1_SEL_0				0x9A28
+#define CB_PERF_CTR1_SEL_1				0x9A2C
+#define CB_PERF_CTR2_SEL_0				0x9A30
+#define CB_PERF_CTR2_SEL_1				0x9A34
+#define CB_PERF_CTR3_SEL_0				0x9A38
+#define CB_PERF_CTR3_SEL_1				0x9A3C
+
+#define	TA_CNTL_AUX					0x9508
+#define		DISABLE_CUBE_WRAP				(1 << 0)
+#define		DISABLE_CUBE_ANISO				(1 << 1)
+#define		SYNC_GRADIENT					(1 << 24)
+#define		SYNC_WALKER					(1 << 25)
+#define		SYNC_ALIGNER					(1 << 26)
+
+#define	VGT_CACHE_INVALIDATION				0x88C4
+#define		CACHE_INVALIDATION(x)				((x) << 0)
+#define			VC_ONLY						0
+#define			TC_ONLY						1
+#define			VC_AND_TC					2
+#define		AUTO_INVLD_EN(x)				((x) << 6)
+#define			NO_AUTO						0
+#define			ES_AUTO						1
+#define			GS_AUTO						2
+#define			ES_AND_GS_AUTO					3
+#define	VGT_GS_VERTEX_REUSE				0x88D4
+#define	VGT_NUM_INSTANCES				0x8974
+#define	VGT_OUT_DEALLOC_CNTL				0x28C5C
+#define		DEALLOC_DIST_MASK				0x0000007F
+#define	VGT_VERTEX_REUSE_BLOCK_CNTL			0x28C58
+#define		VTX_REUSE_DEPTH_MASK				0x000000FF
+
+#define VM_CONTEXT0_CNTL				0x1410
+#define		ENABLE_CONTEXT					(1 << 0)
+#define		PAGE_TABLE_DEPTH(x)				(((x) & 3) << 1)
+#define		RANGE_PROTECTION_FAULT_ENABLE_DEFAULT		(1 << 4)
+#define VM_CONTEXT1_CNTL				0x1414
+#define	VM_CONTEXT0_PAGE_TABLE_BASE_ADDR		0x153C
+#define	VM_CONTEXT0_PAGE_TABLE_END_ADDR			0x157C
+#define	VM_CONTEXT0_PAGE_TABLE_START_ADDR		0x155C
+#define VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR	0x1518
+#define VM_CONTEXT0_REQUEST_RESPONSE			0x1470
+#define		REQUEST_TYPE(x)					(((x) & 0xf) << 0)
+#define		RESPONSE_TYPE_MASK				0x000000F0
+#define		RESPONSE_TYPE_SHIFT				4
+#define VM_L2_CNTL					0x1400
+#define		ENABLE_L2_CACHE					(1 << 0)
+#define		ENABLE_L2_FRAGMENT_PROCESSING			(1 << 1)
+#define		ENABLE_L2_PTE_CACHE_LRU_UPDATE_BY_WRITE		(1 << 9)
+#define		EFFECTIVE_L2_QUEUE_SIZE(x)			(((x) & 7) << 14)
+#define VM_L2_CNTL2					0x1404
+#define		INVALIDATE_ALL_L1_TLBS				(1 << 0)
+#define		INVALIDATE_L2_CACHE				(1 << 1)
+#define VM_L2_CNTL3					0x1408
+#define		BANK_SELECT(x)					((x) << 0)
+#define		CACHE_UPDATE_MODE(x)				((x) << 6)
+#define	VM_L2_STATUS					0x140C
+#define		L2_BUSY						(1 << 0)
+
+#define	WAIT_UNTIL					0x8040
+
+#define	SRBM_STATUS				        0x0E50
+#define	SRBM_SOFT_RESET				        0x0E60
+#define		SRBM_SOFT_RESET_ALL_MASK    	       	0x00FEEFA6
+#define		SOFT_RESET_BIF				(1 << 1)
+#define		SOFT_RESET_CG				(1 << 2)
+#define		SOFT_RESET_DC				(1 << 5)
+#define		SOFT_RESET_GRBM				(1 << 8)
+#define		SOFT_RESET_HDP				(1 << 9)
+#define		SOFT_RESET_IH				(1 << 10)
+#define		SOFT_RESET_MC				(1 << 11)
+#define		SOFT_RESET_RLC				(1 << 13)
+#define		SOFT_RESET_ROM				(1 << 14)
+#define		SOFT_RESET_SEM				(1 << 15)
+#define		SOFT_RESET_VMC				(1 << 17)
+#define		SOFT_RESET_TST				(1 << 21)
+#define		SOFT_RESET_REGBB		       	(1 << 22)
+#define		SOFT_RESET_ORB				(1 << 23)
+
+#define IH_RB_CNTL                                        0x3e00
+#       define IH_RB_ENABLE                               (1 << 0)
+#       define IH_IB_SIZE(x)                              ((x) << 1) /* log2 */
+#       define IH_RB_FULL_DRAIN_ENABLE                    (1 << 6)
+#       define IH_WPTR_WRITEBACK_ENABLE                   (1 << 8)
+#       define IH_WPTR_WRITEBACK_TIMER(x)                 ((x) << 9) /* log2 */
+#       define IH_WPTR_OVERFLOW_ENABLE                    (1 << 16)
+#       define IH_WPTR_OVERFLOW_CLEAR                     (1 << 31)
+#define IH_RB_BASE                                        0x3e04
+#define IH_RB_RPTR                                        0x3e08
+#define IH_RB_WPTR                                        0x3e0c
+#       define RB_OVERFLOW                                (1 << 0)
+#       define WPTR_OFFSET_MASK                           0x3fffc
+#define IH_RB_WPTR_ADDR_HI                                0x3e10
+#define IH_RB_WPTR_ADDR_LO                                0x3e14
+#define IH_CNTL                                           0x3e18
+#       define ENABLE_INTR                                (1 << 0)
+#       define IH_MC_SWAP(x)                              ((x) << 2)
+#       define IH_MC_SWAP_NONE                            0
+#       define IH_MC_SWAP_16BIT                           1
+#       define IH_MC_SWAP_32BIT                           2
+#       define IH_MC_SWAP_64BIT                           3
+#       define RPTR_REARM                                 (1 << 4)
+#       define MC_WRREQ_CREDIT(x)                         ((x) << 15)
+#       define MC_WR_CLEAN_CNT(x)                         ((x) << 20)
+
+#define CP_INT_CNTL                                     0xc124
+#       define CNTX_BUSY_INT_ENABLE                     (1 << 19)
+#       define CNTX_EMPTY_INT_ENABLE                    (1 << 20)
+#       define SCRATCH_INT_ENABLE                       (1 << 25)
+#       define TIME_STAMP_INT_ENABLE                    (1 << 26)
+#       define IB2_INT_ENABLE                           (1 << 29)
+#       define IB1_INT_ENABLE                           (1 << 30)
+#       define RB_INT_ENABLE                            (1 << 31)
+#define CP_INT_STATUS                                   0xc128
+#       define SCRATCH_INT_STAT                         (1 << 25)
+#       define TIME_STAMP_INT_STAT                      (1 << 26)
+#       define IB2_INT_STAT                             (1 << 29)
+#       define IB1_INT_STAT                             (1 << 30)
+#       define RB_INT_STAT                              (1 << 31)
+
+#define GRBM_INT_CNTL                                   0x8060
+#       define RDERR_INT_ENABLE                         (1 << 0)
+#       define GUI_IDLE_INT_ENABLE                      (1 << 19)
+
+/* 0x6e98, 0x7a98, 0x10698, 0x11298, 0x11e98, 0x12a98 */
+#define CRTC_STATUS_FRAME_COUNT                         0x6e98
+
+/* 0x6bb8, 0x77b8, 0x103b8, 0x10fb8, 0x11bb8, 0x127b8 */
+#define VLINE_STATUS                                    0x6bb8
+#       define VLINE_OCCURRED                           (1 << 0)
+#       define VLINE_ACK                                (1 << 4)
+#       define VLINE_STAT                               (1 << 12)
+#       define VLINE_INTERRUPT                          (1 << 16)
+#       define VLINE_INTERRUPT_TYPE                     (1 << 17)
+/* 0x6bbc, 0x77bc, 0x103bc, 0x10fbc, 0x11bbc, 0x127bc */
+#define VBLANK_STATUS                                   0x6bbc
+#       define VBLANK_OCCURRED                          (1 << 0)
+#       define VBLANK_ACK                               (1 << 4)
+#       define VBLANK_STAT                              (1 << 12)
+#       define VBLANK_INTERRUPT                         (1 << 16)
+#       define VBLANK_INTERRUPT_TYPE                    (1 << 17)
+
+/* 0x6b40, 0x7740, 0x10340, 0x10f40, 0x11b40, 0x12740 */
+#define INT_MASK                                        0x6b40
+#       define VBLANK_INT_MASK                          (1 << 0)
+#       define VLINE_INT_MASK                           (1 << 4)
+
+#define DISP_INTERRUPT_STATUS                           0x60f4
+#       define LB_D1_VLINE_INTERRUPT                    (1 << 2)
+#       define LB_D1_VBLANK_INTERRUPT                   (1 << 3)
+#       define DC_HPD1_INTERRUPT                        (1 << 17)
+#       define DC_HPD1_RX_INTERRUPT                     (1 << 18)
+#       define DACA_AUTODETECT_INTERRUPT                (1 << 22)
+#       define DACB_AUTODETECT_INTERRUPT                (1 << 23)
+#       define DC_I2C_SW_DONE_INTERRUPT                 (1 << 24)
+#       define DC_I2C_HW_DONE_INTERRUPT                 (1 << 25)
+#define DISP_INTERRUPT_STATUS_CONTINUE                  0x60f8
+#       define LB_D2_VLINE_INTERRUPT                    (1 << 2)
+#       define LB_D2_VBLANK_INTERRUPT                   (1 << 3)
+#       define DC_HPD2_INTERRUPT                        (1 << 17)
+#       define DC_HPD2_RX_INTERRUPT                     (1 << 18)
+#       define DISP_TIMER_INTERRUPT                     (1 << 24)
+#define DISP_INTERRUPT_STATUS_CONTINUE2                 0x60fc
+#       define LB_D3_VLINE_INTERRUPT                    (1 << 2)
+#       define LB_D3_VBLANK_INTERRUPT                   (1 << 3)
+#       define DC_HPD3_INTERRUPT                        (1 << 17)
+#       define DC_HPD3_RX_INTERRUPT                     (1 << 18)
+#define DISP_INTERRUPT_STATUS_CONTINUE3                 0x6100
+#       define LB_D4_VLINE_INTERRUPT                    (1 << 2)
+#       define LB_D4_VBLANK_INTERRUPT                   (1 << 3)
+#       define DC_HPD4_INTERRUPT                        (1 << 17)
+#       define DC_HPD4_RX_INTERRUPT                     (1 << 18)
+#define DISP_INTERRUPT_STATUS_CONTINUE4                 0x614c
+#       define LB_D5_VLINE_INTERRUPT                    (1 << 2)
+#       define LB_D5_VBLANK_INTERRUPT                   (1 << 3)
+#       define DC_HPD5_INTERRUPT                        (1 << 17)
+#       define DC_HPD5_RX_INTERRUPT                     (1 << 18)
+#define DISP_INTERRUPT_STATUS_CONTINUE5                 0x6150
+#       define LB_D6_VLINE_INTERRUPT                    (1 << 2)
+#       define LB_D6_VBLANK_INTERRUPT                   (1 << 3)
+#       define DC_HPD6_INTERRUPT                        (1 << 17)
+#       define DC_HPD6_RX_INTERRUPT                     (1 << 18)
+
+/* 0x6858, 0x7458, 0x10058, 0x10c58, 0x11858, 0x12458 */
+#define GRPH_INT_STATUS                                 0x6858
+#       define GRPH_PFLIP_INT_OCCURRED                  (1 << 0)
+#       define GRPH_PFLIP_INT_CLEAR                     (1 << 8)
+/* 0x685c, 0x745c, 0x1005c, 0x10c5c, 0x1185c, 0x1245c */
+#define	GRPH_INT_CONTROL			        0x685c
+#       define GRPH_PFLIP_INT_MASK                      (1 << 0)
+#       define GRPH_PFLIP_INT_TYPE                      (1 << 8)
+
+#define	DACA_AUTODETECT_INT_CONTROL			0x66c8
+#define	DACB_AUTODETECT_INT_CONTROL			0x67c8
+
+#define DC_HPD1_INT_STATUS                              0x601c
+#define DC_HPD2_INT_STATUS                              0x6028
+#define DC_HPD3_INT_STATUS                              0x6034
+#define DC_HPD4_INT_STATUS                              0x6040
+#define DC_HPD5_INT_STATUS                              0x604c
+#define DC_HPD6_INT_STATUS                              0x6058
+#       define DC_HPDx_INT_STATUS                       (1 << 0)
+#       define DC_HPDx_SENSE                            (1 << 1)
+#       define DC_HPDx_RX_INT_STATUS                    (1 << 8)
+
+#define DC_HPD1_INT_CONTROL                             0x6020
+#define DC_HPD2_INT_CONTROL                             0x602c
+#define DC_HPD3_INT_CONTROL                             0x6038
+#define DC_HPD4_INT_CONTROL                             0x6044
+#define DC_HPD5_INT_CONTROL                             0x6050
+#define DC_HPD6_INT_CONTROL                             0x605c
+#       define DC_HPDx_INT_ACK                          (1 << 0)
+#       define DC_HPDx_INT_POLARITY                     (1 << 8)
+#       define DC_HPDx_INT_EN                           (1 << 16)
+#       define DC_HPDx_RX_INT_ACK                       (1 << 20)
+#       define DC_HPDx_RX_INT_EN                        (1 << 24)
+
+#define DC_HPD1_CONTROL                                   0x6024
+#define DC_HPD2_CONTROL                                   0x6030
+#define DC_HPD3_CONTROL                                   0x603c
+#define DC_HPD4_CONTROL                                   0x6048
+#define DC_HPD5_CONTROL                                   0x6054
+#define DC_HPD6_CONTROL                                   0x6060
+#       define DC_HPDx_CONNECTION_TIMER(x)                ((x) << 0)
+#       define DC_HPDx_RX_INT_TIMER(x)                    ((x) << 16)
+#       define DC_HPDx_EN                                 (1 << 28)
+
+/*
+ * PM4
+ */
+#define	PACKET_TYPE0	0
+#define	PACKET_TYPE1	1
+#define	PACKET_TYPE2	2
+#define	PACKET_TYPE3	3
+
+#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
+#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
+#define CP_PACKET0_GET_REG(h) (((h) & 0xFFFF) << 2)
+#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
+#define PACKET0(reg, n)	((PACKET_TYPE0 << 30) |				\
+			 (((reg) >> 2) & 0xFFFF) |			\
+			 ((n) & 0x3FFF) << 16)
+#define CP_PACKET2			0x80000000
+#define		PACKET2_PAD_SHIFT		0
+#define		PACKET2_PAD_MASK		(0x3fffffff << 0)
+
+#define PACKET2(v)	(CP_PACKET2 | REG_SET(PACKET2_PAD, (v)))
+
+#define PACKET3(op, n)	((PACKET_TYPE3 << 30) |				\
+			 (((op) & 0xFF) << 8) |				\
+			 ((n) & 0x3FFF) << 16)
+
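+/*
+ * Illustrative sketch, not part of the original patch: a single context
+ * register write is emitted as PACKET3(PACKET3_SET_CONTEXT_REG, 1)
+ * followed by the register offset in dwords from
+ * PACKET3_SET_CONTEXT_REG_START (both defined below) and the value; the
+ * CS checker above inverts this with
+ * "start_reg = (idx_value << 2) + PACKET3_SET_CONTEXT_REG_START".
+ * emit_set_context_reg() is a hypothetical helper, not a driver API.
+ */
+#if 0	/* example only */
+static unsigned emit_set_context_reg(u32 *buf, u32 reg, u32 value)
+{
+	buf[0] = PACKET3(PACKET3_SET_CONTEXT_REG, 1);
+	buf[1] = (reg - PACKET3_SET_CONTEXT_REG_START) >> 2;
+	buf[2] = value;
+	return 3;	/* dwords written */
+}
+#endif
+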
+/* Packet 3 types */
+#define	PACKET3_NOP					0x10
+#define	PACKET3_SET_BASE				0x11
+#define	PACKET3_CLEAR_STATE				0x12
+#define	PACKET3_INDIRECT_BUFFER_SIZE			0x13
+#define	PACKET3_DISPATCH_DIRECT				0x15
+#define	PACKET3_DISPATCH_INDIRECT			0x16
+#define	PACKET3_INDIRECT_BUFFER_END			0x17
+#define	PACKET3_SET_PREDICATION				0x20
+#define	PACKET3_REG_RMW					0x21
+#define	PACKET3_COND_EXEC				0x22
+#define	PACKET3_PRED_EXEC				0x23
+#define	PACKET3_DRAW_INDIRECT				0x24
+#define	PACKET3_DRAW_INDEX_INDIRECT			0x25
+#define	PACKET3_INDEX_BASE				0x26
+#define	PACKET3_DRAW_INDEX_2				0x27
+#define	PACKET3_CONTEXT_CONTROL				0x28
+#define	PACKET3_DRAW_INDEX_OFFSET			0x29
+#define	PACKET3_INDEX_TYPE				0x2A
+#define	PACKET3_DRAW_INDEX				0x2B
+#define	PACKET3_DRAW_INDEX_AUTO				0x2D
+#define	PACKET3_DRAW_INDEX_IMMD				0x2E
+#define	PACKET3_NUM_INSTANCES				0x2F
+#define	PACKET3_DRAW_INDEX_MULTI_AUTO			0x30
+#define	PACKET3_STRMOUT_BUFFER_UPDATE			0x34
+#define	PACKET3_DRAW_INDEX_OFFSET_2			0x35
+#define	PACKET3_DRAW_INDEX_MULTI_ELEMENT		0x36
+#define	PACKET3_MEM_SEMAPHORE				0x39
+#define	PACKET3_MPEG_INDEX				0x3A
+#define	PACKET3_WAIT_REG_MEM				0x3C
+#define	PACKET3_MEM_WRITE				0x3D
+#define	PACKET3_INDIRECT_BUFFER				0x32
+#define	PACKET3_SURFACE_SYNC				0x43
+#              define PACKET3_CB0_DEST_BASE_ENA    (1 << 6)
+#              define PACKET3_CB1_DEST_BASE_ENA    (1 << 7)
+#              define PACKET3_CB2_DEST_BASE_ENA    (1 << 8)
+#              define PACKET3_CB3_DEST_BASE_ENA    (1 << 9)
+#              define PACKET3_CB4_DEST_BASE_ENA    (1 << 10)
+#              define PACKET3_CB5_DEST_BASE_ENA    (1 << 11)
+#              define PACKET3_CB6_DEST_BASE_ENA    (1 << 12)
+#              define PACKET3_CB7_DEST_BASE_ENA    (1 << 13)
+#              define PACKET3_DB_DEST_BASE_ENA     (1 << 14)
+#              define PACKET3_CB8_DEST_BASE_ENA    (1 << 15)
+#              define PACKET3_CB9_DEST_BASE_ENA    (1 << 16)
+#              define PACKET3_CB10_DEST_BASE_ENA   (1 << 17)
+#              define PACKET3_CB11_DEST_BASE_ENA   (1 << 18)
+#              define PACKET3_FULL_CACHE_ENA       (1 << 20)
+#              define PACKET3_TC_ACTION_ENA        (1 << 23)
+#              define PACKET3_VC_ACTION_ENA        (1 << 24)
+#              define PACKET3_CB_ACTION_ENA        (1 << 25)
+#              define PACKET3_DB_ACTION_ENA        (1 << 26)
+#              define PACKET3_SH_ACTION_ENA        (1 << 27)
+#              define PACKET3_SMX_ACTION_ENA       (1 << 28)
+#define	PACKET3_ME_INITIALIZE				0x44
+#define		PACKET3_ME_INITIALIZE_DEVICE_ID(x) ((x) << 16)
+#define	PACKET3_COND_WRITE				0x45
+#define	PACKET3_EVENT_WRITE				0x46
+#define	PACKET3_EVENT_WRITE_EOP				0x47
+#define	PACKET3_EVENT_WRITE_EOS				0x48
+#define	PACKET3_PREAMBLE_CNTL				0x4A
+#define	PACKET3_RB_OFFSET				0x4B
+#define	PACKET3_ALU_PS_CONST_BUFFER_COPY		0x4C
+#define	PACKET3_ALU_VS_CONST_BUFFER_COPY		0x4D
+#define	PACKET3_ALU_PS_CONST_UPDATE		        0x4E
+#define	PACKET3_ALU_VS_CONST_UPDATE		        0x4F
+#define	PACKET3_ONE_REG_WRITE				0x57
+#define	PACKET3_SET_CONFIG_REG				0x68
+#define		PACKET3_SET_CONFIG_REG_START			0x00008000
+#define		PACKET3_SET_CONFIG_REG_END			0x0000ac00
+#define	PACKET3_SET_CONTEXT_REG				0x69
+#define		PACKET3_SET_CONTEXT_REG_START			0x00028000
+#define		PACKET3_SET_CONTEXT_REG_END			0x00029000
+#define	PACKET3_SET_ALU_CONST				0x6A
+/* alu const buffers only; no reg file */
+#define	PACKET3_SET_BOOL_CONST				0x6B
+#define		PACKET3_SET_BOOL_CONST_START			0x0003a500
+#define		PACKET3_SET_BOOL_CONST_END			0x0003a518
+#define	PACKET3_SET_LOOP_CONST				0x6C
+#define		PACKET3_SET_LOOP_CONST_START			0x0003a200
+#define		PACKET3_SET_LOOP_CONST_END			0x0003a500
+#define	PACKET3_SET_RESOURCE				0x6D
+#define		PACKET3_SET_RESOURCE_START			0x00030000
+#define		PACKET3_SET_RESOURCE_END			0x00038000
+#define	PACKET3_SET_SAMPLER				0x6E
+#define		PACKET3_SET_SAMPLER_START			0x0003c000
+#define		PACKET3_SET_SAMPLER_END				0x0003c600
+#define	PACKET3_SET_CTL_CONST				0x6F
+#define		PACKET3_SET_CTL_CONST_START			0x0003cff0
+#define		PACKET3_SET_CTL_CONST_END			0x0003ff0c
+#define	PACKET3_SET_RESOURCE_OFFSET			0x70
+#define	PACKET3_SET_ALU_CONST_VS			0x71
+#define	PACKET3_SET_ALU_CONST_DI			0x72
+#define	PACKET3_SET_CONTEXT_REG_INDIRECT		0x73
+#define	PACKET3_SET_RESOURCE_INDIRECT			0x74
+#define	PACKET3_SET_APPEND_CNT			        0x75
+
+#define	SQ_RESOURCE_CONSTANT_WORD7_0				0x3001c
+#define		S__SQ_CONSTANT_TYPE(x)			(((x) & 3) << 30)
+#define		G__SQ_CONSTANT_TYPE(x)			(((x) >> 30) & 3)
+#define			SQ_TEX_VTX_INVALID_TEXTURE			0x0
+#define			SQ_TEX_VTX_INVALID_BUFFER			0x1
+#define			SQ_TEX_VTX_VALID_TEXTURE			0x2
+#define			SQ_TEX_VTX_VALID_BUFFER				0x3
+
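+/*
+ * Illustrative sketch, not part of the original patch: the SET_RESOURCE
+ * handler in evergreen_cs.c reads dword 7 of each 8-dword resource
+ * descriptor and classifies it with G__SQ_CONSTANT_TYPE(); only the two
+ * VALID types are accepted.  resource_type_ok() is a hypothetical helper.
+ */
+#if 0	/* example only */
+static bool resource_type_ok(u32 word7)
+{
+	u32 type = G__SQ_CONSTANT_TYPE(word7);
+
+	return type == SQ_TEX_VTX_VALID_TEXTURE ||
+	       type == SQ_TEX_VTX_VALID_BUFFER;
+}
+#endif
+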
+#define SQ_CONST_MEM_BASE				0x8df8
+
+#define SQ_ESGS_RING_SIZE				0x8c44
+#define SQ_GSVS_RING_SIZE				0x8c4c
+#define SQ_ESTMP_RING_SIZE				0x8c54
+#define SQ_GSTMP_RING_SIZE				0x8c5c
+#define SQ_VSTMP_RING_SIZE				0x8c64
+#define SQ_PSTMP_RING_SIZE				0x8c6c
+#define SQ_LSTMP_RING_SIZE				0x8e14
+#define SQ_HSTMP_RING_SIZE				0x8e1c
+#define VGT_TF_RING_SIZE				0x8988
+
+#define SQ_ESGS_RING_ITEMSIZE				0x28900
+#define SQ_GSVS_RING_ITEMSIZE				0x28904
+#define SQ_ESTMP_RING_ITEMSIZE				0x28908
+#define SQ_GSTMP_RING_ITEMSIZE				0x2890c
+#define SQ_VSTMP_RING_ITEMSIZE				0x28910
+#define SQ_PSTMP_RING_ITEMSIZE				0x28914
+#define SQ_LSTMP_RING_ITEMSIZE				0x28830
+#define SQ_HSTMP_RING_ITEMSIZE				0x28834
+
+#define SQ_GS_VERT_ITEMSIZE				0x2891c
+#define SQ_GS_VERT_ITEMSIZE_1				0x28920
+#define SQ_GS_VERT_ITEMSIZE_2				0x28924
+#define SQ_GS_VERT_ITEMSIZE_3				0x28928
+#define SQ_GSVS_RING_OFFSET_1				0x2892c
+#define SQ_GSVS_RING_OFFSET_2				0x28930
+#define SQ_GSVS_RING_OFFSET_3				0x28934
+
+#define SQ_ALU_CONST_CACHE_PS_0				0x28940
+#define SQ_ALU_CONST_CACHE_PS_1				0x28944
+#define SQ_ALU_CONST_CACHE_PS_2				0x28948
+#define SQ_ALU_CONST_CACHE_PS_3				0x2894c
+#define SQ_ALU_CONST_CACHE_PS_4				0x28950
+#define SQ_ALU_CONST_CACHE_PS_5				0x28954
+#define SQ_ALU_CONST_CACHE_PS_6				0x28958
+#define SQ_ALU_CONST_CACHE_PS_7				0x2895c
+#define SQ_ALU_CONST_CACHE_PS_8				0x28960
+#define SQ_ALU_CONST_CACHE_PS_9				0x28964
+#define SQ_ALU_CONST_CACHE_PS_10			0x28968
+#define SQ_ALU_CONST_CACHE_PS_11			0x2896c
+#define SQ_ALU_CONST_CACHE_PS_12			0x28970
+#define SQ_ALU_CONST_CACHE_PS_13			0x28974
+#define SQ_ALU_CONST_CACHE_PS_14			0x28978
+#define SQ_ALU_CONST_CACHE_PS_15			0x2897c
+#define SQ_ALU_CONST_CACHE_VS_0				0x28980
+#define SQ_ALU_CONST_CACHE_VS_1				0x28984
+#define SQ_ALU_CONST_CACHE_VS_2				0x28988
+#define SQ_ALU_CONST_CACHE_VS_3				0x2898c
+#define SQ_ALU_CONST_CACHE_VS_4				0x28990
+#define SQ_ALU_CONST_CACHE_VS_5				0x28994
+#define SQ_ALU_CONST_CACHE_VS_6				0x28998
+#define SQ_ALU_CONST_CACHE_VS_7				0x2899c
+#define SQ_ALU_CONST_CACHE_VS_8				0x289a0
+#define SQ_ALU_CONST_CACHE_VS_9				0x289a4
+#define SQ_ALU_CONST_CACHE_VS_10			0x289a8
+#define SQ_ALU_CONST_CACHE_VS_11			0x289ac
+#define SQ_ALU_CONST_CACHE_VS_12			0x289b0
+#define SQ_ALU_CONST_CACHE_VS_13			0x289b4
+#define SQ_ALU_CONST_CACHE_VS_14			0x289b8
+#define SQ_ALU_CONST_CACHE_VS_15			0x289bc
+#define SQ_ALU_CONST_CACHE_GS_0				0x289c0
+#define SQ_ALU_CONST_CACHE_GS_1				0x289c4
+#define SQ_ALU_CONST_CACHE_GS_2				0x289c8
+#define SQ_ALU_CONST_CACHE_GS_3				0x289cc
+#define SQ_ALU_CONST_CACHE_GS_4				0x289d0
+#define SQ_ALU_CONST_CACHE_GS_5				0x289d4
+#define SQ_ALU_CONST_CACHE_GS_6				0x289d8
+#define SQ_ALU_CONST_CACHE_GS_7				0x289dc
+#define SQ_ALU_CONST_CACHE_GS_8				0x289e0
+#define SQ_ALU_CONST_CACHE_GS_9				0x289e4
+#define SQ_ALU_CONST_CACHE_GS_10			0x289e8
+#define SQ_ALU_CONST_CACHE_GS_11			0x289ec
+#define SQ_ALU_CONST_CACHE_GS_12			0x289f0
+#define SQ_ALU_CONST_CACHE_GS_13			0x289f4
+#define SQ_ALU_CONST_CACHE_GS_14			0x289f8
+#define SQ_ALU_CONST_CACHE_GS_15			0x289fc
+#define SQ_ALU_CONST_CACHE_HS_0				0x28f00
+#define SQ_ALU_CONST_CACHE_HS_1				0x28f04
+#define SQ_ALU_CONST_CACHE_HS_2				0x28f08
+#define SQ_ALU_CONST_CACHE_HS_3				0x28f0c
+#define SQ_ALU_CONST_CACHE_HS_4				0x28f10
+#define SQ_ALU_CONST_CACHE_HS_5				0x28f14
+#define SQ_ALU_CONST_CACHE_HS_6				0x28f18
+#define SQ_ALU_CONST_CACHE_HS_7				0x28f1c
+#define SQ_ALU_CONST_CACHE_HS_8				0x28f20
+#define SQ_ALU_CONST_CACHE_HS_9				0x28f24
+#define SQ_ALU_CONST_CACHE_HS_10			0x28f28
+#define SQ_ALU_CONST_CACHE_HS_11			0x28f2c
+#define SQ_ALU_CONST_CACHE_HS_12			0x28f30
+#define SQ_ALU_CONST_CACHE_HS_13			0x28f34
+#define SQ_ALU_CONST_CACHE_HS_14			0x28f38
+#define SQ_ALU_CONST_CACHE_HS_15			0x28f3c
+#define SQ_ALU_CONST_CACHE_LS_0				0x28f40
+#define SQ_ALU_CONST_CACHE_LS_1				0x28f44
+#define SQ_ALU_CONST_CACHE_LS_2				0x28f48
+#define SQ_ALU_CONST_CACHE_LS_3				0x28f4c
+#define SQ_ALU_CONST_CACHE_LS_4				0x28f50
+#define SQ_ALU_CONST_CACHE_LS_5				0x28f54
+#define SQ_ALU_CONST_CACHE_LS_6				0x28f58
+#define SQ_ALU_CONST_CACHE_LS_7				0x28f5c
+#define SQ_ALU_CONST_CACHE_LS_8				0x28f60
+#define SQ_ALU_CONST_CACHE_LS_9				0x28f64
+#define SQ_ALU_CONST_CACHE_LS_10			0x28f68
+#define SQ_ALU_CONST_CACHE_LS_11			0x28f6c
+#define SQ_ALU_CONST_CACHE_LS_12			0x28f70
+#define SQ_ALU_CONST_CACHE_LS_13			0x28f74
+#define SQ_ALU_CONST_CACHE_LS_14			0x28f78
+#define SQ_ALU_CONST_CACHE_LS_15			0x28f7c
+
+#define DB_DEPTH_CONTROL				0x28800
+#define DB_DEPTH_VIEW					0x28008
+#define DB_HTILE_DATA_BASE				0x28014
+#define DB_Z_INFO					0x28040
+#       define Z_ARRAY_MODE(x)                          ((x) << 4)
+#define DB_STENCIL_INFO					0x28044
+#define DB_Z_READ_BASE					0x28048
+#define DB_STENCIL_READ_BASE				0x2804c
+#define DB_Z_WRITE_BASE					0x28050
+#define DB_STENCIL_WRITE_BASE				0x28054
+#define DB_DEPTH_SIZE					0x28058
+
+#define SQ_PGM_START_PS					0x28840
+#define SQ_PGM_START_VS					0x2885c
+#define SQ_PGM_START_GS					0x28874
+#define SQ_PGM_START_ES					0x2888c
+#define SQ_PGM_START_FS					0x288a4
+#define SQ_PGM_START_HS					0x288b8
+#define SQ_PGM_START_LS					0x288d0
+
+#define VGT_STRMOUT_CONFIG				0x28b94
+#define VGT_STRMOUT_BUFFER_CONFIG			0x28b98
+
+#define CB_TARGET_MASK					0x28238
+#define CB_SHADER_MASK					0x2823c
+
+#define GDS_ADDR_BASE					0x28720
+
+#define	CB_IMMED0_BASE					0x28b9c
+#define	CB_IMMED1_BASE					0x28ba0
+#define	CB_IMMED2_BASE					0x28ba4
+#define	CB_IMMED3_BASE					0x28ba8
+#define	CB_IMMED4_BASE					0x28bac
+#define	CB_IMMED5_BASE					0x28bb0
+#define	CB_IMMED6_BASE					0x28bb4
+#define	CB_IMMED7_BASE					0x28bb8
+#define	CB_IMMED8_BASE					0x28bbc
+#define	CB_IMMED9_BASE					0x28bc0
+#define	CB_IMMED10_BASE					0x28bc4
+#define	CB_IMMED11_BASE					0x28bc8
+
+/* all 12 CB blocks have these regs */
+#define	CB_COLOR0_BASE					0x28c60
+#define	CB_COLOR0_PITCH					0x28c64
+#define	CB_COLOR0_SLICE					0x28c68
+#define	CB_COLOR0_VIEW					0x28c6c
+#define	CB_COLOR0_INFO					0x28c70
+#       define CB_ARRAY_MODE(x)                         ((x) << 8)
+#       define ARRAY_LINEAR_GENERAL                     0
+#       define ARRAY_LINEAR_ALIGNED                     1
+#       define ARRAY_1D_TILED_THIN1                     2
+#       define ARRAY_2D_TILED_THIN1                     4
+#define	CB_COLOR0_ATTRIB				0x28c74
+#define	CB_COLOR0_DIM					0x28c78
+/* only CB0-7 blocks have these regs */
+#define	CB_COLOR0_CMASK					0x28c7c
+#define	CB_COLOR0_CMASK_SLICE				0x28c80
+#define	CB_COLOR0_FMASK					0x28c84
+#define	CB_COLOR0_FMASK_SLICE				0x28c88
+#define	CB_COLOR0_CLEAR_WORD0				0x28c8c
+#define	CB_COLOR0_CLEAR_WORD1				0x28c90
+#define	CB_COLOR0_CLEAR_WORD2				0x28c94
+#define	CB_COLOR0_CLEAR_WORD3				0x28c98
+
+#define	CB_COLOR1_BASE					0x28c9c
+#define	CB_COLOR2_BASE					0x28cd8
+#define	CB_COLOR3_BASE					0x28d14
+#define	CB_COLOR4_BASE					0x28d50
+#define	CB_COLOR5_BASE					0x28d8c
+#define	CB_COLOR6_BASE					0x28dc8
+#define	CB_COLOR7_BASE					0x28e04
+#define	CB_COLOR8_BASE					0x28e40
+#define	CB_COLOR9_BASE					0x28e5c
+#define	CB_COLOR10_BASE					0x28e78
+#define	CB_COLOR11_BASE					0x28e94
+
+#define	CB_COLOR1_PITCH					0x28ca0
+#define	CB_COLOR2_PITCH					0x28cdc
+#define	CB_COLOR3_PITCH					0x28d18
+#define	CB_COLOR4_PITCH					0x28d54
+#define	CB_COLOR5_PITCH					0x28d90
+#define	CB_COLOR6_PITCH					0x28dcc
+#define	CB_COLOR7_PITCH					0x28e08
+#define	CB_COLOR8_PITCH					0x28e44
+#define	CB_COLOR9_PITCH					0x28e60
+#define	CB_COLOR10_PITCH				0x28e7c
+#define	CB_COLOR11_PITCH				0x28e98
+
+#define	CB_COLOR1_SLICE					0x28ca4
+#define	CB_COLOR2_SLICE					0x28ce0
+#define	CB_COLOR3_SLICE					0x28d1c
+#define	CB_COLOR4_SLICE					0x28d58
+#define	CB_COLOR5_SLICE					0x28d94
+#define	CB_COLOR6_SLICE					0x28dd0
+#define	CB_COLOR7_SLICE					0x28e0c
+#define	CB_COLOR8_SLICE					0x28e48
+#define	CB_COLOR9_SLICE					0x28e64
+#define	CB_COLOR10_SLICE				0x28e80
+#define	CB_COLOR11_SLICE				0x28e9c
+
+#define	CB_COLOR1_VIEW					0x28ca8
+#define	CB_COLOR2_VIEW					0x28ce4
+#define	CB_COLOR3_VIEW					0x28d20
+#define	CB_COLOR4_VIEW					0x28d5c
+#define	CB_COLOR5_VIEW					0x28d98
+#define	CB_COLOR6_VIEW					0x28dd4
+#define	CB_COLOR7_VIEW					0x28e10
+#define	CB_COLOR8_VIEW					0x28e4c
+#define	CB_COLOR9_VIEW					0x28e68
+#define	CB_COLOR10_VIEW					0x28e84
+#define	CB_COLOR11_VIEW					0x28ea0
+
+#define	CB_COLOR1_INFO					0x28cac
+#define	CB_COLOR2_INFO					0x28ce8
+#define	CB_COLOR3_INFO					0x28d24
+#define	CB_COLOR4_INFO					0x28d60
+#define	CB_COLOR5_INFO					0x28d9c
+#define	CB_COLOR6_INFO					0x28dd8
+#define	CB_COLOR7_INFO					0x28e14
+#define	CB_COLOR8_INFO					0x28e50
+#define	CB_COLOR9_INFO					0x28e6c
+#define	CB_COLOR10_INFO					0x28e88
+#define	CB_COLOR11_INFO					0x28ea4
+
+#define	CB_COLOR1_ATTRIB				0x28cb0
+#define	CB_COLOR2_ATTRIB				0x28cec
+#define	CB_COLOR3_ATTRIB				0x28d28
+#define	CB_COLOR4_ATTRIB				0x28d64
+#define	CB_COLOR5_ATTRIB				0x28da0
+#define	CB_COLOR6_ATTRIB				0x28ddc
+#define	CB_COLOR7_ATTRIB				0x28e18
+#define	CB_COLOR8_ATTRIB				0x28e54
+#define	CB_COLOR9_ATTRIB				0x28e70
+#define	CB_COLOR10_ATTRIB				0x28e8c
+#define	CB_COLOR11_ATTRIB				0x28ea8
+
+#define	CB_COLOR1_DIM					0x28cb4
+#define	CB_COLOR2_DIM					0x28cf0
+#define	CB_COLOR3_DIM					0x28d2c
+#define	CB_COLOR4_DIM					0x28d68
+#define	CB_COLOR5_DIM					0x28da4
+#define	CB_COLOR6_DIM					0x28de0
+#define	CB_COLOR7_DIM					0x28e1c
+#define	CB_COLOR8_DIM					0x28e58
+#define	CB_COLOR9_DIM					0x28e74
+#define	CB_COLOR10_DIM					0x28e90
+#define	CB_COLOR11_DIM					0x28eac
+
+#define	CB_COLOR1_CMASK					0x28cb8
+#define	CB_COLOR2_CMASK					0x28cf4
+#define	CB_COLOR3_CMASK					0x28d30
+#define	CB_COLOR4_CMASK					0x28d6c
+#define	CB_COLOR5_CMASK					0x28da8
+#define	CB_COLOR6_CMASK					0x28de4
+#define	CB_COLOR7_CMASK					0x28e20
+
+#define	CB_COLOR1_CMASK_SLICE				0x28cbc
+#define	CB_COLOR2_CMASK_SLICE				0x28cf8
+#define	CB_COLOR3_CMASK_SLICE				0x28d34
+#define	CB_COLOR4_CMASK_SLICE				0x28d70
+#define	CB_COLOR5_CMASK_SLICE				0x28dac
+#define	CB_COLOR6_CMASK_SLICE				0x28de8
+#define	CB_COLOR7_CMASK_SLICE				0x28e24
+
+#define	CB_COLOR1_FMASK					0x28cc0
+#define	CB_COLOR2_FMASK					0x28cfc
+#define	CB_COLOR3_FMASK					0x28d38
+#define	CB_COLOR4_FMASK					0x28d74
+#define	CB_COLOR5_FMASK					0x28db0
+#define	CB_COLOR6_FMASK					0x28dec
+#define	CB_COLOR7_FMASK					0x28e28
+
+#define	CB_COLOR1_FMASK_SLICE				0x28cc4
+#define	CB_COLOR2_FMASK_SLICE				0x28d00
+#define	CB_COLOR3_FMASK_SLICE				0x28d3c
+#define	CB_COLOR4_FMASK_SLICE				0x28d78
+#define	CB_COLOR5_FMASK_SLICE				0x28db4
+#define	CB_COLOR6_FMASK_SLICE				0x28df0
+#define	CB_COLOR7_FMASK_SLICE				0x28e2c
+
+#define	CB_COLOR1_CLEAR_WORD0				0x28cc8
+#define	CB_COLOR2_CLEAR_WORD0				0x28d04
+#define	CB_COLOR3_CLEAR_WORD0				0x28d40
+#define	CB_COLOR4_CLEAR_WORD0				0x28d7c
+#define	CB_COLOR5_CLEAR_WORD0				0x28db8
+#define	CB_COLOR6_CLEAR_WORD0				0x28df4
+#define	CB_COLOR7_CLEAR_WORD0				0x28e30
+
+#define	CB_COLOR1_CLEAR_WORD1				0x28ccc
+#define	CB_COLOR2_CLEAR_WORD1				0x28d08
+#define	CB_COLOR3_CLEAR_WORD1				0x28d44
+#define	CB_COLOR4_CLEAR_WORD1				0x28d80
+#define	CB_COLOR5_CLEAR_WORD1				0x28dbc
+#define	CB_COLOR6_CLEAR_WORD1				0x28df8
+#define	CB_COLOR7_CLEAR_WORD1				0x28e34
+
+#define	CB_COLOR1_CLEAR_WORD2				0x28cd0
+#define	CB_COLOR2_CLEAR_WORD2				0x28d0c
+#define	CB_COLOR3_CLEAR_WORD2				0x28d48
+#define	CB_COLOR4_CLEAR_WORD2				0x28d84
+#define	CB_COLOR5_CLEAR_WORD2				0x28dc0
+#define	CB_COLOR6_CLEAR_WORD2				0x28dfc
+#define	CB_COLOR7_CLEAR_WORD2				0x28e38
+
+#define	CB_COLOR1_CLEAR_WORD3				0x28cd4
+#define	CB_COLOR2_CLEAR_WORD3				0x28d10
+#define	CB_COLOR3_CLEAR_WORD3				0x28d4c
+#define	CB_COLOR4_CLEAR_WORD3				0x28d88
+#define	CB_COLOR5_CLEAR_WORD3				0x28dc4
+#define	CB_COLOR6_CLEAR_WORD3				0x28e00
+#define	CB_COLOR7_CLEAR_WORD3				0x28e3c
+
+#define SQ_TEX_RESOURCE_WORD0_0                         0x30000
+#define SQ_TEX_RESOURCE_WORD1_0                         0x30004
+#       define TEX_ARRAY_MODE(x)                        ((x) << 28)
+#define SQ_TEX_RESOURCE_WORD2_0                         0x30008
+#define SQ_TEX_RESOURCE_WORD3_0                         0x3000C
+#define SQ_TEX_RESOURCE_WORD4_0                         0x30010
+#define SQ_TEX_RESOURCE_WORD5_0                         0x30014
+#define SQ_TEX_RESOURCE_WORD6_0                         0x30018
+#define SQ_TEX_RESOURCE_WORD7_0                         0x3001c
+
+
+#endif
diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c
index cf60c0b..cf89aa2 100644
--- a/drivers/gpu/drm/radeon/r100.c
+++ b/drivers/gpu/drm/radeon/r100.c
@@ -37,6 +37,7 @@
 #include "rs100d.h"
 #include "rv200d.h"
 #include "rv250d.h"
+#include "atom.h"
 
 #include <linux/firmware.h>
 #include <linux/platform_device.h>
@@ -67,6 +68,274 @@ MODULE_FIRMWARE(FIRMWARE_R520);
  * r100,rv100,rs100,rv200,rs200,r200,rv250,rs300,rv280
  */
 
+void r100_pm_get_dynpm_state(struct radeon_device *rdev)
+{
+	int i;
+	rdev->pm.dynpm_can_upclock = true;
+	rdev->pm.dynpm_can_downclock = true;
+
+	switch (rdev->pm.dynpm_planned_action) {
+	case DYNPM_ACTION_MINIMUM:
+		rdev->pm.requested_power_state_index = 0;
+		rdev->pm.dynpm_can_downclock = false;
+		break;
+	case DYNPM_ACTION_DOWNCLOCK:
+		if (rdev->pm.current_power_state_index == 0) {
+			rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
+			rdev->pm.dynpm_can_downclock = false;
+		} else {
+			if (rdev->pm.active_crtc_count > 1) {
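+				/* more than one head active: skip single-display-only
+				 * states and take the first usable state below the
+				 * current one, staying put if there is none */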
+				for (i = 0; i < rdev->pm.num_power_states; i++) {
+					if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
+						continue;
+					else if (i >= rdev->pm.current_power_state_index) {
+						rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
+						break;
+					} else {
+						rdev->pm.requested_power_state_index = i;
+						break;
+					}
+				}
+			} else
+				rdev->pm.requested_power_state_index =
+					rdev->pm.current_power_state_index - 1;
+		}
+		/* don't use the power state if crtcs are active and no display flag is set */
+		if ((rdev->pm.active_crtc_count > 0) &&
+		    (rdev->pm.power_state[rdev->pm.requested_power_state_index].clock_info[0].flags &
+		     RADEON_PM_MODE_NO_DISPLAY)) {
+			rdev->pm.requested_power_state_index++;
+		}
+		break;
+	case DYNPM_ACTION_UPCLOCK:
+		if (rdev->pm.current_power_state_index == (rdev->pm.num_power_states - 1)) {
+			rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
+			rdev->pm.dynpm_can_upclock = false;
+		} else {
+			if (rdev->pm.active_crtc_count > 1) {
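+				/* mirror of the downclock search: scan down from the
+				 * top for the first usable state above the current
+				 * one, staying put if there is none */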
+				for (i = (rdev->pm.num_power_states - 1); i >= 0; i--) {
+					if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
+						continue;
+					else if (i <= rdev->pm.current_power_state_index) {
+						rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
+						break;
+					} else {
+						rdev->pm.requested_power_state_index = i;
+						break;
+					}
+				}
+			} else
+				rdev->pm.requested_power_state_index =
+					rdev->pm.current_power_state_index + 1;
+		}
+		break;
+	case DYNPM_ACTION_DEFAULT:
+		rdev->pm.requested_power_state_index = rdev->pm.default_power_state_index;
+		rdev->pm.dynpm_can_upclock = false;
+		break;
+	case DYNPM_ACTION_NONE:
+	default:
+		DRM_ERROR("Requested mode for not defined action\n");
+		return;
+	}
+	/* only one clock mode per power state */
+	rdev->pm.requested_clock_mode_index = 0;
+
+	DRM_DEBUG("Requested: e: %d m: %d p: %d\n",
+		  rdev->pm.power_state[rdev->pm.requested_power_state_index].
+		  clock_info[rdev->pm.requested_clock_mode_index].sclk,
+		  rdev->pm.power_state[rdev->pm.requested_power_state_index].
+		  clock_info[rdev->pm.requested_clock_mode_index].mclk,
+		  rdev->pm.power_state[rdev->pm.requested_power_state_index].
+		  pcie_lanes);
+}
+
+void r100_pm_init_profile(struct radeon_device *rdev)
+{
+	/* default */
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
+	/* low sh */
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+	/* mid sh */
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
+	/* high sh */
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
+	/* low mh */
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+	/* mid mh */
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
+	/* high mh */
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
+}
+
+void r100_pm_misc(struct radeon_device *rdev)
+{
+	int requested_index = rdev->pm.requested_power_state_index;
+	struct radeon_power_state *ps = &rdev->pm.power_state[requested_index];
+	struct radeon_voltage *voltage = &ps->clock_info[0].voltage;
+	u32 tmp, sclk_cntl, sclk_cntl2, sclk_more_cntl;
+
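+	/* drive the voltage GPIO into or out of the voltage-drop state,
+	 * honouring the pin polarity */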
+	if ((voltage->type == VOLTAGE_GPIO) && (voltage->gpio.valid)) {
+		if (ps->misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) {
+			tmp = RREG32(voltage->gpio.reg);
+			if (voltage->active_high)
+				tmp |= voltage->gpio.mask;
+			else
+				tmp &= ~(voltage->gpio.mask);
+			WREG32(voltage->gpio.reg, tmp);
+			if (voltage->delay)
+				udelay(voltage->delay);
+		} else {
+			tmp = RREG32(voltage->gpio.reg);
+			if (voltage->active_high)
+				tmp &= ~voltage->gpio.mask;
+			else
+				tmp |= voltage->gpio.mask;
+			WREG32(voltage->gpio.reg, tmp);
+			if (voltage->delay)
+				udelay(voltage->delay);
+		}
+	}
+
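+	/* clear the divider and delay select fields before reprogramming
+	 * them from the requested state's misc flags below */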
+	sclk_cntl = RREG32_PLL(SCLK_CNTL);
+	sclk_cntl2 = RREG32_PLL(SCLK_CNTL2);
+	sclk_cntl2 &= ~REDUCED_SPEED_SCLK_SEL(3);
+	sclk_more_cntl = RREG32_PLL(SCLK_MORE_CNTL);
+	sclk_more_cntl &= ~VOLTAGE_DELAY_SEL(3);
+	if (ps->misc & ATOM_PM_MISCINFO_ASIC_REDUCED_SPEED_SCLK_EN) {
+		sclk_more_cntl |= REDUCED_SPEED_SCLK_EN;
+		if (ps->misc & ATOM_PM_MISCINFO_DYN_CLK_3D_IDLE)
+			sclk_cntl2 |= REDUCED_SPEED_SCLK_MODE;
+		else
+			sclk_cntl2 &= ~REDUCED_SPEED_SCLK_MODE;
+		if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_CLOCK_DIVIDER_BY_2)
+			sclk_cntl2 |= REDUCED_SPEED_SCLK_SEL(0);
+		else if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_CLOCK_DIVIDER_BY_4)
+			sclk_cntl2 |= REDUCED_SPEED_SCLK_SEL(2);
+	} else
+		sclk_more_cntl &= ~REDUCED_SPEED_SCLK_EN;
+
+	if (ps->misc & ATOM_PM_MISCINFO_ASIC_DYNAMIC_VOLTAGE_EN) {
+		sclk_more_cntl |= IO_CG_VOLTAGE_DROP;
+		if (voltage->delay) {
+			sclk_more_cntl |= VOLTAGE_DROP_SYNC;
+			switch (voltage->delay) {
+			case 33:
+				sclk_more_cntl |= VOLTAGE_DELAY_SEL(0);
+				break;
+			case 66:
+				sclk_more_cntl |= VOLTAGE_DELAY_SEL(1);
+				break;
+			case 99:
+				sclk_more_cntl |= VOLTAGE_DELAY_SEL(2);
+				break;
+			case 132:
+				sclk_more_cntl |= VOLTAGE_DELAY_SEL(3);
+				break;
+			}
+		} else
+			sclk_more_cntl &= ~VOLTAGE_DROP_SYNC;
+	} else
+		sclk_more_cntl &= ~IO_CG_VOLTAGE_DROP;
+
+	if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_HDP_BLOCK_EN)
+		sclk_cntl &= ~FORCE_HDP;
+	else
+		sclk_cntl |= FORCE_HDP;
+
+	WREG32_PLL(SCLK_CNTL, sclk_cntl);
+	WREG32_PLL(SCLK_CNTL2, sclk_cntl2);
+	WREG32_PLL(SCLK_MORE_CNTL, sclk_more_cntl);
+
+	/* set pcie lanes */
+	if ((rdev->flags & RADEON_IS_PCIE) &&
+	    !(rdev->flags & RADEON_IS_IGP) &&
+	    rdev->asic->set_pcie_lanes &&
+	    (ps->pcie_lanes !=
+	     rdev->pm.power_state[rdev->pm.current_power_state_index].pcie_lanes)) {
+		radeon_set_pcie_lanes(rdev,
+				      ps->pcie_lanes);
+		DRM_DEBUG("Setting: p: %d\n", ps->pcie_lanes);
+	}
+}
+
+void r100_pm_prepare(struct radeon_device *rdev)
+{
+	struct drm_device *ddev = rdev->ddev;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	u32 tmp;
+
+	/* disable any active CRTCs */
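+	/* DISP_REQ_EN_B is inverted logic: setting the bit stops the CRTC's
+	 * memory requests while the reclocking takes place */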
+	list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
+			if (radeon_crtc->crtc_id) {
+				tmp = RREG32(RADEON_CRTC2_GEN_CNTL);
+				tmp |= RADEON_CRTC2_DISP_REQ_EN_B;
+				WREG32(RADEON_CRTC2_GEN_CNTL, tmp);
+			} else {
+				tmp = RREG32(RADEON_CRTC_GEN_CNTL);
+				tmp |= RADEON_CRTC_DISP_REQ_EN_B;
+				WREG32(RADEON_CRTC_GEN_CNTL, tmp);
+			}
+		}
+	}
+}
+
+void r100_pm_finish(struct radeon_device *rdev)
+{
+	struct drm_device *ddev = rdev->ddev;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	u32 tmp;
+
+	/* enable any active CRTCs */
+	list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
+			if (radeon_crtc->crtc_id) {
+				tmp = RREG32(RADEON_CRTC2_GEN_CNTL);
+				tmp &= ~RADEON_CRTC2_DISP_REQ_EN_B;
+				WREG32(RADEON_CRTC2_GEN_CNTL, tmp);
+			} else {
+				tmp = RREG32(RADEON_CRTC_GEN_CNTL);
+				tmp &= ~RADEON_CRTC_DISP_REQ_EN_B;
+				WREG32(RADEON_CRTC_GEN_CNTL, tmp);
+			}
+		}
+	}
+}
+
+bool r100_gui_idle(struct radeon_device *rdev)
+{
+	if (RREG32(RADEON_RBBM_STATUS) & RADEON_RBBM_ACTIVE)
+		return false;
+	else
+		return true;
+}
+
 /* hpd for digital panel detect/disconnect */
 bool r100_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd)
 {
@@ -254,6 +523,9 @@ int r100_irq_set(struct radeon_device *rdev)
 	if (rdev->irq.sw_int) {
 		tmp |= RADEON_SW_INT_ENABLE;
 	}
+	if (rdev->irq.gui_idle) {
+		tmp |= RADEON_GUI_IDLE_MASK;
+	}
 	if (rdev->irq.crtc_vblank_int[0]) {
 		tmp |= RADEON_CRTC_VBLANK_MASK;
 	}
@@ -288,6 +560,12 @@ static inline uint32_t r100_irq_ack(struct radeon_device *rdev)
 		RADEON_CRTC_VBLANK_STAT | RADEON_CRTC2_VBLANK_STAT |
 		RADEON_FP_DETECT_STAT | RADEON_FP2_DETECT_STAT;
 
+	/* the interrupt works, but the status bit is permanently asserted */
+	if (rdev->irq.gui_idle && radeon_gui_idle(rdev)) {
+		if (!rdev->irq.gui_idle_acked)
+			irq_mask |= RADEON_GUI_IDLE_STAT;
+	}
+
 	if (irqs) {
 		WREG32(RADEON_GEN_INT_STATUS, irqs);
 	}
@@ -299,6 +577,9 @@ int r100_irq_process(struct radeon_device *rdev)
 	uint32_t status, msi_rearm;
 	bool queue_hotplug = false;
 
+	/* reset gui idle ack.  the status bit is broken */
+	rdev->irq.gui_idle_acked = false;
+
 	status = r100_irq_ack(rdev);
 	if (!status) {
 		return IRQ_NONE;
@@ -311,6 +592,12 @@ int r100_irq_process(struct radeon_device *rdev)
 		if (status & RADEON_SW_INT_TEST) {
 			radeon_fence_process(rdev);
 		}
+		/* gui idle interrupt */
+		if (status & RADEON_GUI_IDLE_STAT) {
+			rdev->irq.gui_idle_acked = true;
+			rdev->pm.gui_idle = true;
+			wake_up(&rdev->irq.idle_queue);
+		}
 		/* Vertical blank interrupts */
 		if (status & RADEON_CRTC_VBLANK_STAT) {
 			drm_handle_vblank(rdev->ddev, 0);
@@ -332,6 +619,8 @@ int r100_irq_process(struct radeon_device *rdev)
 		}
 		status = r100_irq_ack(rdev);
 	}
+	/* reset gui idle ack.  the status bit is broken */
+	rdev->irq.gui_idle_acked = false;
 	if (queue_hotplug)
 		queue_work(rdev->wq, &rdev->hotplug_work);
 	if (rdev->msi_enabled) {
@@ -663,26 +952,6 @@ int r100_cp_init(struct radeon_device *rdev, unsigned ring_size)
 	if (r100_debugfs_cp_init(rdev)) {
 		DRM_ERROR("Failed to register debugfs file for CP !\n");
 	}
-	/* Reset CP */
-	tmp = RREG32(RADEON_CP_CSQ_STAT);
-	if ((tmp & (1 << 31))) {
-		DRM_INFO("radeon: cp busy (0x%08X) resetting\n", tmp);
-		WREG32(RADEON_CP_CSQ_MODE, 0);
-		WREG32(RADEON_CP_CSQ_CNTL, 0);
-		WREG32(RADEON_RBBM_SOFT_RESET, RADEON_SOFT_RESET_CP);
-		tmp = RREG32(RADEON_RBBM_SOFT_RESET);
-		mdelay(2);
-		WREG32(RADEON_RBBM_SOFT_RESET, 0);
-		tmp = RREG32(RADEON_RBBM_SOFT_RESET);
-		mdelay(2);
-		tmp = RREG32(RADEON_CP_CSQ_STAT);
-		if ((tmp & (1 << 31))) {
-			DRM_INFO("radeon: cp reset failed (0x%08X)\n", tmp);
-		}
-	} else {
-		DRM_INFO("radeon: cp idle (0x%08X)\n", tmp);
-	}
-
 	if (!rdev->me_fw) {
 		r = r100_cp_init_microcode(rdev);
 		if (r) {
@@ -787,39 +1056,6 @@ void r100_cp_disable(struct radeon_device *rdev)
 	}
 }
 
-int r100_cp_reset(struct radeon_device *rdev)
-{
-	uint32_t tmp;
-	bool reinit_cp;
-	int i;
-
-	reinit_cp = rdev->cp.ready;
-	rdev->cp.ready = false;
-	WREG32(RADEON_CP_CSQ_MODE, 0);
-	WREG32(RADEON_CP_CSQ_CNTL, 0);
-	WREG32(RADEON_RBBM_SOFT_RESET, RADEON_SOFT_RESET_CP);
-	(void)RREG32(RADEON_RBBM_SOFT_RESET);
-	udelay(200);
-	WREG32(RADEON_RBBM_SOFT_RESET, 0);
-	/* Wait to prevent race in RBBM_STATUS */
-	mdelay(1);
-	for (i = 0; i < rdev->usec_timeout; i++) {
-		tmp = RREG32(RADEON_RBBM_STATUS);
-		if (!(tmp & (1 << 16))) {
-			DRM_INFO("CP reset succeed (RBBM_STATUS=0x%08X)\n",
-				 tmp);
-			if (reinit_cp) {
-				return r100_cp_init(rdev, rdev->cp.ring_size);
-			}
-			return 0;
-		}
-		DRM_UDELAY(1);
-	}
-	tmp = RREG32(RADEON_RBBM_STATUS);
-	DRM_ERROR("Failed to reset CP (RBBM_STATUS=0x%08X)!\n", tmp);
-	return -1;
-}
-
 void r100_cp_commit(struct radeon_device *rdev)
 {
 	WREG32(RADEON_CP_RB_WPTR, rdev->cp.wptr);
@@ -1733,76 +1969,163 @@ int r100_mc_wait_for_idle(struct radeon_device *rdev)
 	return -1;
 }
 
-void r100_gpu_init(struct radeon_device *rdev)
+void r100_gpu_lockup_update(struct r100_gpu_lockup *lockup, struct radeon_cp *cp)
 {
-	/* TODO: anythings to do here ? pipes ? */
-	r100_hdp_reset(rdev);
+	lockup->last_cp_rptr = cp->rptr;
+	lockup->last_jiffies = jiffies;
+}
+
+/**
+ * r100_gpu_cp_is_lockup() - check whether the CP is locked up
+ * @rdev:	radeon device structure
+ * @lockup:	r100_gpu_lockup structure holding CP lockup tracking information
+ * @cp:		radeon_cp structure holding CP information
+ *
+ * We don't need to initialize the lockup tracking information: either the CP
+ * rptr will have moved to a different value, or jiffies will have wrapped
+ * around, and either case forces re-initialization of the lockup tracking
+ * information.
+ *
+ * A false positive is possible if we get called after a long while and the
+ * current CP rptr happens to equal the last recorded one; it's unlikely, but
+ * it can happen. To avoid it, if more than 3 seconds have elapsed since the
+ * last call we return false and refresh the tracking information. Because of
+ * this, the caller must call r100_gpu_cp_is_lockup again between 1 and 3
+ * seconds after the last recorded CP activity for a lockup to be reported;
+ * the fencing code should be cautious about that.
+ *
+ * The caller should also write to the ring to force the CP to do something,
+ * so we don't get a false positive when the CP is simply given nothing to do.
+ *
+ **/
+bool r100_gpu_cp_is_lockup(struct radeon_device *rdev, struct r100_gpu_lockup *lockup, struct radeon_cp *cp)
+{
+	unsigned long cjiffies, elapsed;
+
+	cjiffies = jiffies;
+	if (!time_after(cjiffies, lockup->last_jiffies)) {
+		/* likely a wrap around */
+		lockup->last_cp_rptr = cp->rptr;
+		lockup->last_jiffies = jiffies;
+		return false;
+	}
+	if (cp->rptr != lockup->last_cp_rptr) {
+		/* CP is still working, no lockup */
+		lockup->last_cp_rptr = cp->rptr;
+		lockup->last_jiffies = jiffies;
+		return false;
+	}
+	elapsed = jiffies_to_msecs(cjiffies - lockup->last_jiffies);
+	if (elapsed >= 3000) {
+		/* very likely the improbable case where the current
+		 * rptr equals one recorded a long while ago; this is
+		 * most likely a false positive, so update the tracking
+		 * information to force a re-check at a later point
+		 */
+		lockup->last_cp_rptr = cp->rptr;
+		lockup->last_jiffies = jiffies;
+		return false;
+	}
+	if (elapsed >= 1000) {
+		dev_err(rdev->dev, "GPU lockup CP stall for more than %lumsec\n", elapsed);
+		return true;
+	}
+	/* give a chance to the GPU ... */
+	return false;
 }
 
-void r100_hdp_reset(struct radeon_device *rdev)
+bool r100_gpu_is_lockup(struct radeon_device *rdev)
 {
-	uint32_t tmp;
+	u32 rbbm_status;
+	int r;
 
-	tmp = RREG32(RADEON_HOST_PATH_CNTL) & RADEON_HDP_APER_CNTL;
-	tmp |= (7 << 28);
-	WREG32(RADEON_HOST_PATH_CNTL, tmp | RADEON_HDP_SOFT_RESET | RADEON_HDP_READ_BUFFER_INVALIDATE);
-	(void)RREG32(RADEON_HOST_PATH_CNTL);
-	udelay(200);
-	WREG32(RADEON_RBBM_SOFT_RESET, 0);
-	WREG32(RADEON_HOST_PATH_CNTL, tmp);
-	(void)RREG32(RADEON_HOST_PATH_CNTL);
+	rbbm_status = RREG32(R_000E40_RBBM_STATUS);
+	if (!G_000E40_GUI_ACTIVE(rbbm_status)) {
+		r100_gpu_lockup_update(&rdev->config.r100.lockup, &rdev->cp);
+		return false;
+	}
+	/* force CP activities */
+	r = radeon_ring_lock(rdev, 2);
+	if (!r) {
+		/* PACKET2 NOP */
+		radeon_ring_write(rdev, 0x80000000);
+		radeon_ring_write(rdev, 0x80000000);
+		radeon_ring_unlock_commit(rdev);
+	}
+	rdev->cp.rptr = RREG32(RADEON_CP_RB_RPTR);
+	return r100_gpu_cp_is_lockup(rdev, &rdev->config.r100.lockup, &rdev->cp);
 }
 
-int r100_rb2d_reset(struct radeon_device *rdev)
+void r100_bm_disable(struct radeon_device *rdev)
 {
-	uint32_t tmp;
-	int i;
+	u32 tmp;
 
-	WREG32(RADEON_RBBM_SOFT_RESET, RADEON_SOFT_RESET_E2);
-	(void)RREG32(RADEON_RBBM_SOFT_RESET);
-	udelay(200);
-	WREG32(RADEON_RBBM_SOFT_RESET, 0);
-	/* Wait to prevent race in RBBM_STATUS */
+	/* disable bus mastering */
+	tmp = RREG32(R_000030_BUS_CNTL);
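+	/* flush buffers (bit 2), pulse the master reset (bit 1), then leave
+	 * only BUS_MASTER_DIS (bit 6) set */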
+	WREG32(R_000030_BUS_CNTL, (tmp & 0xFFFFFFFF) | 0x00000044);
+	mdelay(1);
+	WREG32(R_000030_BUS_CNTL, (tmp & 0xFFFFFFFF) | 0x00000042);
+	mdelay(1);
+	WREG32(R_000030_BUS_CNTL, (tmp & 0xFFFFFFFF) | 0x00000040);
+	tmp = RREG32(RADEON_BUS_CNTL);
+	mdelay(1);
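+	/* also clear the Bus Master Enable bit (bit 2) in the PCI COMMAND
+	 * register at config offset 0x4 */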
+	pci_read_config_word(rdev->pdev, 0x4, (u16*)&tmp);
+	pci_write_config_word(rdev->pdev, 0x4, tmp & 0xFFFB);
 	mdelay(1);
-	for (i = 0; i < rdev->usec_timeout; i++) {
-		tmp = RREG32(RADEON_RBBM_STATUS);
-		if (!(tmp & (1 << 26))) {
-			DRM_INFO("RB2D reset succeed (RBBM_STATUS=0x%08X)\n",
-				 tmp);
-			return 0;
-		}
-		DRM_UDELAY(1);
-	}
-	tmp = RREG32(RADEON_RBBM_STATUS);
-	DRM_ERROR("Failed to reset RB2D (RBBM_STATUS=0x%08X)!\n", tmp);
-	return -1;
 }
 
-int r100_gpu_reset(struct radeon_device *rdev)
+int r100_asic_reset(struct radeon_device *rdev)
 {
-	uint32_t status;
+	struct r100_mc_save save;
+	u32 status, tmp;
 
-	/* reset order likely matter */
-	status = RREG32(RADEON_RBBM_STATUS);
-	/* reset HDP */
-	r100_hdp_reset(rdev);
-	/* reset rb2d */
-	if (status & ((1 << 17) | (1 << 18) | (1 << 27))) {
-		r100_rb2d_reset(rdev);
+	r100_mc_stop(rdev, &save);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	if (!G_000E40_GUI_ACTIVE(status)) {
+		return 0;
 	}
-	/* TODO: reset 3D engine */
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* stop CP */
+	WREG32(RADEON_CP_CSQ_CNTL, 0);
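+	/* zero the ring pointers: temporarily enable rptr writes, clear both
+	 * pointers, then restore RB_CNTL */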
+	tmp = RREG32(RADEON_CP_RB_CNTL);
+	WREG32(RADEON_CP_RB_CNTL, tmp | RADEON_RB_RPTR_WR_ENA);
+	WREG32(RADEON_CP_RB_RPTR_WR, 0);
+	WREG32(RADEON_CP_RB_WPTR, 0);
+	WREG32(RADEON_CP_RB_CNTL, tmp);
+	/* save PCI state */
+	pci_save_state(rdev->pdev);
+	/* disable bus mastering */
+	r100_bm_disable(rdev);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_SE(1) |
+					S_0000F0_SOFT_RESET_RE(1) |
+					S_0000F0_SOFT_RESET_PP(1) |
+					S_0000F0_SOFT_RESET_RB(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
 	/* reset CP */
-	status = RREG32(RADEON_RBBM_STATUS);
-	if (status & (1 << 16)) {
-		r100_cp_reset(rdev);
-	}
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_CP(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* restore PCI & busmastering */
+	pci_restore_state(rdev->pdev);
+	r100_enable_bm(rdev);
 	/* Check if GPU is idle */
-	status = RREG32(RADEON_RBBM_STATUS);
-	if (status & RADEON_RBBM_ACTIVE) {
-		DRM_ERROR("Failed to reset GPU (RBBM_STATUS=0x%08X)\n", status);
+	if (G_000E40_SE_BUSY(status) || G_000E40_RE_BUSY(status) ||
+		G_000E40_TAM_BUSY(status) || G_000E40_PB_BUSY(status)) {
+		dev_err(rdev->dev, "failed to reset GPU\n");
+		rdev->gpu_lockup = true;
 		return -1;
 	}
-	DRM_INFO("GPU reset succeed (RBBM_STATUS=0x%08X)\n", status);
+	r100_mc_resume(rdev, &save);
+	dev_info(rdev->dev, "GPU reset succeed\n");
 	return 0;
 }
 
@@ -2002,11 +2325,6 @@ void r100_vram_init_sizes(struct radeon_device *rdev)
 		else
 			rdev->mc.mc_vram_size = rdev->mc.real_vram_size;
 	}
-	/* FIXME remove this once we support unmappable VRAM */
-	if (rdev->mc.mc_vram_size > rdev->mc.aper_size) {
-		rdev->mc.mc_vram_size = rdev->mc.aper_size;
-		rdev->mc.real_vram_size = rdev->mc.aper_size;
-	}
 }
 
 void r100_vga_set_state(struct radeon_device *rdev, bool state)
@@ -2335,53 +2653,53 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 	fixed20_12 peak_disp_bw, mem_bw, pix_clk, pix_clk2, temp_ff, crit_point_ff;
 	uint32_t temp, data, mem_trcd, mem_trp, mem_tras;
 	fixed20_12 memtcas_ff[8] = {
-		fixed_init(1),
-		fixed_init(2),
-		fixed_init(3),
-		fixed_init(0),
-		fixed_init_half(1),
-		fixed_init_half(2),
-		fixed_init(0),
+		dfixed_init(1),
+		dfixed_init(2),
+		dfixed_init(3),
+		dfixed_init(0),
+		dfixed_init_half(1),
+		dfixed_init_half(2),
+		dfixed_init(0),
 	};
 	fixed20_12 memtcas_rs480_ff[8] = {
-		fixed_init(0),
-		fixed_init(1),
-		fixed_init(2),
-		fixed_init(3),
-		fixed_init(0),
-		fixed_init_half(1),
-		fixed_init_half(2),
-		fixed_init_half(3),
+		dfixed_init(0),
+		dfixed_init(1),
+		dfixed_init(2),
+		dfixed_init(3),
+		dfixed_init(0),
+		dfixed_init_half(1),
+		dfixed_init_half(2),
+		dfixed_init_half(3),
 	};
 	fixed20_12 memtcas2_ff[8] = {
-		fixed_init(0),
-		fixed_init(1),
-		fixed_init(2),
-		fixed_init(3),
-		fixed_init(4),
-		fixed_init(5),
-		fixed_init(6),
-		fixed_init(7),
+		dfixed_init(0),
+		dfixed_init(1),
+		dfixed_init(2),
+		dfixed_init(3),
+		dfixed_init(4),
+		dfixed_init(5),
+		dfixed_init(6),
+		dfixed_init(7),
 	};
 	fixed20_12 memtrbs[8] = {
-		fixed_init(1),
-		fixed_init_half(1),
-		fixed_init(2),
-		fixed_init_half(2),
-		fixed_init(3),
-		fixed_init_half(3),
-		fixed_init(4),
-		fixed_init_half(4)
+		dfixed_init(1),
+		dfixed_init_half(1),
+		dfixed_init(2),
+		dfixed_init_half(2),
+		dfixed_init(3),
+		dfixed_init_half(3),
+		dfixed_init(4),
+		dfixed_init_half(4)
 	};
 	fixed20_12 memtrbs_r4xx[8] = {
-		fixed_init(4),
-		fixed_init(5),
-		fixed_init(6),
-		fixed_init(7),
-		fixed_init(8),
-		fixed_init(9),
-		fixed_init(10),
-		fixed_init(11)
+		dfixed_init(4),
+		dfixed_init(5),
+		dfixed_init(6),
+		dfixed_init(7),
+		dfixed_init(8),
+		dfixed_init(9),
+		dfixed_init(10),
+		dfixed_init(11)
 	};
 	fixed20_12 min_mem_eff;
 	fixed20_12 mc_latency_sclk, mc_latency_mclk, k1;
@@ -2412,7 +2730,7 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 		}
 	}
 
-	min_mem_eff.full = rfixed_const_8(0);
+	min_mem_eff.full = dfixed_const_8(0);
 	/* get modes */
 	if ((rdev->disp_priority == 2) && ASIC_IS_R300(rdev)) {
 		uint32_t mc_init_misc_lat_timer = RREG32(R300_MC_INIT_MISC_LAT_TIMER);
@@ -2433,28 +2751,28 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 	mclk_ff = rdev->pm.mclk;
 
 	temp = (rdev->mc.vram_width / 8) * (rdev->mc.vram_is_ddr ? 2 : 1);
-	temp_ff.full = rfixed_const(temp);
-	mem_bw.full = rfixed_mul(mclk_ff, temp_ff);
+	temp_ff.full = dfixed_const(temp);
+	mem_bw.full = dfixed_mul(mclk_ff, temp_ff);
 
 	pix_clk.full = 0;
 	pix_clk2.full = 0;
 	peak_disp_bw.full = 0;
 	if (mode1) {
-		temp_ff.full = rfixed_const(1000);
-		pix_clk.full = rfixed_const(mode1->clock); /* convert to fixed point */
-		pix_clk.full = rfixed_div(pix_clk, temp_ff);
-		temp_ff.full = rfixed_const(pixel_bytes1);
-		peak_disp_bw.full += rfixed_mul(pix_clk, temp_ff);
+		temp_ff.full = dfixed_const(1000);
+		pix_clk.full = dfixed_const(mode1->clock); /* convert to fixed point */
+		pix_clk.full = dfixed_div(pix_clk, temp_ff);
+		temp_ff.full = dfixed_const(pixel_bytes1);
+		peak_disp_bw.full += dfixed_mul(pix_clk, temp_ff);
 	}
 	if (mode2) {
-		temp_ff.full = rfixed_const(1000);
-		pix_clk2.full = rfixed_const(mode2->clock); /* convert to fixed point */
-		pix_clk2.full = rfixed_div(pix_clk2, temp_ff);
-		temp_ff.full = rfixed_const(pixel_bytes2);
-		peak_disp_bw.full += rfixed_mul(pix_clk2, temp_ff);
+		temp_ff.full = dfixed_const(1000);
+		pix_clk2.full = dfixed_const(mode2->clock); /* convert to fixed point */
+		pix_clk2.full = dfixed_div(pix_clk2, temp_ff);
+		temp_ff.full = dfixed_const(pixel_bytes2);
+		peak_disp_bw.full += dfixed_mul(pix_clk2, temp_ff);
 	}
 
-	mem_bw.full = rfixed_mul(mem_bw, min_mem_eff);
+	mem_bw.full = dfixed_mul(mem_bw, min_mem_eff);
 	if (peak_disp_bw.full >= mem_bw.full) {
 		DRM_ERROR("You may not have enough display bandwidth for current mode\n"
 			  "If you have flickering problem, try to lower resolution, refresh rate, or color depth\n");
@@ -2496,9 +2814,9 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 		mem_tras = ((temp >> 12) & 0xf) + 4;
 	}
 	/* convert to FF */
-	trcd_ff.full = rfixed_const(mem_trcd);
-	trp_ff.full = rfixed_const(mem_trp);
-	tras_ff.full = rfixed_const(mem_tras);
+	trcd_ff.full = dfixed_const(mem_trcd);
+	trp_ff.full = dfixed_const(mem_trp);
+	tras_ff.full = dfixed_const(mem_tras);
 
 	/* Get values from the MEM_SDRAM_MODE_REG register...converting its */
 	temp = RREG32(RADEON_MEM_SDRAM_MODE_REG);
@@ -2516,7 +2834,7 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 		/* extra cas latency stored in bits 23-25 0-4 clocks */
 		data = (temp >> 23) & 0x7;
 		if (data < 5)
-			tcas_ff.full += rfixed_const(data);
+			tcas_ff.full += dfixed_const(data);
 	}
 
 	if (ASIC_IS_R300(rdev) && !(rdev->flags & RADEON_IS_IGP)) {
@@ -2553,72 +2871,72 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 
 	if (rdev->flags & RADEON_IS_AGP) {
 		fixed20_12 agpmode_ff;
-		agpmode_ff.full = rfixed_const(radeon_agpmode);
-		temp_ff.full = rfixed_const_666(16);
-		sclk_eff_ff.full -= rfixed_mul(agpmode_ff, temp_ff);
+		agpmode_ff.full = dfixed_const(radeon_agpmode);
+		temp_ff.full = dfixed_const_666(16);
+		sclk_eff_ff.full -= dfixed_mul(agpmode_ff, temp_ff);
 	}
 	/* TODO PCIE lanes may affect this - agpmode == 16?? */
 
 	if (ASIC_IS_R300(rdev)) {
-		sclk_delay_ff.full = rfixed_const(250);
+		sclk_delay_ff.full = dfixed_const(250);
 	} else {
 		if ((rdev->family == CHIP_RV100) ||
 		    rdev->flags & RADEON_IS_IGP) {
 			if (rdev->mc.vram_is_ddr)
-				sclk_delay_ff.full = rfixed_const(41);
+				sclk_delay_ff.full = dfixed_const(41);
 			else
-				sclk_delay_ff.full = rfixed_const(33);
+				sclk_delay_ff.full = dfixed_const(33);
 		} else {
 			if (rdev->mc.vram_width == 128)
-				sclk_delay_ff.full = rfixed_const(57);
+				sclk_delay_ff.full = dfixed_const(57);
 			else
-				sclk_delay_ff.full = rfixed_const(41);
+				sclk_delay_ff.full = dfixed_const(41);
 		}
 	}
 
-	mc_latency_sclk.full = rfixed_div(sclk_delay_ff, sclk_eff_ff);
+	mc_latency_sclk.full = dfixed_div(sclk_delay_ff, sclk_eff_ff);
 
 	if (rdev->mc.vram_is_ddr) {
 		if (rdev->mc.vram_width == 32) {
-			k1.full = rfixed_const(40);
+			k1.full = dfixed_const(40);
 			c  = 3;
 		} else {
-			k1.full = rfixed_const(20);
+			k1.full = dfixed_const(20);
 			c  = 1;
 		}
 	} else {
-		k1.full = rfixed_const(40);
+		k1.full = dfixed_const(40);
 		c  = 3;
 	}
 
-	temp_ff.full = rfixed_const(2);
-	mc_latency_mclk.full = rfixed_mul(trcd_ff, temp_ff);
-	temp_ff.full = rfixed_const(c);
-	mc_latency_mclk.full += rfixed_mul(tcas_ff, temp_ff);
-	temp_ff.full = rfixed_const(4);
-	mc_latency_mclk.full += rfixed_mul(tras_ff, temp_ff);
-	mc_latency_mclk.full += rfixed_mul(trp_ff, temp_ff);
+	temp_ff.full = dfixed_const(2);
+	mc_latency_mclk.full = dfixed_mul(trcd_ff, temp_ff);
+	temp_ff.full = dfixed_const(c);
+	mc_latency_mclk.full += dfixed_mul(tcas_ff, temp_ff);
+	temp_ff.full = dfixed_const(4);
+	mc_latency_mclk.full += dfixed_mul(tras_ff, temp_ff);
+	mc_latency_mclk.full += dfixed_mul(trp_ff, temp_ff);
 	mc_latency_mclk.full += k1.full;
 
-	mc_latency_mclk.full = rfixed_div(mc_latency_mclk, mclk_ff);
-	mc_latency_mclk.full += rfixed_div(temp_ff, sclk_eff_ff);
+	mc_latency_mclk.full = dfixed_div(mc_latency_mclk, mclk_ff);
+	mc_latency_mclk.full += dfixed_div(temp_ff, sclk_eff_ff);
 
 	/*
 	  HW cursor time assuming worst case of full size colour cursor.
 	*/
-	temp_ff.full = rfixed_const((2 * (cur_size - (rdev->mc.vram_is_ddr + 1))));
+	temp_ff.full = dfixed_const((2 * (cur_size - (rdev->mc.vram_is_ddr + 1))));
 	temp_ff.full += trcd_ff.full;
 	if (temp_ff.full < tras_ff.full)
 		temp_ff.full = tras_ff.full;
-	cur_latency_mclk.full = rfixed_div(temp_ff, mclk_ff);
+	cur_latency_mclk.full = dfixed_div(temp_ff, mclk_ff);
 
-	temp_ff.full = rfixed_const(cur_size);
-	cur_latency_sclk.full = rfixed_div(temp_ff, sclk_eff_ff);
+	temp_ff.full = dfixed_const(cur_size);
+	cur_latency_sclk.full = dfixed_div(temp_ff, sclk_eff_ff);
 	/*
 	  Find the total latency for the display data.
 	*/
-	disp_latency_overhead.full = rfixed_const(8);
-	disp_latency_overhead.full = rfixed_div(disp_latency_overhead, sclk_ff);
+	disp_latency_overhead.full = dfixed_const(8);
+	disp_latency_overhead.full = dfixed_div(disp_latency_overhead, sclk_ff);
 	mc_latency_mclk.full += disp_latency_overhead.full + cur_latency_mclk.full;
 	mc_latency_sclk.full += disp_latency_overhead.full + cur_latency_sclk.full;
 
@@ -2646,16 +2964,16 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 		/*
 		  Find the drain rate of the display buffer.
 		*/
-		temp_ff.full = rfixed_const((16/pixel_bytes1));
-		disp_drain_rate.full = rfixed_div(pix_clk, temp_ff);
+		temp_ff.full = dfixed_const((16/pixel_bytes1));
+		disp_drain_rate.full = dfixed_div(pix_clk, temp_ff);
 
 		/*
 		  Find the critical point of the display buffer.
 		*/
-		crit_point_ff.full = rfixed_mul(disp_drain_rate, disp_latency);
-		crit_point_ff.full += rfixed_const_half(0);
+		crit_point_ff.full = dfixed_mul(disp_drain_rate, disp_latency);
+		crit_point_ff.full += dfixed_const_half(0);
 
-		critical_point = rfixed_trunc(crit_point_ff);
+		critical_point = dfixed_trunc(crit_point_ff);
 
 		if (rdev->disp_priority == 2) {
 			critical_point = 0;
@@ -2726,8 +3044,8 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 		/*
 		  Find the drain rate of the display buffer.
 		*/
-		temp_ff.full = rfixed_const((16/pixel_bytes2));
-		disp_drain_rate2.full = rfixed_div(pix_clk2, temp_ff);
+		temp_ff.full = dfixed_const((16/pixel_bytes2));
+		disp_drain_rate2.full = dfixed_div(pix_clk2, temp_ff);
 
 		grph2_cntl = RREG32(RADEON_GRPH2_BUFFER_CNTL);
 		grph2_cntl &= ~(RADEON_GRPH_STOP_REQ_MASK);
@@ -2748,8 +3066,8 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 			critical_point2 = 0;
 		else {
 			temp = (rdev->mc.vram_width * rdev->mc.vram_is_ddr + 1)/128;
-			temp_ff.full = rfixed_const(temp);
-			temp_ff.full = rfixed_mul(mclk_ff, temp_ff);
+			temp_ff.full = dfixed_const(temp);
+			temp_ff.full = dfixed_mul(mclk_ff, temp_ff);
 			if (sclk_ff.full < temp_ff.full)
 				temp_ff.full = sclk_ff.full;
 
@@ -2757,15 +3075,15 @@ void r100_bandwidth_update(struct radeon_device *rdev)
 
 			if (mode1) {
 				temp_ff.full = read_return_rate.full - disp_drain_rate.full;
-				time_disp1_drop_priority.full = rfixed_div(crit_point_ff, temp_ff);
+				time_disp1_drop_priority.full = dfixed_div(crit_point_ff, temp_ff);
 			} else {
 				time_disp1_drop_priority.full = 0;
 			}
 			crit_point_ff.full = disp_latency.full + time_disp1_drop_priority.full + disp_latency.full;
-			crit_point_ff.full = rfixed_mul(crit_point_ff, disp_drain_rate2);
-			crit_point_ff.full += rfixed_const_half(0);
+			crit_point_ff.full = dfixed_mul(crit_point_ff, disp_drain_rate2);
+			crit_point_ff.full += dfixed_const_half(0);
 
-			critical_point2 = rfixed_trunc(crit_point_ff);
+			critical_point2 = dfixed_trunc(crit_point_ff);
 
 			if (rdev->disp_priority == 2) {
 				critical_point2 = 0;
@@ -3399,7 +3717,7 @@ static int r100_startup(struct radeon_device *rdev)
 	/* Resume clock */
 	r100_clock_startup(rdev);
 	/* Initialize GPU configuration (# pipes, ...) */
-	r100_gpu_init(rdev);
+//	r100_gpu_init(rdev);
 	/* Initialize GART (initialize after TTM so we can allocate
 	 * memory through TTM but finalize after TTM) */
 	r100_enable_bm(rdev);
@@ -3436,7 +3754,7 @@ int r100_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	r100_clock_startup(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -3462,7 +3780,6 @@ int r100_suspend(struct radeon_device *rdev)
 
 void r100_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -3505,7 +3822,7 @@ int r100_init(struct radeon_device *rdev)
 			return r;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -3518,8 +3835,6 @@ int r100_init(struct radeon_device *rdev)
 	r100_errata(rdev);
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize AGP */
 	if (rdev->flags & RADEON_IS_AGP) {
 		r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r100d.h b/drivers/gpu/drm/radeon/r100d.h
index df29a63..d016b16 100644
--- a/drivers/gpu/drm/radeon/r100d.h
+++ b/drivers/gpu/drm/radeon/r100d.h
@@ -74,6 +74,134 @@
 #define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
 
 /* Registers */
+#define R_0000F0_RBBM_SOFT_RESET                     0x0000F0
+#define   S_0000F0_SOFT_RESET_CP(x)                    (((x) & 0x1) << 0)
+#define   G_0000F0_SOFT_RESET_CP(x)                    (((x) >> 0) & 0x1)
+#define   C_0000F0_SOFT_RESET_CP                       0xFFFFFFFE
+#define   S_0000F0_SOFT_RESET_HI(x)                    (((x) & 0x1) << 1)
+#define   G_0000F0_SOFT_RESET_HI(x)                    (((x) >> 1) & 0x1)
+#define   C_0000F0_SOFT_RESET_HI                       0xFFFFFFFD
+#define   S_0000F0_SOFT_RESET_SE(x)                    (((x) & 0x1) << 2)
+#define   G_0000F0_SOFT_RESET_SE(x)                    (((x) >> 2) & 0x1)
+#define   C_0000F0_SOFT_RESET_SE                       0xFFFFFFFB
+#define   S_0000F0_SOFT_RESET_RE(x)                    (((x) & 0x1) << 3)
+#define   G_0000F0_SOFT_RESET_RE(x)                    (((x) >> 3) & 0x1)
+#define   C_0000F0_SOFT_RESET_RE                       0xFFFFFFF7
+#define   S_0000F0_SOFT_RESET_PP(x)                    (((x) & 0x1) << 4)
+#define   G_0000F0_SOFT_RESET_PP(x)                    (((x) >> 4) & 0x1)
+#define   C_0000F0_SOFT_RESET_PP                       0xFFFFFFEF
+#define   S_0000F0_SOFT_RESET_E2(x)                    (((x) & 0x1) << 5)
+#define   G_0000F0_SOFT_RESET_E2(x)                    (((x) >> 5) & 0x1)
+#define   C_0000F0_SOFT_RESET_E2                       0xFFFFFFDF
+#define   S_0000F0_SOFT_RESET_RB(x)                    (((x) & 0x1) << 6)
+#define   G_0000F0_SOFT_RESET_RB(x)                    (((x) >> 6) & 0x1)
+#define   C_0000F0_SOFT_RESET_RB                       0xFFFFFFBF
+#define   S_0000F0_SOFT_RESET_HDP(x)                   (((x) & 0x1) << 7)
+#define   G_0000F0_SOFT_RESET_HDP(x)                   (((x) >> 7) & 0x1)
+#define   C_0000F0_SOFT_RESET_HDP                      0xFFFFFF7F
+#define   S_0000F0_SOFT_RESET_MC(x)                    (((x) & 0x1) << 8)
+#define   G_0000F0_SOFT_RESET_MC(x)                    (((x) >> 8) & 0x1)
+#define   C_0000F0_SOFT_RESET_MC                       0xFFFFFEFF
+#define   S_0000F0_SOFT_RESET_AIC(x)                   (((x) & 0x1) << 9)
+#define   G_0000F0_SOFT_RESET_AIC(x)                   (((x) >> 9) & 0x1)
+#define   C_0000F0_SOFT_RESET_AIC                      0xFFFFFDFF
+#define   S_0000F0_SOFT_RESET_VIP(x)                   (((x) & 0x1) << 10)
+#define   G_0000F0_SOFT_RESET_VIP(x)                   (((x) >> 10) & 0x1)
+#define   C_0000F0_SOFT_RESET_VIP                      0xFFFFFBFF
+#define   S_0000F0_SOFT_RESET_DISP(x)                  (((x) & 0x1) << 11)
+#define   G_0000F0_SOFT_RESET_DISP(x)                  (((x) >> 11) & 0x1)
+#define   C_0000F0_SOFT_RESET_DISP                     0xFFFFF7FF
+#define   S_0000F0_SOFT_RESET_CG(x)                    (((x) & 0x1) << 12)
+#define   G_0000F0_SOFT_RESET_CG(x)                    (((x) >> 12) & 0x1)
+#define   C_0000F0_SOFT_RESET_CG                       0xFFFFEFFF
+#define R_000030_BUS_CNTL                            0x000030
+#define   S_000030_BUS_DBL_RESYNC(x)                   (((x) & 0x1) << 0)
+#define   G_000030_BUS_DBL_RESYNC(x)                   (((x) >> 0) & 0x1)
+#define   C_000030_BUS_DBL_RESYNC                      0xFFFFFFFE
+#define   S_000030_BUS_MSTR_RESET(x)                   (((x) & 0x1) << 1)
+#define   G_000030_BUS_MSTR_RESET(x)                   (((x) >> 1) & 0x1)
+#define   C_000030_BUS_MSTR_RESET                      0xFFFFFFFD
+#define   S_000030_BUS_FLUSH_BUF(x)                    (((x) & 0x1) << 2)
+#define   G_000030_BUS_FLUSH_BUF(x)                    (((x) >> 2) & 0x1)
+#define   C_000030_BUS_FLUSH_BUF                       0xFFFFFFFB
+#define   S_000030_BUS_STOP_REQ_DIS(x)                 (((x) & 0x1) << 3)
+#define   G_000030_BUS_STOP_REQ_DIS(x)                 (((x) >> 3) & 0x1)
+#define   C_000030_BUS_STOP_REQ_DIS                    0xFFFFFFF7
+#define   S_000030_BUS_PM4_READ_COMBINE_EN(x)          (((x) & 0x1) << 4)
+#define   G_000030_BUS_PM4_READ_COMBINE_EN(x)          (((x) >> 4) & 0x1)
+#define   C_000030_BUS_PM4_READ_COMBINE_EN             0xFFFFFFEF
+#define   S_000030_BUS_WRT_COMBINE_EN(x)               (((x) & 0x1) << 5)
+#define   G_000030_BUS_WRT_COMBINE_EN(x)               (((x) >> 5) & 0x1)
+#define   C_000030_BUS_WRT_COMBINE_EN                  0xFFFFFFDF
+#define   S_000030_BUS_MASTER_DIS(x)                   (((x) & 0x1) << 6)
+#define   G_000030_BUS_MASTER_DIS(x)                   (((x) >> 6) & 0x1)
+#define   C_000030_BUS_MASTER_DIS                      0xFFFFFFBF
+#define   S_000030_BIOS_ROM_WRT_EN(x)                  (((x) & 0x1) << 7)
+#define   G_000030_BIOS_ROM_WRT_EN(x)                  (((x) >> 7) & 0x1)
+#define   C_000030_BIOS_ROM_WRT_EN                     0xFFFFFF7F
+#define   S_000030_BM_DAC_CRIPPLE(x)                   (((x) & 0x1) << 8)
+#define   G_000030_BM_DAC_CRIPPLE(x)                   (((x) >> 8) & 0x1)
+#define   C_000030_BM_DAC_CRIPPLE                      0xFFFFFEFF
+#define   S_000030_BUS_NON_PM4_READ_COMBINE_EN(x)      (((x) & 0x1) << 9)
+#define   G_000030_BUS_NON_PM4_READ_COMBINE_EN(x)      (((x) >> 9) & 0x1)
+#define   C_000030_BUS_NON_PM4_READ_COMBINE_EN         0xFFFFFDFF
+#define   S_000030_BUS_XFERD_DISCARD_EN(x)             (((x) & 0x1) << 10)
+#define   G_000030_BUS_XFERD_DISCARD_EN(x)             (((x) >> 10) & 0x1)
+#define   C_000030_BUS_XFERD_DISCARD_EN                0xFFFFFBFF
+#define   S_000030_BUS_SGL_READ_DISABLE(x)             (((x) & 0x1) << 11)
+#define   G_000030_BUS_SGL_READ_DISABLE(x)             (((x) >> 11) & 0x1)
+#define   C_000030_BUS_SGL_READ_DISABLE                0xFFFFF7FF
+#define   S_000030_BIOS_DIS_ROM(x)                     (((x) & 0x1) << 12)
+#define   G_000030_BIOS_DIS_ROM(x)                     (((x) >> 12) & 0x1)
+#define   C_000030_BIOS_DIS_ROM                        0xFFFFEFFF
+#define   S_000030_BUS_PCI_READ_RETRY_EN(x)            (((x) & 0x1) << 13)
+#define   G_000030_BUS_PCI_READ_RETRY_EN(x)            (((x) >> 13) & 0x1)
+#define   C_000030_BUS_PCI_READ_RETRY_EN               0xFFFFDFFF
+#define   S_000030_BUS_AGP_AD_STEPPING_EN(x)           (((x) & 0x1) << 14)
+#define   G_000030_BUS_AGP_AD_STEPPING_EN(x)           (((x) >> 14) & 0x1)
+#define   C_000030_BUS_AGP_AD_STEPPING_EN              0xFFFFBFFF
+#define   S_000030_BUS_PCI_WRT_RETRY_EN(x)             (((x) & 0x1) << 15)
+#define   G_000030_BUS_PCI_WRT_RETRY_EN(x)             (((x) >> 15) & 0x1)
+#define   C_000030_BUS_PCI_WRT_RETRY_EN                0xFFFF7FFF
+#define   S_000030_BUS_RETRY_WS(x)                     (((x) & 0xF) << 16)
+#define   G_000030_BUS_RETRY_WS(x)                     (((x) >> 16) & 0xF)
+#define   C_000030_BUS_RETRY_WS                        0xFFF0FFFF
+#define   S_000030_BUS_MSTR_RD_MULT(x)                 (((x) & 0x1) << 20)
+#define   G_000030_BUS_MSTR_RD_MULT(x)                 (((x) >> 20) & 0x1)
+#define   C_000030_BUS_MSTR_RD_MULT                    0xFFEFFFFF
+#define   S_000030_BUS_MSTR_RD_LINE(x)                 (((x) & 0x1) << 21)
+#define   G_000030_BUS_MSTR_RD_LINE(x)                 (((x) >> 21) & 0x1)
+#define   C_000030_BUS_MSTR_RD_LINE                    0xFFDFFFFF
+#define   S_000030_BUS_SUSPEND(x)                      (((x) & 0x1) << 22)
+#define   G_000030_BUS_SUSPEND(x)                      (((x) >> 22) & 0x1)
+#define   C_000030_BUS_SUSPEND                         0xFFBFFFFF
+#define   S_000030_LAT_16X(x)                          (((x) & 0x1) << 23)
+#define   G_000030_LAT_16X(x)                          (((x) >> 23) & 0x1)
+#define   C_000030_LAT_16X                             0xFF7FFFFF
+#define   S_000030_BUS_RD_DISCARD_EN(x)                (((x) & 0x1) << 24)
+#define   G_000030_BUS_RD_DISCARD_EN(x)                (((x) >> 24) & 0x1)
+#define   C_000030_BUS_RD_DISCARD_EN                   0xFEFFFFFF
+#define   S_000030_ENFRCWRDY(x)                        (((x) & 0x1) << 25)
+#define   G_000030_ENFRCWRDY(x)                        (((x) >> 25) & 0x1)
+#define   C_000030_ENFRCWRDY                           0xFDFFFFFF
+#define   S_000030_BUS_MSTR_WS(x)                      (((x) & 0x1) << 26)
+#define   G_000030_BUS_MSTR_WS(x)                      (((x) >> 26) & 0x1)
+#define   C_000030_BUS_MSTR_WS                         0xFBFFFFFF
+#define   S_000030_BUS_PARKING_DIS(x)                  (((x) & 0x1) << 27)
+#define   G_000030_BUS_PARKING_DIS(x)                  (((x) >> 27) & 0x1)
+#define   C_000030_BUS_PARKING_DIS                     0xF7FFFFFF
+#define   S_000030_BUS_MSTR_DISCONNECT_EN(x)           (((x) & 0x1) << 28)
+#define   G_000030_BUS_MSTR_DISCONNECT_EN(x)           (((x) >> 28) & 0x1)
+#define   C_000030_BUS_MSTR_DISCONNECT_EN              0xEFFFFFFF
+#define   S_000030_SERR_EN(x)                          (((x) & 0x1) << 29)
+#define   G_000030_SERR_EN(x)                          (((x) >> 29) & 0x1)
+#define   C_000030_SERR_EN                             0xDFFFFFFF
+#define   S_000030_BUS_READ_BURST(x)                   (((x) & 0x1) << 30)
+#define   G_000030_BUS_READ_BURST(x)                   (((x) >> 30) & 0x1)
+#define   C_000030_BUS_READ_BURST                      0xBFFFFFFF
+#define   S_000030_BUS_RDY_READ_DLY(x)                 (((x) & 0x1) << 31)
+#define   G_000030_BUS_RDY_READ_DLY(x)                 (((x) >> 31) & 0x1)
+#define   C_000030_BUS_RDY_READ_DLY                    0x7FFFFFFF
 #define R_000040_GEN_INT_CNTL                        0x000040
 #define   S_000040_CRTC_VBLANK(x)                      (((x) & 0x1) << 0)
 #define   G_000040_CRTC_VBLANK(x)                      (((x) >> 0) & 0x1)
@@ -710,5 +838,41 @@
 #define   G_00000D_FORCE_RB(x)                         (((x) >> 28) & 0x1)
 #define   C_00000D_FORCE_RB                            0xEFFFFFFF
 
+/* PLL regs */
+#define SCLK_CNTL                                      0xd
+#define   FORCE_HDP                                    (1 << 17)
+#define CLK_PWRMGT_CNTL                                0x14
+#define   GLOBAL_PMAN_EN                               (1 << 10)
+#define   DISP_PM                                      (1 << 20)
+#define PLL_PWRMGT_CNTL                                0x15
+#define   MPLL_TURNOFF                                 (1 << 0)
+#define   SPLL_TURNOFF                                 (1 << 1)
+#define   PPLL_TURNOFF                                 (1 << 2)
+#define   P2PLL_TURNOFF                                (1 << 3)
+#define   TVPLL_TURNOFF                                (1 << 4)
+#define   MOBILE_SU                                    (1 << 16)
+#define   SU_SCLK_USE_BCLK                             (1 << 17)
+#define SCLK_CNTL2                                     0x1e
+#define   REDUCED_SPEED_SCLK_MODE                      (1 << 16)
+#define   REDUCED_SPEED_SCLK_SEL(x)                    ((x) << 17)
+#define MCLK_MISC                                      0x1f
+#define   EN_MCLK_TRISTATE_IN_SUSPEND                  (1 << 18)
+#define SCLK_MORE_CNTL                                 0x35
+#define   REDUCED_SPEED_SCLK_EN                        (1 << 16)
+#define   IO_CG_VOLTAGE_DROP                           (1 << 17)
+#define   VOLTAGE_DELAY_SEL(x)                         ((x) << 20)
+#define   VOLTAGE_DROP_SYNC                            (1 << 19)
+
+/* mmreg */
+#define DISP_PWR_MAN                                   0xd08
+#define   DISP_D3_GRPH_RST                             (1 << 18)
+#define   DISP_D3_SUBPIC_RST                           (1 << 19)
+#define   DISP_D3_OV0_RST                              (1 << 20)
+#define   DISP_D1D2_GRPH_RST                           (1 << 21)
+#define   DISP_D1D2_SUBPIC_RST                         (1 << 22)
+#define   DISP_D1D2_OV0_RST                            (1 << 23)
+#define   DISP_DVO_ENABLE_RST                          (1 << 24)
+#define   TV_ENABLE_RST                                (1 << 25)
+#define   AUTO_PWRUP_EN                                (1 << 26)
 
 #endif
diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
index a5ff807..b2f9efe 100644
--- a/drivers/gpu/drm/radeon/r300.c
+++ b/drivers/gpu/drm/radeon/r300.c
@@ -27,8 +27,9 @@
  */
 #include <linux/seq_file.h>
 #include <linux/slab.h>
-#include "drmP.h"
-#include "drm.h"
+#include <drm/drmP.h>
+#include <drm/drm.h>
+#include <drm/drm_crtc_helper.h>
 #include "radeon_reg.h"
 #include "radeon.h"
 #include "radeon_asic.h"
@@ -151,6 +152,10 @@ void rv370_pcie_gart_disable(struct radeon_device *rdev)
 	u32 tmp;
 	int r;
 
+	WREG32_PCIE(RADEON_PCIE_TX_GART_START_LO, 0);
+	WREG32_PCIE(RADEON_PCIE_TX_GART_END_LO, 0);
+	WREG32_PCIE(RADEON_PCIE_TX_GART_START_HI, 0);
+	WREG32_PCIE(RADEON_PCIE_TX_GART_END_HI, 0);
 	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
 	tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
 	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp & ~RADEON_PCIE_TX_GART_EN);
@@ -323,7 +328,6 @@ void r300_gpu_init(struct radeon_device *rdev)
 {
 	uint32_t gb_tile_config, tmp;
 
-	r100_hdp_reset(rdev);
 	if ((rdev->family == CHIP_R300 && rdev->pdev->device != 0x4144) ||
 	    (rdev->family == CHIP_R350 && rdev->pdev->device != 0x4148)) {
 		/* r300,r350 */
@@ -375,89 +379,85 @@ void r300_gpu_init(struct radeon_device *rdev)
 		 rdev->num_gb_pipes, rdev->num_z_pipes);
 }
 
-int r300_ga_reset(struct radeon_device *rdev)
+bool r300_gpu_is_lockup(struct radeon_device *rdev)
 {
-	uint32_t tmp;
-	bool reinit_cp;
-	int i;
+	u32 rbbm_status;
+	int r;
 
-	reinit_cp = rdev->cp.ready;
-	rdev->cp.ready = false;
-	for (i = 0; i < rdev->usec_timeout; i++) {
-		WREG32(RADEON_CP_CSQ_MODE, 0);
-		WREG32(RADEON_CP_CSQ_CNTL, 0);
-		WREG32(RADEON_RBBM_SOFT_RESET, 0x32005);
-		(void)RREG32(RADEON_RBBM_SOFT_RESET);
-		udelay(200);
-		WREG32(RADEON_RBBM_SOFT_RESET, 0);
-		/* Wait to prevent race in RBBM_STATUS */
-		mdelay(1);
-		tmp = RREG32(RADEON_RBBM_STATUS);
-		if (tmp & ((1 << 20) | (1 << 26))) {
-			DRM_ERROR("VAP & CP still busy (RBBM_STATUS=0x%08X)", tmp);
-			/* GA still busy soft reset it */
-			WREG32(0x429C, 0x200);
-			WREG32(R300_VAP_PVS_STATE_FLUSH_REG, 0);
-			WREG32(R300_RE_SCISSORS_TL, 0);
-			WREG32(R300_RE_SCISSORS_BR, 0);
-			WREG32(0x24AC, 0);
-		}
-		/* Wait to prevent race in RBBM_STATUS */
-		mdelay(1);
-		tmp = RREG32(RADEON_RBBM_STATUS);
-		if (!(tmp & ((1 << 20) | (1 << 26)))) {
-			break;
-		}
+	rbbm_status = RREG32(R_000E40_RBBM_STATUS);
+	if (!G_000E40_GUI_ACTIVE(rbbm_status)) {
+		r100_gpu_lockup_update(&rdev->config.r300.lockup, &rdev->cp);
+		return false;
 	}
-	for (i = 0; i < rdev->usec_timeout; i++) {
-		tmp = RREG32(RADEON_RBBM_STATUS);
-		if (!(tmp & ((1 << 20) | (1 << 26)))) {
-			DRM_INFO("GA reset succeed (RBBM_STATUS=0x%08X)\n",
-				 tmp);
-			if (reinit_cp) {
-				return r100_cp_init(rdev, rdev->cp.ring_size);
-			}
-			return 0;
-		}
-		DRM_UDELAY(1);
+	/* force CP activities */
+	r = radeon_ring_lock(rdev, 2);
+	if (!r) {
+		/* PACKET2 NOP */
+		radeon_ring_write(rdev, 0x80000000);
+		radeon_ring_write(rdev, 0x80000000);
+		radeon_ring_unlock_commit(rdev);
 	}
-	tmp = RREG32(RADEON_RBBM_STATUS);
-	DRM_ERROR("Failed to reset GA ! (RBBM_STATUS=0x%08X)\n", tmp);
-	return -1;
+	rdev->cp.rptr = RREG32(RADEON_CP_RB_RPTR);
+	return r100_gpu_cp_is_lockup(rdev, &rdev->config.r300.lockup, &rdev->cp);
 }
 
-int r300_gpu_reset(struct radeon_device *rdev)
+int r300_asic_reset(struct radeon_device *rdev)
 {
-	uint32_t status;
-
-	/* reset order likely matter */
-	status = RREG32(RADEON_RBBM_STATUS);
-	/* reset HDP */
-	r100_hdp_reset(rdev);
-	/* reset rb2d */
-	if (status & ((1 << 17) | (1 << 18) | (1 << 27))) {
-		r100_rb2d_reset(rdev);
-	}
-	/* reset GA */
-	if (status & ((1 << 20) | (1 << 26))) {
-		r300_ga_reset(rdev);
-	}
-	/* reset CP */
-	status = RREG32(RADEON_RBBM_STATUS);
-	if (status & (1 << 16)) {
-		r100_cp_reset(rdev);
+	struct r100_mc_save save;
+	u32 status, tmp;
+
+	r100_mc_stop(rdev, &save);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	if (!G_000E40_GUI_ACTIVE(status)) {
+		return 0;
 	}
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* stop CP */
+	WREG32(RADEON_CP_CSQ_CNTL, 0);
+	tmp = RREG32(RADEON_CP_RB_CNTL);
+	WREG32(RADEON_CP_RB_CNTL, tmp | RADEON_RB_RPTR_WR_ENA);
+	WREG32(RADEON_CP_RB_RPTR_WR, 0);
+	WREG32(RADEON_CP_RB_WPTR, 0);
+	WREG32(RADEON_CP_RB_CNTL, tmp);
+	/* save PCI state */
+	pci_save_state(rdev->pdev);
+	/* disable bus mastering */
+	r100_bm_disable(rdev);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_VAP(1) |
+					S_0000F0_SOFT_RESET_GA(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* resetting the CP seems to be problematic: sometimes it ends up
+	 * hard-locking the computer, but it's necessary for a successful
+	 * reset. More testing and experimentation is needed on R3XX/R4XX
+	 * to find a reliable solution (if any exists)
+	 */
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_CP(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* restore PCI & busmastering */
+	pci_restore_state(rdev->pdev);
+	r100_enable_bm(rdev);
 	/* Check if GPU is idle */
-	status = RREG32(RADEON_RBBM_STATUS);
-	if (status & RADEON_RBBM_ACTIVE) {
-		DRM_ERROR("Failed to reset GPU (RBBM_STATUS=0x%08X)\n", status);
+	if (G_000E40_GA_BUSY(status) || G_000E40_VAP_BUSY(status)) {
+		dev_err(rdev->dev, "failed to reset GPU\n");
+		rdev->gpu_lockup = true;
 		return -1;
 	}
-	DRM_INFO("GPU reset succeed (RBBM_STATUS=0x%08X)\n", status);
+	r100_mc_resume(rdev, &save);
+	dev_info(rdev->dev, "GPU reset succeed\n");
 	return 0;
 }
 
-
 /*
  * r300,r350,rv350,rv380 VRAM info
  */
@@ -1316,7 +1316,7 @@ int r300_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	r300_clock_startup(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -1344,7 +1344,6 @@ int r300_suspend(struct radeon_device *rdev)
 
 void r300_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -1387,7 +1386,7 @@ int r300_init(struct radeon_device *rdev)
 			return r;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -1400,8 +1399,6 @@ int r300_init(struct radeon_device *rdev)
 	r300_errata(rdev);
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize AGP */
 	if (rdev->flags & RADEON_IS_AGP) {
 		r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r300d.h b/drivers/gpu/drm/radeon/r300d.h
index 4c73114..968a333 100644
--- a/drivers/gpu/drm/radeon/r300d.h
+++ b/drivers/gpu/drm/radeon/r300d.h
@@ -209,7 +209,52 @@
 #define   S_000E40_GUI_ACTIVE(x)                       (((x) & 0x1) << 31)
 #define   G_000E40_GUI_ACTIVE(x)                       (((x) >> 31) & 0x1)
 #define   C_000E40_GUI_ACTIVE                          0x7FFFFFFF
-
+#define R_0000F0_RBBM_SOFT_RESET                     0x0000F0
+#define   S_0000F0_SOFT_RESET_CP(x)                    (((x) & 0x1) << 0)
+#define   G_0000F0_SOFT_RESET_CP(x)                    (((x) >> 0) & 0x1)
+#define   C_0000F0_SOFT_RESET_CP                       0xFFFFFFFE
+#define   S_0000F0_SOFT_RESET_HI(x)                    (((x) & 0x1) << 1)
+#define   G_0000F0_SOFT_RESET_HI(x)                    (((x) >> 1) & 0x1)
+#define   C_0000F0_SOFT_RESET_HI                       0xFFFFFFFD
+#define   S_0000F0_SOFT_RESET_VAP(x)                   (((x) & 0x1) << 2)
+#define   G_0000F0_SOFT_RESET_VAP(x)                   (((x) >> 2) & 0x1)
+#define   C_0000F0_SOFT_RESET_VAP                      0xFFFFFFFB
+#define   S_0000F0_SOFT_RESET_RE(x)                    (((x) & 0x1) << 3)
+#define   G_0000F0_SOFT_RESET_RE(x)                    (((x) >> 3) & 0x1)
+#define   C_0000F0_SOFT_RESET_RE                       0xFFFFFFF7
+#define   S_0000F0_SOFT_RESET_PP(x)                    (((x) & 0x1) << 4)
+#define   G_0000F0_SOFT_RESET_PP(x)                    (((x) >> 4) & 0x1)
+#define   C_0000F0_SOFT_RESET_PP                       0xFFFFFFEF
+#define   S_0000F0_SOFT_RESET_E2(x)                    (((x) & 0x1) << 5)
+#define   G_0000F0_SOFT_RESET_E2(x)                    (((x) >> 5) & 0x1)
+#define   C_0000F0_SOFT_RESET_E2                       0xFFFFFFDF
+#define   S_0000F0_SOFT_RESET_RB(x)                    (((x) & 0x1) << 6)
+#define   G_0000F0_SOFT_RESET_RB(x)                    (((x) >> 6) & 0x1)
+#define   C_0000F0_SOFT_RESET_RB                       0xFFFFFFBF
+#define   S_0000F0_SOFT_RESET_HDP(x)                   (((x) & 0x1) << 7)
+#define   G_0000F0_SOFT_RESET_HDP(x)                   (((x) >> 7) & 0x1)
+#define   C_0000F0_SOFT_RESET_HDP                      0xFFFFFF7F
+#define   S_0000F0_SOFT_RESET_MC(x)                    (((x) & 0x1) << 8)
+#define   G_0000F0_SOFT_RESET_MC(x)                    (((x) >> 8) & 0x1)
+#define   C_0000F0_SOFT_RESET_MC                       0xFFFFFEFF
+#define   S_0000F0_SOFT_RESET_AIC(x)                   (((x) & 0x1) << 9)
+#define   G_0000F0_SOFT_RESET_AIC(x)                   (((x) >> 9) & 0x1)
+#define   C_0000F0_SOFT_RESET_AIC                      0xFFFFFDFF
+#define   S_0000F0_SOFT_RESET_VIP(x)                   (((x) & 0x1) << 10)
+#define   G_0000F0_SOFT_RESET_VIP(x)                   (((x) >> 10) & 0x1)
+#define   C_0000F0_SOFT_RESET_VIP                      0xFFFFFBFF
+#define   S_0000F0_SOFT_RESET_DISP(x)                  (((x) & 0x1) << 11)
+#define   G_0000F0_SOFT_RESET_DISP(x)                  (((x) >> 11) & 0x1)
+#define   C_0000F0_SOFT_RESET_DISP                     0xFFFFF7FF
+#define   S_0000F0_SOFT_RESET_CG(x)                    (((x) & 0x1) << 12)
+#define   G_0000F0_SOFT_RESET_CG(x)                    (((x) >> 12) & 0x1)
+#define   C_0000F0_SOFT_RESET_CG                       0xFFFFEFFF
+#define   S_0000F0_SOFT_RESET_GA(x)                    (((x) & 0x1) << 13)
+#define   G_0000F0_SOFT_RESET_GA(x)                    (((x) >> 13) & 0x1)
+#define   C_0000F0_SOFT_RESET_GA                       0xFFFFDFFF
+#define   S_0000F0_SOFT_RESET_IDCT(x)                  (((x) & 0x1) << 14)
+#define   G_0000F0_SOFT_RESET_IDCT(x)                  (((x) >> 14) & 0x1)
+#define   C_0000F0_SOFT_RESET_IDCT                     0xFFFFBFFF
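
Each register field added above comes as a triplet: S_ shifts a value into the field, G_ extracts it, and C_ is the AND-mask that clears it. A small self-contained check of the pattern, using the GA bit (13) as the example:

    #include <stdio.h>
    #include <stdint.h>

    /* same shift-and-mask helpers as in r300d.h */
    #define S_0000F0_SOFT_RESET_GA(x)   (((x) & 0x1) << 13)
    #define G_0000F0_SOFT_RESET_GA(x)   (((x) >> 13) & 0x1)
    #define C_0000F0_SOFT_RESET_GA      0xFFFFDFFF

    int main(void)
    {
        uint32_t reg = 0;

        reg |= S_0000F0_SOFT_RESET_GA(1);          /* assert GA soft reset */
        printf("reg=0x%08X GA=%u\n", (unsigned)reg, (unsigned)G_0000F0_SOFT_RESET_GA(reg));
        reg &= C_0000F0_SOFT_RESET_GA;             /* deassert */
        printf("reg=0x%08X GA=%u\n", (unsigned)reg, (unsigned)G_0000F0_SOFT_RESET_GA(reg));
        return 0;
    }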
 
 #define R_00000D_SCLK_CNTL                           0x00000D
 #define   S_00000D_SCLK_SRC_SEL(x)                     (((x) & 0x7) << 0)
diff --git a/drivers/gpu/drm/radeon/r420.c b/drivers/gpu/drm/radeon/r420.c
index c2bda4a..e6c8914 100644
--- a/drivers/gpu/drm/radeon/r420.c
+++ b/drivers/gpu/drm/radeon/r420.c
@@ -36,6 +36,45 @@
 #include "r420d.h"
 #include "r420_reg_safe.h"
 
+void r420_pm_init_profile(struct radeon_device *rdev)
+{
+	/* default */
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
+	/* low sh */
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+	/* mid sh */
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 1;
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
+	/* high sh */
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
+	/* low mh */
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+	/* mid mh */
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
+	/* high mh */
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+	rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
+}
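
The table filled in above maps each profile slot to a power state index and a clock mode index for the dpms-on and dpms-off cases. A hypothetical stand-alone sketch of that structure, with field and slot names mirroring the patch and purely illustrative values:

    #include <stdio.h>

    enum { PM_PROFILE_DEFAULT_IDX, PM_PROFILE_LOW_SH_IDX, PM_PROFILE_MID_SH_IDX,
           PM_PROFILE_HIGH_SH_IDX, PM_PROFILE_LOW_MH_IDX, PM_PROFILE_MID_MH_IDX,
           PM_PROFILE_HIGH_MH_IDX, PM_PROFILE_MAX };

    struct pm_profile {
        int dpms_off_ps_idx, dpms_on_ps_idx;   /* power state indices */
        int dpms_off_cm_idx, dpms_on_cm_idx;   /* clock mode indices */
    };

    int main(void)
    {
        struct pm_profile profiles[PM_PROFILE_MAX] = { {0} };
        int default_ps_idx = 0;                /* stand-in for default_power_state_index */

        /* mid sh on r420: lowest state when idle, state 1 when displays are on */
        profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 1;
        profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = default_ps_idx;

        printf("mid-sh dpms-on power state: %d\n",
               profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx);
        return 0;
    }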
+
 static void r420_set_reg_safe(struct radeon_device *rdev)
 {
 	rdev->config.r300.reg_safe_bm = r420_reg_safe_bm;
@@ -241,7 +280,7 @@ int r420_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	r420_clock_resume(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -274,7 +313,6 @@ int r420_suspend(struct radeon_device *rdev)
 
 void r420_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -322,7 +360,7 @@ int r420_init(struct radeon_device *rdev)
 		}
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -334,8 +372,6 @@ int r420_init(struct radeon_device *rdev)
 
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize AGP */
 	if (rdev->flags & RADEON_IS_AGP) {
 		r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r500_reg.h b/drivers/gpu/drm/radeon/r500_reg.h
index 0cf2ad2..93c9a2b 100644
--- a/drivers/gpu/drm/radeon/r500_reg.h
+++ b/drivers/gpu/drm/radeon/r500_reg.h
@@ -347,9 +347,11 @@
 
 #define AVIVO_D1CRTC_CONTROL                                    0x6080
 #       define AVIVO_CRTC_EN                                    (1 << 0)
+#       define AVIVO_CRTC_DISP_READ_REQUEST_DISABLE             (1 << 24)
 #define AVIVO_D1CRTC_BLANK_CONTROL                              0x6084
 #define AVIVO_D1CRTC_INTERLACE_CONTROL                          0x6088
 #define AVIVO_D1CRTC_INTERLACE_STATUS                           0x608c
+#define AVIVO_D1CRTC_STATUS_POSITION                            0x60a0
 #define AVIVO_D1CRTC_FRAME_COUNT                                0x60a4
 #define AVIVO_D1CRTC_STEREO_CONTROL                             0x60c4
 
@@ -488,6 +490,7 @@
 #define AVIVO_D2CRTC_BLANK_CONTROL                              0x6884
 #define AVIVO_D2CRTC_INTERLACE_CONTROL                          0x6888
 #define AVIVO_D2CRTC_INTERLACE_STATUS                           0x688c
+#define AVIVO_D2CRTC_STATUS_POSITION                            0x68a0
 #define AVIVO_D2CRTC_FRAME_COUNT                                0x68a4
 #define AVIVO_D2CRTC_STEREO_CONTROL                             0x68c4
 
diff --git a/drivers/gpu/drm/radeon/r520.c b/drivers/gpu/drm/radeon/r520.c
index 3c44b8d..34330df 100644
--- a/drivers/gpu/drm/radeon/r520.c
+++ b/drivers/gpu/drm/radeon/r520.c
@@ -53,7 +53,6 @@ static void r520_gpu_init(struct radeon_device *rdev)
 {
 	unsigned pipe_select_current, gb_pipe_select, tmp;
 
-	r100_hdp_reset(rdev);
 	rv515_vga_render_disable(rdev);
 	/*
 	 * DST_PIPE_CONFIG		0x170C
@@ -209,7 +208,7 @@ int r520_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	rv515_clock_startup(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -246,7 +245,7 @@ int r520_init(struct radeon_device *rdev)
 		return -EINVAL;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -262,8 +261,6 @@ int r520_init(struct radeon_device *rdev)
 	}
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize AGP */
 	if (rdev->flags & RADEON_IS_AGP) {
 		r = radeon_agp_init(rdev);
diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
index 8f3454e..0e91871 100644
--- a/drivers/gpu/drm/radeon/r600.c
+++ b/drivers/gpu/drm/radeon/r600.c
@@ -44,6 +44,9 @@
 #define R700_PFP_UCODE_SIZE 848
 #define R700_PM4_UCODE_SIZE 1360
 #define R700_RLC_UCODE_SIZE 1024
+#define EVERGREEN_PFP_UCODE_SIZE 1120
+#define EVERGREEN_PM4_UCODE_SIZE 1376
+#define EVERGREEN_RLC_UCODE_SIZE 768
 
 /* Firmware Names */
 MODULE_FIRMWARE("radeon/R600_pfp.bin");
@@ -68,6 +71,18 @@ MODULE_FIRMWARE("radeon/RV710_pfp.bin");
 MODULE_FIRMWARE("radeon/RV710_me.bin");
 MODULE_FIRMWARE("radeon/R600_rlc.bin");
 MODULE_FIRMWARE("radeon/R700_rlc.bin");
+MODULE_FIRMWARE("radeon/CEDAR_pfp.bin");
+MODULE_FIRMWARE("radeon/CEDAR_me.bin");
+MODULE_FIRMWARE("radeon/CEDAR_rlc.bin");
+MODULE_FIRMWARE("radeon/REDWOOD_pfp.bin");
+MODULE_FIRMWARE("radeon/REDWOOD_me.bin");
+MODULE_FIRMWARE("radeon/REDWOOD_rlc.bin");
+MODULE_FIRMWARE("radeon/JUNIPER_pfp.bin");
+MODULE_FIRMWARE("radeon/JUNIPER_me.bin");
+MODULE_FIRMWARE("radeon/JUNIPER_rlc.bin");
+MODULE_FIRMWARE("radeon/CYPRESS_pfp.bin");
+MODULE_FIRMWARE("radeon/CYPRESS_me.bin");
+MODULE_FIRMWARE("radeon/CYPRESS_rlc.bin");
 
 int r600_debugfs_mc_info_init(struct radeon_device *rdev);
 
@@ -75,6 +90,494 @@ int r600_debugfs_mc_info_init(struct radeon_device *rdev);
 int r600_mc_wait_for_idle(struct radeon_device *rdev);
 void r600_gpu_init(struct radeon_device *rdev);
 void r600_fini(struct radeon_device *rdev);
+void r600_irq_disable(struct radeon_device *rdev);
+
+void r600_pm_get_dynpm_state(struct radeon_device *rdev)
+{
+	int i;
+
+	rdev->pm.dynpm_can_upclock = true;
+	rdev->pm.dynpm_can_downclock = true;
+
+	/* power state array is low to high, default is first */
+	if ((rdev->flags & RADEON_IS_IGP) || (rdev->family == CHIP_R600)) {
+		int min_power_state_index = 0;
+
+		if (rdev->pm.num_power_states > 2)
+			min_power_state_index = 1;
+
+		switch (rdev->pm.dynpm_planned_action) {
+		case DYNPM_ACTION_MINIMUM:
+			rdev->pm.requested_power_state_index = min_power_state_index;
+			rdev->pm.requested_clock_mode_index = 0;
+			rdev->pm.dynpm_can_downclock = false;
+			break;
+		case DYNPM_ACTION_DOWNCLOCK:
+			if (rdev->pm.current_power_state_index == min_power_state_index) {
+				rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
+				rdev->pm.dynpm_can_downclock = false;
+			} else {
+				if (rdev->pm.active_crtc_count > 1) {
+					for (i = 0; i < rdev->pm.num_power_states; i++) {
+						if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
+							continue;
+						else if (i >= rdev->pm.current_power_state_index) {
+							rdev->pm.requested_power_state_index =
+								rdev->pm.current_power_state_index;
+							break;
+						} else {
+							rdev->pm.requested_power_state_index = i;
+							break;
+						}
+					}
+				} else
+					rdev->pm.requested_power_state_index =
+						rdev->pm.current_power_state_index - 1;
+			}
+			rdev->pm.requested_clock_mode_index = 0;
+			/* don't use the power state if crtcs are active and no display flag is set */
+			if ((rdev->pm.active_crtc_count > 0) &&
+			    (rdev->pm.power_state[rdev->pm.requested_power_state_index].
+			     clock_info[rdev->pm.requested_clock_mode_index].flags &
+			     RADEON_PM_MODE_NO_DISPLAY)) {
+				rdev->pm.requested_power_state_index++;
+			}
+			break;
+		case DYNPM_ACTION_UPCLOCK:
+			if (rdev->pm.current_power_state_index == (rdev->pm.num_power_states - 1)) {
+				rdev->pm.requested_power_state_index = rdev->pm.current_power_state_index;
+				rdev->pm.dynpm_can_upclock = false;
+			} else {
+				if (rdev->pm.active_crtc_count > 1) {
+					for (i = (rdev->pm.num_power_states - 1); i >= 0; i--) {
+						if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
+							continue;
+						else if (i <= rdev->pm.current_power_state_index) {
+							rdev->pm.requested_power_state_index =
+								rdev->pm.current_power_state_index;
+							break;
+						} else {
+							rdev->pm.requested_power_state_index = i;
+							break;
+						}
+					}
+				} else
+					rdev->pm.requested_power_state_index =
+						rdev->pm.current_power_state_index + 1;
+			}
+			rdev->pm.requested_clock_mode_index = 0;
+			break;
+		case DYNPM_ACTION_DEFAULT:
+			rdev->pm.requested_power_state_index = rdev->pm.default_power_state_index;
+			rdev->pm.requested_clock_mode_index = 0;
+			rdev->pm.dynpm_can_upclock = false;
+			break;
+		case DYNPM_ACTION_NONE:
+		default:
+			DRM_ERROR("Requested mode for not defined action\n");
+			return;
+		}
+	} else {
+		/* XXX select a power state based on AC/DC, single/dualhead, etc. */
+		/* for now just select the first power state and switch between clock modes */
+		/* power state array is low to high, default is first (0) */
+		if (rdev->pm.active_crtc_count > 1) {
+			rdev->pm.requested_power_state_index = -1;
+			/* start at 1 as we don't want the default mode */
+			for (i = 1; i < rdev->pm.num_power_states; i++) {
+				if (rdev->pm.power_state[i].flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
+					continue;
+				else if ((rdev->pm.power_state[i].type == POWER_STATE_TYPE_PERFORMANCE) ||
+					 (rdev->pm.power_state[i].type == POWER_STATE_TYPE_BATTERY)) {
+					rdev->pm.requested_power_state_index = i;
+					break;
+				}
+			}
+			/* if nothing selected, grab the default state. */
+			if (rdev->pm.requested_power_state_index == -1)
+				rdev->pm.requested_power_state_index = 0;
+		} else
+			rdev->pm.requested_power_state_index = 1;
+
+		switch (rdev->pm.dynpm_planned_action) {
+		case DYNPM_ACTION_MINIMUM:
+			rdev->pm.requested_clock_mode_index = 0;
+			rdev->pm.dynpm_can_downclock = false;
+			break;
+		case DYNPM_ACTION_DOWNCLOCK:
+			if (rdev->pm.requested_power_state_index == rdev->pm.current_power_state_index) {
+				if (rdev->pm.current_clock_mode_index == 0) {
+					rdev->pm.requested_clock_mode_index = 0;
+					rdev->pm.dynpm_can_downclock = false;
+				} else
+					rdev->pm.requested_clock_mode_index =
+						rdev->pm.current_clock_mode_index - 1;
+			} else {
+				rdev->pm.requested_clock_mode_index = 0;
+				rdev->pm.dynpm_can_downclock = false;
+			}
+			/* don't use the power state if crtcs are active and no display flag is set */
+			if ((rdev->pm.active_crtc_count > 0) &&
+			    (rdev->pm.power_state[rdev->pm.requested_power_state_index].
+			     clock_info[rdev->pm.requested_clock_mode_index].flags &
+			     RADEON_PM_MODE_NO_DISPLAY)) {
+				rdev->pm.requested_clock_mode_index++;
+			}
+			break;
+		case DYNPM_ACTION_UPCLOCK:
+			if (rdev->pm.requested_power_state_index == rdev->pm.current_power_state_index) {
+				if (rdev->pm.current_clock_mode_index ==
+				    (rdev->pm.power_state[rdev->pm.requested_power_state_index].num_clock_modes - 1)) {
+					rdev->pm.requested_clock_mode_index = rdev->pm.current_clock_mode_index;
+					rdev->pm.dynpm_can_upclock = false;
+				} else
+					rdev->pm.requested_clock_mode_index =
+						rdev->pm.current_clock_mode_index + 1;
+			} else {
+				rdev->pm.requested_clock_mode_index =
+					rdev->pm.power_state[rdev->pm.requested_power_state_index].num_clock_modes - 1;
+				rdev->pm.dynpm_can_upclock = false;
+			}
+			break;
+		case DYNPM_ACTION_DEFAULT:
+			rdev->pm.requested_power_state_index = rdev->pm.default_power_state_index;
+			rdev->pm.requested_clock_mode_index = 0;
+			rdev->pm.dynpm_can_upclock = false;
+			break;
+		case DYNPM_ACTION_NONE:
+		default:
+			DRM_ERROR("Requested mode for not defined action\n");
+			return;
+		}
+	}
+
+	DRM_DEBUG("Requested: e: %d m: %d p: %d\n",
+		  rdev->pm.power_state[rdev->pm.requested_power_state_index].
+		  clock_info[rdev->pm.requested_clock_mode_index].sclk,
+		  rdev->pm.power_state[rdev->pm.requested_power_state_index].
+		  clock_info[rdev->pm.requested_clock_mode_index].mclk,
+		  rdev->pm.power_state[rdev->pm.requested_power_state_index].
+		  pcie_lanes);
+}
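
Stripped of the SINGLE_DISPLAY_ONLY and NO_DISPLAY filtering, the UPCLOCK/DOWNCLOCK cases above reduce to a clamped index step that also reports whether further stepping is possible. A reduced sketch of just that core logic (an assumption-level simplification, not the driver code):

    #include <stdio.h>
    #include <stdbool.h>

    static int step_power_state(int cur, int num_states, bool up, bool *can_step)
    {
        *can_step = up ? (cur < num_states - 1) : (cur > 0);
        return *can_step ? (up ? cur + 1 : cur - 1) : cur;
    }

    int main(void)
    {
        bool can;
        int next = step_power_state(2, 3, true, &can);

        printf("next=%d can_step_further=%d\n", next, can); /* next=2, 0: already at top */
        return 0;
    }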
+
+static int r600_pm_get_type_index(struct radeon_device *rdev,
+				  enum radeon_pm_state_type ps_type,
+				  int instance)
+{
+	int i;
+	int found_instance = -1;
+
+	for (i = 0; i < rdev->pm.num_power_states; i++) {
+		if (rdev->pm.power_state[i].type == ps_type) {
+			found_instance++;
+			if (found_instance == instance)
+				return i;
+		}
+	}
+	/* return default if no match */
+	return rdev->pm.default_power_state_index;
+}
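
A stand-alone illustration of the same scan: return the index of the Nth power state of a given type, falling back to a default index when no match exists. The types and values here are made up for the demo:

    #include <stdio.h>

    enum ps_type { PS_BATTERY, PS_BALANCED, PS_PERFORMANCE };

    static int get_type_index(const enum ps_type *states, int n,
                              enum ps_type type, int instance, int def)
    {
        int found = -1, i;

        for (i = 0; i < n; i++) {
            if (states[i] == type && ++found == instance)
                return i;
        }
        return def;    /* no match: default index */
    }

    int main(void)
    {
        enum ps_type states[] = { PS_BATTERY, PS_PERFORMANCE, PS_BATTERY, PS_PERFORMANCE };

        printf("%d\n", get_type_index(states, 4, PS_PERFORMANCE, 1, 0)); /* -> 3 */
        printf("%d\n", get_type_index(states, 4, PS_BALANCED, 0, 0));    /* -> 0 (default) */
        return 0;
    }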
+
+void rs780_pm_init_profile(struct radeon_device *rdev)
+{
+	if (rdev->pm.num_power_states == 2) {
+		/* default */
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
+		/* low sh */
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+		/* mid sh */
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
+		/* high sh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
+		/* low mh */
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+		/* mid mh */
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
+		/* high mh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
+	} else if (rdev->pm.num_power_states == 3) {
+		/* default */
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
+		/* low sh */
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+		/* mid sh */
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
+		/* high sh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
+		/* low mh */
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+		/* mid mh */
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
+		/* high mh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 1;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
+	} else {
+		/* default */
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
+		/* low sh */
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+		/* mid sh */
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
+		/* high sh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 3;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
+		/* low mh */
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+		/* mid mh */
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
+		/* high mh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 2;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 3;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
+	}
+}
+
+void r600_pm_init_profile(struct radeon_device *rdev)
+{
+	if (rdev->family == CHIP_R600) {
+		/* XXX */
+		/* default */
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 0;
+		/* low sh */
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+		/* mid sh */
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 0;
+		/* high sh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 0;
+		/* low mh */
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+		/* mid mh */
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 0;
+		/* high mh */
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+		rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 0;
+	} else {
+		if (rdev->pm.num_power_states < 4) {
+			/* default */
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 2;
+			/* low sh */
+			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx = 1;
+			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx = 1;
+			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+			/* mid sh */
+			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx = 1;
+			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx = 1;
+			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1;
+			/* high sh */
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx = 1;
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx = 1;
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 2;
+			/* low mh */
+			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx = 2;
+			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx = 2;
+			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+			/* mid mh */
+			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx = 2;
+			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx = 2;
+			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1;
+			/* high mh */
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx = 2;
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx = 2;
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 2;
+		} else {
+			/* default */
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_ps_idx = rdev->pm.default_power_state_index;
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_ps_idx = rdev->pm.default_power_state_index;
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_DEFAULT_IDX].dpms_on_cm_idx = 2;
+			/* low sh */
+			if (rdev->flags & RADEON_IS_MOBILITY) {
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0);
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0);
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+			} else {
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_LOW_SH_IDX].dpms_on_cm_idx = 0;
+			}
+			/* mid sh */
+			if (rdev->flags & RADEON_IS_MOBILITY) {
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0);
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 0);
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1;
+			} else {
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_MID_SH_IDX].dpms_on_cm_idx = 1;
+			}
+			/* high sh */
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_ps_idx =
+				r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_ps_idx =
+				r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 0);
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_HIGH_SH_IDX].dpms_on_cm_idx = 2;
+			/* low mh */
+			if (rdev->flags & RADEON_IS_MOBILITY) {
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1);
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1);
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+			} else {
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_LOW_MH_IDX].dpms_on_cm_idx = 0;
+			}
+			/* mid mh */
+			if (rdev->flags & RADEON_IS_MOBILITY) {
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1);
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_BATTERY, 1);
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1;
+			} else {
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_ps_idx =
+					r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_off_cm_idx = 0;
+				rdev->pm.profiles[PM_PROFILE_MID_MH_IDX].dpms_on_cm_idx = 1;
+			}
+			/* high mh */
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_ps_idx =
+				r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_ps_idx =
+				r600_pm_get_type_index(rdev, POWER_STATE_TYPE_PERFORMANCE, 1);
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_off_cm_idx = 0;
+			rdev->pm.profiles[PM_PROFILE_HIGH_MH_IDX].dpms_on_cm_idx = 2;
+		}
+	}
+}
+
+void r600_pm_misc(struct radeon_device *rdev)
+{
+	int req_ps_idx = rdev->pm.requested_power_state_index;
+	int req_cm_idx = rdev->pm.requested_clock_mode_index;
+	struct radeon_power_state *ps = &rdev->pm.power_state[req_ps_idx];
+	struct radeon_voltage *voltage = &ps->clock_info[req_cm_idx].voltage;
+
+	if ((voltage->type == VOLTAGE_SW) && voltage->voltage) {
+		if (voltage->voltage != rdev->pm.current_vddc) {
+			radeon_atom_set_voltage(rdev, voltage->voltage);
+			rdev->pm.current_vddc = voltage->voltage;
+			DRM_DEBUG("Setting: v: %d\n", voltage->voltage);
+		}
+	}
+}
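
r600_pm_misc only issues a voltage change when the requested software-controlled VDDC differs from the cached value. The same cache-and-set shape as a runnable stub (set_voltage is a placeholder here, not the ATOM call):

    #include <stdio.h>

    static int current_vddc;

    static void set_voltage(int mv)        /* placeholder for radeon_atom_set_voltage */
    {
        printf("setting VDDC to %d\n", mv);
    }

    static void pm_misc(int requested_vddc)
    {
        if (requested_vddc && requested_vddc != current_vddc) {
            set_voltage(requested_vddc);
            current_vddc = requested_vddc;
        }
    }

    int main(void)
    {
        pm_misc(1100);    /* programs the regulator */
        pm_misc(1100);    /* cached: no-op */
        return 0;
    }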
+
+bool r600_gui_idle(struct radeon_device *rdev)
+{
+	if (RREG32(GRBM_STATUS) & GUI_ACTIVE)
+		return false;
+	else
+		return true;
+}
 
 /* hpd for digital panel detect/disconnect */
 bool r600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd)
@@ -714,11 +1217,6 @@ int r600_mc_init(struct radeon_device *rdev)
 	rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE);
 	rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE);
 	rdev->mc.visible_vram_size = rdev->mc.aper_size;
-	/* FIXME remove this once we support unmappable VRAM */
-	if (rdev->mc.mc_vram_size > rdev->mc.aper_size) {
-		rdev->mc.mc_vram_size = rdev->mc.aper_size;
-		rdev->mc.real_vram_size = rdev->mc.aper_size;
-	}
 	r600_vram_gtt_location(rdev, &rdev->mc);
 
 	if (rdev->flags & RADEON_IS_IGP)
@@ -750,7 +1248,6 @@ int r600_gpu_soft_reset(struct radeon_device *rdev)
 			S_008014_DB2_BUSY(1) | S_008014_DB3_BUSY(1) |
 			S_008014_CB0_BUSY(1) | S_008014_CB1_BUSY(1) |
 			S_008014_CB2_BUSY(1) | S_008014_CB3_BUSY(1);
-	u32 srbm_reset = 0;
 	u32 tmp;
 
 	dev_info(rdev->dev, "GPU softreset \n");
@@ -765,7 +1262,7 @@ int r600_gpu_soft_reset(struct radeon_device *rdev)
 		dev_warn(rdev->dev, "Wait for MC idle timedout !\n");
 	}
 	/* Disable CP parsing/prefetching */
-	WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(0xff));
+	WREG32(R_0086D8_CP_ME_CNTL, S_0086D8_CP_ME_HALT(1));
 	/* Check if any of the rendering block is busy and reset it */
 	if ((RREG32(R_008010_GRBM_STATUS) & grbm_busy_mask) ||
 	    (RREG32(R_008014_GRBM_STATUS2) & grbm2_busy_mask)) {
@@ -784,72 +1281,56 @@ int r600_gpu_soft_reset(struct radeon_device *rdev)
 			S_008020_SOFT_RESET_VGT(1);
 		dev_info(rdev->dev, "  R_008020_GRBM_SOFT_RESET=0x%08X\n", tmp);
 		WREG32(R_008020_GRBM_SOFT_RESET, tmp);
-		(void)RREG32(R_008020_GRBM_SOFT_RESET);
-		udelay(50);
+		RREG32(R_008020_GRBM_SOFT_RESET);
+		mdelay(15);
 		WREG32(R_008020_GRBM_SOFT_RESET, 0);
-		(void)RREG32(R_008020_GRBM_SOFT_RESET);
 	}
 	/* Reset CP (we always reset CP) */
 	tmp = S_008020_SOFT_RESET_CP(1);
 	dev_info(rdev->dev, "R_008020_GRBM_SOFT_RESET=0x%08X\n", tmp);
 	WREG32(R_008020_GRBM_SOFT_RESET, tmp);
-	(void)RREG32(R_008020_GRBM_SOFT_RESET);
-	udelay(50);
+	RREG32(R_008020_GRBM_SOFT_RESET);
+	mdelay(15);
 	WREG32(R_008020_GRBM_SOFT_RESET, 0);
-	(void)RREG32(R_008020_GRBM_SOFT_RESET);
-	/* Reset others GPU block if necessary */
-	if (G_000E50_RLC_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_RLC(1);
-	if (G_000E50_GRBM_RQ_PENDING(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_GRBM(1);
-	if (G_000E50_HI_RQ_PENDING(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_IH(1);
-	if (G_000E50_VMC_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_VMC(1);
-	if (G_000E50_MCB_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_MC(1);
-	if (G_000E50_MCDZ_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_MC(1);
-	if (G_000E50_MCDY_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_MC(1);
-	if (G_000E50_MCDX_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_MC(1);
-	if (G_000E50_MCDW_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_MC(1);
-	if (G_000E50_RLC_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_RLC(1);
-	if (G_000E50_SEM_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_SEM(1);
-	if (G_000E50_BIF_BUSY(RREG32(R_000E50_SRBM_STATUS)))
-		srbm_reset |= S_000E60_SOFT_RESET_BIF(1);
-	dev_info(rdev->dev, "  R_000E60_SRBM_SOFT_RESET=0x%08X\n", srbm_reset);
-	WREG32(R_000E60_SRBM_SOFT_RESET, srbm_reset);
-	(void)RREG32(R_000E60_SRBM_SOFT_RESET);
-	udelay(50);
-	WREG32(R_000E60_SRBM_SOFT_RESET, 0);
-	(void)RREG32(R_000E60_SRBM_SOFT_RESET);
-	WREG32(R_000E60_SRBM_SOFT_RESET, srbm_reset);
-	(void)RREG32(R_000E60_SRBM_SOFT_RESET);
-	udelay(50);
-	WREG32(R_000E60_SRBM_SOFT_RESET, 0);
-	(void)RREG32(R_000E60_SRBM_SOFT_RESET);
 	/* Wait a little for things to settle down */
-	udelay(50);
+	mdelay(1);
 	dev_info(rdev->dev, "  R_008010_GRBM_STATUS=0x%08X\n",
 		RREG32(R_008010_GRBM_STATUS));
 	dev_info(rdev->dev, "  R_008014_GRBM_STATUS2=0x%08X\n",
 		RREG32(R_008014_GRBM_STATUS2));
 	dev_info(rdev->dev, "  R_000E50_SRBM_STATUS=0x%08X\n",
 		RREG32(R_000E50_SRBM_STATUS));
-	/* After reset we need to reinit the asic as GPU often endup in an
-	 * incoherent state.
-	 */
-	atom_asic_init(rdev->mode_info.atom_context);
 	rv515_mc_resume(rdev, &save);
 	return 0;
 }
 
-int r600_gpu_reset(struct radeon_device *rdev)
+bool r600_gpu_is_lockup(struct radeon_device *rdev)
+{
+	u32 srbm_status;
+	u32 grbm_status;
+	u32 grbm_status2;
+	int r;
+
+	srbm_status = RREG32(R_000E50_SRBM_STATUS);
+	grbm_status = RREG32(R_008010_GRBM_STATUS);
+	grbm_status2 = RREG32(R_008014_GRBM_STATUS2);
+	if (!G_008010_GUI_ACTIVE(grbm_status)) {
+		r100_gpu_lockup_update(&rdev->config.r300.lockup, &rdev->cp);
+		return false;
+	}
+	/* force CP activities */
+	r = radeon_ring_lock(rdev, 2);
+	if (!r) {
+		/* PACKET2 NOP */
+		radeon_ring_write(rdev, 0x80000000);
+		radeon_ring_write(rdev, 0x80000000);
+		radeon_ring_unlock_commit(rdev);
+	}
+	rdev->cp.rptr = RREG32(R600_CP_RB_RPTR);
+	return r100_gpu_cp_is_lockup(rdev, &rdev->config.r300.lockup, &rdev->cp);
+}
+
+int r600_asic_reset(struct radeon_device *rdev)
 {
 	return r600_gpu_soft_reset(rdev);
 }
@@ -1467,10 +1948,31 @@ int r600_init_microcode(struct radeon_device *rdev)
 		chip_name = "RV710";
 		rlc_chip_name = "R700";
 		break;
+	case CHIP_CEDAR:
+		chip_name = "CEDAR";
+		rlc_chip_name = "CEDAR";
+		break;
+	case CHIP_REDWOOD:
+		chip_name = "REDWOOD";
+		rlc_chip_name = "REDWOOD";
+		break;
+	case CHIP_JUNIPER:
+		chip_name = "JUNIPER";
+		rlc_chip_name = "JUNIPER";
+		break;
+	case CHIP_CYPRESS:
+	case CHIP_HEMLOCK:
+		chip_name = "CYPRESS";
+		rlc_chip_name = "CYPRESS";
+		break;
 	default: BUG();
 	}
 
-	if (rdev->family >= CHIP_RV770) {
+	if (rdev->family >= CHIP_CEDAR) {
+		pfp_req_size = EVERGREEN_PFP_UCODE_SIZE * 4;
+		me_req_size = EVERGREEN_PM4_UCODE_SIZE * 4;
+		rlc_req_size = EVERGREEN_RLC_UCODE_SIZE * 4;
+	} else if (rdev->family >= CHIP_RV770) {
 		pfp_req_size = R700_PFP_UCODE_SIZE * 4;
 		me_req_size = R700_PM4_UCODE_SIZE * 4;
 		rlc_req_size = R700_RLC_UCODE_SIZE * 4;
@@ -1584,12 +2086,15 @@ int r600_cp_start(struct radeon_device *rdev)
 	}
 	radeon_ring_write(rdev, PACKET3(PACKET3_ME_INITIALIZE, 5));
 	radeon_ring_write(rdev, 0x1);
-	if (rdev->family < CHIP_RV770) {
-		radeon_ring_write(rdev, 0x3);
-		radeon_ring_write(rdev, rdev->config.r600.max_hw_contexts - 1);
-	} else {
+	if (rdev->family >= CHIP_CEDAR) {
+		radeon_ring_write(rdev, 0x0);
+		radeon_ring_write(rdev, rdev->config.evergreen.max_hw_contexts - 1);
+	} else if (rdev->family >= CHIP_RV770) {
 		radeon_ring_write(rdev, 0x0);
 		radeon_ring_write(rdev, rdev->config.rv770.max_hw_contexts - 1);
+	} else {
+		radeon_ring_write(rdev, 0x3);
+		radeon_ring_write(rdev, rdev->config.r600.max_hw_contexts - 1);
 	}
 	radeon_ring_write(rdev, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
 	radeon_ring_write(rdev, 0);
@@ -2051,8 +2556,6 @@ int r600_init(struct radeon_device *rdev)
 	r = radeon_clocks_init(rdev);
 	if (r)
 		return r;
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* Fence driver */
 	r = radeon_fence_driver_init(rdev);
 	if (r)
@@ -2117,7 +2620,6 @@ int r600_init(struct radeon_device *rdev)
 
 void r600_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r600_audio_fini(rdev);
 	r600_blit_fini(rdev);
 	r600_cp_fini(rdev);
@@ -2290,10 +2792,11 @@ static void r600_ih_ring_fini(struct radeon_device *rdev)
 	}
 }
 
-static void r600_rlc_stop(struct radeon_device *rdev)
+void r600_rlc_stop(struct radeon_device *rdev)
 {
 
-	if (rdev->family >= CHIP_RV770) {
+	if ((rdev->family >= CHIP_RV770) &&
+	    (rdev->family <= CHIP_RV740)) {
 		/* r7xx asics need to soft reset RLC before halting */
 		WREG32(SRBM_SOFT_RESET, SOFT_RESET_RLC);
 		RREG32(SRBM_SOFT_RESET);
@@ -2330,7 +2833,12 @@ static int r600_rlc_init(struct radeon_device *rdev)
 	WREG32(RLC_UCODE_CNTL, 0);
 
 	fw_data = (const __be32 *)rdev->rlc_fw->data;
-	if (rdev->family >= CHIP_RV770) {
+	if (rdev->family >= CHIP_CEDAR) {
+		for (i = 0; i < EVERGREEN_RLC_UCODE_SIZE; i++) {
+			WREG32(RLC_UCODE_ADDR, i);
+			WREG32(RLC_UCODE_DATA, be32_to_cpup(fw_data++));
+		}
+	} else if (rdev->family >= CHIP_RV770) {
 		for (i = 0; i < R700_RLC_UCODE_SIZE; i++) {
 			WREG32(RLC_UCODE_ADDR, i);
 			WREG32(RLC_UCODE_DATA, be32_to_cpup(fw_data++));
@@ -2360,7 +2868,7 @@ static void r600_enable_interrupts(struct radeon_device *rdev)
 	rdev->ih.enabled = true;
 }
 
-static void r600_disable_interrupts(struct radeon_device *rdev)
+void r600_disable_interrupts(struct radeon_device *rdev)
 {
 	u32 ih_rb_cntl = RREG32(IH_RB_CNTL);
 	u32 ih_cntl = RREG32(IH_CNTL);
@@ -2475,7 +2983,10 @@ int r600_irq_init(struct radeon_device *rdev)
 	WREG32(IH_CNTL, ih_cntl);
 
 	/* force the active interrupt state to all disabled */
-	r600_disable_interrupt_state(rdev);
+	if (rdev->family >= CHIP_CEDAR)
+		evergreen_disable_interrupt_state(rdev);
+	else
+		r600_disable_interrupt_state(rdev);
 
 	/* enable irqs */
 	r600_enable_interrupts(rdev);
@@ -2485,7 +2996,7 @@ int r600_irq_init(struct radeon_device *rdev)
 
 void r600_irq_suspend(struct radeon_device *rdev)
 {
-	r600_disable_interrupts(rdev);
+	r600_irq_disable(rdev);
 	r600_rlc_stop(rdev);
 }
 
@@ -2500,6 +3011,8 @@ int r600_irq_set(struct radeon_device *rdev)
 	u32 cp_int_cntl = CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE;
 	u32 mode_int = 0;
 	u32 hpd1, hpd2, hpd3, hpd4 = 0, hpd5 = 0, hpd6 = 0;
+	u32 grbm_int_cntl = 0;
+	u32 hdmi1, hdmi2;
 
 	if (!rdev->irq.installed) {
 		WARN(1, "Can't enable IRQ/MSI because no handler is installed.\n");
@@ -2513,7 +3026,9 @@ int r600_irq_set(struct radeon_device *rdev)
 		return 0;
 	}
 
+	hdmi1 = RREG32(R600_HDMI_BLOCK1 + R600_HDMI_CNTL) & ~R600_HDMI_INT_EN;
 	if (ASIC_IS_DCE3(rdev)) {
+		hdmi2 = RREG32(R600_HDMI_BLOCK3 + R600_HDMI_CNTL) & ~R600_HDMI_INT_EN;
 		hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
 		hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
 		hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
@@ -2523,6 +3038,7 @@ int r600_irq_set(struct radeon_device *rdev)
 			hpd6 = RREG32(DC_HPD6_INT_CONTROL) & ~DC_HPDx_INT_EN;
 		}
 	} else {
+		hdmi2 = RREG32(R600_HDMI_BLOCK2 + R600_HDMI_CNTL) & ~R600_HDMI_INT_EN;
 		hpd1 = RREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL) & ~DC_HPDx_INT_EN;
 		hpd2 = RREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL) & ~DC_HPDx_INT_EN;
 		hpd3 = RREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL) & ~DC_HPDx_INT_EN;
@@ -2564,10 +3080,25 @@ int r600_irq_set(struct radeon_device *rdev)
 		DRM_DEBUG("r600_irq_set: hpd 6\n");
 		hpd6 |= DC_HPDx_INT_EN;
 	}
+	if (rdev->irq.hdmi[0]) {
+		DRM_DEBUG("r600_irq_set: hdmi 1\n");
+		hdmi1 |= R600_HDMI_INT_EN;
+	}
+	if (rdev->irq.hdmi[1]) {
+		DRM_DEBUG("r600_irq_set: hdmi 2\n");
+		hdmi2 |= R600_HDMI_INT_EN;
+	}
+	if (rdev->irq.gui_idle) {
+		DRM_DEBUG("gui idle\n");
+		grbm_int_cntl |= GUI_IDLE_INT_ENABLE;
+	}
 
 	WREG32(CP_INT_CNTL, cp_int_cntl);
 	WREG32(DxMODE_INT_MASK, mode_int);
+	WREG32(GRBM_INT_CNTL, grbm_int_cntl);
+	WREG32(R600_HDMI_BLOCK1 + R600_HDMI_CNTL, hdmi1);
 	if (ASIC_IS_DCE3(rdev)) {
+		WREG32(R600_HDMI_BLOCK3 + R600_HDMI_CNTL, hdmi2);
 		WREG32(DC_HPD1_INT_CONTROL, hpd1);
 		WREG32(DC_HPD2_INT_CONTROL, hpd2);
 		WREG32(DC_HPD3_INT_CONTROL, hpd3);
@@ -2577,6 +3108,7 @@ int r600_irq_set(struct radeon_device *rdev)
 			WREG32(DC_HPD6_INT_CONTROL, hpd6);
 		}
 	} else {
+		WREG32(R600_HDMI_BLOCK2 + R600_HDMI_CNTL, hdmi2);
 		WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, hpd1);
 		WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, hpd2);
 		WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, hpd3);
@@ -2660,6 +3192,18 @@ static inline void r600_irq_ack(struct radeon_device *rdev,
 			WREG32(DC_HPD6_INT_CONTROL, tmp);
 		}
 	}
+	if (RREG32(R600_HDMI_BLOCK1 + R600_HDMI_STATUS) & R600_HDMI_INT_PENDING) {
+		WREG32_P(R600_HDMI_BLOCK1 + R600_HDMI_CNTL, R600_HDMI_INT_ACK, ~R600_HDMI_INT_ACK);
+	}
+	if (ASIC_IS_DCE3(rdev)) {
+		if (RREG32(R600_HDMI_BLOCK3 + R600_HDMI_STATUS) & R600_HDMI_INT_PENDING) {
+			WREG32_P(R600_HDMI_BLOCK3 + R600_HDMI_CNTL, R600_HDMI_INT_ACK, ~R600_HDMI_INT_ACK);
+		}
+	} else {
+		if (RREG32(R600_HDMI_BLOCK2 + R600_HDMI_STATUS) & R600_HDMI_INT_PENDING) {
+			WREG32_P(R600_HDMI_BLOCK2 + R600_HDMI_CNTL, R600_HDMI_INT_ACK, ~R600_HDMI_INT_ACK);
+		}
+	}
 }
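
The acknowledge pattern added above is a read-modify-write: WREG32_P keeps the register bits selected by the mask and merges in the rest from the value, so passing ~R600_HDMI_INT_ACK preserves everything except the ACK bit. A stand-alone model of that helper, assuming the usual radeon semantics where the mask names the bits to keep:

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t reg;

    static void wreg32_p(uint32_t v, uint32_t mask)
    {
        reg = (reg & mask) | (v & ~mask);    /* keep 'mask' bits, merge the rest from v */
    }

    #define R600_HDMI_INT_ACK (1u << 29)

    int main(void)
    {
        reg = 0x0000ffff;                    /* arbitrary prior contents */
        wreg32_p(R600_HDMI_INT_ACK, ~R600_HDMI_INT_ACK);
        printf("reg=0x%08X\n", (unsigned)reg); /* 0x2000FFFF: only the ACK bit changed */
        return 0;
    }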
 
 void r600_irq_disable(struct radeon_device *rdev)
@@ -2713,6 +3257,8 @@ static inline u32 r600_get_ih_wptr(struct radeon_device *rdev)
  *     19         1  FP Hot plug detection B
  *     19         2  DAC A auto-detection
  *     19         3  DAC B auto-detection
+ *     21         4  HDMI block A
+ *     21         5  HDMI block B
  *    176         -  CP_INT RB
  *    177         -  CP_INT IB1
  *    178         -  CP_INT IB2
@@ -2852,6 +3398,10 @@ restart_ih:
 				break;
 			}
 			break;
+		case 21: /* HDMI */
+			DRM_DEBUG("IH: HDMI: 0x%x\n", src_data);
+			r600_audio_schedule_polling(rdev);
+			break;
 		case 176: /* CP_INT in ring buffer */
 		case 177: /* CP_INT in IB1 */
 		case 178: /* CP_INT in IB2 */
@@ -2861,6 +3411,11 @@ restart_ih:
 		case 181: /* CP EOP event */
 			DRM_DEBUG("IH: CP EOP\n");
 			break;
+		case 233: /* GUI IDLE */
+			DRM_DEBUG("IH: CP EOP\n");
+			rdev->pm.gui_idle = true;
+			wake_up(&rdev->irq.idle_queue);
+			break;
 		default:
 			DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
 			break;
diff --git a/drivers/gpu/drm/radeon/r600_audio.c b/drivers/gpu/drm/radeon/r600_audio.c
index 1d89805..2b26553 100644
--- a/drivers/gpu/drm/radeon/r600_audio.c
+++ b/drivers/gpu/drm/radeon/r600_audio.c
@@ -44,7 +44,7 @@ static int r600_audio_chipset_supported(struct radeon_device *rdev)
 /*
  * current number of channels
  */
-static int r600_audio_channels(struct radeon_device *rdev)
+int r600_audio_channels(struct radeon_device *rdev)
 {
 	return (RREG32(R600_AUDIO_RATE_BPS_CHANNEL) & 0x7) + 1;
 }
@@ -52,7 +52,7 @@ static int r600_audio_channels(struct radeon_device *rdev)
 /*
  * current bits per sample
  */
-static int r600_audio_bits_per_sample(struct radeon_device *rdev)
+int r600_audio_bits_per_sample(struct radeon_device *rdev)
 {
 	uint32_t value = (RREG32(R600_AUDIO_RATE_BPS_CHANNEL) & 0xF0) >> 4;
 	switch (value) {
@@ -71,7 +71,7 @@ static int r600_audio_bits_per_sample(struct radeon_device *rdev)
 /*
  * current sampling rate in HZ
  */
-static int r600_audio_rate(struct radeon_device *rdev)
+int r600_audio_rate(struct radeon_device *rdev)
 {
 	uint32_t value = RREG32(R600_AUDIO_RATE_BPS_CHANNEL);
 	uint32_t result;
@@ -90,7 +90,7 @@ static int r600_audio_rate(struct radeon_device *rdev)
 /*
  * iec 60958 status bits
  */
-static uint8_t r600_audio_status_bits(struct radeon_device *rdev)
+uint8_t r600_audio_status_bits(struct radeon_device *rdev)
 {
 	return RREG32(R600_AUDIO_STATUS_BITS) & 0xff;
 }
@@ -98,12 +98,21 @@ static uint8_t r600_audio_status_bits(struct radeon_device *rdev)
 /*
  * iec 60958 category code
  */
-static uint8_t r600_audio_category_code(struct radeon_device *rdev)
+uint8_t r600_audio_category_code(struct radeon_device *rdev)
 {
 	return (RREG32(R600_AUDIO_STATUS_BITS) >> 8) & 0xff;
 }
 
 /*
+ * schedule next audio update event
+ */
+void r600_audio_schedule_polling(struct radeon_device *rdev)
+{
+	mod_timer(&rdev->audio_timer,
+		jiffies + msecs_to_jiffies(AUDIO_TIMER_INTERVALL));
+}
+
+/*
  * update all hdmi interfaces with current audio parameters
  */
 static void r600_audio_update_hdmi(unsigned long param)
@@ -118,7 +127,7 @@ static void r600_audio_update_hdmi(unsigned long param)
 	uint8_t category_code = r600_audio_category_code(rdev);
 
 	struct drm_encoder *encoder;
-	int changes = 0;
+	int changes = 0, still_going = 0;
 
 	changes |= channels != rdev->audio_channels;
 	changes |= rate != rdev->audio_rate;
@@ -135,15 +144,13 @@ static void r600_audio_update_hdmi(unsigned long param)
 	}
 
 	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+		still_going |= radeon_encoder->audio_polling_active;
 		if (changes || r600_hdmi_buffer_status_changed(encoder))
-			r600_hdmi_update_audio_settings(
-				encoder, channels,
-				rate, bps, status_bits,
-				category_code);
+			r600_hdmi_update_audio_settings(encoder);
 	}
 
-	mod_timer(&rdev->audio_timer,
-		jiffies + msecs_to_jiffies(AUDIO_TIMER_INTERVALL));
+	if (still_going)
+		r600_audio_schedule_polling(rdev);
 }
 
 /*
@@ -176,9 +183,34 @@ int r600_audio_init(struct radeon_device *rdev)
 		r600_audio_update_hdmi,
 		(unsigned long)rdev);
 
+	return 0;
+}
+
+/*
+ * enable the polling timer, to check for status changes
+ */
+void r600_audio_enable_polling(struct drm_encoder *encoder)
+{
+	struct drm_device *dev = encoder->dev;
+	struct radeon_device *rdev = dev->dev_private;
+	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+
+	DRM_DEBUG("r600_audio_enable_polling: %d", radeon_encoder->audio_polling_active);
+	if (radeon_encoder->audio_polling_active)
+		return;
+
+	radeon_encoder->audio_polling_active = 1;
 	mod_timer(&rdev->audio_timer, jiffies + 1);
+}
 
-	return 0;
+/*
+ * disable the polling timer, so we get no more status updates
+ */
+void r600_audio_disable_polling(struct drm_encoder *encoder)
+{
+	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+	DRM_DEBUG("r600_audio_disable_polling: %d", radeon_encoder->audio_polling_active);
+	radeon_encoder->audio_polling_active = 0;
 }
 
 /*
diff --git a/drivers/gpu/drm/radeon/r600_blit_kms.c b/drivers/gpu/drm/radeon/r600_blit_kms.c
index f6c6c77..d13622a 100644
--- a/drivers/gpu/drm/radeon/r600_blit_kms.c
+++ b/drivers/gpu/drm/radeon/r600_blit_kms.c
@@ -447,6 +447,9 @@ int r600_blit_init(struct radeon_device *rdev)
 	u32 packet2s[16];
 	int num_packet2s = 0;
 
+	/* don't reinitialize blit */
+	if (rdev->r600_blit.shader_obj)
+		return 0;
 	mutex_init(&rdev->r600_blit.mutex);
 	rdev->r600_blit.state_offset = 0;
 
diff --git a/drivers/gpu/drm/radeon/r600_hdmi.c b/drivers/gpu/drm/radeon/r600_hdmi.c
index 2616b82..26b4bc9 100644
--- a/drivers/gpu/drm/radeon/r600_hdmi.c
+++ b/drivers/gpu/drm/radeon/r600_hdmi.c
@@ -290,17 +290,15 @@ void r600_hdmi_audio_workaround(struct drm_encoder *encoder)
 	if (!offset)
 		return;
 
-	if (r600_hdmi_is_audio_buffer_filled(encoder)) {
-		/* disable audio workaround and start delivering of audio frames */
-		WREG32_P(offset+R600_HDMI_CNTL, 0x00000001, ~0x00001001);
+	if (!radeon_encoder->hdmi_audio_workaround ||
+		r600_hdmi_is_audio_buffer_filled(encoder)) {
 
-	} else if (radeon_encoder->hdmi_audio_workaround) {
-		/* enable audio workaround and start delivering of audio frames */
-		WREG32_P(offset+R600_HDMI_CNTL, 0x00001001, ~0x00001001);
+		/* disable audio workaround */
+		WREG32_P(offset+R600_HDMI_CNTL, 0x00000001, ~0x00001001);
 
 	} else {
-		/* disable audio workaround and stop delivering of audio frames */
-		WREG32_P(offset+R600_HDMI_CNTL, 0x00000000, ~0x00001001);
+		/* enable audio workaround */
+		WREG32_P(offset+R600_HDMI_CNTL, 0x00001001, ~0x00001001);
 	}
 }
 
@@ -345,25 +343,23 @@ void r600_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mod
 
 	/* audio packets per line, does anyone know how to calc this ? */
 	WREG32_P(offset+R600_HDMI_CNTL, 0x00040000, ~0x001F0000);
-
-	/* update? reset? don't realy know */
-	WREG32_P(offset+R600_HDMI_CNTL, 0x14000000, ~0x14000000);
 }
 
 /*
  * update settings with current parameters from audio engine
  */
-void r600_hdmi_update_audio_settings(struct drm_encoder *encoder,
-				     int channels,
-				     int rate,
-				     int bps,
-				     uint8_t status_bits,
-				     uint8_t category_code)
+void r600_hdmi_update_audio_settings(struct drm_encoder *encoder)
 {
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	uint32_t offset = to_radeon_encoder(encoder)->hdmi_offset;
 
+	int channels = r600_audio_channels(rdev);
+	int rate = r600_audio_rate(rdev);
+	int bps = r600_audio_bits_per_sample(rdev);
+	uint8_t status_bits = r600_audio_status_bits(rdev);
+	uint8_t category_code = r600_audio_category_code(rdev);
+
 	uint32_t iec;
 
 	if (!offset)
@@ -415,9 +411,6 @@ void r600_hdmi_update_audio_settings(struct drm_encoder *encoder,
 	r600_hdmi_audioinfoframe(encoder, channels-1, 0, 0, 0, 0, 0, 0, 0);
 
 	r600_hdmi_audio_workaround(encoder);
-
-	/* update? reset? don't realy know */
-	WREG32_P(offset+R600_HDMI_CNTL, 0x04000000, ~0x04000000);
 }
 
 static int r600_hdmi_find_free_block(struct drm_device *dev)
@@ -486,6 +479,7 @@ void r600_hdmi_enable(struct drm_encoder *encoder)
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+	uint32_t offset;
 
 	if (ASIC_IS_DCE4(rdev))
 		return;
@@ -499,10 +493,10 @@ void r600_hdmi_enable(struct drm_encoder *encoder)
 		}
 	}
 
+	offset = radeon_encoder->hdmi_offset;
 	if (ASIC_IS_DCE32(rdev) && !ASIC_IS_DCE4(rdev)) {
 		WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0x1, ~0x1);
 	} else if (rdev->family >= CHIP_R600 && !ASIC_IS_DCE3(rdev)) {
-		int offset = radeon_encoder->hdmi_offset;
 		switch (radeon_encoder->encoder_id) {
 		case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
 			WREG32_P(AVIVO_TMDSA_CNTL, 0x4, ~0x4);
@@ -518,6 +512,21 @@ void r600_hdmi_enable(struct drm_encoder *encoder)
 		}
 	}
 
+	if (rdev->irq.installed
+	    && rdev->family != CHIP_RS600
+	    && rdev->family != CHIP_RS690
+	    && rdev->family != CHIP_RS740) {
+
+		/* if irq is available use it */
+		rdev->irq.hdmi[offset == R600_HDMI_BLOCK1 ? 0 : 1] = true;
+		radeon_irq_set(rdev);
+
+		r600_audio_disable_polling(encoder);
+	} else {
+		/* if not, fall back to polling */
+		r600_audio_enable_polling(encoder);
+	}
+
 	DRM_DEBUG("Enabling HDMI interface @ 0x%04X for encoder 0x%x\n",
 		radeon_encoder->hdmi_offset, radeon_encoder->encoder_id);
 }
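
The enable path now prefers the HDMI interrupt when one is wired up and the
chip is not RS600/RS690/RS740, and otherwise falls back to timer-driven
polling. A compilable sketch of that selection; the helper functions are
hypothetical stand-ins for radeon_irq_set() and the r600_audio_*_polling()
calls.

#include <stdbool.h>
#include <stdio.h>

enum chip { CHIP_R600, CHIP_RS600, CHIP_RS690, CHIP_RS740, CHIP_RV770 };

struct dev {
	bool irq_installed;
	enum chip family;
	bool hdmi_irq[2];	/* one flag per HDMI block */
};

static void irq_set(struct dev *d)         { (void)d; puts("irqs reprogrammed"); }
static void enable_polling(struct dev *d)  { (void)d; puts("polling on"); }
static void disable_polling(struct dev *d) { (void)d; puts("polling off"); }

static void hdmi_enable_notification(struct dev *d, int block)
{
	if (d->irq_installed &&
	    d->family != CHIP_RS600 &&
	    d->family != CHIP_RS690 &&
	    d->family != CHIP_RS740) {
		/* interrupt driven: flag the block and reprogram the irqs */
		d->hdmi_irq[block] = true;
		irq_set(d);
		disable_polling(d);
	} else {
		/* no usable irq on these IGPs: poll instead */
		enable_polling(d);
	}
}

int main(void)
{
	struct dev d = { .irq_installed = true, .family = CHIP_RV770 };
	hdmi_enable_notification(&d, 0);
	return 0;
}
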
@@ -530,22 +539,30 @@ void r600_hdmi_disable(struct drm_encoder *encoder)
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+	uint32_t offset;
 
 	if (ASIC_IS_DCE4(rdev))
 		return;
 
-	if (!radeon_encoder->hdmi_offset) {
+	offset = radeon_encoder->hdmi_offset;
+	if (!offset) {
 		dev_err(rdev->dev, "Disabling not enabled HDMI\n");
 		return;
 	}
 
 	DRM_DEBUG("Disabling HDMI interface @ 0x%04X for encoder 0x%x\n",
-		radeon_encoder->hdmi_offset, radeon_encoder->encoder_id);
+		offset, radeon_encoder->encoder_id);
+
+	/* disable irq */
+	rdev->irq.hdmi[offset == R600_HDMI_BLOCK1 ? 0 : 1] = false;
+	radeon_irq_set(rdev);
+
+	/* disable polling */
+	r600_audio_disable_polling(encoder);
 
 	if (ASIC_IS_DCE32(rdev) && !ASIC_IS_DCE4(rdev)) {
 		WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0, ~0x1);
 	} else if (rdev->family >= CHIP_R600 && !ASIC_IS_DCE3(rdev)) {
-		int offset = radeon_encoder->hdmi_offset;
 		switch (radeon_encoder->encoder_id) {
 		case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
 			WREG32_P(AVIVO_TMDSA_CNTL, 0, ~0x4);
diff --git a/drivers/gpu/drm/radeon/r600_reg.h b/drivers/gpu/drm/radeon/r600_reg.h
index 7b1d223..d84612a 100644
--- a/drivers/gpu/drm/radeon/r600_reg.h
+++ b/drivers/gpu/drm/radeon/r600_reg.h
@@ -157,33 +157,36 @@
 #define R600_HDMI_BLOCK3                  0x7800
 
 /* HDMI registers */
-#define R600_HDMI_ENABLE           0x00
-#define R600_HDMI_STATUS           0x04
-#define R600_HDMI_CNTL             0x08
-#define R600_HDMI_UNKNOWN_0        0x0C
-#define R600_HDMI_AUDIOCNTL        0x10
-#define R600_HDMI_VIDEOCNTL        0x14
-#define R600_HDMI_VERSION          0x18
-#define R600_HDMI_UNKNOWN_1        0x28
-#define R600_HDMI_VIDEOINFOFRAME_0 0x54
-#define R600_HDMI_VIDEOINFOFRAME_1 0x58
-#define R600_HDMI_VIDEOINFOFRAME_2 0x5c
-#define R600_HDMI_VIDEOINFOFRAME_3 0x60
-#define R600_HDMI_32kHz_CTS        0xac
-#define R600_HDMI_32kHz_N          0xb0
-#define R600_HDMI_44_1kHz_CTS      0xb4
-#define R600_HDMI_44_1kHz_N        0xb8
-#define R600_HDMI_48kHz_CTS        0xbc
-#define R600_HDMI_48kHz_N          0xc0
-#define R600_HDMI_AUDIOINFOFRAME_0 0xcc
-#define R600_HDMI_AUDIOINFOFRAME_1 0xd0
-#define R600_HDMI_IEC60958_1       0xd4
-#define R600_HDMI_IEC60958_2       0xd8
-#define R600_HDMI_UNKNOWN_2        0xdc
-#define R600_HDMI_AUDIO_DEBUG_0    0xe0
-#define R600_HDMI_AUDIO_DEBUG_1    0xe4
-#define R600_HDMI_AUDIO_DEBUG_2    0xe8
-#define R600_HDMI_AUDIO_DEBUG_3    0xec
+#define R600_HDMI_ENABLE                0x00
+#define R600_HDMI_STATUS                0x04
+#       define R600_HDMI_INT_PENDING    (1 << 29)
+#define R600_HDMI_CNTL                  0x08
+#       define R600_HDMI_INT_EN         (1 << 28)
+#       define R600_HDMI_INT_ACK        (1 << 29)
+#define R600_HDMI_UNKNOWN_0             0x0C
+#define R600_HDMI_AUDIOCNTL             0x10
+#define R600_HDMI_VIDEOCNTL             0x14
+#define R600_HDMI_VERSION               0x18
+#define R600_HDMI_UNKNOWN_1             0x28
+#define R600_HDMI_VIDEOINFOFRAME_0      0x54
+#define R600_HDMI_VIDEOINFOFRAME_1      0x58
+#define R600_HDMI_VIDEOINFOFRAME_2      0x5c
+#define R600_HDMI_VIDEOINFOFRAME_3      0x60
+#define R600_HDMI_32kHz_CTS             0xac
+#define R600_HDMI_32kHz_N               0xb0
+#define R600_HDMI_44_1kHz_CTS           0xb4
+#define R600_HDMI_44_1kHz_N             0xb8
+#define R600_HDMI_48kHz_CTS             0xbc
+#define R600_HDMI_48kHz_N               0xc0
+#define R600_HDMI_AUDIOINFOFRAME_0      0xcc
+#define R600_HDMI_AUDIOINFOFRAME_1      0xd0
+#define R600_HDMI_IEC60958_1            0xd4
+#define R600_HDMI_IEC60958_2            0xd8
+#define R600_HDMI_UNKNOWN_2             0xdc
+#define R600_HDMI_AUDIO_DEBUG_0         0xe0
+#define R600_HDMI_AUDIO_DEBUG_1         0xe4
+#define R600_HDMI_AUDIO_DEBUG_2         0xe8
+#define R600_HDMI_AUDIO_DEBUG_3         0xec
 
 /* HDMI additional config base register addresses */
 #define R600_HDMI_CONFIG1                 0x7600
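
The new STATUS/CNTL bit definitions give the audio code its interrupt path:
bit 29 of the status register reports a pending HDMI event, bit 28 of the
control register unmasks it, and writing bit 29 of the control register
acknowledges it. A small sketch of a check-and-ack sequence under those
assumed semantics; the mmio images below are stand-ins, not real register
access.

#include <stdint.h>
#include <stdio.h>

#define HDMI_INT_PENDING (1u << 29)	/* in the status register */
#define HDMI_INT_EN      (1u << 28)	/* in the control register */
#define HDMI_INT_ACK     (1u << 29)	/* in the control register */

static uint32_t hdmi_status = HDMI_INT_PENDING;
static uint32_t hdmi_cntl   = HDMI_INT_EN;

static void service_hdmi_irq(void)
{
	if (!(hdmi_cntl & HDMI_INT_EN))
		return;				/* source masked, nothing to do */
	if (hdmi_status & HDMI_INT_PENDING) {
		hdmi_cntl |= HDMI_INT_ACK;	/* assumed write-one-to-ack, */
		hdmi_status &= ~HDMI_INT_PENDING; /* which clears pending */
		puts("HDMI audio buffer event handled");
	}
}

int main(void)
{
	service_hdmi_irq();
	return 0;
}
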
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 034218c..9c8af5f 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -89,16 +89,17 @@ extern int radeon_testing;
 extern int radeon_connector_table;
 extern int radeon_tv;
 extern int radeon_new_pll;
-extern int radeon_dynpm;
 extern int radeon_audio;
 extern int radeon_disp_priority;
 extern int radeon_hw_i2c;
+extern int radeon_pm;
 
 /*
  * Copy from radeon_drv.h so we don't have to include both and have conflicting
  * symbol;
  */
 #define RADEON_MAX_USEC_TIMEOUT		100000	/* 100 ms */
+#define RADEON_FENCE_JIFFIES_TIMEOUT	(HZ / 2)
 /* RADEON_IB_POOL_SIZE must be a power of 2 */
 #define RADEON_IB_POOL_SIZE		16
 #define RADEON_DEBUGFS_MAX_NUM_FILES	32
@@ -172,8 +173,11 @@ struct radeon_clock {
 int radeon_pm_init(struct radeon_device *rdev);
 void radeon_pm_fini(struct radeon_device *rdev);
 void radeon_pm_compute_clocks(struct radeon_device *rdev);
+void radeon_pm_suspend(struct radeon_device *rdev);
+void radeon_pm_resume(struct radeon_device *rdev);
 void radeon_combios_get_power_modes(struct radeon_device *rdev);
 void radeon_atombios_get_power_modes(struct radeon_device *rdev);
+void radeon_atom_set_voltage(struct radeon_device *rdev, u16 level);
 
 /*
  * Fences.
@@ -182,7 +186,8 @@ struct radeon_fence_driver {
 	uint32_t			scratch_reg;
 	atomic_t			seq;
 	uint32_t			last_seq;
-	unsigned long			count_timeout;
+	unsigned long			last_jiffies;
+	unsigned long			last_timeout;
 	wait_queue_head_t		queue;
 	rwlock_t			lock;
 	struct list_head		created;
@@ -197,7 +202,6 @@ struct radeon_fence {
 	struct list_head		list;
 	/* protected by radeon_fence.lock */
 	uint32_t			seq;
-	unsigned long			timeout;
 	bool				emited;
 	bool				signaled;
 };
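
Replacing the per-fence timeout with last_jiffies/last_timeout bookkeeping
lets the wait loop carry a remaining budget across wakeups instead of an
absolute deadline, which stays correct across jiffies wraparound. A
userspace sketch of that accounting; HZ and the helper are stand-ins for
the kernel's machinery.

#include <stdbool.h>
#include <stdio.h>

#define HZ 250
#define FENCE_JIFFIES_TIMEOUT (HZ / 2)	/* the patch's RADEON_FENCE_JIFFIES_TIMEOUT */

struct fence_drv {
	unsigned long last_jiffies;	/* when we last saw progress */
	unsigned long last_timeout;	/* budget left before suspecting a lockup */
};

/* unsigned subtraction is wraparound-safe, like the kernel's time_after() */
static bool fence_budget_spent(struct fence_drv *f, unsigned long now)
{
	unsigned long elapsed = now - f->last_jiffies;

	if (elapsed < f->last_timeout) {
		f->last_timeout -= elapsed;	/* still within budget */
		f->last_jiffies = now;
		return false;
	}
	return true;	/* budget used up: time to check for a GPU lockup */
}

int main(void)
{
	struct fence_drv f = { .last_jiffies = 0,
			       .last_timeout = FENCE_JIFFIES_TIMEOUT };
	printf("spent after 10 jiffies? %d\n", fence_budget_spent(&f, 10));
	printf("spent after 200 jiffies? %d\n", fence_budget_spent(&f, 200));
	return 0;
}
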
@@ -259,6 +263,7 @@ struct radeon_bo_list {
 	unsigned		rdomain;
 	unsigned		wdomain;
 	u32			tiling_flags;
+	bool			reserved;
 };
 
 /*
@@ -371,10 +376,15 @@ struct radeon_irq {
 	bool		installed;
 	bool		sw_int;
 	/* FIXME: use a define max crtc rather than hardcode it */
-	bool		crtc_vblank_int[2];
+	bool		crtc_vblank_int[6];
 	wait_queue_head_t	vblank_queue;
 	/* FIXME: use defines for max hpd/dacs */
 	bool            hpd[6];
+	bool            gui_idle;
+	bool            gui_idle_acked;
+	wait_queue_head_t	idle_queue;
+	/* FIXME: use defines for max HDMI blocks */
+	bool		hdmi[2];
 	spinlock_t sw_lock;
 	int sw_refcount;
 };
@@ -462,7 +472,9 @@ int radeon_ib_test(struct radeon_device *rdev);
 extern void radeon_ib_bogus_add(struct radeon_device *rdev, struct radeon_ib *ib);
 /* Ring access between begin & end cannot sleep */
 void radeon_ring_free_size(struct radeon_device *rdev);
+int radeon_ring_alloc(struct radeon_device *rdev, unsigned ndw);
 int radeon_ring_lock(struct radeon_device *rdev, unsigned ndw);
+void radeon_ring_commit(struct radeon_device *rdev);
 void radeon_ring_unlock_commit(struct radeon_device *rdev);
 void radeon_ring_unlock_undo(struct radeon_device *rdev);
 int radeon_ring_test(struct radeon_device *rdev);
@@ -566,6 +578,7 @@ typedef int (*radeon_packet3_check_t)(struct radeon_cs_parser *p,
  */
 int radeon_agp_init(struct radeon_device *rdev);
 void radeon_agp_resume(struct radeon_device *rdev);
+void radeon_agp_suspend(struct radeon_device *rdev);
 void radeon_agp_fini(struct radeon_device *rdev);
 
 
@@ -597,17 +610,24 @@ struct radeon_wb {
  * Equation between gpu/memory clock and available bandwidth is hw dependent
  * (type of memory, bus size, efficiency, ...)
  */
-enum radeon_pm_state {
-	PM_STATE_DISABLED,
-	PM_STATE_MINIMUM,
-	PM_STATE_PAUSED,
-	PM_STATE_ACTIVE
+
+enum radeon_pm_method {
+	PM_METHOD_PROFILE,
+	PM_METHOD_DYNPM,
+};
+
+enum radeon_dynpm_state {
+	DYNPM_STATE_DISABLED,
+	DYNPM_STATE_MINIMUM,
+	DYNPM_STATE_PAUSED,
+	DYNPM_STATE_ACTIVE
 };
-enum radeon_pm_action {
-	PM_ACTION_NONE,
-	PM_ACTION_MINIMUM,
-	PM_ACTION_DOWNCLOCK,
-	PM_ACTION_UPCLOCK
+enum radeon_dynpm_action {
+	DYNPM_ACTION_NONE,
+	DYNPM_ACTION_MINIMUM,
+	DYNPM_ACTION_DOWNCLOCK,
+	DYNPM_ACTION_UPCLOCK,
+	DYNPM_ACTION_DEFAULT
 };
 
 enum radeon_voltage_type {
@@ -625,11 +645,28 @@ enum radeon_pm_state_type {
 	POWER_STATE_TYPE_PERFORMANCE,
 };
 
-enum radeon_pm_clock_mode_type {
-	POWER_MODE_TYPE_DEFAULT,
-	POWER_MODE_TYPE_LOW,
-	POWER_MODE_TYPE_MID,
-	POWER_MODE_TYPE_HIGH,
+enum radeon_pm_profile_type {
+	PM_PROFILE_DEFAULT,
+	PM_PROFILE_AUTO,
+	PM_PROFILE_LOW,
+	PM_PROFILE_MID,
+	PM_PROFILE_HIGH,
+};
+
+#define PM_PROFILE_DEFAULT_IDX 0
+#define PM_PROFILE_LOW_SH_IDX  1
+#define PM_PROFILE_MID_SH_IDX  2
+#define PM_PROFILE_HIGH_SH_IDX 3
+#define PM_PROFILE_LOW_MH_IDX  4
+#define PM_PROFILE_MID_MH_IDX  5
+#define PM_PROFILE_HIGH_MH_IDX 6
+#define PM_PROFILE_MAX         7
+
+struct radeon_pm_profile {
+	int dpms_off_ps_idx;
+	int dpms_on_ps_idx;
+	int dpms_off_cm_idx;
+	int dpms_on_cm_idx;
 };
 
 struct radeon_voltage {
@@ -646,12 +683,8 @@ struct radeon_voltage {
 	u32 voltage;
 };
 
-struct radeon_pm_non_clock_info {
-	/* pcie lanes */
-	int pcie_lanes;
-	/* standardized non-clock flags */
-	u32 flags;
-};
+/* clock mode flags */
+#define RADEON_PM_MODE_NO_DISPLAY          (1 << 0)
 
 struct radeon_pm_clock_info {
 	/* memory clock */
@@ -660,10 +693,13 @@ struct radeon_pm_clock_info {
 	u32 sclk;
 	/* voltage info */
 	struct radeon_voltage voltage;
-	/* standardized clock flags - not sure we'll need these */
+	/* standardized clock flags */
 	u32 flags;
 };
 
+/* state flags */
+#define RADEON_PM_STATE_SINGLE_DISPLAY_ONLY (1 << 0)
+
 struct radeon_power_state {
 	enum radeon_pm_state_type type;
 	/* XXX: use a define for num clock modes */
@@ -671,9 +707,11 @@ struct radeon_power_state {
 	/* number of valid clock modes in this power state */
 	int num_clock_modes;
 	struct radeon_pm_clock_info *default_clock_mode;
-	/* non clock info about this state */
-	struct radeon_pm_non_clock_info non_clock_info;
-	bool voltage_drop_active;
+	/* standardized state flags */
+	u32 flags;
+	u32 misc; /* vbios specific flags */
+	u32 misc2; /* vbios specific flags */
+	int pcie_lanes; /* pcie lanes */
 };
 
 /*
@@ -683,14 +721,11 @@ struct radeon_power_state {
 
 struct radeon_pm {
 	struct mutex		mutex;
-	struct delayed_work	idle_work;
-	enum radeon_pm_state	state;
-	enum radeon_pm_action	planned_action;
-	unsigned long		action_timeout;
-	bool 			downclocked;
-	int			active_crtcs;
+	u32			active_crtcs;
+	int			active_crtc_count;
 	int			req_vblank;
 	bool			vblank_sync;
+	bool			gui_idle;
 	fixed20_12		max_bandwidth;
 	fixed20_12		igp_sideport_mclk;
 	fixed20_12		igp_system_mclk;
@@ -707,12 +742,28 @@ struct radeon_pm {
 	struct radeon_power_state power_state[8];
 	/* number of valid power states */
 	int                     num_power_states;
-	struct radeon_power_state *current_power_state;
-	struct radeon_pm_clock_info *current_clock_mode;
-	struct radeon_power_state *requested_power_state;
-	struct radeon_pm_clock_info *requested_clock_mode;
-	struct radeon_power_state *default_power_state;
+	int                     current_power_state_index;
+	int                     current_clock_mode_index;
+	int                     requested_power_state_index;
+	int                     requested_clock_mode_index;
+	int                     default_power_state_index;
+	u32                     current_sclk;
+	u32                     current_mclk;
+	u32                     current_vddc;
 	struct radeon_i2c_chan *i2c_bus;
+	/* selected pm method */
+	enum radeon_pm_method     pm_method;
+	/* dynpm power management */
+	struct delayed_work	dynpm_idle_work;
+	enum radeon_dynpm_state	dynpm_state;
+	enum radeon_dynpm_action	dynpm_planned_action;
+	unsigned long		dynpm_action_timeout;
+	bool                    dynpm_can_upclock;
+	bool                    dynpm_can_downclock;
+	/* profile-based power management */
+	enum radeon_pm_profile_type profile;
+	int                     profile_index;
+	struct radeon_pm_profile profiles[PM_PROFILE_MAX];
 };
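
The profiles[] table is indexed by a (profile, head count) pair: one slot
for the default profile plus low/mid/high slots in single-head (SH) and
multi-head (MH) variants. The real selection logic lives elsewhere in
radeon_pm.c; the mapping below is an assumed, illustrative version only
(AUTO in particular is resolved from power-source state in the driver, and
is mapped to MID here as a placeholder).

#include <stdio.h>

enum pm_profile { PROFILE_DEFAULT, PROFILE_AUTO, PROFILE_LOW,
		  PROFILE_MID, PROFILE_HIGH };

#define IDX_DEFAULT 0
#define IDX_LOW_SH  1
#define IDX_MID_SH  2
#define IDX_HIGH_SH 3
#define IDX_LOW_MH  4
#define IDX_MID_MH  5
#define IDX_HIGH_MH 6

static int profile_index(enum pm_profile p, int active_crtc_count)
{
	int multi = active_crtc_count > 1;

	switch (p) {
	case PROFILE_LOW:  return multi ? IDX_LOW_MH  : IDX_LOW_SH;
	case PROFILE_MID:
	case PROFILE_AUTO: return multi ? IDX_MID_MH  : IDX_MID_SH;
	case PROFILE_HIGH: return multi ? IDX_HIGH_MH : IDX_HIGH_SH;
	default:           return IDX_DEFAULT;
	}
}

int main(void)
{
	printf("low/1 head -> %d, high/2 heads -> %d\n",
	       profile_index(PROFILE_LOW, 1), profile_index(PROFILE_HIGH, 2));
	return 0;
}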
 
 
@@ -746,7 +797,8 @@ struct radeon_asic {
 	int (*resume)(struct radeon_device *rdev);
 	int (*suspend)(struct radeon_device *rdev);
 	void (*vga_set_state)(struct radeon_device *rdev, bool state);
-	int (*gpu_reset)(struct radeon_device *rdev);
+	bool (*gpu_is_lockup)(struct radeon_device *rdev);
+	int (*asic_reset)(struct radeon_device *rdev);
 	void (*gart_tlb_flush)(struct radeon_device *rdev);
 	int (*gart_set_page)(struct radeon_device *rdev, int i, uint64_t addr);
 	int (*cp_init)(struct radeon_device *rdev, unsigned ring_size);
@@ -799,44 +851,84 @@ struct radeon_asic {
 	 * through ring.
 	 */
 	void (*ioctl_wait_idle)(struct radeon_device *rdev, struct radeon_bo *bo);
+	bool (*gui_idle)(struct radeon_device *rdev);
+	/* power management */
+	void (*pm_misc)(struct radeon_device *rdev);
+	void (*pm_prepare)(struct radeon_device *rdev);
+	void (*pm_finish)(struct radeon_device *rdev);
+	void (*pm_init_profile)(struct radeon_device *rdev);
+	void (*pm_get_dynpm_state)(struct radeon_device *rdev);
 };
 
 /*
  * Asic structures
  */
+struct r100_gpu_lockup {
+	unsigned long	last_jiffies;
+	u32		last_cp_rptr;
+};
+
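
r100_gpu_lockup is the state behind the new gpu_is_lockup hooks: snapshot
the CP read pointer and a timestamp, and only declare a lockup when the
ring has pending work but rptr has not advanced for some interval. A
compilable sketch of that heuristic; the threshold and busy flag are
stand-ins for what r100_gpu_cp_is_lockup() derives from the ring state.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct gpu_lockup {
	unsigned long last_jiffies;
	uint32_t last_cp_rptr;
};

static bool cp_is_lockup(struct gpu_lockup *l, uint32_t rptr,
			 bool ring_busy, unsigned long now,
			 unsigned long threshold)
{
	if (rptr != l->last_cp_rptr || !ring_busy) {
		l->last_cp_rptr = rptr;	/* progress (or idle): re-arm */
		l->last_jiffies = now;
		return false;
	}
	return (now - l->last_jiffies) > threshold;
}

int main(void)
{
	struct gpu_lockup l = { 0, 0 };
	cp_is_lockup(&l, 16, true, 100, 1000);	/* progress seen at t=100 */
	printf("lockup? %d\n",
	       cp_is_lockup(&l, 16, true, 2000, 1000));	/* stuck since then */
	return 0;
}
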
 struct r100_asic {
-	const unsigned	*reg_safe_bm;
-	unsigned	reg_safe_bm_size;
-	u32		hdp_cntl;
+	const unsigned		*reg_safe_bm;
+	unsigned		reg_safe_bm_size;
+	u32			hdp_cntl;
+	struct r100_gpu_lockup	lockup;
 };
 
 struct r300_asic {
-	const unsigned	*reg_safe_bm;
-	unsigned	reg_safe_bm_size;
-	u32		resync_scratch;
-	u32		hdp_cntl;
+	const unsigned		*reg_safe_bm;
+	unsigned		reg_safe_bm_size;
+	u32			resync_scratch;
+	u32			hdp_cntl;
+	struct r100_gpu_lockup	lockup;
 };
 
 struct r600_asic {
-	unsigned max_pipes;
-	unsigned max_tile_pipes;
-	unsigned max_simds;
-	unsigned max_backends;
-	unsigned max_gprs;
-	unsigned max_threads;
-	unsigned max_stack_entries;
-	unsigned max_hw_contexts;
-	unsigned max_gs_threads;
-	unsigned sx_max_export_size;
-	unsigned sx_max_export_pos_size;
-	unsigned sx_max_export_smx_size;
-	unsigned sq_num_cf_insts;
-	unsigned tiling_nbanks;
-	unsigned tiling_npipes;
-	unsigned tiling_group_size;
+	unsigned		max_pipes;
+	unsigned		max_tile_pipes;
+	unsigned		max_simds;
+	unsigned		max_backends;
+	unsigned		max_gprs;
+	unsigned		max_threads;
+	unsigned		max_stack_entries;
+	unsigned		max_hw_contexts;
+	unsigned		max_gs_threads;
+	unsigned		sx_max_export_size;
+	unsigned		sx_max_export_pos_size;
+	unsigned		sx_max_export_smx_size;
+	unsigned		sq_num_cf_insts;
+	unsigned		tiling_nbanks;
+	unsigned		tiling_npipes;
+	unsigned		tiling_group_size;
+	struct r100_gpu_lockup	lockup;
 };
 
 struct rv770_asic {
+	unsigned		max_pipes;
+	unsigned		max_tile_pipes;
+	unsigned		max_simds;
+	unsigned		max_backends;
+	unsigned		max_gprs;
+	unsigned		max_threads;
+	unsigned		max_stack_entries;
+	unsigned		max_hw_contexts;
+	unsigned		max_gs_threads;
+	unsigned		sx_max_export_size;
+	unsigned		sx_max_export_pos_size;
+	unsigned		sx_max_export_smx_size;
+	unsigned		sq_num_cf_insts;
+	unsigned		sx_num_of_sets;
+	unsigned		sc_prim_fifo_size;
+	unsigned		sc_hiz_tile_fifo_size;
+	unsigned		sc_earlyz_tile_fifo_fize;
+	unsigned		tiling_nbanks;
+	unsigned		tiling_npipes;
+	unsigned		tiling_group_size;
+	struct r100_gpu_lockup	lockup;
+};
+
+struct evergreen_asic {
+	unsigned num_ses;
 	unsigned max_pipes;
 	unsigned max_tile_pipes;
 	unsigned max_simds;
@@ -853,7 +945,7 @@ struct rv770_asic {
 	unsigned sx_num_of_sets;
 	unsigned sc_prim_fifo_size;
 	unsigned sc_hiz_tile_fifo_size;
-	unsigned sc_earlyz_tile_fifo_fize;
+	unsigned sc_earlyz_tile_fifo_size;
 	unsigned tiling_nbanks;
 	unsigned tiling_npipes;
 	unsigned tiling_group_size;
@@ -864,6 +956,7 @@ union radeon_asic_config {
 	struct r100_asic	r100;
 	struct r600_asic	r600;
 	struct rv770_asic	rv770;
+	struct evergreen_asic	evergreen;
 };
 
 /*
@@ -927,9 +1020,6 @@ struct radeon_device {
 	bool				is_atom_bios;
 	uint16_t			bios_header_start;
 	struct radeon_bo		*stollen_vga_memory;
-	struct fb_info			*fbdev_info;
-	struct radeon_bo		*fbdev_rbo;
-	struct radeon_framebuffer	*fbdev_rfb;
 	/* Register mmio */
 	resource_size_t			rmmio_base;
 	resource_size_t			rmmio_size;
@@ -974,6 +1064,7 @@ struct radeon_device {
 	struct work_struct hotplug_work;
 	int num_crtc; /* number of crtcs */
 	struct mutex dc_hw_i2c_mutex; /* display controller hw i2c mutex */
+	struct mutex vram_mutex;
 
 	/* audio stuff */
 	struct timer_list	audio_timer;
@@ -984,6 +1075,7 @@ struct radeon_device {
 	uint8_t			audio_category_code;
 
 	bool powered_down;
+	struct notifier_block acpi_nb;
 };
 
 int radeon_device_init(struct radeon_device *rdev,
@@ -1145,7 +1237,8 @@ static inline void radeon_ring_write(struct radeon_device *rdev, uint32_t v)
 #define radeon_suspend(rdev) (rdev)->asic->suspend((rdev))
 #define radeon_cs_parse(p) rdev->asic->cs_parse((p))
 #define radeon_vga_set_state(rdev, state) (rdev)->asic->vga_set_state((rdev), (state))
-#define radeon_gpu_reset(rdev) (rdev)->asic->gpu_reset((rdev))
+#define radeon_gpu_is_lockup(rdev) (rdev)->asic->gpu_is_lockup((rdev))
+#define radeon_asic_reset(rdev) (rdev)->asic->asic_reset((rdev))
 #define radeon_gart_tlb_flush(rdev) (rdev)->asic->gart_tlb_flush((rdev))
 #define radeon_gart_set_page(rdev, i, p) (rdev)->asic->gart_set_page((rdev), (i), (p))
 #define radeon_cp_commit(rdev) (rdev)->asic->cp_commit((rdev))
@@ -1173,9 +1266,16 @@ static inline void radeon_ring_write(struct radeon_device *rdev, uint32_t v)
 #define radeon_hpd_fini(rdev) (rdev)->asic->hpd_fini((rdev))
 #define radeon_hpd_sense(rdev, hpd) (rdev)->asic->hpd_sense((rdev), (hpd))
 #define radeon_hpd_set_polarity(rdev, hpd) (rdev)->asic->hpd_set_polarity((rdev), (hpd))
+#define radeon_gui_idle(rdev) (rdev)->asic->gui_idle((rdev))
+#define radeon_pm_misc(rdev) (rdev)->asic->pm_misc((rdev))
+#define radeon_pm_prepare(rdev) (rdev)->asic->pm_prepare((rdev))
+#define radeon_pm_finish(rdev) (rdev)->asic->pm_finish((rdev))
+#define radeon_pm_init_profile(rdev) (rdev)->asic->pm_init_profile((rdev))
+#define radeon_pm_get_dynpm_state(rdev) (rdev)->asic->pm_get_dynpm_state((rdev))
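
Each new hook follows the existing per-ASIC dispatch pattern: the
chip-specific function lands in that family's radeon_asic table and a
one-line macro routes calls through rdev->asic, which lets families mix
shared helpers freely (rs600_pm_prepare serving r600, for instance). A
stand-alone sketch of the pattern; the table contents here are
illustrative, not any real family's.

#include <stdio.h>

struct rdev;

struct asic {
	void (*pm_misc)(struct rdev *rdev);
	void (*pm_prepare)(struct rdev *rdev);
	void (*pm_finish)(struct rdev *rdev);
};

struct rdev {
	const struct asic *asic;
};

static void r100_pm_misc(struct rdev *rdev)    { (void)rdev; puts("r100 pm_misc"); }
static void rs600_pm_prepare(struct rdev *rdev) { (void)rdev; puts("rs600 prepare"); }
static void rs600_pm_finish(struct rdev *rdev)  { (void)rdev; puts("rs600 finish"); }

static const struct asic example_asic = {
	.pm_misc    = r100_pm_misc,	/* mixing helpers across families */
	.pm_prepare = rs600_pm_prepare,
	.pm_finish  = rs600_pm_finish,
};

#define radeon_pm_misc(rdev)    (rdev)->asic->pm_misc((rdev))
#define radeon_pm_prepare(rdev) (rdev)->asic->pm_prepare((rdev))

int main(void)
{
	struct rdev dev = { .asic = &example_asic };
	radeon_pm_prepare(&dev);
	radeon_pm_misc(&dev);
	return 0;
}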
 
 /* Common functions */
 /* AGP */
+extern int radeon_gpu_reset(struct radeon_device *rdev);
 extern void radeon_agp_disable(struct radeon_device *rdev);
 extern int radeon_gart_table_vram_pin(struct radeon_device *rdev);
 extern void radeon_gart_restore(struct radeon_device *rdev);
@@ -1200,6 +1300,8 @@ extern int radeon_resume_kms(struct drm_device *dev);
 extern int radeon_suspend_kms(struct drm_device *dev, pm_message_t state);
 
 /* r100,rv100,rs100,rv200,rs200,r200,rv250,rs300,rv280 */
+extern void r100_gpu_lockup_update(struct r100_gpu_lockup *lockup, struct radeon_cp *cp);
+extern bool r100_gpu_cp_is_lockup(struct radeon_device *rdev, struct r100_gpu_lockup *lockup, struct radeon_cp *cp);
 
 /* rv200,rv250,rv280 */
 extern void r200_set_safe_registers(struct radeon_device *rdev);
@@ -1260,6 +1362,7 @@ extern void rs690_line_buffer_adjust(struct radeon_device *rdev,
 extern void r600_vram_gtt_location(struct radeon_device *rdev, struct radeon_mc *mc);
 extern bool r600_card_posted(struct radeon_device *rdev);
 extern void r600_cp_stop(struct radeon_device *rdev);
+extern int r600_cp_start(struct radeon_device *rdev);
 extern void r600_ring_init(struct radeon_device *rdev, unsigned ring_size);
 extern int r600_cp_resume(struct radeon_device *rdev);
 extern void r600_cp_fini(struct radeon_device *rdev);
@@ -1276,29 +1379,39 @@ extern void r600_scratch_init(struct radeon_device *rdev);
 extern int r600_blit_init(struct radeon_device *rdev);
 extern void r600_blit_fini(struct radeon_device *rdev);
 extern int r600_init_microcode(struct radeon_device *rdev);
-extern int r600_gpu_reset(struct radeon_device *rdev);
+extern int r600_asic_reset(struct radeon_device *rdev);
 /* r600 irq */
 extern int r600_irq_init(struct radeon_device *rdev);
 extern void r600_irq_fini(struct radeon_device *rdev);
 extern void r600_ih_ring_init(struct radeon_device *rdev, unsigned ring_size);
 extern int r600_irq_set(struct radeon_device *rdev);
 extern void r600_irq_suspend(struct radeon_device *rdev);
+extern void r600_disable_interrupts(struct radeon_device *rdev);
+extern void r600_rlc_stop(struct radeon_device *rdev);
 /* r600 audio */
 extern int r600_audio_init(struct radeon_device *rdev);
 extern int r600_audio_tmds_index(struct drm_encoder *encoder);
 extern void r600_audio_set_clock(struct drm_encoder *encoder, int clock);
+extern int r600_audio_channels(struct radeon_device *rdev);
+extern int r600_audio_bits_per_sample(struct radeon_device *rdev);
+extern int r600_audio_rate(struct radeon_device *rdev);
+extern uint8_t r600_audio_status_bits(struct radeon_device *rdev);
+extern uint8_t r600_audio_category_code(struct radeon_device *rdev);
+extern void r600_audio_schedule_polling(struct radeon_device *rdev);
+extern void r600_audio_enable_polling(struct drm_encoder *encoder);
+extern void r600_audio_disable_polling(struct drm_encoder *encoder);
 extern void r600_audio_fini(struct radeon_device *rdev);
 extern void r600_hdmi_init(struct drm_encoder *encoder);
 extern void r600_hdmi_enable(struct drm_encoder *encoder);
 extern void r600_hdmi_disable(struct drm_encoder *encoder);
 extern void r600_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mode);
 extern int r600_hdmi_buffer_status_changed(struct drm_encoder *encoder);
-extern void r600_hdmi_update_audio_settings(struct drm_encoder *encoder,
-					    int channels,
-					    int rate,
-					    int bps,
-					    uint8_t status_bits,
-					    uint8_t category_code);
+extern void r600_hdmi_update_audio_settings(struct drm_encoder *encoder);
+
+extern void r700_cp_stop(struct radeon_device *rdev);
+extern void r700_cp_fini(struct radeon_device *rdev);
+extern void evergreen_disable_interrupt_state(struct radeon_device *rdev);
+extern int evergreen_irq_set(struct radeon_device *rdev);
 
 /* evergreen */
 struct evergreen_mc_save {
diff --git a/drivers/gpu/drm/radeon/radeon_agp.c b/drivers/gpu/drm/radeon/radeon_agp.c
index 28e473f..f40dfb7 100644
--- a/drivers/gpu/drm/radeon/radeon_agp.c
+++ b/drivers/gpu/drm/radeon/radeon_agp.c
@@ -270,3 +270,8 @@ void radeon_agp_fini(struct radeon_device *rdev)
 	}
 #endif
 }
+
+void radeon_agp_suspend(struct radeon_device *rdev)
+{
+	radeon_agp_fini(rdev);
+}
diff --git a/drivers/gpu/drm/radeon/radeon_asic.c b/drivers/gpu/drm/radeon/radeon_asic.c
index a4b4bc9..87f7e2c 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.c
+++ b/drivers/gpu/drm/radeon/radeon_asic.c
@@ -134,7 +134,8 @@ static struct radeon_asic r100_asic = {
 	.suspend = &r100_suspend,
 	.resume = &r100_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r100_gpu_reset,
+	.gpu_is_lockup = &r100_gpu_is_lockup,
+	.asic_reset = &r100_asic_reset,
 	.gart_tlb_flush = &r100_pci_gart_tlb_flush,
 	.gart_set_page = &r100_pci_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -164,6 +165,12 @@ static struct radeon_asic r100_asic = {
 	.hpd_sense = &r100_hpd_sense,
 	.hpd_set_polarity = &r100_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &r100_pm_misc,
+	.pm_prepare = &r100_pm_prepare,
+	.pm_finish = &r100_pm_finish,
+	.pm_init_profile = &r100_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic r200_asic = {
@@ -172,7 +179,8 @@ static struct radeon_asic r200_asic = {
 	.suspend = &r100_suspend,
 	.resume = &r100_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r100_gpu_reset,
+	.gpu_is_lockup = &r100_gpu_is_lockup,
+	.asic_reset = &r100_asic_reset,
 	.gart_tlb_flush = &r100_pci_gart_tlb_flush,
 	.gart_set_page = &r100_pci_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -201,6 +209,12 @@ static struct radeon_asic r200_asic = {
 	.hpd_sense = &r100_hpd_sense,
 	.hpd_set_polarity = &r100_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &r100_pm_misc,
+	.pm_prepare = &r100_pm_prepare,
+	.pm_finish = &r100_pm_finish,
+	.pm_init_profile = &r100_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic r300_asic = {
@@ -209,7 +223,8 @@ static struct radeon_asic r300_asic = {
 	.suspend = &r300_suspend,
 	.resume = &r300_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r300_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &r300_asic_reset,
 	.gart_tlb_flush = &r100_pci_gart_tlb_flush,
 	.gart_set_page = &r100_pci_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -239,6 +254,12 @@ static struct radeon_asic r300_asic = {
 	.hpd_sense = &r100_hpd_sense,
 	.hpd_set_polarity = &r100_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &r100_pm_misc,
+	.pm_prepare = &r100_pm_prepare,
+	.pm_finish = &r100_pm_finish,
+	.pm_init_profile = &r100_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic r300_asic_pcie = {
@@ -247,7 +268,8 @@ static struct radeon_asic r300_asic_pcie = {
 	.suspend = &r300_suspend,
 	.resume = &r300_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r300_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &r300_asic_reset,
 	.gart_tlb_flush = &rv370_pcie_gart_tlb_flush,
 	.gart_set_page = &rv370_pcie_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -276,6 +298,12 @@ static struct radeon_asic r300_asic_pcie = {
 	.hpd_sense = &r100_hpd_sense,
 	.hpd_set_polarity = &r100_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &r100_pm_misc,
+	.pm_prepare = &r100_pm_prepare,
+	.pm_finish = &r100_pm_finish,
+	.pm_init_profile = &r100_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic r420_asic = {
@@ -284,7 +312,8 @@ static struct radeon_asic r420_asic = {
 	.suspend = &r420_suspend,
 	.resume = &r420_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r300_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &r300_asic_reset,
 	.gart_tlb_flush = &rv370_pcie_gart_tlb_flush,
 	.gart_set_page = &rv370_pcie_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -314,6 +343,12 @@ static struct radeon_asic r420_asic = {
 	.hpd_sense = &r100_hpd_sense,
 	.hpd_set_polarity = &r100_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &r100_pm_misc,
+	.pm_prepare = &r100_pm_prepare,
+	.pm_finish = &r100_pm_finish,
+	.pm_init_profile = &r420_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic rs400_asic = {
@@ -322,7 +357,8 @@ static struct radeon_asic rs400_asic = {
 	.suspend = &rs400_suspend,
 	.resume = &rs400_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r300_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &r300_asic_reset,
 	.gart_tlb_flush = &rs400_gart_tlb_flush,
 	.gart_set_page = &rs400_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -352,6 +388,12 @@ static struct radeon_asic rs400_asic = {
 	.hpd_sense = &r100_hpd_sense,
 	.hpd_set_polarity = &r100_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &r100_pm_misc,
+	.pm_prepare = &r100_pm_prepare,
+	.pm_finish = &r100_pm_finish,
+	.pm_init_profile = &r100_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic rs600_asic = {
@@ -360,7 +402,8 @@ static struct radeon_asic rs600_asic = {
 	.suspend = &rs600_suspend,
 	.resume = &rs600_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r300_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &rs600_asic_reset,
 	.gart_tlb_flush = &rs600_gart_tlb_flush,
 	.gart_set_page = &rs600_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -390,6 +433,12 @@ static struct radeon_asic rs600_asic = {
 	.hpd_sense = &rs600_hpd_sense,
 	.hpd_set_polarity = &rs600_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &rs600_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &r420_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic rs690_asic = {
@@ -398,7 +447,8 @@ static struct radeon_asic rs690_asic = {
 	.suspend = &rs690_suspend,
 	.resume = &rs690_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &r300_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &rs600_asic_reset,
 	.gart_tlb_flush = &rs400_gart_tlb_flush,
 	.gart_set_page = &rs400_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -428,6 +478,12 @@ static struct radeon_asic rs690_asic = {
 	.hpd_sense = &rs600_hpd_sense,
 	.hpd_set_polarity = &rs600_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &rs600_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &r420_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic rv515_asic = {
@@ -436,7 +492,8 @@ static struct radeon_asic rv515_asic = {
 	.suspend = &rv515_suspend,
 	.resume = &rv515_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &rv515_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &rs600_asic_reset,
 	.gart_tlb_flush = &rv370_pcie_gart_tlb_flush,
 	.gart_set_page = &rv370_pcie_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -466,6 +523,12 @@ static struct radeon_asic rv515_asic = {
 	.hpd_sense = &rs600_hpd_sense,
 	.hpd_set_polarity = &rs600_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &rs600_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &r420_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic r520_asic = {
@@ -474,7 +537,8 @@ static struct radeon_asic r520_asic = {
 	.suspend = &rv515_suspend,
 	.resume = &r520_resume,
 	.vga_set_state = &r100_vga_set_state,
-	.gpu_reset = &rv515_gpu_reset,
+	.gpu_is_lockup = &r300_gpu_is_lockup,
+	.asic_reset = &rs600_asic_reset,
 	.gart_tlb_flush = &rv370_pcie_gart_tlb_flush,
 	.gart_set_page = &rv370_pcie_gart_set_page,
 	.cp_commit = &r100_cp_commit,
@@ -504,6 +568,12 @@ static struct radeon_asic r520_asic = {
 	.hpd_sense = &rs600_hpd_sense,
 	.hpd_set_polarity = &rs600_hpd_set_polarity,
 	.ioctl_wait_idle = NULL,
+	.gui_idle = &r100_gui_idle,
+	.pm_misc = &rs600_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &r420_pm_init_profile,
+	.pm_get_dynpm_state = &r100_pm_get_dynpm_state,
 };
 
 static struct radeon_asic r600_asic = {
@@ -513,7 +583,8 @@ static struct radeon_asic r600_asic = {
 	.resume = &r600_resume,
 	.cp_commit = &r600_cp_commit,
 	.vga_set_state = &r600_vga_set_state,
-	.gpu_reset = &r600_gpu_reset,
+	.gpu_is_lockup = &r600_gpu_is_lockup,
+	.asic_reset = &r600_asic_reset,
 	.gart_tlb_flush = &r600_pcie_gart_tlb_flush,
 	.gart_set_page = &rs600_gart_set_page,
 	.ring_test = &r600_ring_test,
@@ -541,6 +612,12 @@ static struct radeon_asic r600_asic = {
 	.hpd_sense = &r600_hpd_sense,
 	.hpd_set_polarity = &r600_hpd_set_polarity,
 	.ioctl_wait_idle = r600_ioctl_wait_idle,
+	.gui_idle = &r600_gui_idle,
+	.pm_misc = &r600_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &r600_pm_init_profile,
+	.pm_get_dynpm_state = &r600_pm_get_dynpm_state,
 };
 
 static struct radeon_asic rs780_asic = {
@@ -549,8 +626,9 @@ static struct radeon_asic rs780_asic = {
 	.suspend = &r600_suspend,
 	.resume = &r600_resume,
 	.cp_commit = &r600_cp_commit,
+	.gpu_is_lockup = &r600_gpu_is_lockup,
 	.vga_set_state = &r600_vga_set_state,
-	.gpu_reset = &r600_gpu_reset,
+	.asic_reset = &r600_asic_reset,
 	.gart_tlb_flush = &r600_pcie_gart_tlb_flush,
 	.gart_set_page = &rs600_gart_set_page,
 	.ring_test = &r600_ring_test,
@@ -578,6 +656,12 @@ static struct radeon_asic rs780_asic = {
 	.hpd_sense = &r600_hpd_sense,
 	.hpd_set_polarity = &r600_hpd_set_polarity,
 	.ioctl_wait_idle = r600_ioctl_wait_idle,
+	.gui_idle = &r600_gui_idle,
+	.pm_misc = &r600_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &rs780_pm_init_profile,
+	.pm_get_dynpm_state = &r600_pm_get_dynpm_state,
 };
 
 static struct radeon_asic rv770_asic = {
@@ -586,7 +670,8 @@ static struct radeon_asic rv770_asic = {
 	.suspend = &rv770_suspend,
 	.resume = &rv770_resume,
 	.cp_commit = &r600_cp_commit,
-	.gpu_reset = &rv770_gpu_reset,
+	.asic_reset = &r600_asic_reset,
+	.gpu_is_lockup = &r600_gpu_is_lockup,
 	.vga_set_state = &r600_vga_set_state,
 	.gart_tlb_flush = &r600_pcie_gart_tlb_flush,
 	.gart_set_page = &rs600_gart_set_page,
@@ -615,6 +700,12 @@ static struct radeon_asic rv770_asic = {
 	.hpd_sense = &r600_hpd_sense,
 	.hpd_set_polarity = &r600_hpd_set_polarity,
 	.ioctl_wait_idle = r600_ioctl_wait_idle,
+	.gui_idle = &r600_gui_idle,
+	.pm_misc = &rv770_pm_misc,
+	.pm_prepare = &rs600_pm_prepare,
+	.pm_finish = &rs600_pm_finish,
+	.pm_init_profile = &r600_pm_init_profile,
+	.pm_get_dynpm_state = &r600_pm_get_dynpm_state,
 };
 
 static struct radeon_asic evergreen_asic = {
@@ -622,18 +713,19 @@ static struct radeon_asic evergreen_asic = {
 	.fini = &evergreen_fini,
 	.suspend = &evergreen_suspend,
 	.resume = &evergreen_resume,
-	.cp_commit = NULL,
-	.gpu_reset = &evergreen_gpu_reset,
+	.cp_commit = &r600_cp_commit,
+	.gpu_is_lockup = &evergreen_gpu_is_lockup,
+	.asic_reset = &evergreen_asic_reset,
 	.vga_set_state = &r600_vga_set_state,
-	.gart_tlb_flush = &r600_pcie_gart_tlb_flush,
+	.gart_tlb_flush = &evergreen_pcie_gart_tlb_flush,
 	.gart_set_page = &rs600_gart_set_page,
-	.ring_test = NULL,
-	.ring_ib_execute = NULL,
-	.irq_set = NULL,
-	.irq_process = NULL,
-	.get_vblank_counter = NULL,
-	.fence_ring_emit = NULL,
-	.cs_parse = NULL,
+	.ring_test = &r600_ring_test,
+	.ring_ib_execute = &r600_ring_ib_execute,
+	.irq_set = &evergreen_irq_set,
+	.irq_process = &evergreen_irq_process,
+	.get_vblank_counter = &evergreen_get_vblank_counter,
+	.fence_ring_emit = &r600_fence_ring_emit,
+	.cs_parse = &evergreen_cs_parse,
 	.copy_blit = NULL,
 	.copy_dma = NULL,
 	.copy = NULL,
@@ -650,6 +742,12 @@ static struct radeon_asic evergreen_asic = {
 	.hpd_fini = &evergreen_hpd_fini,
 	.hpd_sense = &evergreen_hpd_sense,
 	.hpd_set_polarity = &evergreen_hpd_set_polarity,
+	.gui_idle = &r600_gui_idle,
+	.pm_misc = &evergreen_pm_misc,
+	.pm_prepare = &evergreen_pm_prepare,
+	.pm_finish = &evergreen_pm_finish,
+	.pm_init_profile = &r600_pm_init_profile,
+	.pm_get_dynpm_state = &r600_pm_get_dynpm_state,
 };
 
 int radeon_asic_init(struct radeon_device *rdev)
diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
index a0b8280..c0bbaa6 100644
--- a/drivers/gpu/drm/radeon/radeon_asic.h
+++ b/drivers/gpu/drm/radeon/radeon_asic.h
@@ -60,7 +60,8 @@ int r100_resume(struct radeon_device *rdev);
 uint32_t r100_mm_rreg(struct radeon_device *rdev, uint32_t reg);
 void r100_mm_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 void r100_vga_set_state(struct radeon_device *rdev, bool state);
-int r100_gpu_reset(struct radeon_device *rdev);
+bool r100_gpu_is_lockup(struct radeon_device *rdev);
+int r100_asic_reset(struct radeon_device *rdev);
 u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc);
 void r100_pci_gart_tlb_flush(struct radeon_device *rdev);
 int r100_pci_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr);
@@ -110,8 +111,6 @@ void r100_vram_init_sizes(struct radeon_device *rdev);
 void r100_wb_disable(struct radeon_device *rdev);
 void r100_wb_fini(struct radeon_device *rdev);
 int r100_wb_init(struct radeon_device *rdev);
-void r100_hdp_reset(struct radeon_device *rdev);
-int r100_rb2d_reset(struct radeon_device *rdev);
 int r100_cp_reset(struct radeon_device *rdev);
 void r100_vga_render_disable(struct radeon_device *rdev);
 int r100_cs_track_check_pkt3_indx_buffer(struct radeon_cs_parser *p,
@@ -126,6 +125,13 @@ int r100_cs_packet_parse(struct radeon_cs_parser *p,
 			 unsigned idx);
 void r100_enable_bm(struct radeon_device *rdev);
 void r100_set_common_regs(struct radeon_device *rdev);
+void r100_bm_disable(struct radeon_device *rdev);
+extern bool r100_gui_idle(struct radeon_device *rdev);
+extern void r100_pm_misc(struct radeon_device *rdev);
+extern void r100_pm_prepare(struct radeon_device *rdev);
+extern void r100_pm_finish(struct radeon_device *rdev);
+extern void r100_pm_init_profile(struct radeon_device *rdev);
+extern void r100_pm_get_dynpm_state(struct radeon_device *rdev);
 
 /*
  * r200,rv250,rs300,rv280
@@ -134,7 +140,7 @@ extern int r200_copy_dma(struct radeon_device *rdev,
 			uint64_t src_offset,
 			uint64_t dst_offset,
 			unsigned num_pages,
-			struct radeon_fence *fence);
+			 struct radeon_fence *fence);
 
 /*
  * r300,r350,rv350,rv380
@@ -143,7 +149,8 @@ extern int r300_init(struct radeon_device *rdev);
 extern void r300_fini(struct radeon_device *rdev);
 extern int r300_suspend(struct radeon_device *rdev);
 extern int r300_resume(struct radeon_device *rdev);
-extern int r300_gpu_reset(struct radeon_device *rdev);
+extern bool r300_gpu_is_lockup(struct radeon_device *rdev);
+extern int r300_asic_reset(struct radeon_device *rdev);
 extern void r300_ring_start(struct radeon_device *rdev);
 extern void r300_fence_ring_emit(struct radeon_device *rdev,
 				struct radeon_fence *fence);
@@ -162,6 +169,7 @@ extern int r420_init(struct radeon_device *rdev);
 extern void r420_fini(struct radeon_device *rdev);
 extern int r420_suspend(struct radeon_device *rdev);
 extern int r420_resume(struct radeon_device *rdev);
+extern void r420_pm_init_profile(struct radeon_device *rdev);
 
 /*
  * rs400,rs480
@@ -178,6 +186,7 @@ void rs400_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 /*
  * rs600.
  */
+extern int rs600_asic_reset(struct radeon_device *rdev);
 extern int rs600_init(struct radeon_device *rdev);
 extern void rs600_fini(struct radeon_device *rdev);
 extern int rs600_suspend(struct radeon_device *rdev);
@@ -195,6 +204,9 @@ void rs600_hpd_fini(struct radeon_device *rdev);
 bool rs600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd);
 void rs600_hpd_set_polarity(struct radeon_device *rdev,
 			    enum radeon_hpd_id hpd);
+extern void rs600_pm_misc(struct radeon_device *rdev);
+extern void rs600_pm_prepare(struct radeon_device *rdev);
+extern void rs600_pm_finish(struct radeon_device *rdev);
 
 /*
  * rs690,rs740
@@ -212,7 +224,6 @@ void rs690_bandwidth_update(struct radeon_device *rdev);
  */
 int rv515_init(struct radeon_device *rdev);
 void rv515_fini(struct radeon_device *rdev);
-int rv515_gpu_reset(struct radeon_device *rdev);
 uint32_t rv515_mc_rreg(struct radeon_device *rdev, uint32_t reg);
 void rv515_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 void rv515_ring_start(struct radeon_device *rdev);
@@ -252,7 +263,8 @@ int r600_copy_dma(struct radeon_device *rdev,
 		  struct radeon_fence *fence);
 int r600_irq_process(struct radeon_device *rdev);
 int r600_irq_set(struct radeon_device *rdev);
-int r600_gpu_reset(struct radeon_device *rdev);
+bool r600_gpu_is_lockup(struct radeon_device *rdev);
+int r600_asic_reset(struct radeon_device *rdev);
 int r600_set_surface_reg(struct radeon_device *rdev, int reg,
 			 uint32_t tiling_flags, uint32_t pitch,
 			 uint32_t offset, uint32_t obj_size);
@@ -268,6 +280,11 @@ bool r600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd);
 void r600_hpd_set_polarity(struct radeon_device *rdev,
 			   enum radeon_hpd_id hpd);
 extern void r600_ioctl_wait_idle(struct radeon_device *rdev, struct radeon_bo *bo);
+extern bool r600_gui_idle(struct radeon_device *rdev);
+extern void r600_pm_misc(struct radeon_device *rdev);
+extern void r600_pm_init_profile(struct radeon_device *rdev);
+extern void rs780_pm_init_profile(struct radeon_device *rdev);
+extern void r600_pm_get_dynpm_state(struct radeon_device *rdev);
 
 /*
  * rv770,rv730,rv710,rv740
@@ -276,20 +293,30 @@ int rv770_init(struct radeon_device *rdev);
 void rv770_fini(struct radeon_device *rdev);
 int rv770_suspend(struct radeon_device *rdev);
 int rv770_resume(struct radeon_device *rdev);
-int rv770_gpu_reset(struct radeon_device *rdev);
+extern void rv770_pm_misc(struct radeon_device *rdev);
 
 /*
  * evergreen
  */
+void evergreen_pcie_gart_tlb_flush(struct radeon_device *rdev);
 int evergreen_init(struct radeon_device *rdev);
 void evergreen_fini(struct radeon_device *rdev);
 int evergreen_suspend(struct radeon_device *rdev);
 int evergreen_resume(struct radeon_device *rdev);
-int evergreen_gpu_reset(struct radeon_device *rdev);
+bool evergreen_gpu_is_lockup(struct radeon_device *rdev);
+int evergreen_asic_reset(struct radeon_device *rdev);
 void evergreen_bandwidth_update(struct radeon_device *rdev);
 void evergreen_hpd_init(struct radeon_device *rdev);
 void evergreen_hpd_fini(struct radeon_device *rdev);
 bool evergreen_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd);
 void evergreen_hpd_set_polarity(struct radeon_device *rdev,
 				enum radeon_hpd_id hpd);
+u32 evergreen_get_vblank_counter(struct radeon_device *rdev, int crtc);
+int evergreen_irq_set(struct radeon_device *rdev);
+int evergreen_irq_process(struct radeon_device *rdev);
+extern int evergreen_cs_parse(struct radeon_cs_parser *p);
+extern void evergreen_pm_misc(struct radeon_device *rdev);
+extern void evergreen_pm_prepare(struct radeon_device *rdev);
+extern void evergreen_pm_finish(struct radeon_device *rdev);
+
 #endif
diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c
index 9916d82..99bd8a9 100644
--- a/drivers/gpu/drm/radeon/radeon_atombios.c
+++ b/drivers/gpu/drm/radeon/radeon_atombios.c
@@ -530,6 +530,8 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
 			}
 
 			/* look up gpio for ddc, hpd */
+			ddc_bus.valid = false;
+			hpd.hpd = RADEON_HPD_NONE;
 			if ((le16_to_cpu(path->usDeviceTag) &
 			     (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) == 0) {
 				for (j = 0; j < con_obj->ucNumberOfObjects; j++) {
@@ -547,7 +549,6 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
 						ATOM_I2C_RECORD *i2c_record;
 						ATOM_HPD_INT_RECORD *hpd_record;
 						ATOM_I2C_ID_CONFIG_ACCESS *i2c_config;
-						hpd.hpd = RADEON_HPD_NONE;
 
 						while (record->ucRecordType > 0
 						       && record->
@@ -585,13 +586,10 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
 						break;
 					}
 				}
-			} else {
-				hpd.hpd = RADEON_HPD_NONE;
-				ddc_bus.valid = false;
 			}
 
 			/* needed for aux chan transactions */
-			ddc_bus.hpd_id = hpd.hpd ? (hpd.hpd - 1) : 0;
+			ddc_bus.hpd = hpd.hpd;
 
 			conn_id = le16_to_cpu(path->usConnObjectId);
 
@@ -682,10 +680,18 @@ bool radeon_get_atom_connector_info_from_supported_devices_table(struct
 	uint8_t dac;
 	union atom_supported_devices *supported_devices;
 	int i, j, max_device;
-	struct bios_connector bios_connectors[ATOM_MAX_SUPPORTED_DEVICE];
+	struct bios_connector *bios_connectors;
+	size_t bc_size = sizeof(*bios_connectors) * ATOM_MAX_SUPPORTED_DEVICE;
 
-	if (!atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset))
+	bios_connectors = kzalloc(bc_size, GFP_KERNEL);
+	if (!bios_connectors)
+		return false;
+
+	if (!atom_parse_data_header(ctx, index, &size, &frev, &crev,
+				    &data_offset)) {
+		kfree(bios_connectors);
 		return false;
+	}
 
 	supported_devices =
 	    (union atom_supported_devices *)(ctx->bios + data_offset);
@@ -853,6 +859,7 @@ bool radeon_get_atom_connector_info_from_supported_devices_table(struct
 
 	radeon_link_encoder_connector(dev);
 
+	kfree(bios_connectors);
 	return true;
 }
 
@@ -1174,7 +1181,7 @@ struct radeon_encoder_atom_dig *radeon_atombios_get_lvds_info(struct
 		lvds->native_mode.vtotal = lvds->native_mode.vdisplay +
 			le16_to_cpu(lvds_info->info.sLCDTiming.usVBlanking_Time);
 		lvds->native_mode.vsync_start = lvds->native_mode.vdisplay +
-			le16_to_cpu(lvds_info->info.sLCDTiming.usVSyncWidth);
+			le16_to_cpu(lvds_info->info.sLCDTiming.usVSyncOffset);
 		lvds->native_mode.vsync_end = lvds->native_mode.vsync_start +
 			le16_to_cpu(lvds_info->info.sLCDTiming.usVSyncWidth);
 		lvds->panel_pwr_delay =
@@ -1442,26 +1449,30 @@ radeon_atombios_get_tv_dac_info(struct radeon_encoder *encoder)
 
 static const char *thermal_controller_names[] = {
 	"NONE",
-	"LM63",
-	"ADM1032",
-	"ADM1030",
-	"MUA6649",
-	"LM64",
-	"F75375",
-	"ASC7512",
+	"lm63",
+	"adm1032",
+	"adm1030",
+	"max6649",
+	"lm64",
+	"f75375",
+	"asc7xxx",
 };
 
 static const char *pp_lib_thermal_controller_names[] = {
 	"NONE",
-	"LM63",
-	"ADM1032",
-	"ADM1030",
-	"MUA6649",
-	"LM64",
-	"F75375",
+	"lm63",
+	"adm1032",
+	"adm1030",
+	"max6649",
+	"lm64",
+	"f75375",
 	"RV6xx",
 	"RV770",
-	"ADT7473",
+	"adt7473",
+	"External GPIO",
+	"Evergreen",
+	"adt7473 with internal",
+
 };
 
 union power_info {
@@ -1485,7 +1496,7 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 	int state_index = 0, mode_index = 0;
 	struct radeon_i2c_bus_rec i2c_bus;
 
-	rdev->pm.default_power_state = NULL;
+	rdev->pm.default_power_state_index = -1;
 
 	if (atom_parse_data_header(mode_info->atom_context, index, NULL,
 				   &frev, &crev, &data_offset)) {
@@ -1498,10 +1509,19 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					 power_info->info.ucOverdriveControllerAddress >> 1);
 				i2c_bus = radeon_lookup_i2c_gpio(rdev, power_info->info.ucOverdriveI2cLine);
 				rdev->pm.i2c_bus = radeon_i2c_create(rdev->ddev, &i2c_bus, "Thermal");
+				if (rdev->pm.i2c_bus) {
+					struct i2c_board_info info = { };
+					const char *name = thermal_controller_names[power_info->info.
+										    ucOverdriveThermalController];
+					info.addr = power_info->info.ucOverdriveControllerAddress >> 1;
+					strlcpy(info.type, name, sizeof(info.type));
+					i2c_new_device(&rdev->pm.i2c_bus->adapter, &info);
+				}
 			}
 			num_modes = power_info->info.ucNumOfPowerModeEntries;
 			if (num_modes > ATOM_MAX_NUMBEROF_POWER_BLOCK)
 				num_modes = ATOM_MAX_NUMBEROF_POWER_BLOCK;
+			/* last mode is usually default, array is low to high */
 			for (i = 0; i < num_modes; i++) {
 				rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
 				switch (frev) {
@@ -1515,16 +1535,11 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					if ((rdev->pm.power_state[state_index].clock_info[0].mclk == 0) ||
 					    (rdev->pm.power_state[state_index].clock_info[0].sclk == 0))
 						continue;
-					/* skip overclock modes for now */
-					if ((rdev->pm.power_state[state_index].clock_info[0].mclk >
-					     rdev->clock.default_mclk + RADEON_MODE_OVERCLOCK_MARGIN) ||
-					    (rdev->pm.power_state[state_index].clock_info[0].sclk >
-					     rdev->clock.default_sclk + RADEON_MODE_OVERCLOCK_MARGIN))
-						continue;
-					rdev->pm.power_state[state_index].non_clock_info.pcie_lanes =
+					rdev->pm.power_state[state_index].pcie_lanes =
 						power_info->info.asPowerPlayInfo[i].ucNumPciELanes;
 					misc = le32_to_cpu(power_info->info.asPowerPlayInfo[i].ulMiscInfo);
-					if (misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) {
+					if ((misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) ||
+					    (misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_ACTIVE_HIGH)) {
 						rdev->pm.power_state[state_index].clock_info[0].voltage.type =
 							VOLTAGE_GPIO;
 						rdev->pm.power_state[state_index].clock_info[0].voltage.gpio =
@@ -1542,6 +1557,8 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 						rdev->pm.power_state[state_index].clock_info[0].voltage.vddc_id =
 							power_info->info.asPowerPlayInfo[i].ucVoltageDropIndex;
 					}
+					rdev->pm.power_state[state_index].flags = RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					rdev->pm.power_state[state_index].misc = misc;
 					/* order matters! */
 					if (misc & ATOM_PM_MISCINFO_POWER_SAVING_MODE)
 						rdev->pm.power_state[state_index].type =
@@ -1555,15 +1572,23 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					if (misc & ATOM_PM_MISCINFO_LOAD_BALANCE_EN)
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_BALANCED;
-					if (misc & ATOM_PM_MISCINFO_3D_ACCELERATION_EN)
+					if (misc & ATOM_PM_MISCINFO_3D_ACCELERATION_EN) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_PERFORMANCE;
+						rdev->pm.power_state[state_index].flags &=
+							~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					}
 					if (misc & ATOM_PM_MISCINFO_DRIVER_DEFAULT_MODE) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_DEFAULT;
-						rdev->pm.default_power_state = &rdev->pm.power_state[state_index];
+						rdev->pm.default_power_state_index = state_index;
 						rdev->pm.power_state[state_index].default_clock_mode =
 							&rdev->pm.power_state[state_index].clock_info[0];
+						rdev->pm.power_state[state_index].flags &=
+							~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					} else if (state_index == 0) {
+						rdev->pm.power_state[state_index].clock_info[0].flags |=
+							RADEON_PM_MODE_NO_DISPLAY;
 					}
 					state_index++;
 					break;
@@ -1577,17 +1602,12 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					if ((rdev->pm.power_state[state_index].clock_info[0].mclk == 0) ||
 					    (rdev->pm.power_state[state_index].clock_info[0].sclk == 0))
 						continue;
-					/* skip overclock modes for now */
-					if ((rdev->pm.power_state[state_index].clock_info[0].mclk >
-					     rdev->clock.default_mclk + RADEON_MODE_OVERCLOCK_MARGIN) ||
-					    (rdev->pm.power_state[state_index].clock_info[0].sclk >
-					     rdev->clock.default_sclk + RADEON_MODE_OVERCLOCK_MARGIN))
-						continue;
-					rdev->pm.power_state[state_index].non_clock_info.pcie_lanes =
+					rdev->pm.power_state[state_index].pcie_lanes =
 						power_info->info_2.asPowerPlayInfo[i].ucNumPciELanes;
 					misc = le32_to_cpu(power_info->info_2.asPowerPlayInfo[i].ulMiscInfo);
 					misc2 = le32_to_cpu(power_info->info_2.asPowerPlayInfo[i].ulMiscInfo2);
-					if (misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) {
+					if ((misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) ||
+					    (misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_ACTIVE_HIGH)) {
 						rdev->pm.power_state[state_index].clock_info[0].voltage.type =
 							VOLTAGE_GPIO;
 						rdev->pm.power_state[state_index].clock_info[0].voltage.gpio =
@@ -1605,6 +1625,9 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 						rdev->pm.power_state[state_index].clock_info[0].voltage.vddc_id =
 							power_info->info_2.asPowerPlayInfo[i].ucVoltageDropIndex;
 					}
+					rdev->pm.power_state[state_index].flags = RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					rdev->pm.power_state[state_index].misc = misc;
+					rdev->pm.power_state[state_index].misc2 = misc2;
 					/* order matters! */
 					if (misc & ATOM_PM_MISCINFO_POWER_SAVING_MODE)
 						rdev->pm.power_state[state_index].type =
@@ -1618,18 +1641,29 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					if (misc & ATOM_PM_MISCINFO_LOAD_BALANCE_EN)
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_BALANCED;
-					if (misc & ATOM_PM_MISCINFO_3D_ACCELERATION_EN)
+					if (misc & ATOM_PM_MISCINFO_3D_ACCELERATION_EN) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_PERFORMANCE;
+						rdev->pm.power_state[state_index].flags &=
+							~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					}
 					if (misc2 & ATOM_PM_MISCINFO2_SYSTEM_AC_LITE_MODE)
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_BALANCED;
+					if (misc2 & ATOM_PM_MISCINFO2_MULTI_DISPLAY_SUPPORT)
+						rdev->pm.power_state[state_index].flags &=
+							~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
 					if (misc & ATOM_PM_MISCINFO_DRIVER_DEFAULT_MODE) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_DEFAULT;
-						rdev->pm.default_power_state = &rdev->pm.power_state[state_index];
+						rdev->pm.default_power_state_index = state_index;
 						rdev->pm.power_state[state_index].default_clock_mode =
 							&rdev->pm.power_state[state_index].clock_info[0];
+						rdev->pm.power_state[state_index].flags &=
+							~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					} else if (state_index == 0) {
+						rdev->pm.power_state[state_index].clock_info[0].flags |=
+							RADEON_PM_MODE_NO_DISPLAY;
 					}
 					state_index++;
 					break;
@@ -1643,17 +1677,12 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					if ((rdev->pm.power_state[state_index].clock_info[0].mclk == 0) ||
 					    (rdev->pm.power_state[state_index].clock_info[0].sclk == 0))
 						continue;
-					/* skip overclock modes for now */
-					if ((rdev->pm.power_state[state_index].clock_info[0].mclk >
-					     rdev->clock.default_mclk + RADEON_MODE_OVERCLOCK_MARGIN) ||
-					    (rdev->pm.power_state[state_index].clock_info[0].sclk >
-					     rdev->clock.default_sclk + RADEON_MODE_OVERCLOCK_MARGIN))
-						continue;
-					rdev->pm.power_state[state_index].non_clock_info.pcie_lanes =
+					rdev->pm.power_state[state_index].pcie_lanes =
 						power_info->info_3.asPowerPlayInfo[i].ucNumPciELanes;
 					misc = le32_to_cpu(power_info->info_3.asPowerPlayInfo[i].ulMiscInfo);
 					misc2 = le32_to_cpu(power_info->info_3.asPowerPlayInfo[i].ulMiscInfo2);
-					if (misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) {
+					if ((misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) ||
+					    (misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_ACTIVE_HIGH)) {
 						rdev->pm.power_state[state_index].clock_info[0].voltage.type =
 							VOLTAGE_GPIO;
 						rdev->pm.power_state[state_index].clock_info[0].voltage.gpio =
@@ -1677,6 +1706,9 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 							power_info->info_3.asPowerPlayInfo[i].ucVDDCI_VoltageDropIndex;
 						}
 					}
+					rdev->pm.power_state[state_index].flags = RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					rdev->pm.power_state[state_index].misc = misc;
+					rdev->pm.power_state[state_index].misc2 = misc2;
 					/* order matters! */
 					if (misc & ATOM_PM_MISCINFO_POWER_SAVING_MODE)
 						rdev->pm.power_state[state_index].type =
@@ -1690,42 +1722,89 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 					if (misc & ATOM_PM_MISCINFO_LOAD_BALANCE_EN)
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_BALANCED;
-					if (misc & ATOM_PM_MISCINFO_3D_ACCELERATION_EN)
+					if (misc & ATOM_PM_MISCINFO_3D_ACCELERATION_EN) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_PERFORMANCE;
+						rdev->pm.power_state[state_index].flags &=
+							~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+					}
 					if (misc2 & ATOM_PM_MISCINFO2_SYSTEM_AC_LITE_MODE)
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_BALANCED;
 					if (misc & ATOM_PM_MISCINFO_DRIVER_DEFAULT_MODE) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_DEFAULT;
-						rdev->pm.default_power_state = &rdev->pm.power_state[state_index];
+						rdev->pm.default_power_state_index = state_index;
 						rdev->pm.power_state[state_index].default_clock_mode =
 							&rdev->pm.power_state[state_index].clock_info[0];
+					} else if (state_index == 0) {
+						rdev->pm.power_state[state_index].clock_info[0].flags |=
+							RADEON_PM_MODE_NO_DISPLAY;
 					}
 					state_index++;
 					break;
 				}
 			}
-		} else if (frev == 4) {
+			/* last mode is usually default */
+			if (rdev->pm.default_power_state_index == -1) {
+				rdev->pm.power_state[state_index - 1].type =
+					POWER_STATE_TYPE_DEFAULT;
+				rdev->pm.default_power_state_index = state_index - 1;
+				rdev->pm.power_state[state_index - 1].default_clock_mode =
+					&rdev->pm.power_state[state_index - 1].clock_info[0];
+				rdev->pm.power_state[state_index].flags &=
+					~RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
+				rdev->pm.power_state[state_index].misc = 0;
+				rdev->pm.power_state[state_index].misc2 = 0;
+			}
+		} else {
+			int fw_index = GetIndexIntoMasterTable(DATA, FirmwareInfo);
+			uint8_t fw_frev, fw_crev;
+			uint16_t fw_data_offset, vddc = 0;
+			union firmware_info *firmware_info;
+			ATOM_PPLIB_THERMALCONTROLLER *controller = &power_info->info_4.sThermalController;
+
+			if (atom_parse_data_header(mode_info->atom_context, fw_index, NULL,
+						   &fw_frev, &fw_crev, &fw_data_offset)) {
+				firmware_info =
+					(union firmware_info *)(mode_info->atom_context->bios +
+								fw_data_offset);
+				vddc = firmware_info->info_14.usBootUpVDDCVoltage;
+			}
+
 			/* add the i2c bus for thermal/fan chip */
 			/* no support for internal controller yet */
-			if (power_info->info_4.sThermalController.ucType > 0) {
-				if ((power_info->info_4.sThermalController.ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) ||
-				    (power_info->info_4.sThermalController.ucType == ATOM_PP_THERMALCONTROLLER_RV770)) {
+			if (controller->ucType > 0) {
+				if ((controller->ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) ||
+				    (controller->ucType == ATOM_PP_THERMALCONTROLLER_RV770) ||
+				    (controller->ucType == ATOM_PP_THERMALCONTROLLER_EVERGREEN)) {
 					DRM_INFO("Internal thermal controller %s fan control\n",
-						 (power_info->info_4.sThermalController.ucFanParameters &
+						 (controller->ucFanParameters &
 						  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
+				} else if ((controller->ucType ==
+					    ATOM_PP_THERMALCONTROLLER_EXTERNAL_GPIO) ||
+					   (controller->ucType ==
+					    ATOM_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL)) {
+					DRM_INFO("Special thermal controller config\n");
 				} else {
 					DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n",
-						 pp_lib_thermal_controller_names[power_info->info_4.sThermalController.ucType],
-						 power_info->info_4.sThermalController.ucI2cAddress >> 1,
-						 (power_info->info_4.sThermalController.ucFanParameters &
+						 pp_lib_thermal_controller_names[controller->ucType],
+						 controller->ucI2cAddress >> 1,
+						 (controller->ucFanParameters &
 						  ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with");
-					i2c_bus = radeon_lookup_i2c_gpio(rdev, power_info->info_4.sThermalController.ucI2cLine);
+					i2c_bus = radeon_lookup_i2c_gpio(rdev, controller->ucI2cLine);
 					rdev->pm.i2c_bus = radeon_i2c_create(rdev->ddev, &i2c_bus, "Thermal");
+					if (rdev->pm.i2c_bus) {
+						struct i2c_board_info info = { };
+						const char *name = pp_lib_thermal_controller_names[controller->ucType];
+						info.addr = controller->ucI2cAddress >> 1;
+						strlcpy(info.type, name, sizeof(info.type));
+						i2c_new_device(&rdev->pm.i2c_bus->adapter, &info);
+					}
+
 				}
 			}
+			/* first mode is usually default, followed by low to high */
 			for (i = 0; i < power_info->info_4.ucNumStates; i++) {
 				mode_index = 0;
 				power_state = (struct _ATOM_PPLIB_STATE *)
@@ -1754,14 +1833,31 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 						/* skip invalid modes */
 						if (rdev->pm.power_state[state_index].clock_info[mode_index].sclk == 0)
 							continue;
-						/* skip overclock modes for now */
-						if (rdev->pm.power_state[state_index].clock_info[mode_index].sclk >
-						    rdev->clock.default_sclk + RADEON_MODE_OVERCLOCK_MARGIN)
+						/* voltage works differently on IGPs */
+						mode_index++;
+					} else if (ASIC_IS_DCE4(rdev)) {
+						struct _ATOM_PPLIB_EVERGREEN_CLOCK_INFO *clock_info =
+							(struct _ATOM_PPLIB_EVERGREEN_CLOCK_INFO *)
+							(mode_info->atom_context->bios +
+							 data_offset +
+							 le16_to_cpu(power_info->info_4.usClockInfoArrayOffset) +
+							 (power_state->ucClockStateIndices[j] *
+							  power_info->info_4.ucClockInfoSize));
+						sclk = le16_to_cpu(clock_info->usEngineClockLow);
+						sclk |= clock_info->ucEngineClockHigh << 16;
+						mclk = le16_to_cpu(clock_info->usMemoryClockLow);
+						mclk |= clock_info->ucMemoryClockHigh << 16;
+						rdev->pm.power_state[state_index].clock_info[mode_index].mclk = mclk;
+						rdev->pm.power_state[state_index].clock_info[mode_index].sclk = sclk;
+						/* skip invalid modes */
+						if ((rdev->pm.power_state[state_index].clock_info[mode_index].mclk == 0) ||
+						    (rdev->pm.power_state[state_index].clock_info[mode_index].sclk == 0))
 							continue;
 						rdev->pm.power_state[state_index].clock_info[mode_index].voltage.type =
 							VOLTAGE_SW;
 						rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage =
 							clock_info->usVDDC;
+						/* XXX usVDDCI */
 						mode_index++;
 					} else {
 						struct _ATOM_PPLIB_R600_CLOCK_INFO *clock_info =
@@ -1781,12 +1877,6 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 						if ((rdev->pm.power_state[state_index].clock_info[mode_index].mclk == 0) ||
 						    (rdev->pm.power_state[state_index].clock_info[mode_index].sclk == 0))
 							continue;
-						/* skip overclock modes for now */
-						if ((rdev->pm.power_state[state_index].clock_info[mode_index].mclk >
-						     rdev->clock.default_mclk + RADEON_MODE_OVERCLOCK_MARGIN) ||
-						    (rdev->pm.power_state[state_index].clock_info[mode_index].sclk >
-						     rdev->clock.default_sclk + RADEON_MODE_OVERCLOCK_MARGIN))
-							continue;
 						rdev->pm.power_state[state_index].clock_info[mode_index].voltage.type =
 							VOLTAGE_SW;
 						rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage =
@@ -1798,7 +1888,9 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 				if (mode_index) {
 					misc = le32_to_cpu(non_clock_info->ulCapsAndSettings);
 					misc2 = le16_to_cpu(non_clock_info->usClassification);
-					rdev->pm.power_state[state_index].non_clock_info.pcie_lanes =
+					rdev->pm.power_state[state_index].misc = misc;
+					rdev->pm.power_state[state_index].misc2 = misc2;
+					rdev->pm.power_state[state_index].pcie_lanes =
 						((misc & ATOM_PPLIB_PCIE_LINK_WIDTH_MASK) >>
 						ATOM_PPLIB_PCIE_LINK_WIDTH_SHIFT) + 1;
 					switch (misc2 & ATOM_PPLIB_CLASSIFICATION_UI_MASK) {
@@ -1815,22 +1907,46 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 							POWER_STATE_TYPE_PERFORMANCE;
 						break;
 					}
+					rdev->pm.power_state[state_index].flags = 0;
+					if (misc & ATOM_PPLIB_SINGLE_DISPLAY_ONLY)
+						rdev->pm.power_state[state_index].flags |=
+							RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
 					if (misc2 & ATOM_PPLIB_CLASSIFICATION_BOOT) {
 						rdev->pm.power_state[state_index].type =
 							POWER_STATE_TYPE_DEFAULT;
-						rdev->pm.default_power_state = &rdev->pm.power_state[state_index];
+						rdev->pm.default_power_state_index = state_index;
 						rdev->pm.power_state[state_index].default_clock_mode =
 							&rdev->pm.power_state[state_index].clock_info[mode_index - 1];
+						/* patch the table values with the default sclk/mclk from firmware info */
+						for (j = 0; j < mode_index; j++) {
+							rdev->pm.power_state[state_index].clock_info[j].mclk =
+								rdev->clock.default_mclk;
+							rdev->pm.power_state[state_index].clock_info[j].sclk =
+								rdev->clock.default_sclk;
+							if (vddc)
+								rdev->pm.power_state[state_index].clock_info[j].voltage.voltage =
+									vddc;
+						}
 					}
 					state_index++;
 				}
 			}
+			/* if multiple clock modes, mark the lowest as no display */
+			for (i = 0; i < state_index; i++) {
+				if (rdev->pm.power_state[i].num_clock_modes > 1)
+					rdev->pm.power_state[i].clock_info[0].flags |=
+						RADEON_PM_MODE_NO_DISPLAY;
+			}
+			/* first mode is usually default */
+			if (rdev->pm.default_power_state_index == -1) {
+				rdev->pm.power_state[0].type =
+					POWER_STATE_TYPE_DEFAULT;
+				rdev->pm.default_power_state_index = 0;
+				rdev->pm.power_state[0].default_clock_mode =
+					&rdev->pm.power_state[0].clock_info[0];
+			}
 		}
 	} else {
-		/* XXX figure out some good default low power mode for cards w/out power tables */
-	}
-
-	if (rdev->pm.default_power_state == NULL) {
 		/* add the default mode */
 		rdev->pm.power_state[state_index].type =
 			POWER_STATE_TYPE_DEFAULT;
@@ -1840,18 +1956,17 @@ void radeon_atombios_get_power_modes(struct radeon_device *rdev)
 		rdev->pm.power_state[state_index].default_clock_mode =
 			&rdev->pm.power_state[state_index].clock_info[0];
 		rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
-		if (rdev->asic->get_pcie_lanes)
-			rdev->pm.power_state[state_index].non_clock_info.pcie_lanes = radeon_get_pcie_lanes(rdev);
-		else
-			rdev->pm.power_state[state_index].non_clock_info.pcie_lanes = 16;
-		rdev->pm.default_power_state = &rdev->pm.power_state[state_index];
+		rdev->pm.power_state[state_index].pcie_lanes = 16;
+		rdev->pm.default_power_state_index = state_index;
+		rdev->pm.power_state[state_index].flags = 0;
 		state_index++;
 	}
+
 	rdev->pm.num_power_states = state_index;
 
-	rdev->pm.current_power_state = rdev->pm.default_power_state;
-	rdev->pm.current_clock_mode =
-		rdev->pm.default_power_state->default_clock_mode;
+	rdev->pm.current_power_state_index = rdev->pm.default_power_state_index;
+	rdev->pm.current_clock_mode_index = 0;
+	rdev->pm.current_vddc = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.voltage;
 }
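
The thermal-controller hunk above does more than print the chip name: once the
i2c bus is created, it instantiates the external sensor as a real i2c client so
a hwmon driver can bind to it. A rough sketch of that registration idiom, not
the driver code itself, assuming the adapter is already in hand (i2c_new_device()
is the API of this kernel generation; the chip name and address in the usage
line are placeholders):

    /* sketch only -- adapter lookup omitted */
    #include <linux/i2c.h>
    #include <linux/string.h>

    static struct i2c_client *register_thermal_chip(struct i2c_adapter *adap,
                                                    const char *name, u8 addr)
    {
            struct i2c_board_info info = { };

            info.addr = addr;                   /* 7-bit slave address */
            strlcpy(info.type, name, sizeof(info.type));
            /* returns NULL if the device could not be created */
            return i2c_new_device(adap, &info);
    }

    /* usage: register_thermal_chip(&rdev->pm.i2c_bus->adapter, "lm63", 0x4c); */
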
 
 void radeon_atom_set_clock_gating(struct radeon_device *rdev, int enable)
@@ -1907,6 +2022,42 @@ void radeon_atom_set_memory_clock(struct radeon_device *rdev,
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
 }
 
+union set_voltage {
+	struct _SET_VOLTAGE_PS_ALLOCATION alloc;
+	struct _SET_VOLTAGE_PARAMETERS v1;
+	struct _SET_VOLTAGE_PARAMETERS_V2 v2;
+};
+
+void radeon_atom_set_voltage(struct radeon_device *rdev, u16 level)
+{
+	union set_voltage args;
+	int index = GetIndexIntoMasterTable(COMMAND, SetVoltage);
+	u8 frev, crev, volt_index = level;
+
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return;
+
+	switch (crev) {
+	case 1:
+		args.v1.ucVoltageType = SET_VOLTAGE_TYPE_ASIC_VDDC;
+		args.v1.ucVoltageMode = SET_ASIC_VOLTAGE_MODE_ALL_SOURCE;
+		args.v1.ucVoltageIndex = volt_index;
+		break;
+	case 2:
+		args.v2.ucVoltageType = SET_VOLTAGE_TYPE_ASIC_VDDC;
+		args.v2.ucVoltageMode = SET_ASIC_VOLTAGE_MODE_SET_VOLTAGE;
+		args.v2.usVoltageLevel = cpu_to_le16(level);
+		break;
+	default:
+		DRM_ERROR("Unknown table version %d, %d\n", frev, crev);
+		return;
+	}
+
+	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+}
+
 void radeon_atom_initialize_bios_scratch_regs(struct drm_device *dev)
 {
 	struct radeon_device *rdev = dev->dev_private;
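
The new radeon_atom_set_voltage() shows the usual AtomBIOS pattern: one union
overlays every revision of the parameter block, and the command table's crev
selects which layout gets filled before the table executes. A self-contained
toy of that dispatch (the struct layouts here are invented for illustration,
not the AtomBIOS ones):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct params_v1 { uint8_t type, mode, index; };       /* index-based */
    struct params_v2 { uint8_t type, mode; uint16_t mv; }; /* level in mV */

    union set_voltage_args {
            struct params_v1 v1;
            struct params_v2 v2;
    };

    static int build_args(uint8_t crev, uint16_t level,
                          union set_voltage_args *args)
    {
            memset(args, 0, sizeof(*args));
            switch (crev) {
            case 1:
                    args->v1.index = (uint8_t)level;  /* lookup index */
                    return 0;
            case 2:
                    args->v2.mv = level;              /* millivolts */
                    return 0;
            default:
                    fprintf(stderr, "unknown table revision %u\n", crev);
                    return -1;
            }
    }

    int main(void)
    {
            union set_voltage_args args;

            if (build_args(2, 1100, &args) == 0)
                    printf("v2: %u mV\n", args.v2.mv);
            return 0;
    }
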
diff --git a/drivers/gpu/drm/radeon/radeon_bios.c b/drivers/gpu/drm/radeon/radeon_bios.c
index 8ad71f7..fbba938 100644
--- a/drivers/gpu/drm/radeon/radeon_bios.c
+++ b/drivers/gpu/drm/radeon/radeon_bios.c
@@ -85,12 +85,11 @@ static bool radeon_read_bios(struct radeon_device *rdev)
 		pci_unmap_rom(rdev->pdev, bios);
 		return false;
 	}
-	rdev->bios = kmalloc(size, GFP_KERNEL);
+	rdev->bios = kmemdup(bios, size, GFP_KERNEL);
 	if (rdev->bios == NULL) {
 		pci_unmap_rom(rdev->pdev, bios);
 		return false;
 	}
-	memcpy(rdev->bios, bios, size);
 	pci_unmap_rom(rdev->pdev, bios);
 	return true;
 }
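
The radeon_bios.c hunk is a pure cleanup: kmemdup() folds the kmalloc() +
memcpy() pair into one call with a single failure path. A userspace analogue
of the helper:

    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    /* allocate-and-copy in one step, like the kernel's kmemdup() */
    static void *memdup(const void *src, size_t len)
    {
            void *p = malloc(len);

            if (p)
                    memcpy(p, src, len);
            return p;
    }

    int main(void)
    {
            const unsigned char rom[] = { 0x55, 0xaa, 0x12, 0x34 };
            unsigned char *copy = memdup(rom, sizeof(rom));

            if (!copy)
                    return 1;
            printf("ROM signature: %02x %02x\n", copy[0], copy[1]);
            free(copy);
            return 0;
    }
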
diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
index 37db8ad..1bee2f9 100644
--- a/drivers/gpu/drm/radeon/radeon_combios.c
+++ b/drivers/gpu/drm/radeon/radeon_combios.c
@@ -450,17 +450,17 @@ bool radeon_combios_check_hardcoded_edid(struct radeon_device *rdev)
 {
 	int edid_info;
 	struct edid *edid;
+	unsigned char *raw;
 	edid_info = combios_get_table_offset(rdev->ddev, COMBIOS_HARDCODED_EDID_TABLE);
 	if (!edid_info)
 		return false;
 
-	edid = kmalloc(EDID_LENGTH * (DRM_MAX_EDID_EXT_NUM + 1),
-		       GFP_KERNEL);
+	raw = rdev->bios + edid_info;
+	edid = kmalloc(EDID_LENGTH * (raw[0x7e] + 1), GFP_KERNEL);
 	if (edid == NULL)
 		return false;
 
-	memcpy((unsigned char *)edid,
-	       (unsigned char *)(rdev->bios + edid_info), EDID_LENGTH);
+	memcpy((unsigned char *)edid, raw, EDID_LENGTH * (raw[0x7e] + 1));
 
 	if (!drm_edid_is_valid(edid)) {
 		kfree(edid);
@@ -600,7 +600,7 @@ static struct radeon_i2c_bus_rec combios_setup_i2c_bus(struct radeon_device *rde
 	}
 	i2c.mm_i2c = false;
 	i2c.i2c_id = 0;
-	i2c.hpd_id = 0;
+	i2c.hpd = RADEON_HPD_NONE;
 
 	if (ddc_line)
 		i2c.valid = true;
@@ -1113,18 +1113,20 @@ struct radeon_encoder_lvds *radeon_combios_get_lvds_info(struct radeon_encoder
 				break;
 
 			if ((RBIOS16(tmp) == lvds->native_mode.hdisplay) &&
-			    (RBIOS16(tmp + 2) ==
-			     lvds->native_mode.vdisplay)) {
-				lvds->native_mode.htotal = RBIOS16(tmp + 17) * 8;
-				lvds->native_mode.hsync_start = RBIOS16(tmp + 21) * 8;
-				lvds->native_mode.hsync_end = (RBIOS8(tmp + 23) +
-							       RBIOS16(tmp + 21)) * 8;
-
-				lvds->native_mode.vtotal = RBIOS16(tmp + 24);
-				lvds->native_mode.vsync_start = RBIOS16(tmp + 28) & 0x7ff;
-				lvds->native_mode.vsync_end =
-					((RBIOS16(tmp + 28) & 0xf800) >> 11) +
-					(RBIOS16(tmp + 28) & 0x7ff);
+			    (RBIOS16(tmp + 2) == lvds->native_mode.vdisplay)) {
+				lvds->native_mode.htotal = lvds->native_mode.hdisplay +
+					(RBIOS16(tmp + 17) - RBIOS16(tmp + 19)) * 8;
+				lvds->native_mode.hsync_start = lvds->native_mode.hdisplay +
+					(RBIOS16(tmp + 21) - RBIOS16(tmp + 19) - 1) * 8;
+				lvds->native_mode.hsync_end = lvds->native_mode.hsync_start +
+					(RBIOS8(tmp + 23) * 8);
+
+				lvds->native_mode.vtotal = lvds->native_mode.vdisplay +
+					(RBIOS16(tmp + 24) - RBIOS16(tmp + 26));
+				lvds->native_mode.vsync_start = lvds->native_mode.vdisplay +
+					((RBIOS16(tmp + 28) & 0x7ff) - RBIOS16(tmp + 26));
+				lvds->native_mode.vsync_end = lvds->native_mode.vsync_start +
+					((RBIOS16(tmp + 28) & 0xf800) >> 11);
 
 				lvds->native_mode.clock = RBIOS16(tmp + 9) * 10;
 				lvds->native_mode.flags = 0;
@@ -2024,6 +2026,7 @@ bool radeon_get_legacy_connector_info_from_bios(struct drm_device *dev)
 					combios_setup_i2c_bus(rdev, RADEON_GPIO_CRT2_DDC);
 				break;
 			default:
+				ddc_i2c.valid = false;
 				break;
 			}
 
@@ -2196,7 +2199,7 @@ bool radeon_get_legacy_connector_info_from_bios(struct drm_device *dev)
 						  ATOM_DEVICE_DFP1_SUPPORT);
 
 			ddc_i2c = combios_setup_i2c_bus(rdev, RADEON_GPIO_DVI_DDC);
-			hpd.hpd = RADEON_HPD_NONE;
+			hpd.hpd = RADEON_HPD_1;
 			radeon_add_legacy_connector(dev,
 						    0,
 						    ATOM_DEVICE_CRT1_SUPPORT |
@@ -2337,6 +2340,7 @@ bool radeon_get_legacy_connector_info_from_bios(struct drm_device *dev)
 			if (RBIOS8(tv_info + 6) == 'T') {
 				if (radeon_apply_legacy_tv_quirks(dev)) {
 					hpd.hpd = RADEON_HPD_NONE;
+					ddc_i2c.valid = false;
 					radeon_add_legacy_encoder(dev,
 								  radeon_get_encoder_id
 								  (dev,
@@ -2366,7 +2370,7 @@ void radeon_combios_get_power_modes(struct radeon_device *rdev)
 	u8 rev, blocks, tmp;
 	int state_index = 0;
 
-	rdev->pm.default_power_state = NULL;
+	rdev->pm.default_power_state_index = -1;
 
 	if (rdev->flags & RADEON_IS_MOBILITY) {
 		offset = combios_get_table_offset(dev, COMBIOS_POWERPLAY_INFO_TABLE);
@@ -2380,17 +2384,13 @@ void radeon_combios_get_power_modes(struct radeon_device *rdev)
 			if ((rdev->pm.power_state[state_index].clock_info[0].mclk == 0) ||
 			    (rdev->pm.power_state[state_index].clock_info[0].sclk == 0))
 				goto default_mode;
-			/* skip overclock modes for now */
-			if ((rdev->pm.power_state[state_index].clock_info[0].mclk >
-			     rdev->clock.default_mclk + RADEON_MODE_OVERCLOCK_MARGIN) ||
-			    (rdev->pm.power_state[state_index].clock_info[0].sclk >
-			     rdev->clock.default_sclk + RADEON_MODE_OVERCLOCK_MARGIN))
-				goto default_mode;
 			rdev->pm.power_state[state_index].type =
 				POWER_STATE_TYPE_BATTERY;
 			misc = RBIOS16(offset + 0x5 + 0x0);
 			if (rev > 4)
 				misc2 = RBIOS16(offset + 0x5 + 0xe);
+			rdev->pm.power_state[state_index].misc = misc;
+			rdev->pm.power_state[state_index].misc2 = misc2;
 			if (misc & 0x4) {
 				rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_GPIO;
 				if (misc & 0x8)
@@ -2437,8 +2437,9 @@ void radeon_combios_get_power_modes(struct radeon_device *rdev)
 			} else
 				rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
 			if (rev > 6)
-				rdev->pm.power_state[state_index].non_clock_info.pcie_lanes =
+				rdev->pm.power_state[state_index].pcie_lanes =
 					RBIOS8(offset + 0x5 + 0x10);
+			rdev->pm.power_state[state_index].flags = RADEON_PM_STATE_SINGLE_DISPLAY_ONLY;
 			state_index++;
 		} else {
 			/* XXX figure out some good default low power mode for mobility cards w/out power tables */
@@ -2455,17 +2456,19 @@ default_mode:
 	rdev->pm.power_state[state_index].clock_info[0].mclk = rdev->clock.default_mclk;
 	rdev->pm.power_state[state_index].clock_info[0].sclk = rdev->clock.default_sclk;
 	rdev->pm.power_state[state_index].default_clock_mode = &rdev->pm.power_state[state_index].clock_info[0];
-	rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
-	if (rdev->asic->get_pcie_lanes)
-		rdev->pm.power_state[state_index].non_clock_info.pcie_lanes = radeon_get_pcie_lanes(rdev);
+	if ((state_index > 0) &&
+	    (rdev->pm.power_state[0].clock_info[0].voltage.type == VOLTAGE_GPIO))
+		rdev->pm.power_state[state_index].clock_info[0].voltage =
+			rdev->pm.power_state[0].clock_info[0].voltage;
 	else
-		rdev->pm.power_state[state_index].non_clock_info.pcie_lanes = 16;
-	rdev->pm.default_power_state = &rdev->pm.power_state[state_index];
+		rdev->pm.power_state[state_index].clock_info[0].voltage.type = VOLTAGE_NONE;
+	rdev->pm.power_state[state_index].pcie_lanes = 16;
+	rdev->pm.power_state[state_index].flags = 0;
+	rdev->pm.default_power_state_index = state_index;
 	rdev->pm.num_power_states = state_index + 1;
 
-	rdev->pm.current_power_state = rdev->pm.default_power_state;
-	rdev->pm.current_clock_mode =
-		rdev->pm.default_power_state->default_clock_mode;
+	rdev->pm.current_power_state_index = rdev->pm.default_power_state_index;
+	rdev->pm.current_clock_mode_index = 0;
 }
 
 void radeon_external_tmds_setup(struct drm_encoder *encoder)
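
The hardcoded-EDID fix above hinges on the EDID layout: byte 0x7e of the
128-byte base block stores how many extension blocks follow, so the full blob
is EDID_LENGTH * (raw[0x7e] + 1) bytes. The old code sized the buffer by a
compile-time maximum yet copied only the base block; the new code sizes and
copies by the actual count. The rule in isolation:

    #include <stdio.h>
    #include <stddef.h>

    #define EDID_LENGTH 128  /* one EDID block */

    /* total size of an EDID blob: base block plus extensions */
    static size_t edid_total_size(const unsigned char *base)
    {
            return (size_t)EDID_LENGTH * (base[0x7e] + 1);
    }

    int main(void)
    {
            unsigned char edid[EDID_LENGTH] = { 0 };

            edid[0x7e] = 1;  /* e.g. one CEA-861 extension block */
            printf("total: %zu bytes\n", edid_total_size(edid));
            return 0;
    }
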
diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index 4559a53..0c7ccc6 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -1041,7 +1041,6 @@ radeon_add_atom_connector(struct drm_device *dev,
 	struct radeon_connector_atom_dig *radeon_dig_connector;
 	uint32_t subpixel_order = SubPixelNone;
 	bool shared_ddc = false;
-	int ret;
 
 	/* fixme - tv/cv/din */
 	if (connector_type == DRM_MODE_CONNECTOR_Unknown)
@@ -1076,9 +1075,7 @@ radeon_add_atom_connector(struct drm_device *dev,
 	switch (connector_type) {
 	case DRM_MODE_CONNECTOR_VGA:
 		drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "VGA");
 			if (!radeon_connector->ddc_bus)
@@ -1088,12 +1085,11 @@ radeon_add_atom_connector(struct drm_device *dev,
 		drm_connector_attach_property(&radeon_connector->base,
 					      rdev->mode_info.load_detect_property,
 					      1);
+		connector->polled = DRM_CONNECTOR_POLL_CONNECT;
 		break;
 	case DRM_MODE_CONNECTOR_DVIA:
 		drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "DVI");
 			if (!radeon_connector->ddc_bus)
@@ -1113,9 +1109,7 @@ radeon_add_atom_connector(struct drm_device *dev,
 		radeon_dig_connector->igp_lane_info = igp_lane_info;
 		radeon_connector->con_priv = radeon_dig_connector;
 		drm_connector_init(dev, &radeon_connector->base, &radeon_dvi_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "DVI");
 			if (!radeon_connector->ddc_bus)
@@ -1141,9 +1135,7 @@ radeon_add_atom_connector(struct drm_device *dev,
 		radeon_dig_connector->igp_lane_info = igp_lane_info;
 		radeon_connector->con_priv = radeon_dig_connector;
 		drm_connector_init(dev, &radeon_connector->base, &radeon_dvi_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "HDMI");
 			if (!radeon_connector->ddc_bus)
@@ -1163,9 +1155,7 @@ radeon_add_atom_connector(struct drm_device *dev,
 		radeon_dig_connector->igp_lane_info = igp_lane_info;
 		radeon_connector->con_priv = radeon_dig_connector;
 		drm_connector_init(dev, &radeon_connector->base, &radeon_dp_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_dp_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_dp_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			/* add DP i2c bus */
 			if (connector_type == DRM_MODE_CONNECTOR_eDP)
@@ -1191,9 +1181,7 @@ radeon_add_atom_connector(struct drm_device *dev,
 	case DRM_MODE_CONNECTOR_9PinDIN:
 		if (radeon_tv == 1) {
 			drm_connector_init(dev, &radeon_connector->base, &radeon_tv_connector_funcs, connector_type);
-			ret = drm_connector_helper_add(&radeon_connector->base, &radeon_tv_connector_helper_funcs);
-			if (ret)
-				goto failed;
+			drm_connector_helper_add(&radeon_connector->base, &radeon_tv_connector_helper_funcs);
 			radeon_connector->dac_load_detect = true;
 			drm_connector_attach_property(&radeon_connector->base,
 						      rdev->mode_info.load_detect_property,
@@ -1211,9 +1199,7 @@ radeon_add_atom_connector(struct drm_device *dev,
 		radeon_dig_connector->igp_lane_info = igp_lane_info;
 		radeon_connector->con_priv = radeon_dig_connector;
 		drm_connector_init(dev, &radeon_connector->base, &radeon_lvds_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_lvds_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_lvds_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "LVDS");
 			if (!radeon_connector->ddc_bus)
@@ -1226,6 +1212,12 @@ radeon_add_atom_connector(struct drm_device *dev,
 		break;
 	}
 
+	if (hpd->hpd == RADEON_HPD_NONE) {
+		if (i2c_bus->valid)
+			connector->polled = DRM_CONNECTOR_POLL_CONNECT;
+	} else
+		connector->polled = DRM_CONNECTOR_POLL_HPD;
+
 	connector->display_info.subpixel_order = subpixel_order;
 	drm_sysfs_connector_add(connector);
 	return;
@@ -1250,7 +1242,6 @@ radeon_add_legacy_connector(struct drm_device *dev,
 	struct drm_connector *connector;
 	struct radeon_connector *radeon_connector;
 	uint32_t subpixel_order = SubPixelNone;
-	int ret;
 
 	/* fixme - tv/cv/din */
 	if (connector_type == DRM_MODE_CONNECTOR_Unknown)
@@ -1278,9 +1269,7 @@ radeon_add_legacy_connector(struct drm_device *dev,
 	switch (connector_type) {
 	case DRM_MODE_CONNECTOR_VGA:
 		drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "VGA");
 			if (!radeon_connector->ddc_bus)
@@ -1290,12 +1279,11 @@ radeon_add_legacy_connector(struct drm_device *dev,
 		drm_connector_attach_property(&radeon_connector->base,
 					      rdev->mode_info.load_detect_property,
 					      1);
+		connector->polled = DRM_CONNECTOR_POLL_CONNECT;
 		break;
 	case DRM_MODE_CONNECTOR_DVIA:
 		drm_connector_init(dev, &radeon_connector->base, &radeon_vga_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_vga_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "DVI");
 			if (!radeon_connector->ddc_bus)
@@ -1309,9 +1297,7 @@ radeon_add_legacy_connector(struct drm_device *dev,
 	case DRM_MODE_CONNECTOR_DVII:
 	case DRM_MODE_CONNECTOR_DVID:
 		drm_connector_init(dev, &radeon_connector->base, &radeon_dvi_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_dvi_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "DVI");
 			if (!radeon_connector->ddc_bus)
@@ -1330,9 +1316,7 @@ radeon_add_legacy_connector(struct drm_device *dev,
 	case DRM_MODE_CONNECTOR_9PinDIN:
 		if (radeon_tv == 1) {
 			drm_connector_init(dev, &radeon_connector->base, &radeon_tv_connector_funcs, connector_type);
-			ret = drm_connector_helper_add(&radeon_connector->base, &radeon_tv_connector_helper_funcs);
-			if (ret)
-				goto failed;
+			drm_connector_helper_add(&radeon_connector->base, &radeon_tv_connector_helper_funcs);
 			radeon_connector->dac_load_detect = true;
 			/* RS400,RC410,RS480 chipset seems to report a lot
 			 * of false positive on load detect, we haven't yet
@@ -1351,9 +1335,7 @@ radeon_add_legacy_connector(struct drm_device *dev,
 		break;
 	case DRM_MODE_CONNECTOR_LVDS:
 		drm_connector_init(dev, &radeon_connector->base, &radeon_lvds_connector_funcs, connector_type);
-		ret = drm_connector_helper_add(&radeon_connector->base, &radeon_lvds_connector_helper_funcs);
-		if (ret)
-			goto failed;
+		drm_connector_helper_add(&radeon_connector->base, &radeon_lvds_connector_helper_funcs);
 		if (i2c_bus->valid) {
 			radeon_connector->ddc_bus = radeon_i2c_create(dev, i2c_bus, "LVDS");
 			if (!radeon_connector->ddc_bus)
@@ -1366,6 +1348,11 @@ radeon_add_legacy_connector(struct drm_device *dev,
 		break;
 	}
 
+	if (hpd->hpd == RADEON_HPD_NONE) {
+		if (i2c_bus->valid)
+			connector->polled = DRM_CONNECTOR_POLL_CONNECT;
+	} else
+		connector->polled = DRM_CONNECTOR_POLL_HPD;
 	connector->display_info.subpixel_order = subpixel_order;
 	drm_sysfs_connector_add(connector);
 	return;
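
Both connector-creation paths now end with the same policy: a connector with a
real hotplug pin is serviced by the HPD interrupt, one with only a DDC bus is
polled for connects, and anything else is left alone (VGA is additionally
forced to connect-polling earlier in the switch, since its detection is
DDC/load based). The decision in miniature:

    #include <stdio.h>
    #include <stdbool.h>

    enum poll_mode { POLL_NONE, POLL_CONNECT, POLL_HPD };

    /* a hotplug pin wins; DDC-only connectors fall back to polling */
    static enum poll_mode pick_poll_mode(bool has_hpd_pin, bool has_ddc)
    {
            if (has_hpd_pin)
                    return POLL_HPD;
            return has_ddc ? POLL_CONNECT : POLL_NONE;
    }

    int main(void)
    {
            printf("VGA, DDC only: %d\n", pick_poll_mode(false, true));
            printf("DVI with HPD:  %d\n", pick_poll_mode(true, true));
            return 0;
    }
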
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index f9b0fe0..ae0fb73 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -220,10 +220,6 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	int r;
 
 	mutex_lock(&rdev->cs_mutex);
-	if (rdev->gpu_lockup) {
-		mutex_unlock(&rdev->cs_mutex);
-		return -EINVAL;
-	}
 	/* initialize parser */
 	memset(&parser, 0, sizeof(struct radeon_cs_parser));
 	parser.filp = filp;
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index 7b629e3..f10faed 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -299,24 +299,24 @@ void radeon_update_bandwidth_info(struct radeon_device *rdev)
 		sclk = radeon_get_engine_clock(rdev);
 		mclk = rdev->clock.default_mclk;
 
-		a.full = rfixed_const(100);
-		rdev->pm.sclk.full = rfixed_const(sclk);
-		rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a);
-		rdev->pm.mclk.full = rfixed_const(mclk);
-		rdev->pm.mclk.full = rfixed_div(rdev->pm.mclk, a);
+		a.full = dfixed_const(100);
+		rdev->pm.sclk.full = dfixed_const(sclk);
+		rdev->pm.sclk.full = dfixed_div(rdev->pm.sclk, a);
+		rdev->pm.mclk.full = dfixed_const(mclk);
+		rdev->pm.mclk.full = dfixed_div(rdev->pm.mclk, a);
 
-		a.full = rfixed_const(16);
+		a.full = dfixed_const(16);
 		/* core_bandwidth = sclk(MHz) * 16 */
-		rdev->pm.core_bandwidth.full = rfixed_div(rdev->pm.sclk, a);
+		rdev->pm.core_bandwidth.full = dfixed_div(rdev->pm.sclk, a);
 	} else {
 		sclk = radeon_get_engine_clock(rdev);
 		mclk = radeon_get_memory_clock(rdev);
 
-		a.full = rfixed_const(100);
-		rdev->pm.sclk.full = rfixed_const(sclk);
-		rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a);
-		rdev->pm.mclk.full = rfixed_const(mclk);
-		rdev->pm.mclk.full = rfixed_div(rdev->pm.mclk, a);
+		a.full = dfixed_const(100);
+		rdev->pm.sclk.full = dfixed_const(sclk);
+		rdev->pm.sclk.full = dfixed_div(rdev->pm.sclk, a);
+		rdev->pm.mclk.full = dfixed_const(mclk);
+		rdev->pm.mclk.full = dfixed_div(rdev->pm.mclk, a);
 	}
 }
 
@@ -546,8 +546,10 @@ static void radeon_switcheroo_set_state(struct pci_dev *pdev, enum vga_switchero
 		/* don't suspend or resume card normally */
 		rdev->powered_down = false;
 		radeon_resume_kms(dev);
+		drm_kms_helper_poll_enable(dev);
 	} else {
 		printk(KERN_INFO "radeon: switched off\n");
+		drm_kms_helper_poll_disable(dev);
 		radeon_suspend_kms(dev, pmm);
 		/* don't suspend or resume card normally */
 		rdev->powered_down = true;
@@ -599,9 +601,11 @@ int radeon_device_init(struct radeon_device *rdev,
 		spin_lock_init(&rdev->ih.lock);
 	mutex_init(&rdev->gem.mutex);
 	mutex_init(&rdev->pm.mutex);
+	mutex_init(&rdev->vram_mutex);
 	rwlock_init(&rdev->fence_drv.lock);
 	INIT_LIST_HEAD(&rdev->gem.objects);
 	init_waitqueue_head(&rdev->irq.vblank_queue);
+	init_waitqueue_head(&rdev->irq.idle_queue);
 
 	/* setup workqueue */
 	rdev->wq = create_workqueue("radeon");
@@ -671,7 +675,7 @@ int radeon_device_init(struct radeon_device *rdev,
 		/* Acceleration not working on AGP card try again
 		 * with fallback to PCI or PCIE GART
 		 */
-		radeon_gpu_reset(rdev);
+		radeon_asic_reset(rdev);
 		radeon_fini(rdev);
 		radeon_agp_disable(rdev);
 		r = radeon_init(rdev);
@@ -691,6 +695,8 @@ void radeon_device_fini(struct radeon_device *rdev)
 {
 	DRM_INFO("radeon: finishing device.\n");
 	rdev->shutdown = true;
+	/* evict vram memory */
+	radeon_bo_evict_vram(rdev);
 	radeon_fini(rdev);
 	destroy_workqueue(rdev->wq);
 	vga_switcheroo_unregister_client(rdev->pdev);
@@ -707,6 +713,7 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
 {
 	struct radeon_device *rdev;
 	struct drm_crtc *crtc;
+	struct drm_connector *connector;
 	int r;
 
 	if (dev == NULL || dev->dev_private == NULL) {
@@ -719,6 +726,12 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
 
 	if (rdev->powered_down)
 		return 0;
+
+	/* turn off display hw */
+	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+		drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+	}
+
 	/* unpin the front buffers */
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
 		struct radeon_framebuffer *rfb = to_radeon_framebuffer(crtc->fb);
@@ -728,9 +741,10 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
 			continue;
 		}
 		robj = rfb->obj->driver_private;
-		if (robj != rdev->fbdev_rbo) {
+		/* don't unpin kernel fb objects */
+		if (!radeon_fbdev_robj_is_fb(rdev, robj)) {
 			r = radeon_bo_reserve(robj, false);
-			if (unlikely(r == 0)) {
+			if (r == 0) {
 				radeon_bo_unpin(robj);
 				radeon_bo_unreserve(robj);
 			}
@@ -743,11 +757,14 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
 
 	radeon_save_bios_scratch_regs(rdev);
 
+	radeon_pm_suspend(rdev);
 	radeon_suspend(rdev);
 	radeon_hpd_fini(rdev);
 	/* evict remaining vram memory */
 	radeon_bo_evict_vram(rdev);
 
+	radeon_agp_suspend(rdev);
+
 	pci_save_state(dev->pdev);
 	if (state.event == PM_EVENT_SUSPEND) {
 		/* Shut down the device */
@@ -755,7 +772,7 @@ int radeon_suspend_kms(struct drm_device *dev, pm_message_t state)
 		pci_set_power_state(dev->pdev, PCI_D3hot);
 	}
 	acquire_console_sem();
-	fb_set_suspend(rdev->fbdev_info, 1);
+	radeon_fbdev_set_suspend(rdev, 1);
 	release_console_sem();
 	return 0;
 }
@@ -778,8 +795,9 @@ int radeon_resume_kms(struct drm_device *dev)
 	/* resume AGP if in use */
 	radeon_agp_resume(rdev);
 	radeon_resume(rdev);
+	radeon_pm_resume(rdev);
 	radeon_restore_bios_scratch_regs(rdev);
-	fb_set_suspend(rdev->fbdev_info, 0);
+	radeon_fbdev_set_suspend(rdev, 0);
 	release_console_sem();
 
 	/* reset hpd state */
@@ -789,6 +807,26 @@ int radeon_resume_kms(struct drm_device *dev)
 	return 0;
 }
 
+int radeon_gpu_reset(struct radeon_device *rdev)
+{
+	int r;
+
+	radeon_save_bios_scratch_regs(rdev);
+	radeon_suspend(rdev);
+
+	r = radeon_asic_reset(rdev);
+	if (!r) {
+		dev_info(rdev->dev, "GPU reset succeed\n");
+		radeon_resume(rdev);
+		radeon_restore_bios_scratch_regs(rdev);
+		drm_helper_resume_force_mode(rdev->ddev);
+		return 0;
+	}
+	/* bad news, how do we tell userspace? */
+	dev_info(rdev->dev, "GPU reset failed\n");
+	return r;
+}
+
 
 /*
  * Debugfs
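
The new radeon_gpu_reset() is a quiesce/reset/revive sequence: save the BIOS
scratch registers, suspend the engines, reset the ASIC, and only on success
resume, restore, and force a modeset. A control-flow sketch with stub helpers
standing in for the driver entry points:

    #include <stdio.h>

    static void save_scratch(void)    { puts("save bios scratch regs"); }
    static void suspend_hw(void)      { puts("suspend engines"); }
    static int  asic_reset(void)      { puts("asic reset"); return 0; }
    static void resume_hw(void)       { puts("resume engines"); }
    static void restore_scratch(void) { puts("restore bios scratch regs"); }
    static void force_modeset(void)   { puts("force modeset"); }

    static int gpu_reset(void)
    {
            int r;

            save_scratch();
            suspend_hw();
            r = asic_reset();
            if (!r) {
                    resume_hw();
                    restore_scratch();
                    force_modeset();
                    return 0;
            }
            /* reset failed: leave the chip quiesced */
            return r;
    }

    int main(void) { return gpu_reset(); }
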
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index bb1c122..c73444a 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -284,8 +284,7 @@ static const char *connector_names[15] = {
 	"eDP",
 };
 
-static const char *hpd_names[7] = {
-	"NONE",
+static const char *hpd_names[6] = {
 	"HPD1",
 	"HPD2",
 	"HPD3",
@@ -633,37 +632,37 @@ calc_fb_div(struct radeon_pll *pll,
 
 	vco_freq = freq * post_div;
 	/* feedback_divider = vco_freq * ref_div / pll->reference_freq; */
-	a.full = rfixed_const(pll->reference_freq);
-	feedback_divider.full = rfixed_const(vco_freq);
-	feedback_divider.full = rfixed_div(feedback_divider, a);
-	a.full = rfixed_const(ref_div);
-	feedback_divider.full = rfixed_mul(feedback_divider, a);
+	a.full = dfixed_const(pll->reference_freq);
+	feedback_divider.full = dfixed_const(vco_freq);
+	feedback_divider.full = dfixed_div(feedback_divider, a);
+	a.full = dfixed_const(ref_div);
+	feedback_divider.full = dfixed_mul(feedback_divider, a);
 
 	if (pll->flags & RADEON_PLL_USE_FRAC_FB_DIV) {
 		/* feedback_divider = floor((feedback_divider * 10.0) + 0.5) * 0.1; */
-		a.full = rfixed_const(10);
-		feedback_divider.full = rfixed_mul(feedback_divider, a);
-		feedback_divider.full += rfixed_const_half(0);
-		feedback_divider.full = rfixed_floor(feedback_divider);
-		feedback_divider.full = rfixed_div(feedback_divider, a);
+		a.full = dfixed_const(10);
+		feedback_divider.full = dfixed_mul(feedback_divider, a);
+		feedback_divider.full += dfixed_const_half(0);
+		feedback_divider.full = dfixed_floor(feedback_divider);
+		feedback_divider.full = dfixed_div(feedback_divider, a);
 
 		/* *fb_div = floor(feedback_divider); */
-		a.full = rfixed_floor(feedback_divider);
-		*fb_div = rfixed_trunc(a);
+		a.full = dfixed_floor(feedback_divider);
+		*fb_div = dfixed_trunc(a);
 		/* *fb_div_frac = fmod(feedback_divider, 1.0) * 10.0; */
-		a.full = rfixed_const(10);
-		b.full = rfixed_mul(feedback_divider, a);
+		a.full = dfixed_const(10);
+		b.full = dfixed_mul(feedback_divider, a);
 
-		feedback_divider.full = rfixed_floor(feedback_divider);
-		feedback_divider.full = rfixed_mul(feedback_divider, a);
+		feedback_divider.full = dfixed_floor(feedback_divider);
+		feedback_divider.full = dfixed_mul(feedback_divider, a);
 		feedback_divider.full = b.full - feedback_divider.full;
-		*fb_div_frac = rfixed_trunc(feedback_divider);
+		*fb_div_frac = dfixed_trunc(feedback_divider);
 	} else {
 		/* *fb_div = floor(feedback_divider + 0.5); */
-		feedback_divider.full += rfixed_const_half(0);
-		feedback_divider.full = rfixed_floor(feedback_divider);
+		feedback_divider.full += dfixed_const_half(0);
+		feedback_divider.full = dfixed_floor(feedback_divider);
 
-		*fb_div = rfixed_trunc(feedback_divider);
+		*fb_div = dfixed_trunc(feedback_divider);
 		*fb_div_frac = 0;
 	}
 
@@ -693,10 +692,10 @@ calc_fb_ref_div(struct radeon_pll *pll,
 		pll_out_max = pll->pll_out_max;
 	}
 
-	ffreq.full = rfixed_const(freq);
+	ffreq.full = dfixed_const(freq);
 	/* max_error = ffreq * 0.0025; */
-	a.full = rfixed_const(400);
-	max_error.full = rfixed_div(ffreq, a);
+	a.full = dfixed_const(400);
+	max_error.full = dfixed_div(ffreq, a);
 
 	for ((*ref_div) = pll->min_ref_div; (*ref_div) < pll->max_ref_div; ++(*ref_div)) {
 		if (calc_fb_div(pll, freq, post_div, (*ref_div), fb_div, fb_div_frac)) {
@@ -707,9 +706,9 @@ calc_fb_ref_div(struct radeon_pll *pll,
 				continue;
 
 			/* pll_out = vco / post_div; */
-			a.full = rfixed_const(post_div);
-			pll_out.full = rfixed_const(vco);
-			pll_out.full = rfixed_div(pll_out, a);
+			a.full = dfixed_const(post_div);
+			pll_out.full = dfixed_const(vco);
+			pll_out.full = dfixed_div(pll_out, a);
 
 			if (pll_out.full >= ffreq.full) {
 				error.full = pll_out.full - ffreq.full;
@@ -831,10 +830,6 @@ void radeon_compute_pll(struct radeon_pll *pll,
 static void radeon_user_framebuffer_destroy(struct drm_framebuffer *fb)
 {
 	struct radeon_framebuffer *radeon_fb = to_radeon_framebuffer(fb);
-	struct drm_device *dev = fb->dev;
-
-	if (fb->fbdev)
-		radeonfb_remove(dev, fb);
 
 	if (radeon_fb->obj)
 		drm_gem_object_unreference_unlocked(radeon_fb->obj);
@@ -856,21 +851,15 @@ static const struct drm_framebuffer_funcs radeon_fb_funcs = {
 	.create_handle = radeon_user_framebuffer_create_handle,
 };
 
-struct drm_framebuffer *
-radeon_framebuffer_create(struct drm_device *dev,
-			  struct drm_mode_fb_cmd *mode_cmd,
-			  struct drm_gem_object *obj)
+void
+radeon_framebuffer_init(struct drm_device *dev,
+			struct radeon_framebuffer *rfb,
+			struct drm_mode_fb_cmd *mode_cmd,
+			struct drm_gem_object *obj)
 {
-	struct radeon_framebuffer *radeon_fb;
-
-	radeon_fb = kzalloc(sizeof(*radeon_fb), GFP_KERNEL);
-	if (radeon_fb == NULL) {
-		return NULL;
-	}
-	drm_framebuffer_init(dev, &radeon_fb->base, &radeon_fb_funcs);
-	drm_helper_mode_fill_fb_struct(&radeon_fb->base, mode_cmd);
-	radeon_fb->obj = obj;
-	return &radeon_fb->base;
+	rfb->obj = obj;
+	drm_framebuffer_init(dev, &rfb->base, &radeon_fb_funcs);
+	drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd);
 }
 
 static struct drm_framebuffer *
@@ -879,6 +868,7 @@ radeon_user_framebuffer_create(struct drm_device *dev,
 			       struct drm_mode_fb_cmd *mode_cmd)
 {
 	struct drm_gem_object *obj;
+	struct radeon_framebuffer *radeon_fb;
 
 	obj = drm_gem_object_lookup(dev, file_priv, mode_cmd->handle);
 	if (obj ==  NULL) {
@@ -886,12 +876,26 @@ radeon_user_framebuffer_create(struct drm_device *dev,
 			"can't create framebuffer\n", mode_cmd->handle);
 		return NULL;
 	}
-	return radeon_framebuffer_create(dev, mode_cmd, obj);
+
+	radeon_fb = kzalloc(sizeof(*radeon_fb), GFP_KERNEL);
+	if (radeon_fb == NULL) {
+		return NULL;
+	}
+
+	radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj);
+
+	return &radeon_fb->base;
+}
+
+static void radeon_output_poll_changed(struct drm_device *dev)
+{
+	struct radeon_device *rdev = dev->dev_private;
+	radeon_fb_output_poll_changed(rdev);
 }
 
 static const struct drm_mode_config_funcs radeon_mode_funcs = {
 	.fb_create = radeon_user_framebuffer_create,
-	.fb_changed = radeonfb_probe,
+	.output_poll_changed = radeon_output_poll_changed
 };
 
 struct drm_prop_enum_list {
@@ -978,8 +982,11 @@ void radeon_update_display_priority(struct radeon_device *rdev)
 		/* set display priority to high for r3xx, rv515 chips
 		 * this avoids flickering due to underflow to the
 		 * display controllers during heavy acceleration.
+		 * Don't force high on rs4xx igp chips as it seems to
+		 * affect the sound card.  See kernel bug 15982.
 		 */
-		if (ASIC_IS_R300(rdev) || (rdev->family == CHIP_RV515))
+		if ((ASIC_IS_R300(rdev) || (rdev->family == CHIP_RV515)) &&
+		    !(rdev->flags & RADEON_IS_IGP))
 			rdev->disp_priority = 2;
 		else
 			rdev->disp_priority = 0;
@@ -1031,15 +1038,27 @@ int radeon_modeset_init(struct radeon_device *rdev)
 	}
 	/* initialize hpd */
 	radeon_hpd_init(rdev);
-	drm_helper_initial_config(rdev->ddev);
+
+	/* Initialize power management */
+	if (radeon_pm)
+		radeon_pm_init(rdev);
+
+	radeon_fbdev_init(rdev);
+	drm_kms_helper_poll_init(rdev->ddev);
+
 	return 0;
 }
 
 void radeon_modeset_fini(struct radeon_device *rdev)
 {
+	radeon_fbdev_fini(rdev);
 	kfree(rdev->mode_info.bios_hardcoded_edid);
 
+	if (radeon_pm)
+		radeon_pm_fini(rdev);
+
 	if (rdev->mode_info.mode_config_initialized) {
+		drm_kms_helper_poll_fini(rdev->ddev);
 		radeon_hpd_fini(rdev);
 		drm_mode_config_cleanup(rdev->ddev);
 		rdev->mode_info.mode_config_initialized = false;
@@ -1089,15 +1108,15 @@ bool radeon_crtc_scaling_mode_fixup(struct drm_crtc *crtc,
 	}
 	if (radeon_crtc->rmx_type != RMX_OFF) {
 		fixed20_12 a, b;
-		a.full = rfixed_const(crtc->mode.vdisplay);
-		b.full = rfixed_const(radeon_crtc->native_mode.hdisplay);
-		radeon_crtc->vsc.full = rfixed_div(a, b);
-		a.full = rfixed_const(crtc->mode.hdisplay);
-		b.full = rfixed_const(radeon_crtc->native_mode.vdisplay);
-		radeon_crtc->hsc.full = rfixed_div(a, b);
+		a.full = dfixed_const(crtc->mode.vdisplay);
+		b.full = dfixed_const(radeon_crtc->native_mode.hdisplay);
+		radeon_crtc->vsc.full = dfixed_div(a, b);
+		a.full = dfixed_const(crtc->mode.hdisplay);
+		b.full = dfixed_const(radeon_crtc->native_mode.vdisplay);
+		radeon_crtc->hsc.full = dfixed_div(a, b);
 	} else {
-		radeon_crtc->vsc.full = rfixed_const(1);
-		radeon_crtc->hsc.full = rfixed_const(1);
+		radeon_crtc->vsc.full = dfixed_const(1);
+		radeon_crtc->hsc.full = dfixed_const(1);
 	}
 	return true;
 }
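
The rfixed_*/dfixed_* churn in this file is a rename, not a behaviour change:
fixed20_12 values carry 20 integer and 12 fractional bits. An independent
re-implementation of the two operations the bandwidth and scaling code leans
on (the real macros live in the shared drm fixed-point header):

    #include <stdint.h>
    #include <stdio.h>

    #define FRAC_BITS 12  /* 20.12 fixed point */

    static uint32_t fx_const(uint32_t a) { return a << FRAC_BITS; }
    static uint32_t fx_trunc(uint32_t a) { return a >> FRAC_BITS; }

    /* widen before the pre-shift so a 20.12 division cannot overflow */
    static uint32_t fx_div(uint32_t a, uint32_t b)
    {
            return (uint32_t)(((uint64_t)a << FRAC_BITS) / b);
    }

    int main(void)
    {
            /* sclk of 40000 (10 kHz units) scaled by 1/100, as in
             * radeon_update_bandwidth_info() */
            uint32_t sclk = fx_div(fx_const(40000), fx_const(100));

            printf("sclk = %u\n", fx_trunc(sclk));  /* prints 400 */
            return 0;
    }
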
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index b3749d4..7ed94d2 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -44,9 +44,11 @@
  * - 2.1.0 - add square tiling interface
  * - 2.2.0 - add r6xx/r7xx const buffer support
  * - 2.3.0 - add MSPOS + 3D texture + r500 VAP regs
+ * - 2.4.0 - add crtc id query
+ * - 2.5.0 - add get accel 2 to work around ddx breakage for evergreen
  */
 #define KMS_DRIVER_MAJOR	2
-#define KMS_DRIVER_MINOR	3
+#define KMS_DRIVER_MINOR	5
 #define KMS_DRIVER_PATCHLEVEL	0
 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
 int radeon_driver_unload_kms(struct drm_device *dev);
@@ -91,10 +93,10 @@ int radeon_testing = 0;
 int radeon_connector_table = 0;
 int radeon_tv = 1;
 int radeon_new_pll = -1;
-int radeon_dynpm = -1;
 int radeon_audio = 1;
 int radeon_disp_priority = 0;
 int radeon_hw_i2c = 0;
+int radeon_pm = 0;
 
 MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers");
 module_param_named(no_wb, radeon_no_wb, int, 0444);
@@ -132,9 +134,6 @@ module_param_named(tv, radeon_tv, int, 0444);
 MODULE_PARM_DESC(new_pll, "Select new PLL code");
 module_param_named(new_pll, radeon_new_pll, int, 0444);
 
-MODULE_PARM_DESC(dynpm, "Disable/Enable dynamic power management (1 = enable)");
-module_param_named(dynpm, radeon_dynpm, int, 0444);
-
 MODULE_PARM_DESC(audio, "Audio enable (0 = disable)");
 module_param_named(audio, radeon_audio, int, 0444);
 
@@ -144,6 +143,9 @@ module_param_named(disp_priority, radeon_disp_priority, int, 0444);
 MODULE_PARM_DESC(hw_i2c, "hw i2c engine enable (0 = disable)");
 module_param_named(hw_i2c, radeon_hw_i2c, int, 0444);
 
+MODULE_PARM_DESC(pm, "enable power management (0 = disable)");
+module_param_named(pm, radeon_pm, int, 0444);
+
 static int radeon_suspend(struct drm_device *dev, pm_message_t state)
 {
 	drm_radeon_private_t *dev_priv = dev->dev_private;
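
The driver entry-point hunk swaps the old dynpm parameter for a single
radeon.pm knob, declared with the standard read-only module-parameter idiom
(mode 0444: visible in sysfs, settable only at load time). A minimal
out-of-tree module showing the same pattern (demo names only, not part of the
radeon driver):

    #include <linux/module.h>
    #include <linux/init.h>

    static int demo_pm;  /* 0 = power management off */
    MODULE_PARM_DESC(pm, "enable power management (0 = disable)");
    module_param_named(pm, demo_pm, int, 0444);

    static int __init demo_init(void)
    {
            pr_info("demo: pm = %d\n", demo_pm);
            return 0;
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");
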
diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c b/drivers/gpu/drm/radeon/radeon_encoders.c
index c5ddaf5..1ebb100 100644
--- a/drivers/gpu/drm/radeon/radeon_encoders.c
+++ b/drivers/gpu/drm/radeon/radeon_encoders.c
@@ -309,9 +309,6 @@ static bool radeon_atom_mode_fixup(struct drm_encoder *encoder,
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 
-	/* adjust pm to upcoming mode change */
-	radeon_pm_compute_clocks(rdev);
-
 	/* set the active encoder to connector routing */
 	radeon_encoder_set_active_device(encoder);
 	drm_mode_set_crtcinfo(adjusted_mode, 0);
@@ -1111,8 +1108,6 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
 	}
 	radeon_atombios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
 
-	/* adjust pm to dpms change */
-	radeon_pm_compute_clocks(rdev);
 }
 
 union crtc_source_param {
@@ -1546,10 +1541,49 @@ static void radeon_atom_encoder_commit(struct drm_encoder *encoder)
 
 static void radeon_atom_encoder_disable(struct drm_encoder *encoder)
 {
+	struct drm_device *dev = encoder->dev;
+	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
 	struct radeon_encoder_atom_dig *dig;
 	radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
 
+	switch (radeon_encoder->encoder_id) {
+	case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
+	case ENCODER_OBJECT_ID_INTERNAL_LVDS:
+	case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
+		atombios_digital_setup(encoder, PANEL_ENCODER_ACTION_DISABLE);
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
+		if (ASIC_IS_DCE4(rdev))
+			/* disable the transmitter */
+			atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0);
+		else {
+			/* disable the encoder and transmitter */
+			atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0);
+			atombios_dig_encoder_setup(encoder, ATOM_DISABLE);
+		}
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_DDI:
+		atombios_ddia_setup(encoder, ATOM_DISABLE);
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_DVO1:
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
+		atombios_external_tmds_setup(encoder, ATOM_DISABLE);
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_DAC1:
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
+	case ENCODER_OBJECT_ID_INTERNAL_DAC2:
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
+		atombios_dac_setup(encoder, ATOM_DISABLE);
+		if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT))
+			atombios_tv_setup(encoder, ATOM_DISABLE);
+		break;
+	}
+
 	if (radeon_encoder_is_digital(encoder)) {
 		if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI)
 			r600_hdmi_disable(encoder);
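
radeon_atom_encoder_disable() now tears each encoder type down explicitly
instead of relying on DPMS alone, and the UNIPHY case shows the generational
split: DCE4 parts only need the transmitter turned off, while older DIG blocks
must disable both transmitter and encoder. The shape of that dispatch, with
invented IDs standing in for the ENCODER_OBJECT_ID_* cases:

    #include <stdio.h>
    #include <stdbool.h>

    enum enc_id { ENC_PANEL, ENC_UNIPHY, ENC_DAC };  /* stand-ins */

    static void disable_encoder(enum enc_id id, bool is_dce4)
    {
            switch (id) {
            case ENC_PANEL:
                    puts("panel: PANEL_ENCODER_ACTION_DISABLE");
                    break;
            case ENC_UNIPHY:
                    puts("disable transmitter");
                    if (!is_dce4)
                            puts("disable DIG encoder too");
                    break;
            case ENC_DAC:
                    puts("disable DAC (and TV encoder if routed)");
                    break;
            }
    }

    int main(void)
    {
            disable_encoder(ENC_UNIPHY, false);  /* pre-DCE4 path */
            return 0;
    }
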
diff --git a/drivers/gpu/drm/radeon/radeon_fb.c b/drivers/gpu/drm/radeon/radeon_fb.c
index 9ac57a0..dc1634b 100644
--- a/drivers/gpu/drm/radeon/radeon_fb.c
+++ b/drivers/gpu/drm/radeon/radeon_fb.c
@@ -23,10 +23,6 @@
  * Authors:
  *     David Airlie
  */
-    /*
-     *  Modularization
-     */
-
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/fb.h>
@@ -42,17 +38,21 @@
 
 #include <linux/vga_switcheroo.h>
 
-struct radeon_fb_device {
+/* object hierarchy -
+ * this wraps a drm_fb_helper and a radeon framebuffer;
+ * the helper keeps a pointer to the framebuffer's drm base class.
+ */
+struct radeon_fbdev {
 	struct drm_fb_helper helper;
-	struct radeon_framebuffer	*rfb;
-	struct radeon_device		*rdev;
+	struct radeon_framebuffer rfb;
+	struct list_head fbdev_list;
+	struct radeon_device *rdev;
 };
 
 static struct fb_ops radeonfb_ops = {
 	.owner = THIS_MODULE,
 	.fb_check_var = drm_fb_helper_check_var,
 	.fb_set_par = drm_fb_helper_set_par,
-	.fb_setcolreg = drm_fb_helper_setcolreg,
 	.fb_fillrect = cfb_fillrect,
 	.fb_copyarea = cfb_copyarea,
 	.fb_imageblit = cfb_imageblit,
@@ -61,45 +61,6 @@ static struct fb_ops radeonfb_ops = {
 	.fb_setcmap = drm_fb_helper_setcmap,
 };
 
-/**
- * Currently it is assumed that the old framebuffer is reused.
- *
- * LOCKING
- * caller should hold the mode config lock.
- *
- */
-int radeonfb_resize(struct drm_device *dev, struct drm_crtc *crtc)
-{
-	struct fb_info *info;
-	struct drm_framebuffer *fb;
-	struct drm_display_mode *mode = crtc->desired_mode;
-
-	fb = crtc->fb;
-	if (fb == NULL) {
-		return 1;
-	}
-	info = fb->fbdev;
-	if (info == NULL) {
-		return 1;
-	}
-	if (mode == NULL) {
-		return 1;
-	}
-	info->var.xres = mode->hdisplay;
-	info->var.right_margin = mode->hsync_start - mode->hdisplay;
-	info->var.hsync_len = mode->hsync_end - mode->hsync_start;
-	info->var.left_margin = mode->htotal - mode->hsync_end;
-	info->var.yres = mode->vdisplay;
-	info->var.lower_margin = mode->vsync_start - mode->vdisplay;
-	info->var.vsync_len = mode->vsync_end - mode->vsync_start;
-	info->var.upper_margin = mode->vtotal - mode->vsync_end;
-	info->var.pixclock = 10000000 / mode->htotal * 1000 / mode->vtotal * 100;
-	/* avoid overflow */
-	info->var.pixclock = info->var.pixclock * 1000 / mode->vrefresh;
-
-	return 0;
-}
-EXPORT_SYMBOL(radeonfb_resize);
 
 static int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled)
 {
@@ -125,57 +86,44 @@ static int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bo
 	return aligned;
 }
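
radeon_align_pitch() (context above) rounds the scanout width up to the CRTC's
pitch requirement before the byte pitch is computed as
pitch_pixels * ((bpp + 1) / 8). The arithmetic on its own, with an assumed
64-pixel alignment (the real value is per-ASIC):

    #include <stdio.h>

    /* round x up to a multiple of a (a must be a power of two) */
    static int align_up(int x, int a)
    {
            return (x + a - 1) & ~(a - 1);
    }

    int main(void)
    {
            int width = 1366, bpp = 32;
            int align = 64;  /* assumed alignment */
            int pitch = align_up(width, align) * ((bpp + 1) / 8);

            printf("pitch = %d bytes\n", pitch);  /* 5632 */
            return 0;
    }
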
 
-static struct drm_fb_helper_funcs radeon_fb_helper_funcs = {
-	.gamma_set = radeon_crtc_fb_gamma_set,
-	.gamma_get = radeon_crtc_fb_gamma_get,
-};
+static void radeonfb_destroy_pinned_object(struct drm_gem_object *gobj)
+{
+	struct radeon_bo *rbo = gobj->driver_private;
+	int ret;
+
+	ret = radeon_bo_reserve(rbo, false);
+	if (likely(ret == 0)) {
+		radeon_bo_kunmap(rbo);
+		radeon_bo_unreserve(rbo);
+	}
+	drm_gem_object_unreference_unlocked(gobj);
+}
 
-int radeonfb_create(struct drm_device *dev,
-		    uint32_t fb_width, uint32_t fb_height,
-		    uint32_t surface_width, uint32_t surface_height,
-		    uint32_t surface_depth, uint32_t surface_bpp,
-		    struct drm_framebuffer **fb_p)
+static int radeonfb_create_pinned_object(struct radeon_fbdev *rfbdev,
+					 struct drm_mode_fb_cmd *mode_cmd,
+					 struct drm_gem_object **gobj_p)
 {
-	struct radeon_device *rdev = dev->dev_private;
-	struct fb_info *info;
-	struct radeon_fb_device *rfbdev;
-	struct drm_framebuffer *fb = NULL;
-	struct radeon_framebuffer *rfb;
-	struct drm_mode_fb_cmd mode_cmd;
+	struct radeon_device *rdev = rfbdev->rdev;
 	struct drm_gem_object *gobj = NULL;
 	struct radeon_bo *rbo = NULL;
-	struct device *device = &rdev->pdev->dev;
-	int size, aligned_size, ret;
-	u64 fb_gpuaddr;
-	void *fbptr = NULL;
-	unsigned long tmp;
 	bool fb_tiled = false; /* useful for testing */
 	u32 tiling_flags = 0;
+	int ret;
+	int aligned_size, size;
 
-	mode_cmd.width = surface_width;
-	mode_cmd.height = surface_height;
-
-	/* avivo can't scanout real 24bpp */
-	if ((surface_bpp == 24) && ASIC_IS_AVIVO(rdev))
-		surface_bpp = 32;
-
-	mode_cmd.bpp = surface_bpp;
 	/* need to align pitch with crtc limits */
-	mode_cmd.pitch = radeon_align_pitch(rdev, mode_cmd.width, mode_cmd.bpp, fb_tiled) * ((mode_cmd.bpp + 1) / 8);
-	mode_cmd.depth = surface_depth;
+	mode_cmd->pitch = radeon_align_pitch(rdev, mode_cmd->width, mode_cmd->bpp, fb_tiled) * ((mode_cmd->bpp + 1) / 8);
 
-	size = mode_cmd.pitch * mode_cmd.height;
+	size = mode_cmd->pitch * mode_cmd->height;
 	aligned_size = ALIGN(size, PAGE_SIZE);
-
 	ret = radeon_gem_object_create(rdev, aligned_size, 0,
-			RADEON_GEM_DOMAIN_VRAM,
-			false, ttm_bo_type_kernel,
-			&gobj);
+				       RADEON_GEM_DOMAIN_VRAM,
+				       false, ttm_bo_type_kernel,
+				       &gobj);
 	if (ret) {
-		printk(KERN_ERR "failed to allocate framebuffer (%d %d)\n",
-		       surface_width, surface_height);
-		ret = -ENOMEM;
-		goto out;
+		printk(KERN_ERR "failed to allocate framebuffer (%d)\n",
+		       aligned_size);
+		return -ENOMEM;
 	}
 	rbo = gobj->driver_private;
 
@@ -183,7 +131,7 @@ int radeonfb_create(struct drm_device *dev,
 		tiling_flags = RADEON_TILING_MACRO;
 
 #ifdef __BIG_ENDIAN
-	switch (mode_cmd.bpp) {
+	switch (mode_cmd->bpp) {
 	case 32:
 		tiling_flags |= RADEON_TILING_SWAP_32BIT;
 		break;
@@ -196,57 +144,81 @@ int radeonfb_create(struct drm_device *dev,
 
 	if (tiling_flags) {
 		ret = radeon_bo_set_tiling_flags(rbo,
-					tiling_flags | RADEON_TILING_SURFACE,
-					mode_cmd.pitch);
+						 tiling_flags | RADEON_TILING_SURFACE,
+						 mode_cmd->pitch);
 		if (ret)
 			dev_err(rdev->dev, "FB failed to set tiling flags\n");
 	}
-	mutex_lock(&rdev->ddev->struct_mutex);
-	fb = radeon_framebuffer_create(rdev->ddev, &mode_cmd, gobj);
-	if (fb == NULL) {
-		DRM_ERROR("failed to allocate fb.\n");
-		ret = -ENOMEM;
-		goto out_unref;
-	}
+
+
 	ret = radeon_bo_reserve(rbo, false);
 	if (unlikely(ret != 0))
 		goto out_unref;
-	ret = radeon_bo_pin(rbo, RADEON_GEM_DOMAIN_VRAM, &fb_gpuaddr);
+	ret = radeon_bo_pin(rbo, RADEON_GEM_DOMAIN_VRAM, NULL);
 	if (ret) {
 		radeon_bo_unreserve(rbo);
 		goto out_unref;
 	}
 	if (fb_tiled)
 		radeon_bo_check_tiling(rbo, 0, 0);
-	ret = radeon_bo_kmap(rbo, &fbptr);
+	ret = radeon_bo_kmap(rbo, NULL);
 	radeon_bo_unreserve(rbo);
 	if (ret) {
 		goto out_unref;
 	}
 
-	list_add(&fb->filp_head, &rdev->ddev->mode_config.fb_kernel_list);
+	*gobj_p = gobj;
+	return 0;
+out_unref:
+	radeonfb_destroy_pinned_object(gobj);
+	*gobj_p = NULL;
+	return ret;
+}
 
-	*fb_p = fb;
-	rfb = to_radeon_framebuffer(fb);
-	rdev->fbdev_rfb = rfb;
-	rdev->fbdev_rbo = rbo;
+static int radeonfb_create(struct radeon_fbdev *rfbdev,
+			   struct drm_fb_helper_surface_size *sizes)
+{
+	struct radeon_device *rdev = rfbdev->rdev;
+	struct fb_info *info;
+	struct drm_framebuffer *fb = NULL;
+	struct drm_mode_fb_cmd mode_cmd;
+	struct drm_gem_object *gobj = NULL;
+	struct radeon_bo *rbo = NULL;
+	struct device *device = &rdev->pdev->dev;
+	int ret;
+	unsigned long tmp;
+
+	mode_cmd.width = sizes->surface_width;
+	mode_cmd.height = sizes->surface_height;
+
+	/* avivo can't scanout real 24bpp */
+	if ((sizes->surface_bpp == 24) && ASIC_IS_AVIVO(rdev))
+		sizes->surface_bpp = 32;
 
-	info = framebuffer_alloc(sizeof(struct radeon_fb_device), device);
+	mode_cmd.bpp = sizes->surface_bpp;
+	mode_cmd.depth = sizes->surface_depth;
+
+	ret = radeonfb_create_pinned_object(rfbdev, &mode_cmd, &gobj);
+	if (ret)
+		return ret;
+	rbo = gobj->driver_private;
+
+	/* okay we have an object now allocate the framebuffer */
+	info = framebuffer_alloc(0, device);
 	if (info == NULL) {
 		ret = -ENOMEM;
 		goto out_unref;
 	}
 
-	rdev->fbdev_info = info;
-	rfbdev = info->par;
-	rfbdev->helper.funcs = &radeon_fb_helper_funcs;
-	rfbdev->helper.dev = dev;
-	ret = drm_fb_helper_init_crtc_count(&rfbdev->helper, rdev->num_crtc,
-					    RADEONFB_CONN_LIMIT);
-	if (ret)
-		goto out_unref;
+	info->par = rfbdev;
+
+	radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
+
+	fb = &rfbdev->rfb.base;
+
+	/* setup helper */
+	rfbdev->helper.fb = fb;
+	rfbdev->helper.fbdev = info;
 
-	memset_io(fbptr, 0x0, aligned_size);
+	memset_io(rbo->kptr, 0x0, radeon_bo_size(rbo));
 
 	strcpy(info->fix.id, "radeondrmfb");
 
@@ -255,17 +227,22 @@ int radeonfb_create(struct drm_device *dev,
 	info->flags = FBINFO_DEFAULT;
 	info->fbops = &radeonfb_ops;
 
-	tmp = fb_gpuaddr - rdev->mc.vram_start;
+	tmp = radeon_bo_gpu_offset(rbo) - rdev->mc.vram_start;
 	info->fix.smem_start = rdev->mc.aper_base + tmp;
-	info->fix.smem_len = size;
-	info->screen_base = fbptr;
-	info->screen_size = size;
+	info->fix.smem_len = radeon_bo_size(rbo);
+	info->screen_base = rbo->kptr;
+	info->screen_size = radeon_bo_size(rbo);
 
-	drm_fb_helper_fill_var(info, fb, fb_width, fb_height);
+	drm_fb_helper_fill_var(info, &rfbdev->helper, sizes->fb_width, sizes->fb_height);
 
 	/* setup aperture base/size for vesafb takeover */
-	info->aperture_base = rdev->ddev->mode_config.fb_base;
-	info->aperture_size = rdev->mc.real_vram_size;
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures) {
+		ret = -ENOMEM;
+		goto out_unref;
+	}
+	info->apertures->ranges[0].base = rdev->ddev->mode_config.fb_base;
+	info->apertures->ranges[0].size = rdev->mc.real_vram_size;
 
 	info->fix.mmio_start = 0;
 	info->fix.mmio_len = 0;
@@ -274,44 +251,55 @@ int radeonfb_create(struct drm_device *dev,
 	info->pixmap.access_align = 32;
 	info->pixmap.flags = FB_PIXMAP_SYSTEM;
 	info->pixmap.scan_align = 1;
+
 	if (info->screen_base == NULL) {
 		ret = -ENOSPC;
 		goto out_unref;
 	}
+
+	ret = fb_alloc_cmap(&info->cmap, 256, 0);
+	if (ret) {
+		ret = -ENOMEM;
+		goto out_unref;
+	}
+
 	DRM_INFO("fb mappable at 0x%lX\n",  info->fix.smem_start);
 	DRM_INFO("vram apper at 0x%lX\n",  (unsigned long)rdev->mc.aper_base);
-	DRM_INFO("size %lu\n", (unsigned long)size);
+	DRM_INFO("size %lu\n", (unsigned long)radeon_bo_size(rbo));
 	DRM_INFO("fb depth is %d\n", fb->depth);
 	DRM_INFO("   pitch is %d\n", fb->pitch);
 
-	fb->fbdev = info;
-	rfbdev->rfb = rfb;
-	rfbdev->rdev = rdev;
-
-	mutex_unlock(&rdev->ddev->struct_mutex);
 	vga_switcheroo_client_fb_set(rdev->ddev->pdev, info);
 	return 0;
 
 out_unref:
 	if (rbo) {
-		ret = radeon_bo_reserve(rbo, false);
-		if (likely(ret == 0)) {
-			radeon_bo_kunmap(rbo);
-			radeon_bo_unreserve(rbo);
-		}
+
 	}
 	if (fb && ret) {
-		list_del(&fb->filp_head);
 		drm_gem_object_unreference(gobj);
 		drm_framebuffer_cleanup(fb);
 		kfree(fb);
 	}
-	drm_gem_object_unreference(gobj);
-	mutex_unlock(&rdev->ddev->struct_mutex);
-out:
 	return ret;
 }
 
+static int radeon_fb_find_or_create_single(struct drm_fb_helper *helper,
+					   struct drm_fb_helper_surface_size *sizes)
+{
+	struct radeon_fbdev *rfbdev = (struct radeon_fbdev *)helper;
+	int new_fb = 0;
+	int ret;
+
+	if (!helper->fb) {
+		ret = radeonfb_create(rfbdev, sizes);
+		if (ret)
+			return ret;
+		new_fb = 1;
+	}
+	return new_fb;
+}
+
 static char *mode_option;
 int radeon_parse_options(char *options)
 {
@@ -328,46 +316,108 @@ int radeon_parse_options(char *options)
 	return 0;
 }
 
-int radeonfb_probe(struct drm_device *dev)
+void radeon_fb_output_poll_changed(struct radeon_device *rdev)
 {
-	struct radeon_device *rdev = dev->dev_private;
-	int bpp_sel = 32;
-
-	/* select 8 bpp console on RN50 or 16MB cards */
-	if (ASIC_IS_RN50(rdev) || rdev->mc.real_vram_size <= (32*1024*1024))
-		bpp_sel = 8;
-
-	return drm_fb_helper_single_fb_probe(dev, bpp_sel, &radeonfb_create);
+	drm_fb_helper_hotplug_event(&rdev->mode_info.rfbdev->helper);
 }
 
-int radeonfb_remove(struct drm_device *dev, struct drm_framebuffer *fb)
+static int radeon_fbdev_destroy(struct drm_device *dev, struct radeon_fbdev *rfbdev)
 {
 	struct fb_info *info;
-	struct radeon_framebuffer *rfb = to_radeon_framebuffer(fb);
+	struct radeon_framebuffer *rfb = &rfbdev->rfb;
 	struct radeon_bo *rbo;
 	int r;
 
-	if (!fb) {
-		return -EINVAL;
+	if (rfbdev->helper.fbdev) {
+		info = rfbdev->helper.fbdev;
+
+		unregister_framebuffer(info);
+		if (info->cmap.len)
+			fb_dealloc_cmap(&info->cmap);
+		framebuffer_release(info);
 	}
-	info = fb->fbdev;
-	if (info) {
-		struct radeon_fb_device *rfbdev = info->par;
+
+	if (rfb->obj) {
 		rbo = rfb->obj->driver_private;
-		unregister_framebuffer(info);
 		r = radeon_bo_reserve(rbo, false);
 		if (likely(r == 0)) {
 			radeon_bo_kunmap(rbo);
 			radeon_bo_unpin(rbo);
 			radeon_bo_unreserve(rbo);
 		}
-		drm_fb_helper_free(&rfbdev->helper);
-		framebuffer_release(info);
+		drm_gem_object_unreference_unlocked(rfb->obj);
 	}
+	drm_fb_helper_fini(&rfbdev->helper);
+	drm_framebuffer_cleanup(&rfb->base);
 
-	printk(KERN_INFO "unregistered panic notifier\n");
+	return 0;
+}
 
+static struct drm_fb_helper_funcs radeon_fb_helper_funcs = {
+	.gamma_set = radeon_crtc_fb_gamma_set,
+	.gamma_get = radeon_crtc_fb_gamma_get,
+	.fb_probe = radeon_fb_find_or_create_single,
+};
+
+int radeon_fbdev_init(struct radeon_device *rdev)
+{
+	struct radeon_fbdev *rfbdev;
+	int bpp_sel = 32;
+	int ret;
+
+	/* select 8 bpp console on RN50 or cards with 32MB or less */
+	if (ASIC_IS_RN50(rdev) || rdev->mc.real_vram_size <= (32*1024*1024))
+		bpp_sel = 8;
+
+	rfbdev = kzalloc(sizeof(struct radeon_fbdev), GFP_KERNEL);
+	if (!rfbdev)
+		return -ENOMEM;
+
+	rfbdev->rdev = rdev;
+	rdev->mode_info.rfbdev = rfbdev;
+	rfbdev->helper.funcs = &radeon_fb_helper_funcs;
+
+	ret = drm_fb_helper_init(rdev->ddev, &rfbdev->helper,
+				 rdev->num_crtc,
+				 RADEONFB_CONN_LIMIT);
+	if (ret) {
+		kfree(rfbdev);
+		return ret;
+	}
+
+	drm_fb_helper_single_add_all_connectors(&rfbdev->helper);
+	drm_fb_helper_initial_config(&rfbdev->helper, bpp_sel);
 	return 0;
 }
-EXPORT_SYMBOL(radeonfb_remove);
-MODULE_LICENSE("GPL");
+
+void radeon_fbdev_fini(struct radeon_device *rdev)
+{
+	if (!rdev->mode_info.rfbdev)
+		return;
+
+	radeon_fbdev_destroy(rdev->ddev, rdev->mode_info.rfbdev);
+	kfree(rdev->mode_info.rfbdev);
+	rdev->mode_info.rfbdev = NULL;
+}
+
+void radeon_fbdev_set_suspend(struct radeon_device *rdev, int state)
+{
+	fb_set_suspend(rdev->mode_info.rfbdev->helper.fbdev, state);
+}
+
+int radeon_fbdev_total_size(struct radeon_device *rdev)
+{
+	struct radeon_bo *robj;
+	int size = 0;
+
+	robj = rdev->mode_info.rfbdev->rfb.obj->driver_private;
+	size += radeon_bo_size(robj);
+	return size;
+}
+
+bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj)
+{
+	if (robj == rdev->mode_info.rfbdev->rfb.obj->driver_private)
+		return true;
+	return false;
+}
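
The hunks above replace radeon's one-shot fbdev probe (radeonfb_probe() via
drm_fb_helper_single_fb_probe()) with a persistent drm_fb_helper that is set
up once at load and reused across hotplug events. A sketch of the resulting
call order, using only functions wired up in this patch (2.6.35
drm_fb_helper API; error handling elided):

    /* illustration only: the fbdev lifecycle as rewired above */
    void example_fbdev_lifecycle(struct radeon_device *rdev)
    {
        /* driver load: one-time helper setup plus the initial console fb */
        radeon_fbdev_init(rdev);
        /* -> drm_fb_helper_init(), drm_fb_helper_single_add_all_connectors(),
         *    drm_fb_helper_initial_config(); the .fb_probe hook
         *    (radeon_fb_find_or_create_single) returns 1 after
         *    radeonfb_create() pins a BO in VRAM for the console */

        /* hotplug: modes are re-probed and the pinned fb is reused,
         * since .fb_probe finds helper->fb already set and returns 0 */
        radeon_fb_output_poll_changed(rdev);

        /* driver unload: radeon_fbdev_destroy() unwinds in reverse order:
         * unregister_framebuffer(), cmap and fb_info release, BO
         * kunmap/unpin/unref, drm_fb_helper_fini(), framebuffer cleanup */
        radeon_fbdev_fini(rdev);
    }
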
diff --git a/drivers/gpu/drm/radeon/radeon_fence.c b/drivers/gpu/drm/radeon/radeon_fence.c
index d90f95b..b1f9a81 100644
--- a/drivers/gpu/drm/radeon/radeon_fence.c
+++ b/drivers/gpu/drm/radeon/radeon_fence.c
@@ -58,7 +58,6 @@ int radeon_fence_emit(struct radeon_device *rdev, struct radeon_fence *fence)
 		radeon_fence_ring_emit(rdev, fence);
 
 	fence->emited = true;
-	fence->timeout = jiffies + ((2000 * HZ) / 1000);
 	list_del(&fence->list);
 	list_add_tail(&fence->list, &rdev->fence_drv.emited);
 	write_unlock_irqrestore(&rdev->fence_drv.lock, irq_flags);
@@ -71,15 +70,34 @@ static bool radeon_fence_poll_locked(struct radeon_device *rdev)
 	struct list_head *i, *n;
 	uint32_t seq;
 	bool wake = false;
+	unsigned long cjiffies;
 
-	if (rdev == NULL) {
-		return true;
-	}
-	if (rdev->shutdown) {
-		return true;
-	}
 	seq = RREG32(rdev->fence_drv.scratch_reg);
-	rdev->fence_drv.last_seq = seq;
+	if (seq != rdev->fence_drv.last_seq) {
+		rdev->fence_drv.last_seq = seq;
+		rdev->fence_drv.last_jiffies = jiffies;
+		rdev->fence_drv.last_timeout = RADEON_FENCE_JIFFIES_TIMEOUT;
+	} else {
+		cjiffies = jiffies;
+		if (time_after(cjiffies, rdev->fence_drv.last_jiffies)) {
+			cjiffies -= rdev->fence_drv.last_jiffies;
+			if (time_after(rdev->fence_drv.last_timeout, cjiffies)) {
+				/* update the timeout */
+				rdev->fence_drv.last_timeout -= cjiffies;
+			} else {
+				/* the 500ms timeout has elapsed, we should
+				 * test for GPU lockup
+				 */
+				rdev->fence_drv.last_timeout = 1;
+			}
+		} else {
+			/* jiffies wrapped around; update last_jiffies and
+			 * just wait a little longer
+			 */
+			rdev->fence_drv.last_jiffies = cjiffies;
+		}
+		return false;
+	}
 	n = NULL;
 	list_for_each(i, &rdev->fence_drv.emited) {
 		fence = list_entry(i, struct radeon_fence, list);
@@ -171,9 +189,8 @@ bool radeon_fence_signaled(struct radeon_fence *fence)
 int radeon_fence_wait(struct radeon_fence *fence, bool intr)
 {
 	struct radeon_device *rdev;
-	unsigned long cur_jiffies;
-	unsigned long timeout;
-	bool expired = false;
+	unsigned long irq_flags, timeout;
+	u32 seq;
 	int r;
 
 	if (fence == NULL) {
@@ -184,21 +201,18 @@ int radeon_fence_wait(struct radeon_fence *fence, bool intr)
 	if (radeon_fence_signaled(fence)) {
 		return 0;
 	}
-
+	timeout = rdev->fence_drv.last_timeout;
 retry:
-	cur_jiffies = jiffies;
-	timeout = HZ / 100;
-	if (time_after(fence->timeout, cur_jiffies)) {
-		timeout = fence->timeout - cur_jiffies;
-	}
-
+	/* save current sequence used to check for GPU lockup */
+	seq = rdev->fence_drv.last_seq;
 	if (intr) {
 		radeon_irq_kms_sw_irq_get(rdev);
 		r = wait_event_interruptible_timeout(rdev->fence_drv.queue,
 				radeon_fence_signaled(fence), timeout);
 		radeon_irq_kms_sw_irq_put(rdev);
-		if (unlikely(r < 0))
+		if (unlikely(r < 0)) {
 			return r;
+		}
 	} else {
 		radeon_irq_kms_sw_irq_get(rdev);
 		r = wait_event_timeout(rdev->fence_drv.queue,
@@ -206,38 +220,36 @@ retry:
 		radeon_irq_kms_sw_irq_put(rdev);
 	}
 	if (unlikely(!radeon_fence_signaled(fence))) {
-		if (unlikely(r == 0)) {
-			expired = true;
+		/* we were interrupted for some reason and the fence
+		 * isn't signaled yet, resume the wait
+		 */
+		if (r) {
+			timeout = r;
+			goto retry;
 		}
-		if (unlikely(expired)) {
-			timeout = 1;
-			if (time_after(cur_jiffies, fence->timeout)) {
-				timeout = cur_jiffies - fence->timeout;
-			}
-			timeout = jiffies_to_msecs(timeout);
-			if (timeout > 500) {
-				DRM_ERROR("fence(%p:0x%08X) %lums timeout "
-					  "going to reset GPU\n",
-					  fence, fence->seq, timeout);
-				radeon_gpu_reset(rdev);
-				WREG32(rdev->fence_drv.scratch_reg, fence->seq);
-			}
+		/* don't protect read access to rdev->fence_drv.last_seq:
+		 * if we're experiencing a lockup the value doesn't change
+		 */
+		if (seq == rdev->fence_drv.last_seq && radeon_gpu_is_lockup(rdev)) {
+			/* good news we believe it's a lockup */
+			WARN(1, "GPU lockup (waiting for 0x%08X last fence id 0x%08X)\n", fence->seq, seq);
+			/* FIXME: what should we do? mark everything
+			 * as signaled for now
+			 */
+			rdev->gpu_lockup = true;
+			r = radeon_gpu_reset(rdev);
+			if (r)
+				return r;
+			WREG32(rdev->fence_drv.scratch_reg, fence->seq);
+			rdev->gpu_lockup = false;
 		}
+		timeout = RADEON_FENCE_JIFFIES_TIMEOUT;
+		write_lock_irqsave(&rdev->fence_drv.lock, irq_flags);
+		rdev->fence_drv.last_timeout = RADEON_FENCE_JIFFIES_TIMEOUT;
+		rdev->fence_drv.last_jiffies = jiffies;
+		write_unlock_irqrestore(&rdev->fence_drv.lock, irq_flags);
 		goto retry;
 	}
-	if (unlikely(expired)) {
-		rdev->fence_drv.count_timeout++;
-		cur_jiffies = jiffies;
-		timeout = 1;
-		if (time_after(cur_jiffies, fence->timeout)) {
-			timeout = cur_jiffies - fence->timeout;
-		}
-		timeout = jiffies_to_msecs(timeout);
-		DRM_ERROR("fence(%p:0x%08X) %lums timeout\n",
-			  fence, fence->seq, timeout);
-		DRM_ERROR("last signaled fence(0x%08X)\n",
-			  rdev->fence_drv.last_seq);
-	}
 	return 0;
 }
 
@@ -333,7 +345,6 @@ int radeon_fence_driver_init(struct radeon_device *rdev)
 	INIT_LIST_HEAD(&rdev->fence_drv.created);
 	INIT_LIST_HEAD(&rdev->fence_drv.emited);
 	INIT_LIST_HEAD(&rdev->fence_drv.signaled);
-	rdev->fence_drv.count_timeout = 0;
 	init_waitqueue_head(&rdev->fence_drv.queue);
 	rdev->fence_drv.initialized = true;
 	write_unlock_irqrestore(&rdev->fence_drv.lock, irq_flags);
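
The fence rework above drops the per-fence two-second deadline in favour of
a single watchdog: last_seq and last_jiffies are re-armed whenever the
scratch register advances, and the remaining budget
(RADEON_FENCE_JIFFIES_TIMEOUT, the 500 ms referred to in the comment) is
burned down while the sequence stays flat; only once it is exhausted does
the waiter call radeon_gpu_is_lockup() and, if that agrees, reset the GPU.
A simplified standalone model of the countdown, in milliseconds instead of
jiffies and ignoring wraparound:

    #include <stdbool.h>
    #include <stdio.h>

    #define FENCE_TIMEOUT_MS 500 /* stand-in for RADEON_FENCE_JIFFIES_TIMEOUT */

    struct fence_drv {
        unsigned last_seq;     /* newest sequence seen in the scratch reg */
        unsigned last_ms;      /* time of the last observed progress */
        unsigned last_timeout; /* budget left before suspecting a lockup */
    };

    /* returns true once the budget is exhausted without progress */
    static bool watchdog_tick(struct fence_drv *d, unsigned seq, unsigned now_ms)
    {
        if (seq != d->last_seq) {          /* hw made progress: re-arm */
            d->last_seq = seq;
            d->last_ms = now_ms;
            d->last_timeout = FENCE_TIMEOUT_MS;
            return false;
        }
        if (now_ms > d->last_ms) {         /* stalled: burn down the budget */
            unsigned elapsed = now_ms - d->last_ms;

            d->last_ms = now_ms;
            if (d->last_timeout > elapsed) {
                d->last_timeout -= elapsed;
                return false;
            }
            d->last_timeout = 1;           /* ask radeon_gpu_is_lockup() next */
        }
        return d->last_timeout <= 1;
    }

    int main(void)
    {
        struct fence_drv d = { 0, 0, FENCE_TIMEOUT_MS };

        watchdog_tick(&d, 42, 0);                   /* fence emitted: re-arm */
        printf("%d\n", watchdog_tick(&d, 42, 400)); /* 0 - 100 ms left */
        printf("%d\n", watchdog_tick(&d, 42, 600)); /* 1 - lockup suspected */
        return 0;
    }
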
diff --git a/drivers/gpu/drm/radeon/radeon_fixed.h b/drivers/gpu/drm/radeon/radeon_fixed.h
deleted file mode 100644
index 3d4d84e..0000000
--- a/drivers/gpu/drm/radeon/radeon_fixed.h
+++ /dev/null
@@ -1,67 +0,0 @@
-/*
- * Copyright 2009 Red Hat Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: Dave Airlie
- */
-#ifndef RADEON_FIXED_H
-#define RADEON_FIXED_H
-
-typedef union rfixed {
-	u32 full;
-} fixed20_12;
-
-
-#define rfixed_const(A) (u32)(((A) << 12))/*  + ((B + 0.000122)*4096)) */
-#define rfixed_const_half(A) (u32)(((A) << 12) + 2048)
-#define rfixed_const_666(A) (u32)(((A) << 12) + 2731)
-#define rfixed_const_8(A) (u32)(((A) << 12) + 3277)
-#define rfixed_mul(A, B) ((u64)((u64)(A).full * (B).full + 2048) >> 12)
-#define fixed_init(A) { .full = rfixed_const((A)) }
-#define fixed_init_half(A) { .full = rfixed_const_half((A)) }
-#define rfixed_trunc(A) ((A).full >> 12)
-
-static inline u32 rfixed_floor(fixed20_12 A)
-{
-	u32 non_frac = rfixed_trunc(A);
-
-	return rfixed_const(non_frac);
-}
-
-static inline u32 rfixed_ceil(fixed20_12 A)
-{
-	u32 non_frac = rfixed_trunc(A);
-
-	if (A.full > rfixed_const(non_frac))
-		return rfixed_const(non_frac + 1);
-	else
-		return rfixed_const(non_frac);
-}
-
-static inline u32 rfixed_div(fixed20_12 A, fixed20_12 B)
-{
-	u64 tmp = ((u64)A.full << 13);
-
-	do_div(tmp, B.full);
-	tmp += 1;
-	tmp /= 2;
-	return lower_32_bits(tmp);
-}
-#endif
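
radeon_fixed.h can go because 2.6.35 promotes these helpers to the shared
<drm/drm_fixed.h> (see the follow-up hunks), which carries the same
fixed20_12 type under dfixed_* names. The format is plain 20.12 fixed
point: a u32 holding the value scaled by 4096, integer part in the top 20
bits, fraction in the low 12. A standalone illustration of the arithmetic
the deleted macros performed:

    #include <stdint.h>
    #include <stdio.h>

    /* 20.12 fixed point as in the deleted header: value stored * 4096 */
    #define FX_CONST(a)  ((uint32_t)((a) << 12))
    #define FX_TRUNC(a)  ((a) >> 12)
    #define FX_MUL(a, b) ((uint32_t)(((uint64_t)(a) * (b) + 2048) >> 12))

    int main(void)
    {
        uint32_t three = FX_CONST(3);     /* 3.0 -> 12288 */
        uint32_t half  = FX_CONST(1) / 2; /* 0.5 -> 2048  */

        printf("%u\n", FX_MUL(three, half));           /* 6144, i.e. 1.5 */
        printf("%u\n", FX_TRUNC(FX_MUL(three, half))); /* 1 */
        return 0;
    }
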
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
index 1770d3c..e65b903 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -173,7 +173,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 	int i, j;
 
 	if (!rdev->gart.ready) {
-		DRM_ERROR("trying to bind memory to unitialized GART !\n");
+		WARN(1, "trying to bind memory to uninitialized GART!\n");
 		return -EINVAL;
 	}
 	t = offset / RADEON_GPU_PAGE_SIZE;
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index ef92d14..a72a3ee 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -44,6 +44,9 @@ void radeon_gem_object_free(struct drm_gem_object *gobj)
 	if (robj) {
 		radeon_bo_unref(&robj);
 	}
+
+	drm_gem_object_release(gobj);
+	kfree(gobj);
 }
 
 int radeon_gem_object_create(struct radeon_device *rdev, int size,
@@ -158,8 +161,7 @@ int radeon_gem_info_ioctl(struct drm_device *dev, void *data,
 	args->vram_visible = rdev->mc.real_vram_size;
 	if (rdev->stollen_vga_memory)
 		args->vram_visible -= radeon_bo_size(rdev->stollen_vga_memory);
-	if (rdev->fbdev_rbo)
-		args->vram_visible -= radeon_bo_size(rdev->fbdev_rbo);
+	args->vram_visible -= radeon_fbdev_total_size(rdev);
 	args->gart_size = rdev->mc.gtt_size - rdev->cp.ring_size - 4096 -
 		RADEON_IB_POOL_SIZE*64*1024;
 	return 0;
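
The numbers reported by radeon_gem_info_ioctl() are straight subtraction:
visible VRAM is what remains after the stolen VGA BO and (now via
radeon_fbdev_total_size()) the fbdev console BO, and usable GART space
excludes the CP ring, a guard page and the IB pool. A toy computation with
made-up sizes, assuming RADEON_IB_POOL_SIZE is 16 as in this kernel's
radeon.h:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t real_vram = 512ull << 20; /* total VRAM */
        uint64_t stolen    = 8ull << 20;   /* stollen_vga_memory BO */
        uint64_t fbdev     = 8ull << 20;   /* radeon_fbdev_total_size() */
        uint64_t gtt       = 512ull << 20; /* mc.gtt_size */
        uint64_t ring      = 1ull << 20;   /* cp.ring_size */

        printf("vram_visible: %llu MiB\n",
               (unsigned long long)((real_vram - stolen - fbdev) >> 20));
        printf("gart_size: %llu bytes\n",
               (unsigned long long)(gtt - ring - 4096 - 16 * 64 * 1024));
        return 0;
    }
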
diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
index a212041..059bfa4 100644
--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
@@ -26,6 +26,7 @@
  *          Jerome Glisse
  */
 #include "drmP.h"
+#include "drm_crtc_helper.h"
 #include "radeon_drm.h"
 #include "radeon_reg.h"
 #include "radeon.h"
@@ -55,7 +56,7 @@ static void radeon_hotplug_work_func(struct work_struct *work)
 			radeon_connector_hotplug(connector);
 	}
 	/* Just fire off a uevent and let userspace tell us what to do */
-	drm_sysfs_hotplug_event(dev);
+	drm_helper_hpd_irq_event(dev);
 }
 
 void radeon_driver_irq_preinstall_kms(struct drm_device *dev)
@@ -67,6 +68,7 @@ void radeon_driver_irq_preinstall_kms(struct drm_device *dev)
 
 	/* Disable *all* interrupts */
 	rdev->irq.sw_int = false;
+	rdev->irq.gui_idle = false;
 	for (i = 0; i < rdev->num_crtc; i++)
 		rdev->irq.crtc_vblank_int[i] = false;
 	for (i = 0; i < 6; i++)
@@ -96,6 +98,7 @@ void radeon_driver_irq_uninstall_kms(struct drm_device *dev)
 	}
 	/* Disable *all* interrupts */
 	rdev->irq.sw_int = false;
+	rdev->irq.gui_idle = false;
 	for (i = 0; i < rdev->num_crtc; i++)
 		rdev->irq.crtc_vblank_int[i] = false;
 	for (i = 0; i < 6; i++)
diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
index c633319..6a70c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_kms.c
+++ b/drivers/gpu/drm/radeon/radeon_kms.c
@@ -98,11 +98,15 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 {
 	struct radeon_device *rdev = dev->dev_private;
 	struct drm_radeon_info *info;
+	struct radeon_mode_info *minfo = &rdev->mode_info;
 	uint32_t *value_ptr;
 	uint32_t value;
+	struct drm_crtc *crtc;
+	int i, found;
 
 	info = data;
 	value_ptr = (uint32_t *)((unsigned long)info->value);
+	value = *value_ptr;
 	switch (info->request) {
 	case RADEON_INFO_DEVICE_ID:
 		value = dev->pci_device;
@@ -114,6 +118,27 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		value = rdev->num_z_pipes;
 		break;
 	case RADEON_INFO_ACCEL_WORKING:
+		/* xf86-video-ati 6.13.0 relies on this being false for evergreen */
+		if ((rdev->family >= CHIP_CEDAR) && (rdev->family <= CHIP_HEMLOCK))
+			value = false;
+		else
+			value = rdev->accel_working;
+		break;
+	case RADEON_INFO_CRTC_FROM_ID:
+		for (i = 0, found = 0; i < rdev->num_crtc; i++) {
+			crtc = (struct drm_crtc *)minfo->crtcs[i];
+			if (crtc && crtc->base.id == value) {
+				value = i;
+				found = 1;
+				break;
+			}
+		}
+		if (!found) {
+			DRM_DEBUG("unknown crtc id %d\n", value);
+			return -EINVAL;
+		}
+		break;
+	case RADEON_INFO_ACCEL_WORKING2:
 		value = rdev->accel_working;
 		break;
 	default:
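
RADEON_INFO_CRTC_FROM_ID is new here: userspace passes a DRM crtc object id
through the value pointer and gets back radeon's 0-based crtc index, or
-EINVAL for an unknown id. A hypothetical userspace helper, assuming the
struct drm_radeon_info layout of this kernel's radeon_drm.h (a request code
plus a user pointer in value that the kernel reads and then rewrites):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/radeon_drm.h>

    /* illustration only, not part of the patch */
    static int crtc_index_from_id(int fd, uint32_t crtc_id, uint32_t *index)
    {
        struct drm_radeon_info info;
        uint32_t value = crtc_id;          /* in: object id, out: index */

        memset(&info, 0, sizeof(info));
        info.request = RADEON_INFO_CRTC_FROM_ID;
        info.value = (uintptr_t)&value;    /* kernel reads and rewrites it */
        if (ioctl(fd, DRM_IOCTL_RADEON_INFO, &info))
            return -1;
        *index = value;
        return 0;
    }
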
diff --git a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
index 88865e3..e1e5255 100644
--- a/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
+++ b/drivers/gpu/drm/radeon/radeon_legacy_crtc.c
@@ -26,7 +26,7 @@
 #include <drm/drmP.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/radeon_drm.h>
-#include "radeon_fixed.h"
+#include <drm/drm_fixed.h>
 #include "radeon.h"
 #include "atom.h"
 
@@ -314,6 +314,9 @@ void radeon_crtc_dpms(struct drm_crtc *crtc, int mode)
 
 	switch (mode) {
 	case DRM_MODE_DPMS_ON:
+		radeon_crtc->enabled = true;
+		/* adjust pm to dpms changes BEFORE enabling crtcs */
+		radeon_pm_compute_clocks(rdev);
 		if (radeon_crtc->crtc_id)
 			WREG32_P(RADEON_CRTC2_GEN_CNTL, RADEON_CRTC2_EN, ~(RADEON_CRTC2_EN | mask));
 		else {
@@ -335,6 +338,9 @@ void radeon_crtc_dpms(struct drm_crtc *crtc, int mode)
 										    RADEON_CRTC_DISP_REQ_EN_B));
 			WREG32_P(RADEON_CRTC_EXT_CNTL, mask, ~mask);
 		}
+		radeon_crtc->enabled = false;
+		/* adjust pm to dpms changes AFTER disabling crtcs */
+		radeon_pm_compute_clocks(rdev);
 		break;
 	}
 }
@@ -966,6 +972,12 @@ static bool radeon_crtc_mode_fixup(struct drm_crtc *crtc,
 				   struct drm_display_mode *mode,
 				   struct drm_display_mode *adjusted_mode)
 {
+	struct drm_device *dev = crtc->dev;
+	struct radeon_device *rdev = dev->dev_private;
+
+	/* adjust pm to upcoming mode change */
+	radeon_pm_compute_clocks(rdev);
+
 	if (!radeon_crtc_scaling_mode_fixup(crtc, mode, adjusted_mode))
 		return false;
 	return true;
diff --git a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
index 0274abe..5b07b88 100644
--- a/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
+++ b/drivers/gpu/drm/radeon/radeon_legacy_encoders.c
@@ -116,8 +116,6 @@ static void radeon_legacy_lvds_dpms(struct drm_encoder *encoder, int mode)
 	else
 		radeon_combios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
 
-	/* adjust pm to dpms change */
-	radeon_pm_compute_clocks(rdev);
 }
 
 static void radeon_legacy_lvds_prepare(struct drm_encoder *encoder)
@@ -217,11 +215,6 @@ static bool radeon_legacy_mode_fixup(struct drm_encoder *encoder,
 				     struct drm_display_mode *adjusted_mode)
 {
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
-	struct drm_device *dev = encoder->dev;
-	struct radeon_device *rdev = dev->dev_private;
-
-	/* adjust pm to upcoming mode change */
-	radeon_pm_compute_clocks(rdev);
 
 	/* set the active encoder to connector routing */
 	radeon_encoder_set_active_device(encoder);
@@ -286,8 +279,6 @@ static void radeon_legacy_primary_dac_dpms(struct drm_encoder *encoder, int mode
 	else
 		radeon_combios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
 
-	/* adjust pm to dpms change */
-	radeon_pm_compute_clocks(rdev);
 }
 
 static void radeon_legacy_primary_dac_prepare(struct drm_encoder *encoder)
@@ -474,8 +465,6 @@ static void radeon_legacy_tmds_int_dpms(struct drm_encoder *encoder, int mode)
 	else
 		radeon_combios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
 
-	/* adjust pm to dpms change */
-	radeon_pm_compute_clocks(rdev);
 }
 
 static void radeon_legacy_tmds_int_prepare(struct drm_encoder *encoder)
@@ -642,8 +631,6 @@ static void radeon_legacy_tmds_ext_dpms(struct drm_encoder *encoder, int mode)
 	else
 		radeon_combios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
 
-	/* adjust pm to dpms change */
-	radeon_pm_compute_clocks(rdev);
 }
 
 static void radeon_legacy_tmds_ext_prepare(struct drm_encoder *encoder)
@@ -852,8 +839,6 @@ static void radeon_legacy_tv_dac_dpms(struct drm_encoder *encoder, int mode)
 	else
 		radeon_combios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
 
-	/* adjust pm to dpms change */
-	radeon_pm_compute_clocks(rdev);
 }
 
 static void radeon_legacy_tv_dac_prepare(struct drm_encoder *encoder)
@@ -1183,6 +1168,17 @@ static enum drm_connector_status radeon_legacy_tv_dac_detect(struct drm_encoder
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
 	struct radeon_encoder_tv_dac *tv_dac = radeon_encoder->enc_priv;
 	bool color = true;
+	struct drm_crtc *crtc;
+
+	/* find out if crtc2 is in use or if this encoder is using it */
+	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+		struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+		if ((radeon_crtc->crtc_id == 1) && crtc->enabled) {
+			if (encoder->crtc != crtc) {
+				return connector_status_disconnected;
+			}
+		}
+	}
 
 	if (connector->connector_type == DRM_MODE_CONNECTOR_SVIDEO ||
 	    connector->connector_type == DRM_MODE_CONNECTOR_Composite ||
diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h
index 5413fcd..67358ba 100644
--- a/drivers/gpu/drm/radeon/radeon_mode.h
+++ b/drivers/gpu/drm/radeon/radeon_mode.h
@@ -34,11 +34,12 @@
 #include <drm_mode.h>
 #include <drm_edid.h>
 #include <drm_dp_helper.h>
+#include <drm_fixed.h>
 #include <linux/i2c.h>
 #include <linux/i2c-id.h>
 #include <linux/i2c-algo-bit.h>
-#include "radeon_fixed.h"
 
+struct radeon_bo;
 struct radeon_device;
 
 #define to_radeon_crtc(x) container_of(x, struct radeon_crtc, base)
@@ -65,6 +66,16 @@ enum radeon_tv_std {
 	TV_STD_PAL_N,
 };
 
+enum radeon_hpd_id {
+	RADEON_HPD_1 = 0,
+	RADEON_HPD_2,
+	RADEON_HPD_3,
+	RADEON_HPD_4,
+	RADEON_HPD_5,
+	RADEON_HPD_6,
+	RADEON_HPD_NONE = 0xff,
+};
+
 /* radeon gpio-based i2c
  * 1. "mask" reg and bits
  *    grabs the gpio pins for software use
@@ -84,7 +95,7 @@ struct radeon_i2c_bus_rec {
 	/* id used by atom */
 	uint8_t i2c_id;
 	/* id used by atom */
-	uint8_t hpd_id;
+	enum radeon_hpd_id hpd;
 	/* can be used with hw i2c engine */
 	bool hw_capable;
 	/* uses multi-media i2c engine */
@@ -202,6 +213,8 @@ enum radeon_dvo_chip {
 	DVO_SIL1178,
 };
 
+struct radeon_fbdev;
+
 struct radeon_mode_info {
 	struct atom_context *atom_context;
 	struct card_info *atom_card_info;
@@ -218,6 +231,9 @@ struct radeon_mode_info {
 	struct drm_property *tmds_pll_property;
 	/* hardcoded DFP edid from BIOS */
 	struct edid *bios_hardcoded_edid;
+
+	/* pointer to fbdev info structure */
+	struct radeon_fbdev *rfbdev;
 };
 
 #define MAX_H_CODE_TIMING_LEN 32
@@ -339,6 +355,7 @@ struct radeon_encoder {
 	enum radeon_rmx_type rmx_type;
 	struct drm_display_mode native_mode;
 	void *enc_priv;
+	int audio_polling_active;
 	int hdmi_offset;
 	int hdmi_config_offset;
 	int hdmi_audio_workaround;
@@ -363,16 +380,6 @@ struct radeon_gpio_rec {
 	u32 mask;
 };
 
-enum radeon_hpd_id {
-	RADEON_HPD_NONE = 0,
-	RADEON_HPD_1,
-	RADEON_HPD_2,
-	RADEON_HPD_3,
-	RADEON_HPD_4,
-	RADEON_HPD_5,
-	RADEON_HPD_6,
-};
-
 struct radeon_hpd {
 	enum radeon_hpd_id hpd;
 	u8 plugged_state;
@@ -532,11 +539,10 @@ extern void radeon_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
 				     u16 blue, int regno);
 extern void radeon_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green,
 				     u16 *blue, int regno);
-struct drm_framebuffer *radeon_framebuffer_create(struct drm_device *dev,
-						  struct drm_mode_fb_cmd *mode_cmd,
-						  struct drm_gem_object *obj);
-
-int radeonfb_probe(struct drm_device *dev);
+void radeon_framebuffer_init(struct drm_device *dev,
+			     struct radeon_framebuffer *rfb,
+			     struct drm_mode_fb_cmd *mode_cmd,
+			     struct drm_gem_object *obj);
 
 int radeonfb_remove(struct drm_device *dev, struct drm_framebuffer *fb);
 bool radeon_get_legacy_connector_info_from_bios(struct drm_device *dev);
@@ -575,4 +581,13 @@ void radeon_legacy_tv_adjust_pll2(struct drm_encoder *encoder,
 void radeon_legacy_tv_mode_set(struct drm_encoder *encoder,
 			       struct drm_display_mode *mode,
 			       struct drm_display_mode *adjusted_mode);
+
+/* fbdev layer */
+int radeon_fbdev_init(struct radeon_device *rdev);
+void radeon_fbdev_fini(struct radeon_device *rdev);
+void radeon_fbdev_set_suspend(struct radeon_device *rdev, int state);
+int radeon_fbdev_total_size(struct radeon_device *rdev);
+bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj);
+
+void radeon_fb_output_poll_changed(struct radeon_device *rdev);
 #endif
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 1227747..d5b9373 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -112,9 +112,11 @@ int radeon_bo_create(struct radeon_device *rdev, struct drm_gem_object *gobj,
 retry:
 	radeon_ttm_placement_from_domain(bo, domain);
 	/* Kernel allocations are uninterruptible */
+	mutex_lock(&rdev->vram_mutex);
 	r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
 			&bo->placement, 0, 0, !kernel, NULL, size,
 			&radeon_ttm_bo_destroy);
+	mutex_unlock(&rdev->vram_mutex);
 	if (unlikely(r != 0)) {
 		if (r != -ERESTARTSYS) {
 			if (domain == RADEON_GEM_DOMAIN_VRAM) {
@@ -166,11 +168,15 @@ void radeon_bo_kunmap(struct radeon_bo *bo)
 void radeon_bo_unref(struct radeon_bo **bo)
 {
 	struct ttm_buffer_object *tbo;
+	struct radeon_device *rdev;
 
 	if ((*bo) == NULL)
 		return;
+	rdev = (*bo)->rdev;
 	tbo = &((*bo)->tbo);
+	mutex_lock(&rdev->vram_mutex);
 	ttm_bo_unref(&tbo);
+	mutex_unlock(&rdev->vram_mutex);
 	if (tbo == NULL)
 		*bo = NULL;
 }
@@ -192,7 +198,7 @@ int radeon_bo_pin(struct radeon_bo *bo, u32 domain, u64 *gpu_addr)
 	}
 	for (i = 0; i < bo->placement.num_placement; i++)
 		bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
-	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false, false);
 	if (likely(r == 0)) {
 		bo->pin_count = 1;
 		if (gpu_addr != NULL)
@@ -216,7 +222,7 @@ int radeon_bo_unpin(struct radeon_bo *bo)
 		return 0;
 	for (i = 0; i < bo->placement.num_placement; i++)
 		bo->placements[i] &= ~TTM_PL_FLAG_NO_EVICT;
-	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false, false);
 	if (unlikely(r != 0))
 		dev_err(bo->rdev->dev, "%p validate failed for unpin\n", bo);
 	return r;
@@ -295,6 +301,7 @@ int radeon_bo_list_reserve(struct list_head *head)
 		r = radeon_bo_reserve(lobj->bo, false);
 		if (unlikely(r != 0))
 			return r;
+		lobj->reserved = true;
 	}
 	return 0;
 }
@@ -305,7 +312,7 @@ void radeon_bo_list_unreserve(struct list_head *head)
 
 	list_for_each_entry(lobj, head, list) {
 		/* only unreserve object we successfully reserved */
-		if (radeon_bo_is_reserved(lobj->bo))
+		if (lobj->reserved && radeon_bo_is_reserved(lobj->bo))
 			radeon_bo_unreserve(lobj->bo);
 	}
 }
@@ -316,6 +323,9 @@ int radeon_bo_list_validate(struct list_head *head)
 	struct radeon_bo *bo;
 	int r;
 
+	list_for_each_entry(lobj, head, list) {
+		lobj->reserved = false;
+	}
 	r = radeon_bo_list_reserve(head);
 	if (unlikely(r != 0)) {
 		return r;
@@ -331,7 +341,7 @@ int radeon_bo_list_validate(struct list_head *head)
 		retry:
 			radeon_ttm_placement_from_domain(bo, domain);
 			r = ttm_bo_validate(&bo->tbo, &bo->placement,
-						true, false);
+						true, false, false);
 			if (unlikely(r)) {
 				if (r != -ERESTARTSYS && domain == RADEON_GEM_DOMAIN_VRAM) {
 					domain |= RADEON_GEM_DOMAIN_GTT;
@@ -499,11 +509,33 @@ void radeon_bo_move_notify(struct ttm_buffer_object *bo,
 	radeon_bo_check_tiling(rbo, 0, 1);
 }
 
-void radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
+int radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo)
 {
+	struct radeon_device *rdev;
 	struct radeon_bo *rbo;
+	unsigned long offset, size;
+	int r;
+
 	if (!radeon_ttm_bo_is_radeon_bo(bo))
-		return;
+		return 0;
 	rbo = container_of(bo, struct radeon_bo, tbo);
 	radeon_bo_check_tiling(rbo, 0, 0);
+	rdev = rbo->rdev;
+	if (bo->mem.mem_type == TTM_PL_VRAM) {
+		size = bo->mem.num_pages << PAGE_SHIFT;
+		offset = bo->mem.mm_node->start << PAGE_SHIFT;
+		if ((offset + size) > rdev->mc.visible_vram_size) {
+			/* not CPU-visible: move the BO into the visible window */
+			radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM);
+			rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
+			r = ttm_bo_validate(bo, &rbo->placement, false, true, false);
+			if (unlikely(r != 0))
+				return r;
+			offset = bo->mem.mm_node->start << PAGE_SHIFT;
+			/* this should not happen */
+			if ((offset + size) > rdev->mc.visible_vram_size)
+				return -EINVAL;
+		}
+	}
+	return 0;
 }
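
radeon_bo_fault_reserve_notify() now returns an error code because it may
have to move a buffer: when a CPU fault lands in a BO that extends past
mc.visible_vram_size, the BO is revalidated with its allowed page range
(placement.lpfn) clamped to the CPU-visible window. A toy version of the
bounds check, with invented sizes:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        /* 256 MiB CPU-visible window, 4 MiB BO starting at 254 MiB */
        uint64_t visible = 256ull << 20;
        uint64_t offset  = 254ull << 20;
        uint64_t size    = 4ull << 20;

        if (offset + size > visible)       /* BO straddles the window's end */
            printf("revalidate with lpfn = %llu\n",
                   (unsigned long long)(visible >> PAGE_SHIFT));
        return 0;
    }
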
diff --git a/drivers/gpu/drm/radeon/radeon_object.h b/drivers/gpu/drm/radeon/radeon_object.h
index 7ab43de..353998d 100644
--- a/drivers/gpu/drm/radeon/radeon_object.h
+++ b/drivers/gpu/drm/radeon/radeon_object.h
@@ -168,6 +168,6 @@ extern int radeon_bo_check_tiling(struct radeon_bo *bo, bool has_moved,
 				bool force_drop);
 extern void radeon_bo_move_notify(struct ttm_buffer_object *bo,
 					struct ttm_mem_reg *mem);
-extern void radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
+extern int radeon_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
 extern int radeon_bo_get_surface_reg(struct radeon_bo *bo);
 #endif
diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index a4b5749..63f679a 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -23,25 +23,17 @@
 #include "drmP.h"
 #include "radeon.h"
 #include "avivod.h"
+#ifdef CONFIG_ACPI
+#include <linux/acpi.h>
+#endif
+#include <linux/power_supply.h>
 
 #define RADEON_IDLE_LOOP_MS 100
 #define RADEON_RECLOCK_DELAY_MS 200
 #define RADEON_WAIT_VBLANK_TIMEOUT 200
+#define RADEON_WAIT_IDLE_TIMEOUT 200
 
-static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish);
-static void radeon_pm_set_clocks_locked(struct radeon_device *rdev);
-static void radeon_pm_set_clocks(struct radeon_device *rdev);
-static void radeon_pm_idle_work_handler(struct work_struct *work);
-static int radeon_debugfs_pm_init(struct radeon_device *rdev);
-
-static const char *pm_state_names[4] = {
-	"PM_STATE_DISABLED",
-	"PM_STATE_MINIMUM",
-	"PM_STATE_PAUSED",
-	"PM_STATE_ACTIVE"
-};
-
-static const char *pm_state_types[5] = {
+static const char *radeon_pm_state_type_name[5] = {
 	"Default",
 	"Powersave",
 	"Battery",
@@ -49,138 +41,109 @@ static const char *pm_state_types[5] = {
 	"Performance",
 };
 
-static void radeon_print_power_mode_info(struct radeon_device *rdev)
+static void radeon_dynpm_idle_work_handler(struct work_struct *work);
+static int radeon_debugfs_pm_init(struct radeon_device *rdev);
+static bool radeon_pm_in_vbl(struct radeon_device *rdev);
+static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish);
+static void radeon_pm_update_profile(struct radeon_device *rdev);
+static void radeon_pm_set_clocks(struct radeon_device *rdev);
+
+#define ACPI_AC_CLASS           "ac_adapter"
+
+#ifdef CONFIG_ACPI
+static int radeon_acpi_event(struct notifier_block *nb,
+			     unsigned long val,
+			     void *data)
 {
-	int i, j;
-	bool is_default;
+	struct radeon_device *rdev = container_of(nb, struct radeon_device, acpi_nb);
+	struct acpi_bus_event *entry = (struct acpi_bus_event *)data;
 
-	DRM_INFO("%d Power State(s)\n", rdev->pm.num_power_states);
-	for (i = 0; i < rdev->pm.num_power_states; i++) {
-		if (rdev->pm.default_power_state == &rdev->pm.power_state[i])
-			is_default = true;
+	if (strcmp(entry->device_class, ACPI_AC_CLASS) == 0) {
+		if (power_supply_is_system_supplied() > 0)
+			DRM_DEBUG("pm: AC\n");
 		else
-			is_default = false;
-		DRM_INFO("State %d %s %s\n", i,
-			 pm_state_types[rdev->pm.power_state[i].type],
-			 is_default ? "(default)" : "");
-		if ((rdev->flags & RADEON_IS_PCIE) && !(rdev->flags & RADEON_IS_IGP))
-			DRM_INFO("\t%d PCIE Lanes\n", rdev->pm.power_state[i].non_clock_info.pcie_lanes);
-		DRM_INFO("\t%d Clock Mode(s)\n", rdev->pm.power_state[i].num_clock_modes);
-		for (j = 0; j < rdev->pm.power_state[i].num_clock_modes; j++) {
-			if (rdev->flags & RADEON_IS_IGP)
-				DRM_INFO("\t\t%d engine: %d\n",
-					 j,
-					 rdev->pm.power_state[i].clock_info[j].sclk * 10);
-			else
-				DRM_INFO("\t\t%d engine/memory: %d/%d\n",
-					 j,
-					 rdev->pm.power_state[i].clock_info[j].sclk * 10,
-					 rdev->pm.power_state[i].clock_info[j].mclk * 10);
+			DRM_DEBUG("pm: DC\n");
+
+		if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
+			if (rdev->pm.profile == PM_PROFILE_AUTO) {
+				mutex_lock(&rdev->pm.mutex);
+				radeon_pm_update_profile(rdev);
+				radeon_pm_set_clocks(rdev);
+				mutex_unlock(&rdev->pm.mutex);
+			}
 		}
 	}
+
+	return NOTIFY_OK;
 }
+#endif
 
-static struct radeon_power_state * radeon_pick_power_state(struct radeon_device *rdev,
-							   enum radeon_pm_state_type type)
+static void radeon_pm_update_profile(struct radeon_device *rdev)
 {
-	int i, j;
-	enum radeon_pm_state_type wanted_types[2];
-	int wanted_count;
-
-	switch (type) {
-	case POWER_STATE_TYPE_DEFAULT:
-	default:
-		return rdev->pm.default_power_state;
-	case POWER_STATE_TYPE_POWERSAVE:
-		if (rdev->flags & RADEON_IS_MOBILITY) {
-			wanted_types[0] = POWER_STATE_TYPE_POWERSAVE;
-			wanted_types[1] = POWER_STATE_TYPE_BATTERY;
-			wanted_count = 2;
-		} else {
-			wanted_types[0] = POWER_STATE_TYPE_PERFORMANCE;
-			wanted_count = 1;
-		}
+	switch (rdev->pm.profile) {
+	case PM_PROFILE_DEFAULT:
+		rdev->pm.profile_index = PM_PROFILE_DEFAULT_IDX;
 		break;
-	case POWER_STATE_TYPE_BATTERY:
-		if (rdev->flags & RADEON_IS_MOBILITY) {
-			wanted_types[0] = POWER_STATE_TYPE_BATTERY;
-			wanted_types[1] = POWER_STATE_TYPE_POWERSAVE;
-			wanted_count = 2;
+	case PM_PROFILE_AUTO:
+		if (power_supply_is_system_supplied() > 0) {
+			if (rdev->pm.active_crtc_count > 1)
+				rdev->pm.profile_index = PM_PROFILE_HIGH_MH_IDX;
+			else
+				rdev->pm.profile_index = PM_PROFILE_HIGH_SH_IDX;
 		} else {
-			wanted_types[0] = POWER_STATE_TYPE_PERFORMANCE;
-			wanted_count = 1;
+			if (rdev->pm.active_crtc_count > 1)
+				rdev->pm.profile_index = PM_PROFILE_MID_MH_IDX;
+			else
+				rdev->pm.profile_index = PM_PROFILE_MID_SH_IDX;
 		}
 		break;
-	case POWER_STATE_TYPE_BALANCED:
-	case POWER_STATE_TYPE_PERFORMANCE:
-		wanted_types[0] = type;
-		wanted_count = 1;
+	case PM_PROFILE_LOW:
+		if (rdev->pm.active_crtc_count > 1)
+			rdev->pm.profile_index = PM_PROFILE_LOW_MH_IDX;
+		else
+			rdev->pm.profile_index = PM_PROFILE_LOW_SH_IDX;
 		break;
-	}
-
-	for (i = 0; i < wanted_count; i++) {
-		for (j = 0; j < rdev->pm.num_power_states; j++) {
-			if (rdev->pm.power_state[j].type == wanted_types[i])
-				return &rdev->pm.power_state[j];
-		}
-	}
-
-	return rdev->pm.default_power_state;
-}
-
-static struct radeon_pm_clock_info * radeon_pick_clock_mode(struct radeon_device *rdev,
-							    struct radeon_power_state *power_state,
-							    enum radeon_pm_clock_mode_type type)
-{
-	switch (type) {
-	case POWER_MODE_TYPE_DEFAULT:
-	default:
-		return power_state->default_clock_mode;
-	case POWER_MODE_TYPE_LOW:
-		return &power_state->clock_info[0];
-	case POWER_MODE_TYPE_MID:
-		if (power_state->num_clock_modes > 2)
-			return &power_state->clock_info[1];
+	case PM_PROFILE_MID:
+		if (rdev->pm.active_crtc_count > 1)
+			rdev->pm.profile_index = PM_PROFILE_MID_MH_IDX;
+		else
+			rdev->pm.profile_index = PM_PROFILE_MID_SH_IDX;
+		break;
+	case PM_PROFILE_HIGH:
+		if (rdev->pm.active_crtc_count > 1)
+			rdev->pm.profile_index = PM_PROFILE_HIGH_MH_IDX;
 		else
-			return &power_state->clock_info[0];
+			rdev->pm.profile_index = PM_PROFILE_HIGH_SH_IDX;
 		break;
-	case POWER_MODE_TYPE_HIGH:
-		return &power_state->clock_info[power_state->num_clock_modes - 1];
 	}
 
+	if (rdev->pm.active_crtc_count == 0) {
+		rdev->pm.requested_power_state_index =
+			rdev->pm.profiles[rdev->pm.profile_index].dpms_off_ps_idx;
+		rdev->pm.requested_clock_mode_index =
+			rdev->pm.profiles[rdev->pm.profile_index].dpms_off_cm_idx;
+	} else {
+		rdev->pm.requested_power_state_index =
+			rdev->pm.profiles[rdev->pm.profile_index].dpms_on_ps_idx;
+		rdev->pm.requested_clock_mode_index =
+			rdev->pm.profiles[rdev->pm.profile_index].dpms_on_cm_idx;
+	}
 }
 
-static void radeon_get_power_state(struct radeon_device *rdev,
-				   enum radeon_pm_action action)
+static void radeon_unmap_vram_bos(struct radeon_device *rdev)
 {
-	switch (action) {
-	case PM_ACTION_MINIMUM:
-		rdev->pm.requested_power_state = radeon_pick_power_state(rdev, POWER_STATE_TYPE_BATTERY);
-		rdev->pm.requested_clock_mode =
-			radeon_pick_clock_mode(rdev, rdev->pm.requested_power_state, POWER_MODE_TYPE_LOW);
-		break;
-	case PM_ACTION_DOWNCLOCK:
-		rdev->pm.requested_power_state = radeon_pick_power_state(rdev, POWER_STATE_TYPE_POWERSAVE);
-		rdev->pm.requested_clock_mode =
-			radeon_pick_clock_mode(rdev, rdev->pm.requested_power_state, POWER_MODE_TYPE_MID);
-		break;
-	case PM_ACTION_UPCLOCK:
-		rdev->pm.requested_power_state = radeon_pick_power_state(rdev, POWER_STATE_TYPE_DEFAULT);
-		rdev->pm.requested_clock_mode =
-			radeon_pick_clock_mode(rdev, rdev->pm.requested_power_state, POWER_MODE_TYPE_HIGH);
-		break;
-	case PM_ACTION_NONE:
-	default:
-		DRM_ERROR("Requested mode for not defined action\n");
+	struct radeon_bo *bo, *n;
+
+	if (list_empty(&rdev->gem.objects))
 		return;
+
+	list_for_each_entry_safe(bo, n, &rdev->gem.objects, list) {
+		if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
+			ttm_bo_unmap_virtual(&bo->tbo);
 	}
-	DRM_INFO("Requested: e: %d m: %d p: %d\n",
-		 rdev->pm.requested_clock_mode->sclk,
-		 rdev->pm.requested_clock_mode->mclk,
-		 rdev->pm.requested_power_state->non_clock_info.pcie_lanes);
 }
 
-static inline void radeon_sync_with_vblank(struct radeon_device *rdev)
+static void radeon_sync_with_vblank(struct radeon_device *rdev)
 {
 	if (rdev->pm.active_crtcs) {
 		rdev->pm.vblank_sync = false;
@@ -192,73 +155,359 @@ static inline void radeon_sync_with_vblank(struct radeon_device *rdev)
 
 static void radeon_set_power_state(struct radeon_device *rdev)
 {
-	/* if *_clock_mode are the same, *_power_state are as well */
-	if (rdev->pm.requested_clock_mode == rdev->pm.current_clock_mode)
-		return;
+	u32 sclk, mclk;
+	bool misc_after = false;
 
-	DRM_INFO("Setting: e: %d m: %d p: %d\n",
-		 rdev->pm.requested_clock_mode->sclk,
-		 rdev->pm.requested_clock_mode->mclk,
-		 rdev->pm.requested_power_state->non_clock_info.pcie_lanes);
+	if ((rdev->pm.requested_clock_mode_index == rdev->pm.current_clock_mode_index) &&
+	    (rdev->pm.requested_power_state_index == rdev->pm.current_power_state_index))
+		return;
 
-	/* set pcie lanes */
-	/* TODO */
+	if (radeon_gui_idle(rdev)) {
+		sclk = rdev->pm.power_state[rdev->pm.requested_power_state_index].
+			clock_info[rdev->pm.requested_clock_mode_index].sclk;
+		if (sclk > rdev->clock.default_sclk)
+			sclk = rdev->clock.default_sclk;
 
-	/* set voltage */
-	/* TODO */
+		mclk = rdev->pm.power_state[rdev->pm.requested_power_state_index].
+			clock_info[rdev->pm.requested_clock_mode_index].mclk;
+		if (mclk > rdev->clock.default_mclk)
+			mclk = rdev->clock.default_mclk;
 
-	/* set engine clock */
-	radeon_sync_with_vblank(rdev);
-	radeon_pm_debug_check_in_vbl(rdev, false);
-	radeon_set_engine_clock(rdev, rdev->pm.requested_clock_mode->sclk);
-	radeon_pm_debug_check_in_vbl(rdev, true);
+		/* upvolt before raising clocks, downvolt after lowering clocks */
+		if (sclk < rdev->pm.current_sclk)
+			misc_after = true;
 
-#if 0
-	/* set memory clock */
-	if (rdev->asic->set_memory_clock) {
 		radeon_sync_with_vblank(rdev);
-		radeon_pm_debug_check_in_vbl(rdev, false);
-		radeon_set_memory_clock(rdev, rdev->pm.requested_clock_mode->mclk);
-		radeon_pm_debug_check_in_vbl(rdev, true);
+
+		if (rdev->pm.pm_method == PM_METHOD_DYNPM) {
+			if (!radeon_pm_in_vbl(rdev))
+				return;
+		}
+
+		radeon_pm_prepare(rdev);
+
+		if (!misc_after)
+			/* voltage, pcie lanes, etc.*/
+			radeon_pm_misc(rdev);
+
+		/* set engine clock */
+		if (sclk != rdev->pm.current_sclk) {
+			radeon_pm_debug_check_in_vbl(rdev, false);
+			radeon_set_engine_clock(rdev, sclk);
+			radeon_pm_debug_check_in_vbl(rdev, true);
+			rdev->pm.current_sclk = sclk;
+			DRM_DEBUG("Setting: e: %d\n", sclk);
+		}
+
+		/* set memory clock */
+		if (rdev->asic->set_memory_clock && (mclk != rdev->pm.current_mclk)) {
+			radeon_pm_debug_check_in_vbl(rdev, false);
+			radeon_set_memory_clock(rdev, mclk);
+			radeon_pm_debug_check_in_vbl(rdev, true);
+			rdev->pm.current_mclk = mclk;
+			DRM_DEBUG("Setting: m: %d\n", mclk);
+		}
+
+		if (misc_after)
+			/* voltage, pcie lanes, etc.*/
+			radeon_pm_misc(rdev);
+
+		radeon_pm_finish(rdev);
+
+		rdev->pm.current_power_state_index = rdev->pm.requested_power_state_index;
+		rdev->pm.current_clock_mode_index = rdev->pm.requested_clock_mode_index;
+	} else
+		DRM_DEBUG("pm: GUI not idle!!!\n");
+}
+
+static void radeon_pm_set_clocks(struct radeon_device *rdev)
+{
+	int i;
+
+	mutex_lock(&rdev->ddev->struct_mutex);
+	mutex_lock(&rdev->vram_mutex);
+	mutex_lock(&rdev->cp.mutex);
+
+	/* the gui idle interrupt seems unreliable on pre-R600 chips */
+	if (rdev->family >= CHIP_R600) {
+		if (rdev->irq.installed) {
+			/* wait for GPU idle */
+			rdev->pm.gui_idle = false;
+			rdev->irq.gui_idle = true;
+			radeon_irq_set(rdev);
+			wait_event_interruptible_timeout(
+				rdev->irq.idle_queue, rdev->pm.gui_idle,
+				msecs_to_jiffies(RADEON_WAIT_IDLE_TIMEOUT));
+			rdev->irq.gui_idle = false;
+			radeon_irq_set(rdev);
+		}
+	} else {
+		if (rdev->cp.ready) {
+			struct radeon_fence *fence;
+			radeon_ring_alloc(rdev, 64);
+			radeon_fence_create(rdev, &fence);
+			radeon_fence_emit(rdev, fence);
+			radeon_ring_commit(rdev);
+			radeon_fence_wait(fence, false);
+			radeon_fence_unref(&fence);
+		}
 	}
-#endif
+	radeon_unmap_vram_bos(rdev);
+
+	if (rdev->irq.installed) {
+		for (i = 0; i < rdev->num_crtc; i++) {
+			if (rdev->pm.active_crtcs & (1 << i)) {
+				rdev->pm.req_vblank |= (1 << i);
+				drm_vblank_get(rdev->ddev, i);
+			}
+		}
+	}
+
+	radeon_set_power_state(rdev);
+
+	if (rdev->irq.installed) {
+		for (i = 0; i < rdev->num_crtc; i++) {
+			if (rdev->pm.req_vblank & (1 << i)) {
+				rdev->pm.req_vblank &= ~(1 << i);
+				drm_vblank_put(rdev->ddev, i);
+			}
+		}
+	}
+
+	/* update display watermarks based on new power state */
+	radeon_update_bandwidth_info(rdev);
+	if (rdev->pm.active_crtc_count)
+		radeon_bandwidth_update(rdev);
+
+	rdev->pm.dynpm_planned_action = DYNPM_ACTION_NONE;
+
+	mutex_unlock(&rdev->cp.mutex);
+	mutex_unlock(&rdev->vram_mutex);
+	mutex_unlock(&rdev->ddev->struct_mutex);
+}
+
+static void radeon_pm_print_states(struct radeon_device *rdev)
+{
+	int i, j;
+	struct radeon_power_state *power_state;
+	struct radeon_pm_clock_info *clock_info;
+
+	DRM_DEBUG("%d Power State(s)\n", rdev->pm.num_power_states);
+	for (i = 0; i < rdev->pm.num_power_states; i++) {
+		power_state = &rdev->pm.power_state[i];
+		DRM_DEBUG("State %d: %s\n", i,
+			radeon_pm_state_type_name[power_state->type]);
+		if (i == rdev->pm.default_power_state_index)
+			DRM_DEBUG("\tDefault");
+		if ((rdev->flags & RADEON_IS_PCIE) && !(rdev->flags & RADEON_IS_IGP))
+			DRM_DEBUG("\t%d PCIE Lanes\n", power_state->pcie_lanes);
+		if (power_state->flags & RADEON_PM_STATE_SINGLE_DISPLAY_ONLY)
+			DRM_DEBUG("\tSingle display only\n");
+		DRM_DEBUG("\t%d Clock Mode(s)\n", power_state->num_clock_modes);
+		for (j = 0; j < power_state->num_clock_modes; j++) {
+			clock_info = &(power_state->clock_info[j]);
+			if (rdev->flags & RADEON_IS_IGP)
+				DRM_DEBUG("\t\t%d e: %d%s\n",
+					j,
+					clock_info->sclk * 10,
+					clock_info->flags & RADEON_PM_MODE_NO_DISPLAY ? "\tNo display only" : "");
+			else
+				DRM_DEBUG("\t\t%d e: %d\tm: %d\tv: %d%s\n",
+					j,
+					clock_info->sclk * 10,
+					clock_info->mclk * 10,
+					clock_info->voltage.voltage,
+					clock_info->flags & RADEON_PM_MODE_NO_DISPLAY ? "\tNo display only" : "");
+		}
+	}
+}
+
+static ssize_t radeon_get_pm_profile(struct device *dev,
+				     struct device_attribute *attr,
+				     char *buf)
+{
+	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+	struct radeon_device *rdev = ddev->dev_private;
+	int cp = rdev->pm.profile;
+
+	return snprintf(buf, PAGE_SIZE, "%s\n",
+			(cp == PM_PROFILE_AUTO) ? "auto" :
+			(cp == PM_PROFILE_LOW) ? "low" :
+			(cp == PM_PROFILE_MID) ? "mid" :
+			(cp == PM_PROFILE_HIGH) ? "high" : "default");
+}
+
+static ssize_t radeon_set_pm_profile(struct device *dev,
+				     struct device_attribute *attr,
+				     const char *buf,
+				     size_t count)
+{
+	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+	struct radeon_device *rdev = ddev->dev_private;
+
+	mutex_lock(&rdev->pm.mutex);
+	if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
+		if (strncmp("default", buf, strlen("default")) == 0)
+			rdev->pm.profile = PM_PROFILE_DEFAULT;
+		else if (strncmp("auto", buf, strlen("auto")) == 0)
+			rdev->pm.profile = PM_PROFILE_AUTO;
+		else if (strncmp("low", buf, strlen("low")) == 0)
+			rdev->pm.profile = PM_PROFILE_LOW;
+		else if (strncmp("mid", buf, strlen("mid")) == 0)
+			rdev->pm.profile = PM_PROFILE_MID;
+		else if (strncmp("high", buf, strlen("high")) == 0)
+			rdev->pm.profile = PM_PROFILE_HIGH;
+		else {
+			DRM_ERROR("invalid power profile!\n");
+			goto fail;
+		}
+		radeon_pm_update_profile(rdev);
+		radeon_pm_set_clocks(rdev);
+	}
+fail:
+	mutex_unlock(&rdev->pm.mutex);
+
+	return count;
+}
+
+static ssize_t radeon_get_pm_method(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+	struct radeon_device *rdev = ddev->dev_private;
+	int pm = rdev->pm.pm_method;
+
+	return snprintf(buf, PAGE_SIZE, "%s\n",
+			(pm == PM_METHOD_DYNPM) ? "dynpm" : "profile");
+}
+
+static ssize_t radeon_set_pm_method(struct device *dev,
+				    struct device_attribute *attr,
+				    const char *buf,
+				    size_t count)
+{
+	struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+	struct radeon_device *rdev = ddev->dev_private;
+
+	if (strncmp("dynpm", buf, strlen("dynpm")) == 0) {
+		mutex_lock(&rdev->pm.mutex);
+		rdev->pm.pm_method = PM_METHOD_DYNPM;
+		rdev->pm.dynpm_state = DYNPM_STATE_PAUSED;
+		rdev->pm.dynpm_planned_action = DYNPM_ACTION_DEFAULT;
+		mutex_unlock(&rdev->pm.mutex);
+	} else if (strncmp("profile", buf, strlen("profile")) == 0) {
+		mutex_lock(&rdev->pm.mutex);
+		rdev->pm.pm_method = PM_METHOD_PROFILE;
+		/* disable dynpm */
+		rdev->pm.dynpm_state = DYNPM_STATE_DISABLED;
+		rdev->pm.dynpm_planned_action = DYNPM_ACTION_NONE;
+		cancel_delayed_work(&rdev->pm.dynpm_idle_work);
+		mutex_unlock(&rdev->pm.mutex);
+	} else {
+		DRM_ERROR("invalid power method!\n");
+		goto fail;
+	}
+	radeon_pm_compute_clocks(rdev);
+fail:
+	return count;
+}
+
+static DEVICE_ATTR(power_profile, S_IRUGO | S_IWUSR, radeon_get_pm_profile, radeon_set_pm_profile);
+static DEVICE_ATTR(power_method, S_IRUGO | S_IWUSR, radeon_get_pm_method, radeon_set_pm_method);
 
-	rdev->pm.current_power_state = rdev->pm.requested_power_state;
-	rdev->pm.current_clock_mode = rdev->pm.requested_clock_mode;
+void radeon_pm_suspend(struct radeon_device *rdev)
+{
+	mutex_lock(&rdev->pm.mutex);
+	cancel_delayed_work(&rdev->pm.dynpm_idle_work);
+	mutex_unlock(&rdev->pm.mutex);
+}
+
+void radeon_pm_resume(struct radeon_device *rdev)
+{
+	/* asic init will reset the default power state */
+	mutex_lock(&rdev->pm.mutex);
+	rdev->pm.current_power_state_index = rdev->pm.default_power_state_index;
+	rdev->pm.current_clock_mode_index = 0;
+	rdev->pm.current_sclk = rdev->clock.default_sclk;
+	rdev->pm.current_mclk = rdev->clock.default_mclk;
+	rdev->pm.current_vddc = rdev->pm.power_state[rdev->pm.default_power_state_index].clock_info[0].voltage.voltage;
+	mutex_unlock(&rdev->pm.mutex);
+	radeon_pm_compute_clocks(rdev);
 }
 
 int radeon_pm_init(struct radeon_device *rdev)
 {
-	rdev->pm.state = PM_STATE_DISABLED;
-	rdev->pm.planned_action = PM_ACTION_NONE;
-	rdev->pm.downclocked = false;
+	int ret;
+	/* default to profile method */
+	rdev->pm.pm_method = PM_METHOD_PROFILE;
+	rdev->pm.profile = PM_PROFILE_DEFAULT;
+	rdev->pm.dynpm_state = DYNPM_STATE_DISABLED;
+	rdev->pm.dynpm_planned_action = DYNPM_ACTION_NONE;
+	rdev->pm.dynpm_can_upclock = true;
+	rdev->pm.dynpm_can_downclock = true;
+	rdev->pm.current_sclk = rdev->clock.default_sclk;
+	rdev->pm.current_mclk = rdev->clock.default_mclk;
 
 	if (rdev->bios) {
 		if (rdev->is_atom_bios)
 			radeon_atombios_get_power_modes(rdev);
 		else
 			radeon_combios_get_power_modes(rdev);
-		radeon_print_power_mode_info(rdev);
+		radeon_pm_print_states(rdev);
+		radeon_pm_init_profile(rdev);
 	}
 
-	if (radeon_debugfs_pm_init(rdev)) {
-		DRM_ERROR("Failed to register debugfs file for PM!\n");
-	}
+	if (rdev->pm.num_power_states > 1) {
+		/* where's the best place to put these? */
+		ret = device_create_file(rdev->dev, &dev_attr_power_profile);
+		if (ret)
+			DRM_ERROR("failed to create device file for power profile\n");
+		ret = device_create_file(rdev->dev, &dev_attr_power_method);
+		if (ret)
+			DRM_ERROR("failed to create device file for power method\n");
+
+#ifdef CONFIG_ACPI
+		rdev->acpi_nb.notifier_call = radeon_acpi_event;
+		register_acpi_notifier(&rdev->acpi_nb);
+#endif
+		INIT_DELAYED_WORK(&rdev->pm.dynpm_idle_work, radeon_dynpm_idle_work_handler);
 
-	INIT_DELAYED_WORK(&rdev->pm.idle_work, radeon_pm_idle_work_handler);
+		if (radeon_debugfs_pm_init(rdev)) {
+			DRM_ERROR("Failed to register debugfs file for PM!\n");
+		}
 
-	if (radeon_dynpm != -1 && radeon_dynpm) {
-		rdev->pm.state = PM_STATE_PAUSED;
-		DRM_INFO("radeon: dynamic power management enabled\n");
+		DRM_INFO("radeon: power management initialized\n");
 	}
 
-	DRM_INFO("radeon: power management initialized\n");
-
 	return 0;
 }
 
 void radeon_pm_fini(struct radeon_device *rdev)
 {
+	if (rdev->pm.num_power_states > 1) {
+		mutex_lock(&rdev->pm.mutex);
+		if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
+			rdev->pm.profile = PM_PROFILE_DEFAULT;
+			radeon_pm_update_profile(rdev);
+			radeon_pm_set_clocks(rdev);
+		} else if (rdev->pm.pm_method == PM_METHOD_DYNPM) {
+			/* cancel work */
+			cancel_delayed_work_sync(&rdev->pm.dynpm_idle_work);
+			/* reset default clocks */
+			rdev->pm.dynpm_state = DYNPM_STATE_DISABLED;
+			rdev->pm.dynpm_planned_action = DYNPM_ACTION_DEFAULT;
+			radeon_pm_set_clocks(rdev);
+		}
+		mutex_unlock(&rdev->pm.mutex);
+
+		device_remove_file(rdev->dev, &dev_attr_power_profile);
+		device_remove_file(rdev->dev, &dev_attr_power_method);
+#ifdef CONFIG_ACPI
+		unregister_acpi_notifier(&rdev->acpi_nb);
+#endif
+	}
+
 	if (rdev->pm.i2c_bus)
 		radeon_i2c_destroy(rdev->pm.i2c_bus);
 }
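
With the two device attributes registered above, power management becomes
runtime-tunable from userspace. Assuming the usual sysfs layout, writing
"dynpm" to /sys/class/drm/card0/device/power_method enables dynamic
reclocking and "profile" switches back to the static method, while writing
one of default/auto/low/mid/high to power_profile in the same directory
selects the profile; as radeon_set_pm_profile() above shows, a profile
write only takes effect while power_method is "profile".
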
@@ -266,146 +515,167 @@ void radeon_pm_fini(struct radeon_device *rdev)
 void radeon_pm_compute_clocks(struct radeon_device *rdev)
 {
 	struct drm_device *ddev = rdev->ddev;
-	struct drm_connector *connector;
+	struct drm_crtc *crtc;
 	struct radeon_crtc *radeon_crtc;
-	int count = 0;
 
-	if (rdev->pm.state == PM_STATE_DISABLED)
+	if (rdev->pm.num_power_states < 2)
 		return;
 
 	mutex_lock(&rdev->pm.mutex);
 
 	rdev->pm.active_crtcs = 0;
-	list_for_each_entry(connector,
-		&ddev->mode_config.connector_list, head) {
-		if (connector->encoder &&
-		    connector->encoder->crtc &&
-		    connector->dpms != DRM_MODE_DPMS_OFF) {
-			radeon_crtc = to_radeon_crtc(connector->encoder->crtc);
+	rdev->pm.active_crtc_count = 0;
+	list_for_each_entry(crtc,
+		&ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
 			rdev->pm.active_crtcs |= (1 << radeon_crtc->crtc_id);
-			++count;
+			rdev->pm.active_crtc_count++;
 		}
 	}
 
-	if (count > 1) {
-		if (rdev->pm.state == PM_STATE_ACTIVE) {
-			cancel_delayed_work(&rdev->pm.idle_work);
-
-			rdev->pm.state = PM_STATE_PAUSED;
-			rdev->pm.planned_action = PM_ACTION_UPCLOCK;
-			if (rdev->pm.downclocked)
-				radeon_pm_set_clocks(rdev);
-
-			DRM_DEBUG("radeon: dynamic power management deactivated\n");
-		}
-	} else if (count == 1) {
-		/* TODO: Increase clocks if needed for current mode */
-
-		if (rdev->pm.state == PM_STATE_MINIMUM) {
-			rdev->pm.state = PM_STATE_ACTIVE;
-			rdev->pm.planned_action = PM_ACTION_UPCLOCK;
-			radeon_pm_set_clocks(rdev);
-
-			queue_delayed_work(rdev->wq, &rdev->pm.idle_work,
-				msecs_to_jiffies(RADEON_IDLE_LOOP_MS));
-		}
-		else if (rdev->pm.state == PM_STATE_PAUSED) {
-			rdev->pm.state = PM_STATE_ACTIVE;
-			queue_delayed_work(rdev->wq, &rdev->pm.idle_work,
-				msecs_to_jiffies(RADEON_IDLE_LOOP_MS));
-			DRM_DEBUG("radeon: dynamic power management activated\n");
-		}
-	}
-	else { /* count == 0 */
-		if (rdev->pm.state != PM_STATE_MINIMUM) {
-			cancel_delayed_work(&rdev->pm.idle_work);
-
-			rdev->pm.state = PM_STATE_MINIMUM;
-			rdev->pm.planned_action = PM_ACTION_MINIMUM;
-			radeon_pm_set_clocks(rdev);
+	if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
+		radeon_pm_update_profile(rdev);
+		radeon_pm_set_clocks(rdev);
+	} else if (rdev->pm.pm_method == PM_METHOD_DYNPM) {
+		if (rdev->pm.dynpm_state != DYNPM_STATE_DISABLED) {
+			if (rdev->pm.active_crtc_count > 1) {
+				if (rdev->pm.dynpm_state == DYNPM_STATE_ACTIVE) {
+					cancel_delayed_work(&rdev->pm.dynpm_idle_work);
+
+					rdev->pm.dynpm_state = DYNPM_STATE_PAUSED;
+					rdev->pm.dynpm_planned_action = DYNPM_ACTION_DEFAULT;
+					radeon_pm_get_dynpm_state(rdev);
+					radeon_pm_set_clocks(rdev);
+
+					DRM_DEBUG("radeon: dynamic power management deactivated\n");
+				}
+			} else if (rdev->pm.active_crtc_count == 1) {
+				/* TODO: Increase clocks if needed for current mode */
+
+				if (rdev->pm.dynpm_state == DYNPM_STATE_MINIMUM) {
+					rdev->pm.dynpm_state = DYNPM_STATE_ACTIVE;
+					rdev->pm.dynpm_planned_action = DYNPM_ACTION_UPCLOCK;
+					radeon_pm_get_dynpm_state(rdev);
+					radeon_pm_set_clocks(rdev);
+
+					queue_delayed_work(rdev->wq, &rdev->pm.dynpm_idle_work,
+							   msecs_to_jiffies(RADEON_IDLE_LOOP_MS));
+				} else if (rdev->pm.dynpm_state == DYNPM_STATE_PAUSED) {
+					rdev->pm.dynpm_state = DYNPM_STATE_ACTIVE;
+					queue_delayed_work(rdev->wq, &rdev->pm.dynpm_idle_work,
+							   msecs_to_jiffies(RADEON_IDLE_LOOP_MS));
+					DRM_DEBUG("radeon: dynamic power management activated\n");
+				}
+			} else { /* count == 0 */
+				if (rdev->pm.dynpm_state != DYNPM_STATE_MINIMUM) {
+					cancel_delayed_work(&rdev->pm.dynpm_idle_work);
+
+					rdev->pm.dynpm_state = DYNPM_STATE_MINIMUM;
+					rdev->pm.dynpm_planned_action = DYNPM_ACTION_MINIMUM;
+					radeon_pm_get_dynpm_state(rdev);
+					radeon_pm_set_clocks(rdev);
+				}
+			}
 		}
 	}
 
 	mutex_unlock(&rdev->pm.mutex);
 }
 
-static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish)
+static bool radeon_pm_in_vbl(struct radeon_device *rdev)
 {
-	u32 stat_crtc1 = 0, stat_crtc2 = 0;
+	u32 stat_crtc = 0, vbl = 0, position = 0;
 	bool in_vbl = true;
 
-	if (ASIC_IS_AVIVO(rdev)) {
+	if (ASIC_IS_DCE4(rdev)) {
 		if (rdev->pm.active_crtcs & (1 << 0)) {
-			stat_crtc1 = RREG32(D1CRTC_STATUS);
-			if (!(stat_crtc1 & 1))
+			vbl = RREG32(EVERGREEN_CRTC_V_BLANK_START_END +
+				     EVERGREEN_CRTC0_REGISTER_OFFSET) & 0xfff;
+			position = RREG32(EVERGREEN_CRTC_STATUS_POSITION +
+					  EVERGREEN_CRTC0_REGISTER_OFFSET) & 0xfff;
+		}
+		if (rdev->pm.active_crtcs & (1 << 1)) {
+			vbl = RREG32(EVERGREEN_CRTC_V_BLANK_START_END +
+				     EVERGREEN_CRTC1_REGISTER_OFFSET) & 0xfff;
+			position = RREG32(EVERGREEN_CRTC_STATUS_POSITION +
+					  EVERGREEN_CRTC1_REGISTER_OFFSET) & 0xfff;
+		}
+		if (rdev->pm.active_crtcs & (1 << 2)) {
+			vbl = RREG32(EVERGREEN_CRTC_V_BLANK_START_END +
+				     EVERGREEN_CRTC2_REGISTER_OFFSET) & 0xfff;
+			position = RREG32(EVERGREEN_CRTC_STATUS_POSITION +
+					  EVERGREEN_CRTC2_REGISTER_OFFSET) & 0xfff;
+		}
+		if (rdev->pm.active_crtcs & (1 << 3)) {
+			vbl = RREG32(EVERGREEN_CRTC_V_BLANK_START_END +
+				     EVERGREEN_CRTC3_REGISTER_OFFSET) & 0xfff;
+			position = RREG32(EVERGREEN_CRTC_STATUS_POSITION +
+					  EVERGREEN_CRTC3_REGISTER_OFFSET) & 0xfff;
+		}
+		if (rdev->pm.active_crtcs & (1 << 4)) {
+			vbl = RREG32(EVERGREEN_CRTC_V_BLANK_START_END +
+				     EVERGREEN_CRTC4_REGISTER_OFFSET) & 0xfff;
+			position = RREG32(EVERGREEN_CRTC_STATUS_POSITION +
+					  EVERGREEN_CRTC4_REGISTER_OFFSET) & 0xfff;
+		}
+		if (rdev->pm.active_crtcs & (1 << 5)) {
+			vbl = RREG32(EVERGREEN_CRTC_V_BLANK_START_END +
+				     EVERGREEN_CRTC5_REGISTER_OFFSET) & 0xfff;
+			position = RREG32(EVERGREEN_CRTC_STATUS_POSITION +
+					  EVERGREEN_CRTC5_REGISTER_OFFSET) & 0xfff;
+		}
+	} else if (ASIC_IS_AVIVO(rdev)) {
+		if (rdev->pm.active_crtcs & (1 << 0)) {
+			vbl = RREG32(AVIVO_D1CRTC_V_BLANK_START_END) & 0xfff;
+			position = RREG32(AVIVO_D1CRTC_STATUS_POSITION) & 0xfff;
+		}
+		if (rdev->pm.active_crtcs & (1 << 1)) {
+			vbl = RREG32(AVIVO_D2CRTC_V_BLANK_START_END) & 0xfff;
+			position = RREG32(AVIVO_D2CRTC_STATUS_POSITION) & 0xfff;
+		}
+		if (position < vbl && position > 1)
+			in_vbl = false;
+	} else {
+		if (rdev->pm.active_crtcs & (1 << 0)) {
+			stat_crtc = RREG32(RADEON_CRTC_STATUS);
+			if (!(stat_crtc & 1))
 				in_vbl = false;
 		}
 		if (rdev->pm.active_crtcs & (1 << 1)) {
-			stat_crtc2 = RREG32(D2CRTC_STATUS);
-			if (!(stat_crtc2 & 1))
+			stat_crtc = RREG32(RADEON_CRTC2_STATUS);
+			if (!(stat_crtc & 1))
 				in_vbl = false;
 		}
 	}
-	if (in_vbl == false)
-		DRM_INFO("not in vbl for pm change %08x %08x at %s\n", stat_crtc1,
-			 stat_crtc2, finish ? "exit" : "entry");
-	return in_vbl;
-}
-static void radeon_pm_set_clocks_locked(struct radeon_device *rdev)
-{
-	/*radeon_fence_wait_last(rdev);*/
-	switch (rdev->pm.planned_action) {
-	case PM_ACTION_UPCLOCK:
-		rdev->pm.downclocked = false;
-		break;
-	case PM_ACTION_DOWNCLOCK:
-		rdev->pm.downclocked = true;
-		break;
-	case PM_ACTION_MINIMUM:
-		break;
-	case PM_ACTION_NONE:
-		DRM_ERROR("%s: PM_ACTION_NONE\n", __func__);
-		break;
-	}
 
-	radeon_set_power_state(rdev);
-	rdev->pm.planned_action = PM_ACTION_NONE;
+	if (position < vbl && position > 1)
+		in_vbl = false;
+
+	return in_vbl;
 }
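
On DCE4 and AVIVO parts the check above reads the vertical-blank start and the current scanline position for each active CRTC, and treats scanlines strictly between 1 and the blank start as active scanout. A minimal sketch of the predicate, assuming vbl and position carry the low 12 bits of the *_V_BLANK_START_END and *_STATUS_POSITION registers as masked above:

static bool scanline_in_vblank(u32 vbl, u32 position)
{
	/* lines 2 .. vbl-1 are active scanout; anything else is blanking */
	return !(position < vbl && position > 1);
}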
 
-static void radeon_pm_set_clocks(struct radeon_device *rdev)
+static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish)
 {
-	radeon_get_power_state(rdev, rdev->pm.planned_action);
-	mutex_lock(&rdev->cp.mutex);
+	u32 stat_crtc = 0;
+	bool in_vbl = radeon_pm_in_vbl(rdev);
 
-	if (rdev->pm.active_crtcs & (1 << 0)) {
-		rdev->pm.req_vblank |= (1 << 0);
-		drm_vblank_get(rdev->ddev, 0);
-	}
-	if (rdev->pm.active_crtcs & (1 << 1)) {
-		rdev->pm.req_vblank |= (1 << 1);
-		drm_vblank_get(rdev->ddev, 1);
-	}
-	radeon_pm_set_clocks_locked(rdev);
-	if (rdev->pm.req_vblank & (1 << 0)) {
-		rdev->pm.req_vblank &= ~(1 << 0);
-		drm_vblank_put(rdev->ddev, 0);
-	}
-	if (rdev->pm.req_vblank & (1 << 1)) {
-		rdev->pm.req_vblank &= ~(1 << 1);
-		drm_vblank_put(rdev->ddev, 1);
-	}
-
-	mutex_unlock(&rdev->cp.mutex);
+	if (in_vbl == false)
+		DRM_DEBUG("not in vbl for pm change %08x at %s\n", stat_crtc,
+			 finish ? "exit" : "entry");
+	return in_vbl;
 }
 
-static void radeon_pm_idle_work_handler(struct work_struct *work)
+static void radeon_dynpm_idle_work_handler(struct work_struct *work)
 {
 	struct radeon_device *rdev;
+	int resched;
 	rdev = container_of(work, struct radeon_device,
-				pm.idle_work.work);
+				pm.dynpm_idle_work.work);
 
+	resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
 	mutex_lock(&rdev->pm.mutex);
-	if (rdev->pm.state == PM_STATE_ACTIVE) {
+	if (rdev->pm.dynpm_state == DYNPM_STATE_ACTIVE) {
 		unsigned long irq_flags;
 		int not_processed = 0;
 
@@ -421,35 +691,40 @@ static void radeon_pm_idle_work_handler(struct work_struct *work)
 		read_unlock_irqrestore(&rdev->fence_drv.lock, irq_flags);
 
 		if (not_processed >= 3) { /* should upclock */
-			if (rdev->pm.planned_action == PM_ACTION_DOWNCLOCK) {
-				rdev->pm.planned_action = PM_ACTION_NONE;
-			} else if (rdev->pm.planned_action == PM_ACTION_NONE &&
-				rdev->pm.downclocked) {
-				rdev->pm.planned_action =
-					PM_ACTION_UPCLOCK;
-				rdev->pm.action_timeout = jiffies +
+			if (rdev->pm.dynpm_planned_action == DYNPM_ACTION_DOWNCLOCK) {
+				rdev->pm.dynpm_planned_action = DYNPM_ACTION_NONE;
+			} else if (rdev->pm.dynpm_planned_action == DYNPM_ACTION_NONE &&
+				   rdev->pm.dynpm_can_upclock) {
+				rdev->pm.dynpm_planned_action =
+					DYNPM_ACTION_UPCLOCK;
+				rdev->pm.dynpm_action_timeout = jiffies +
 				msecs_to_jiffies(RADEON_RECLOCK_DELAY_MS);
 			}
 		} else if (not_processed == 0) { /* should downclock */
-			if (rdev->pm.planned_action == PM_ACTION_UPCLOCK) {
-				rdev->pm.planned_action = PM_ACTION_NONE;
-			} else if (rdev->pm.planned_action == PM_ACTION_NONE &&
-				!rdev->pm.downclocked) {
-				rdev->pm.planned_action =
-					PM_ACTION_DOWNCLOCK;
-				rdev->pm.action_timeout = jiffies +
+			if (rdev->pm.dynpm_planned_action == DYNPM_ACTION_UPCLOCK) {
+				rdev->pm.dynpm_planned_action = DYNPM_ACTION_NONE;
+			} else if (rdev->pm.dynpm_planned_action == DYNPM_ACTION_NONE &&
+				   rdev->pm.dynpm_can_downclock) {
+				rdev->pm.dynpm_planned_action =
+					DYNPM_ACTION_DOWNCLOCK;
+				rdev->pm.dynpm_action_timeout = jiffies +
 				msecs_to_jiffies(RADEON_RECLOCK_DELAY_MS);
 			}
 		}
 
-		if (rdev->pm.planned_action != PM_ACTION_NONE &&
-		    jiffies > rdev->pm.action_timeout) {
+		/* Note, radeon_pm_set_clocks is called with static_switch set
+		 * to false since we want to wait for vbl to avoid flicker.
+		 */
+		if (rdev->pm.dynpm_planned_action != DYNPM_ACTION_NONE &&
+		    jiffies > rdev->pm.dynpm_action_timeout) {
+			radeon_pm_get_dynpm_state(rdev);
 			radeon_pm_set_clocks(rdev);
 		}
 	}
 	mutex_unlock(&rdev->pm.mutex);
+	ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
 
-	queue_delayed_work(rdev->wq, &rdev->pm.idle_work,
+	queue_delayed_work(rdev->wq, &rdev->pm.dynpm_idle_work,
 					msecs_to_jiffies(RADEON_IDLE_LOOP_MS));
 }
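
The handler above applies a simple hysteresis: three or more unprocessed fences mean the GPU is loaded (plan an upclock), zero means it is idle (plan a downclock), and a plan only takes effect once RADEON_RECLOCK_DELAY_MS has elapsed. A condensed sketch of that decision, assuming the dynpm fields from radeon.h and leaving the fence-list walk out:

static void dynpm_plan(struct radeon_device *rdev, int not_processed)
{
	if (not_processed >= 3 && rdev->pm.dynpm_can_upclock)
		rdev->pm.dynpm_planned_action = DYNPM_ACTION_UPCLOCK;
	else if (not_processed == 0 && rdev->pm.dynpm_can_downclock)
		rdev->pm.dynpm_planned_action = DYNPM_ACTION_DOWNCLOCK;
	else
		return;
	/* debounce: reclock only if the load level persists */
	rdev->pm.dynpm_action_timeout = jiffies +
		msecs_to_jiffies(RADEON_RECLOCK_DELAY_MS);
}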
 
@@ -464,12 +739,13 @@ static int radeon_debugfs_pm_info(struct seq_file *m, void *data)
 	struct drm_device *dev = node->minor->dev;
 	struct radeon_device *rdev = dev->dev_private;
 
-	seq_printf(m, "state: %s\n", pm_state_names[rdev->pm.state]);
 	seq_printf(m, "default engine clock: %u0 kHz\n", rdev->clock.default_sclk);
 	seq_printf(m, "current engine clock: %u0 kHz\n", radeon_get_engine_clock(rdev));
 	seq_printf(m, "default memory clock: %u0 kHz\n", rdev->clock.default_mclk);
 	if (rdev->asic->get_memory_clock)
 		seq_printf(m, "current memory clock: %u0 kHz\n", radeon_get_memory_clock(rdev));
+	if (rdev->pm.current_vddc)
+		seq_printf(m, "voltage: %u mV\n", rdev->pm.current_vddc);
 	if (rdev->asic->get_pcie_lanes)
 		seq_printf(m, "PCIE lanes: %d\n", radeon_get_pcie_lanes(rdev));
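
The "%u0 kHz" format strings above rely on radeon storing clocks in units of 10 kHz: printing the raw value and appending a literal '0' scales it to kHz. An equivalent, more explicit form (illustrative only, not part of the patch):

	seq_printf(m, "default engine clock: %u kHz\n",
		   rdev->clock.default_sclk * 10);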
 
diff --git a/drivers/gpu/drm/radeon/radeon_reg.h b/drivers/gpu/drm/radeon/radeon_reg.h
index eabbc9c..c332f46 100644
--- a/drivers/gpu/drm/radeon/radeon_reg.h
+++ b/drivers/gpu/drm/radeon/radeon_reg.h
@@ -553,7 +553,6 @@
 #       define RADEON_CRTC_CRNT_VLINE_MASK  (0x7ff << 16)
 #define RADEON_CRTC2_CRNT_FRAME             0x0314
 #define RADEON_CRTC2_GUI_TRIG_VLINE         0x0318
-#define RADEON_CRTC2_STATUS                 0x03fc
 #define RADEON_CRTC2_VLINE_CRNT_VLINE       0x0310
 #define RADEON_CRTC8_DATA                   0x03d5 /* VGA, 0x3b5 */
 #define RADEON_CRTC8_IDX                    0x03d4 /* VGA, 0x3b4 */
@@ -995,6 +994,7 @@
 #	define RADEON_FP_DETECT_MASK		(1 << 4)
 #	define RADEON_CRTC2_VBLANK_MASK		(1 << 9)
 #	define RADEON_FP2_DETECT_MASK		(1 << 10)
+#	define RADEON_GUI_IDLE_MASK		(1 << 19)
 #	define RADEON_SW_INT_ENABLE		(1 << 25)
 #define RADEON_GEN_INT_STATUS               0x0044
 #	define AVIVO_DISPLAY_INT_STATUS		(1 << 0)
@@ -1006,6 +1006,8 @@
 #	define RADEON_CRTC2_VBLANK_STAT_ACK	(1 << 9)
 #	define RADEON_FP2_DETECT_STAT		(1 << 10)
 #	define RADEON_FP2_DETECT_STAT_ACK	(1 << 10)
+#	define RADEON_GUI_IDLE_STAT		(1 << 19)
+#	define RADEON_GUI_IDLE_STAT_ACK		(1 << 19)
 #	define RADEON_SW_INT_FIRE		(1 << 26)
 #	define RADEON_SW_INT_TEST		(1 << 25)
 #	define RADEON_SW_INT_TEST_ACK		(1 << 25)
diff --git a/drivers/gpu/drm/radeon/radeon_ring.c b/drivers/gpu/drm/radeon/radeon_ring.c
index f6e1e8d..261e98a 100644
--- a/drivers/gpu/drm/radeon/radeon_ring.c
+++ b/drivers/gpu/drm/radeon/radeon_ring.c
@@ -219,24 +219,26 @@ int radeon_ib_pool_init(struct radeon_device *rdev)
 void radeon_ib_pool_fini(struct radeon_device *rdev)
 {
 	int r;
+	struct radeon_bo *robj;
 
 	if (!rdev->ib_pool.ready) {
 		return;
 	}
 	mutex_lock(&rdev->ib_pool.mutex);
 	radeon_ib_bogus_cleanup(rdev);
+	robj = rdev->ib_pool.robj;
+	rdev->ib_pool.robj = NULL;
+	mutex_unlock(&rdev->ib_pool.mutex);
 
-	if (rdev->ib_pool.robj) {
-		r = radeon_bo_reserve(rdev->ib_pool.robj, false);
+	if (robj) {
+		r = radeon_bo_reserve(robj, false);
 		if (likely(r == 0)) {
-			radeon_bo_kunmap(rdev->ib_pool.robj);
-			radeon_bo_unpin(rdev->ib_pool.robj);
-			radeon_bo_unreserve(rdev->ib_pool.robj);
+			radeon_bo_kunmap(robj);
+			radeon_bo_unpin(robj);
+			radeon_bo_unreserve(robj);
 		}
-		radeon_bo_unref(&rdev->ib_pool.robj);
-		rdev->ib_pool.robj = NULL;
+		radeon_bo_unref(&robj);
 	}
-	mutex_unlock(&rdev->ib_pool.mutex);
 }
 
 
@@ -258,31 +260,41 @@ void radeon_ring_free_size(struct radeon_device *rdev)
 	}
 }
 
-int radeon_ring_lock(struct radeon_device *rdev, unsigned ndw)
+int radeon_ring_alloc(struct radeon_device *rdev, unsigned ndw)
 {
 	int r;
 
 	/* Align requested size with padding so unlock_commit can
 	 * pad safely */
 	ndw = (ndw + rdev->cp.align_mask) & ~rdev->cp.align_mask;
-	mutex_lock(&rdev->cp.mutex);
 	while (ndw > (rdev->cp.ring_free_dw - 1)) {
 		radeon_ring_free_size(rdev);
 		if (ndw < rdev->cp.ring_free_dw) {
 			break;
 		}
 		r = radeon_fence_wait_next(rdev);
-		if (r) {
-			mutex_unlock(&rdev->cp.mutex);
+		if (r)
 			return r;
-		}
 	}
 	rdev->cp.count_dw = ndw;
 	rdev->cp.wptr_old = rdev->cp.wptr;
 	return 0;
 }
 
-void radeon_ring_unlock_commit(struct radeon_device *rdev)
+int radeon_ring_lock(struct radeon_device *rdev, unsigned ndw)
+{
+	int r;
+
+	mutex_lock(&rdev->cp.mutex);
+	r = radeon_ring_alloc(rdev, ndw);
+	if (r) {
+		mutex_unlock(&rdev->cp.mutex);
+		return r;
+	}
+	return 0;
+}
+
+void radeon_ring_commit(struct radeon_device *rdev)
 {
 	unsigned count_dw_pad;
 	unsigned i;
@@ -295,6 +307,11 @@ void radeon_ring_unlock_commit(struct radeon_device *rdev)
 	}
 	DRM_MEMORYBARRIER();
 	radeon_cp_commit(rdev);
+}
+
+void radeon_ring_unlock_commit(struct radeon_device *rdev)
+{
+	radeon_ring_commit(rdev);
 	mutex_unlock(&rdev->cp.mutex);
 }
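
After the split above, radeon_ring_alloc() reserves ring space without locking, radeon_ring_lock() wraps it in cp.mutex, and radeon_ring_commit()/radeon_ring_unlock_commit() mirror that pairing on the way out. A sketch of a typical caller, assuming radeon_ring_write() and the PACKET0/WAIT_UNTIL definitions from the radeon headers; the two-dword packet is only an example:

int example_emit_wait_idle(struct radeon_device *rdev)
{
	int r;

	r = radeon_ring_lock(rdev, 2);	/* takes cp.mutex, reserves 2 dwords */
	if (r)
		return r;
	radeon_ring_write(rdev, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(rdev, RADEON_WAIT_2D_IDLECLEAN);
	radeon_ring_unlock_commit(rdev);	/* pads, bumps wptr, drops cp.mutex */
	return 0;
}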
 
@@ -344,20 +361,23 @@ int radeon_ring_init(struct radeon_device *rdev, unsigned ring_size)
 void radeon_ring_fini(struct radeon_device *rdev)
 {
 	int r;
+	struct radeon_bo *ring_obj;
 
 	mutex_lock(&rdev->cp.mutex);
-	if (rdev->cp.ring_obj) {
-		r = radeon_bo_reserve(rdev->cp.ring_obj, false);
+	ring_obj = rdev->cp.ring_obj;
+	rdev->cp.ring = NULL;
+	rdev->cp.ring_obj = NULL;
+	mutex_unlock(&rdev->cp.mutex);
+
+	if (ring_obj) {
+		r = radeon_bo_reserve(ring_obj, false);
 		if (likely(r == 0)) {
-			radeon_bo_kunmap(rdev->cp.ring_obj);
-			radeon_bo_unpin(rdev->cp.ring_obj);
-			radeon_bo_unreserve(rdev->cp.ring_obj);
+			radeon_bo_kunmap(ring_obj);
+			radeon_bo_unpin(ring_obj);
+			radeon_bo_unreserve(ring_obj);
 		}
-		radeon_bo_unref(&rdev->cp.ring_obj);
-		rdev->cp.ring = NULL;
-		rdev->cp.ring_obj = NULL;
+		radeon_bo_unref(&ring_obj);
 	}
-	mutex_unlock(&rdev->cp.mutex);
 }
 
 
diff --git a/drivers/gpu/drm/radeon/radeon_state.c b/drivers/gpu/drm/radeon/radeon_state.c
index cc5316d..b3ba44c 100644
--- a/drivers/gpu/drm/radeon/radeon_state.c
+++ b/drivers/gpu/drm/radeon/radeon_state.c
@@ -900,9 +900,10 @@ static void radeon_cp_dispatch_clear(struct drm_device * dev,
 			flags |= RADEON_FRONT;
 	}
 	if (flags & (RADEON_DEPTH|RADEON_STENCIL)) {
-		if (!dev_priv->have_z_offset)
+		if (!dev_priv->have_z_offset) {
 			printk_once(KERN_ERR "radeon: illegal depth clear request. Buggy mesa detected - please update.\n");
-		flags &= ~(RADEON_DEPTH | RADEON_STENCIL);
+			flags &= ~(RADEON_DEPTH | RADEON_STENCIL);
+		}
 	}
 
 	if (flags & (RADEON_FRONT | RADEON_BACK)) {
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index d031b68..e9918d8 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -33,6 +33,7 @@
 #include <ttm/ttm_bo_driver.h>
 #include <ttm/ttm_placement.h>
 #include <ttm/ttm_module.h>
+#include <ttm/ttm_page_alloc.h>
 #include <drm/drmP.h>
 #include <drm/radeon_drm.h>
 #include <linux/seq_file.h>
@@ -162,34 +163,21 @@ static int radeon_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 					  (unsigned)type);
 				return -EINVAL;
 			}
-			man->io_offset = rdev->mc.agp_base;
-			man->io_size = rdev->mc.gtt_size;
-			man->io_addr = NULL;
 			if (!rdev->ddev->agp->cant_use_aperture)
-				man->flags = TTM_MEMTYPE_FLAG_NEEDS_IOREMAP |
-					     TTM_MEMTYPE_FLAG_MAPPABLE;
+				man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
 			man->available_caching = TTM_PL_FLAG_UNCACHED |
 						 TTM_PL_FLAG_WC;
 			man->default_caching = TTM_PL_FLAG_WC;
-		} else
-#endif
-		{
-			man->io_offset = 0;
-			man->io_size = 0;
-			man->io_addr = NULL;
 		}
+#endif
 		break;
 	case TTM_PL_VRAM:
 		/* "On-card" video ram */
 		man->gpu_offset = rdev->mc.vram_start;
 		man->flags = TTM_MEMTYPE_FLAG_FIXED |
-			     TTM_MEMTYPE_FLAG_NEEDS_IOREMAP |
 			     TTM_MEMTYPE_FLAG_MAPPABLE;
 		man->available_caching = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_WC;
 		man->default_caching = TTM_PL_FLAG_WC;
-		man->io_addr = NULL;
-		man->io_offset = rdev->mc.aper_base;
-		man->io_size = rdev->mc.aper_size;
 		break;
 	default:
 		DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
@@ -244,9 +232,9 @@ static void radeon_move_null(struct ttm_buffer_object *bo,
 }
 
 static int radeon_move_blit(struct ttm_buffer_object *bo,
-			    bool evict, int no_wait,
-			    struct ttm_mem_reg *new_mem,
-			    struct ttm_mem_reg *old_mem)
+			bool evict, int no_wait_reserve, bool no_wait_gpu,
+			struct ttm_mem_reg *new_mem,
+			struct ttm_mem_reg *old_mem)
 {
 	struct radeon_device *rdev;
 	uint64_t old_start, new_start;
@@ -290,13 +278,14 @@ static int radeon_move_blit(struct ttm_buffer_object *bo,
 	r = radeon_copy(rdev, old_start, new_start, new_mem->num_pages, fence);
 	/* FIXME: handle copy error */
 	r = ttm_bo_move_accel_cleanup(bo, (void *)fence, NULL,
-				      evict, no_wait, new_mem);
+				      evict, no_wait_reserve, no_wait_gpu, new_mem);
 	radeon_fence_unref(&fence);
 	return r;
 }
 
 static int radeon_move_vram_ram(struct ttm_buffer_object *bo,
-				bool evict, bool interruptible, bool no_wait,
+				bool evict, bool interruptible,
+				bool no_wait_reserve, bool no_wait_gpu,
 				struct ttm_mem_reg *new_mem)
 {
 	struct radeon_device *rdev;
@@ -317,7 +306,7 @@ static int radeon_move_vram_ram(struct ttm_buffer_object *bo,
 	placement.busy_placement = &placements;
 	placements = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
 	r = ttm_bo_mem_space(bo, &placement, &tmp_mem,
-			     interruptible, no_wait);
+			     interruptible, no_wait_reserve, no_wait_gpu);
 	if (unlikely(r)) {
 		return r;
 	}
@@ -331,11 +320,11 @@ static int radeon_move_vram_ram(struct ttm_buffer_object *bo,
 	if (unlikely(r)) {
 		goto out_cleanup;
 	}
-	r = radeon_move_blit(bo, true, no_wait, &tmp_mem, old_mem);
+	r = radeon_move_blit(bo, true, no_wait_reserve, no_wait_gpu, &tmp_mem, old_mem);
 	if (unlikely(r)) {
 		goto out_cleanup;
 	}
-	r = ttm_bo_move_ttm(bo, true, no_wait, new_mem);
+	r = ttm_bo_move_ttm(bo, true, no_wait_reserve, no_wait_gpu, new_mem);
 out_cleanup:
 	if (tmp_mem.mm_node) {
 		struct ttm_bo_global *glob = rdev->mman.bdev.glob;
@@ -349,7 +338,8 @@ out_cleanup:
 }
 
 static int radeon_move_ram_vram(struct ttm_buffer_object *bo,
-				bool evict, bool interruptible, bool no_wait,
+				bool evict, bool interruptible,
+				bool no_wait_reserve, bool no_wait_gpu,
 				struct ttm_mem_reg *new_mem)
 {
 	struct radeon_device *rdev;
@@ -369,15 +359,15 @@ static int radeon_move_ram_vram(struct ttm_buffer_object *bo,
 	placement.num_busy_placement = 1;
 	placement.busy_placement = &placements;
 	placements = TTM_PL_MASK_CACHING | TTM_PL_FLAG_TT;
-	r = ttm_bo_mem_space(bo, &placement, &tmp_mem, interruptible, no_wait);
+	r = ttm_bo_mem_space(bo, &placement, &tmp_mem, interruptible, no_wait_reserve, no_wait_gpu);
 	if (unlikely(r)) {
 		return r;
 	}
-	r = ttm_bo_move_ttm(bo, true, no_wait, &tmp_mem);
+	r = ttm_bo_move_ttm(bo, true, no_wait_reserve, no_wait_gpu, &tmp_mem);
 	if (unlikely(r)) {
 		goto out_cleanup;
 	}
-	r = radeon_move_blit(bo, true, no_wait, new_mem, old_mem);
+	r = radeon_move_blit(bo, true, no_wait_reserve, no_wait_gpu, new_mem, old_mem);
 	if (unlikely(r)) {
 		goto out_cleanup;
 	}
@@ -394,8 +384,9 @@ out_cleanup:
 }
 
 static int radeon_bo_move(struct ttm_buffer_object *bo,
-			  bool evict, bool interruptible, bool no_wait,
-			  struct ttm_mem_reg *new_mem)
+			bool evict, bool interruptible,
+			bool no_wait_reserve, bool no_wait_gpu,
+			struct ttm_mem_reg *new_mem)
 {
 	struct radeon_device *rdev;
 	struct ttm_mem_reg *old_mem = &bo->mem;
@@ -422,23 +413,66 @@ static int radeon_bo_move(struct ttm_buffer_object *bo,
 	if (old_mem->mem_type == TTM_PL_VRAM &&
 	    new_mem->mem_type == TTM_PL_SYSTEM) {
 		r = radeon_move_vram_ram(bo, evict, interruptible,
-					    no_wait, new_mem);
+					no_wait_reserve, no_wait_gpu, new_mem);
 	} else if (old_mem->mem_type == TTM_PL_SYSTEM &&
 		   new_mem->mem_type == TTM_PL_VRAM) {
 		r = radeon_move_ram_vram(bo, evict, interruptible,
-					    no_wait, new_mem);
+					    no_wait_reserve, no_wait_gpu, new_mem);
 	} else {
-		r = radeon_move_blit(bo, evict, no_wait, new_mem, old_mem);
+		r = radeon_move_blit(bo, evict, no_wait_reserve, no_wait_gpu, new_mem, old_mem);
 	}
 
 	if (r) {
 memcpy:
-		r = ttm_bo_move_memcpy(bo, evict, no_wait, new_mem);
+		r = ttm_bo_move_memcpy(bo, evict, no_wait_reserve, no_wait_gpu, new_mem);
 	}
-
 	return r;
 }
 
+static int radeon_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
+	struct radeon_device *rdev = radeon_get_rdev(bdev);
+
+	mem->bus.addr = NULL;
+	mem->bus.offset = 0;
+	mem->bus.size = mem->num_pages << PAGE_SHIFT;
+	mem->bus.base = 0;
+	mem->bus.is_iomem = false;
+	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
+		return -EINVAL;
+	switch (mem->mem_type) {
+	case TTM_PL_SYSTEM:
+		/* system memory */
+		return 0;
+	case TTM_PL_TT:
+#if __OS_HAS_AGP
+		if (rdev->flags & RADEON_IS_AGP) {
+			/* RADEON_IS_AGP is set only if AGP is active */
+			mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
+			mem->bus.base = rdev->mc.agp_base;
+			mem->bus.is_iomem = !rdev->ddev->agp->cant_use_aperture;
+		}
+#endif
+		break;
+	case TTM_PL_VRAM:
+		mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
+		/* check if it's visible */
+		if ((mem->bus.offset + mem->bus.size) > rdev->mc.visible_vram_size)
+			return -EINVAL;
+		mem->bus.base = rdev->mc.aper_base;
+		mem->bus.is_iomem = true;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static void radeon_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+}
+
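
radeon_ttm_io_mem_reserve() above only describes the aperture (base, offset, size, is_iomem); the mapping itself is left to the TTM core. A minimal sketch of what a consumer of the filled-in ttm_mem_reg could do, assuming ioremap_wc(); the helper is illustrative and not part of the patch:

static void __iomem *example_map_io(struct ttm_mem_reg *mem)
{
	if (!mem->bus.is_iomem)
		return NULL;	/* system memory: no ioremap needed */
	return ioremap_wc(mem->bus.base + mem->bus.offset, mem->bus.size);
}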
 static int radeon_sync_obj_wait(void *sync_obj, void *sync_arg,
 				bool lazy, bool interruptible)
 {
@@ -479,6 +513,8 @@ static struct ttm_bo_driver radeon_bo_driver = {
 	.sync_obj_ref = &radeon_sync_obj_ref,
 	.move_notify = &radeon_bo_move_notify,
 	.fault_reserve_notify = &radeon_bo_fault_reserve_notify,
+	.io_mem_reserve = &radeon_ttm_io_mem_reserve,
+	.io_mem_free = &radeon_ttm_io_mem_free,
 };
 
 int radeon_ttm_init(struct radeon_device *rdev)
@@ -571,13 +607,17 @@ static const struct vm_operations_struct *ttm_vm_ops = NULL;
 static int radeon_ttm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	struct ttm_buffer_object *bo;
+	struct radeon_device *rdev;
 	int r;
 
-	bo = (struct ttm_buffer_object *)vma->vm_private_data;
+	bo = (struct ttm_buffer_object *)vma->vm_private_data;
 	if (bo == NULL) {
 		return VM_FAULT_NOPAGE;
 	}
+	rdev = radeon_get_rdev(bo->bdev);
+	mutex_lock(&rdev->vram_mutex);
 	r = ttm_vm_ops->fault(vma, vmf);
+	mutex_unlock(&rdev->vram_mutex);
 	return r;
 }
 
@@ -745,8 +785,8 @@ static int radeon_mm_dump_table(struct seq_file *m, void *data)
 static int radeon_ttm_debugfs_init(struct radeon_device *rdev)
 {
 #if defined(CONFIG_DEBUG_FS)
-	static struct drm_info_list radeon_mem_types_list[RADEON_DEBUGFS_MEM_TYPES];
-	static char radeon_mem_types_names[RADEON_DEBUGFS_MEM_TYPES][32];
+	static struct drm_info_list radeon_mem_types_list[RADEON_DEBUGFS_MEM_TYPES+1];
+	static char radeon_mem_types_names[RADEON_DEBUGFS_MEM_TYPES+1][32];
 	unsigned i;
 
 	for (i = 0; i < RADEON_DEBUGFS_MEM_TYPES; i++) {
@@ -763,7 +803,13 @@ static int radeon_ttm_debugfs_init(struct radeon_device *rdev)
 			radeon_mem_types_list[i].data = &rdev->mman.bdev.man[TTM_PL_TT].manager;
 
 	}
-	return radeon_debugfs_add_files(rdev, radeon_mem_types_list, RADEON_DEBUGFS_MEM_TYPES);
+	/* Add ttm page pool to debugfs */
+	sprintf(radeon_mem_types_names[i], "ttm_page_pool");
+	radeon_mem_types_list[i].name = radeon_mem_types_names[i];
+	radeon_mem_types_list[i].show = &ttm_page_alloc_debugfs;
+	radeon_mem_types_list[i].driver_features = 0;
+	radeon_mem_types_list[i].data = NULL;
+	return radeon_debugfs_add_files(rdev, radeon_mem_types_list, RADEON_DEBUGFS_MEM_TYPES+1);
 
 #endif
 	return 0;
diff --git a/drivers/gpu/drm/radeon/reg_srcs/evergreen b/drivers/gpu/drm/radeon/reg_srcs/evergreen
new file mode 100644
index 0000000..b5c757f
--- /dev/null
+++ b/drivers/gpu/drm/radeon/reg_srcs/evergreen
@@ -0,0 +1,611 @@
+evergreen 0x9400
+0x00008040 WAIT_UNTIL
+0x00008044 WAIT_UNTIL_POLL_CNTL
+0x00008048 WAIT_UNTIL_POLL_MASK
+0x0000804c WAIT_UNTIL_POLL_REFDATA
+0x000088B0 VGT_VTX_VECT_EJECT_REG
+0x000088C4 VGT_CACHE_INVALIDATION
+0x000088D4 VGT_GS_VERTEX_REUSE
+0x00008958 VGT_PRIMITIVE_TYPE
+0x0000895C VGT_INDEX_TYPE
+0x00008970 VGT_NUM_INDICES
+0x00008974 VGT_NUM_INSTANCES
+0x00008990 VGT_COMPUTE_DIM_X
+0x00008994 VGT_COMPUTE_DIM_Y
+0x00008998 VGT_COMPUTE_DIM_Z
+0x0000899C VGT_COMPUTE_START_X
+0x000089A0 VGT_COMPUTE_START_Y
+0x000089A4 VGT_COMPUTE_START_Z
+0x000089AC VGT_COMPUTE_THREAD_GOURP_SIZE
+0x00008A14 PA_CL_ENHANCE
+0x00008A60 PA_SC_LINE_STIPPLE_VALUE
+0x00008B10 PA_SC_LINE_STIPPLE_STATE
+0x00008BF0 PA_SC_ENHANCE
+0x00008D8C SQ_DYN_GPR_CNTL_PS_FLUSH_REQ
+0x00008C00 SQ_CONFIG
+0x00008C04 SQ_GPR_RESOURCE_MGMT_1
+0x00008C08 SQ_GPR_RESOURCE_MGMT_2
+0x00008C0C SQ_GPR_RESOURCE_MGMT_3
+0x00008C10 SQ_GLOBAL_GPR_RESOURCE_MGMT_1
+0x00008C14 SQ_GLOBAL_GPR_RESOURCE_MGMT_2
+0x00008C18 SQ_THREAD_RESOURCE_MGMT
+0x00008C1C SQ_THREAD_RESOURCE_MGMT_2
+0x00008C20 SQ_STACK_RESOURCE_MGMT_1
+0x00008C24 SQ_STACK_RESOURCE_MGMT_2
+0x00008C28 SQ_STACK_RESOURCE_MGMT_3
+0x00008DF8 SQ_CONST_MEM_BASE
+0x00008E48 SQ_EX_ALLOC_TABLE_SLOTS
+0x00009100 SPI_CONFIG_CNTL
+0x0000913C SPI_CONFIG_CNTL_1
+0x00009700 VC_CNTL
+0x00009714 VC_ENHANCE
+0x00009830 DB_DEBUG
+0x00009834 DB_DEBUG2
+0x00009838 DB_DEBUG3
+0x0000983C DB_DEBUG4
+0x00009854 DB_WATERMARKS
+0x0000A400 TD_PS_BORDER_COLOR_INDEX
+0x0000A404 TD_PS_BORDER_COLOR_RED
+0x0000A408 TD_PS_BORDER_COLOR_GREEN
+0x0000A40C TD_PS_BORDER_COLOR_BLUE
+0x0000A410 TD_PS_BORDER_COLOR_ALPHA
+0x0000A414 TD_VS_BORDER_COLOR_INDEX
+0x0000A418 TD_VS_BORDER_COLOR_RED
+0x0000A41C TD_VS_BORDER_COLOR_GREEN
+0x0000A420 TD_VS_BORDER_COLOR_BLUE
+0x0000A424 TD_VS_BORDER_COLOR_ALPHA
+0x0000A428 TD_GS_BORDER_COLOR_INDEX
+0x0000A42C TD_GS_BORDER_COLOR_RED
+0x0000A430 TD_GS_BORDER_COLOR_GREEN
+0x0000A434 TD_GS_BORDER_COLOR_BLUE
+0x0000A438 TD_GS_BORDER_COLOR_ALPHA
+0x0000A43C TD_HS_BORDER_COLOR_INDEX
+0x0000A440 TD_HS_BORDER_COLOR_RED
+0x0000A444 TD_HS_BORDER_COLOR_GREEN
+0x0000A448 TD_HS_BORDER_COLOR_BLUE
+0x0000A44C TD_HS_BORDER_COLOR_ALPHA
+0x0000A450 TD_LS_BORDER_COLOR_INDEX
+0x0000A454 TD_LS_BORDER_COLOR_RED
+0x0000A458 TD_LS_BORDER_COLOR_GREEN
+0x0000A45C TD_LS_BORDER_COLOR_BLUE
+0x0000A460 TD_LS_BORDER_COLOR_ALPHA
+0x0000A464 TD_CS_BORDER_COLOR_INDEX
+0x0000A468 TD_CS_BORDER_COLOR_RED
+0x0000A46C TD_CS_BORDER_COLOR_GREEN
+0x0000A470 TD_CS_BORDER_COLOR_BLUE
+0x0000A474 TD_CS_BORDER_COLOR_ALPHA
+0x00028000 DB_RENDER_CONTROL
+0x00028004 DB_COUNT_CONTROL
+0x0002800C DB_RENDER_OVERRIDE
+0x00028010 DB_RENDER_OVERRIDE2
+0x00028028 DB_STENCIL_CLEAR
+0x0002802C DB_DEPTH_CLEAR
+0x00028034 PA_SC_SCREEN_SCISSOR_BR
+0x00028030 PA_SC_SCREEN_SCISSOR_TL
+0x0002805C DB_DEPTH_SLICE
+0x00028140 SQ_ALU_CONST_BUFFER_SIZE_PS_0
+0x00028144 SQ_ALU_CONST_BUFFER_SIZE_PS_1
+0x00028148 SQ_ALU_CONST_BUFFER_SIZE_PS_2
+0x0002814C SQ_ALU_CONST_BUFFER_SIZE_PS_3
+0x00028150 SQ_ALU_CONST_BUFFER_SIZE_PS_4
+0x00028154 SQ_ALU_CONST_BUFFER_SIZE_PS_5
+0x00028158 SQ_ALU_CONST_BUFFER_SIZE_PS_6
+0x0002815C SQ_ALU_CONST_BUFFER_SIZE_PS_7
+0x00028160 SQ_ALU_CONST_BUFFER_SIZE_PS_8
+0x00028164 SQ_ALU_CONST_BUFFER_SIZE_PS_9
+0x00028168 SQ_ALU_CONST_BUFFER_SIZE_PS_10
+0x0002816C SQ_ALU_CONST_BUFFER_SIZE_PS_11
+0x00028170 SQ_ALU_CONST_BUFFER_SIZE_PS_12
+0x00028174 SQ_ALU_CONST_BUFFER_SIZE_PS_13
+0x00028178 SQ_ALU_CONST_BUFFER_SIZE_PS_14
+0x0002817C SQ_ALU_CONST_BUFFER_SIZE_PS_15
+0x00028180 SQ_ALU_CONST_BUFFER_SIZE_VS_0
+0x00028184 SQ_ALU_CONST_BUFFER_SIZE_VS_1
+0x00028188 SQ_ALU_CONST_BUFFER_SIZE_VS_2
+0x0002818C SQ_ALU_CONST_BUFFER_SIZE_VS_3
+0x00028190 SQ_ALU_CONST_BUFFER_SIZE_VS_4
+0x00028194 SQ_ALU_CONST_BUFFER_SIZE_VS_5
+0x00028198 SQ_ALU_CONST_BUFFER_SIZE_VS_6
+0x0002819C SQ_ALU_CONST_BUFFER_SIZE_VS_7
+0x000281A0 SQ_ALU_CONST_BUFFER_SIZE_VS_8
+0x000281A4 SQ_ALU_CONST_BUFFER_SIZE_VS_9
+0x000281A8 SQ_ALU_CONST_BUFFER_SIZE_VS_10
+0x000281AC SQ_ALU_CONST_BUFFER_SIZE_VS_11
+0x000281B0 SQ_ALU_CONST_BUFFER_SIZE_VS_12
+0x000281B4 SQ_ALU_CONST_BUFFER_SIZE_VS_13
+0x000281B8 SQ_ALU_CONST_BUFFER_SIZE_VS_14
+0x000281BC SQ_ALU_CONST_BUFFER_SIZE_VS_15
+0x000281C0 SQ_ALU_CONST_BUFFER_SIZE_GS_0
+0x000281C4 SQ_ALU_CONST_BUFFER_SIZE_GS_1
+0x000281C8 SQ_ALU_CONST_BUFFER_SIZE_GS_2
+0x000281CC SQ_ALU_CONST_BUFFER_SIZE_GS_3
+0x000281D0 SQ_ALU_CONST_BUFFER_SIZE_GS_4
+0x000281D4 SQ_ALU_CONST_BUFFER_SIZE_GS_5
+0x000281D8 SQ_ALU_CONST_BUFFER_SIZE_GS_6
+0x000281DC SQ_ALU_CONST_BUFFER_SIZE_GS_7
+0x000281E0 SQ_ALU_CONST_BUFFER_SIZE_GS_8
+0x000281E4 SQ_ALU_CONST_BUFFER_SIZE_GS_9
+0x000281E8 SQ_ALU_CONST_BUFFER_SIZE_GS_10
+0x000281EC SQ_ALU_CONST_BUFFER_SIZE_GS_11
+0x000281F0 SQ_ALU_CONST_BUFFER_SIZE_GS_12
+0x000281F4 SQ_ALU_CONST_BUFFER_SIZE_GS_13
+0x000281F8 SQ_ALU_CONST_BUFFER_SIZE_GS_14
+0x000281FC SQ_ALU_CONST_BUFFER_SIZE_GS_15
+0x00028200 PA_SC_WINDOW_OFFSET
+0x00028204 PA_SC_WINDOW_SCISSOR_TL
+0x00028208 PA_SC_WINDOW_SCISSOR_BR
+0x0002820C PA_SC_CLIPRECT_RULE
+0x00028210 PA_SC_CLIPRECT_0_TL
+0x00028214 PA_SC_CLIPRECT_0_BR
+0x00028218 PA_SC_CLIPRECT_1_TL
+0x0002821C PA_SC_CLIPRECT_1_BR
+0x00028220 PA_SC_CLIPRECT_2_TL
+0x00028224 PA_SC_CLIPRECT_2_BR
+0x00028228 PA_SC_CLIPRECT_3_TL
+0x0002822C PA_SC_CLIPRECT_3_BR
+0x00028230 PA_SC_EDGERULE
+0x00028234 PA_SU_HARDWARE_SCREEN_OFFSET
+0x00028240 PA_SC_GENERIC_SCISSOR_TL
+0x00028244 PA_SC_GENERIC_SCISSOR_BR
+0x00028250 PA_SC_VPORT_SCISSOR_0_TL
+0x00028254 PA_SC_VPORT_SCISSOR_0_BR
+0x00028258 PA_SC_VPORT_SCISSOR_1_TL
+0x0002825C PA_SC_VPORT_SCISSOR_1_BR
+0x00028260 PA_SC_VPORT_SCISSOR_2_TL
+0x00028264 PA_SC_VPORT_SCISSOR_2_BR
+0x00028268 PA_SC_VPORT_SCISSOR_3_TL
+0x0002826C PA_SC_VPORT_SCISSOR_3_BR
+0x00028270 PA_SC_VPORT_SCISSOR_4_TL
+0x00028274 PA_SC_VPORT_SCISSOR_4_BR
+0x00028278 PA_SC_VPORT_SCISSOR_5_TL
+0x0002827C PA_SC_VPORT_SCISSOR_5_BR
+0x00028280 PA_SC_VPORT_SCISSOR_6_TL
+0x00028284 PA_SC_VPORT_SCISSOR_6_BR
+0x00028288 PA_SC_VPORT_SCISSOR_7_TL
+0x0002828C PA_SC_VPORT_SCISSOR_7_BR
+0x00028290 PA_SC_VPORT_SCISSOR_8_TL
+0x00028294 PA_SC_VPORT_SCISSOR_8_BR
+0x00028298 PA_SC_VPORT_SCISSOR_9_TL
+0x0002829C PA_SC_VPORT_SCISSOR_9_BR
+0x000282A0 PA_SC_VPORT_SCISSOR_10_TL
+0x000282A4 PA_SC_VPORT_SCISSOR_10_BR
+0x000282A8 PA_SC_VPORT_SCISSOR_11_TL
+0x000282AC PA_SC_VPORT_SCISSOR_11_BR
+0x000282B0 PA_SC_VPORT_SCISSOR_12_TL
+0x000282B4 PA_SC_VPORT_SCISSOR_12_BR
+0x000282B8 PA_SC_VPORT_SCISSOR_13_TL
+0x000282BC PA_SC_VPORT_SCISSOR_13_BR
+0x000282C0 PA_SC_VPORT_SCISSOR_14_TL
+0x000282C4 PA_SC_VPORT_SCISSOR_14_BR
+0x000282C8 PA_SC_VPORT_SCISSOR_15_TL
+0x000282CC PA_SC_VPORT_SCISSOR_15_BR
+0x000282D0 PA_SC_VPORT_ZMIN_0
+0x000282D4 PA_SC_VPORT_ZMAX_0
+0x000282D8 PA_SC_VPORT_ZMIN_1
+0x000282DC PA_SC_VPORT_ZMAX_1
+0x000282E0 PA_SC_VPORT_ZMIN_2
+0x000282E4 PA_SC_VPORT_ZMAX_2
+0x000282E8 PA_SC_VPORT_ZMIN_3
+0x000282EC PA_SC_VPORT_ZMAX_3
+0x000282F0 PA_SC_VPORT_ZMIN_4
+0x000282F4 PA_SC_VPORT_ZMAX_4
+0x000282F8 PA_SC_VPORT_ZMIN_5
+0x000282FC PA_SC_VPORT_ZMAX_5
+0x00028300 PA_SC_VPORT_ZMIN_6
+0x00028304 PA_SC_VPORT_ZMAX_6
+0x00028308 PA_SC_VPORT_ZMIN_7
+0x0002830C PA_SC_VPORT_ZMAX_7
+0x00028310 PA_SC_VPORT_ZMIN_8
+0x00028314 PA_SC_VPORT_ZMAX_8
+0x00028318 PA_SC_VPORT_ZMIN_9
+0x0002831C PA_SC_VPORT_ZMAX_9
+0x00028320 PA_SC_VPORT_ZMIN_10
+0x00028324 PA_SC_VPORT_ZMAX_10
+0x00028328 PA_SC_VPORT_ZMIN_11
+0x0002832C PA_SC_VPORT_ZMAX_11
+0x00028330 PA_SC_VPORT_ZMIN_12
+0x00028334 PA_SC_VPORT_ZMAX_12
+0x00028338 PA_SC_VPORT_ZMIN_13
+0x0002833C PA_SC_VPORT_ZMAX_13
+0x00028340 PA_SC_VPORT_ZMIN_14
+0x00028344 PA_SC_VPORT_ZMAX_14
+0x00028348 PA_SC_VPORT_ZMIN_15
+0x0002834C PA_SC_VPORT_ZMAX_15
+0x00028350 SX_MISC
+0x00028380 SQ_VTX_SEMANTIC_0
+0x00028384 SQ_VTX_SEMANTIC_1
+0x00028388 SQ_VTX_SEMANTIC_2
+0x0002838C SQ_VTX_SEMANTIC_3
+0x00028390 SQ_VTX_SEMANTIC_4
+0x00028394 SQ_VTX_SEMANTIC_5
+0x00028398 SQ_VTX_SEMANTIC_6
+0x0002839C SQ_VTX_SEMANTIC_7
+0x000283A0 SQ_VTX_SEMANTIC_8
+0x000283A4 SQ_VTX_SEMANTIC_9
+0x000283A8 SQ_VTX_SEMANTIC_10
+0x000283AC SQ_VTX_SEMANTIC_11
+0x000283B0 SQ_VTX_SEMANTIC_12
+0x000283B4 SQ_VTX_SEMANTIC_13
+0x000283B8 SQ_VTX_SEMANTIC_14
+0x000283BC SQ_VTX_SEMANTIC_15
+0x000283C0 SQ_VTX_SEMANTIC_16
+0x000283C4 SQ_VTX_SEMANTIC_17
+0x000283C8 SQ_VTX_SEMANTIC_18
+0x000283CC SQ_VTX_SEMANTIC_19
+0x000283D0 SQ_VTX_SEMANTIC_20
+0x000283D4 SQ_VTX_SEMANTIC_21
+0x000283D8 SQ_VTX_SEMANTIC_22
+0x000283DC SQ_VTX_SEMANTIC_23
+0x000283E0 SQ_VTX_SEMANTIC_24
+0x000283E4 SQ_VTX_SEMANTIC_25
+0x000283E8 SQ_VTX_SEMANTIC_26
+0x000283EC SQ_VTX_SEMANTIC_27
+0x000283F0 SQ_VTX_SEMANTIC_28
+0x000283F4 SQ_VTX_SEMANTIC_29
+0x000283F8 SQ_VTX_SEMANTIC_30
+0x000283FC SQ_VTX_SEMANTIC_31
+0x00028400 VGT_MAX_VTX_INDX
+0x00028404 VGT_MIN_VTX_INDX
+0x00028408 VGT_INDX_OFFSET
+0x0002840C VGT_MULTI_PRIM_IB_RESET_INDX
+0x00028410 SX_ALPHA_TEST_CONTROL
+0x00028414 CB_BLEND_RED
+0x00028418 CB_BLEND_GREEN
+0x0002841C CB_BLEND_BLUE
+0x00028420 CB_BLEND_ALPHA
+0x00028430 DB_STENCILREFMASK
+0x00028434 DB_STENCILREFMASK_BF
+0x00028438 SX_ALPHA_REF
+0x0002843C PA_CL_VPORT_XSCALE_0
+0x00028440 PA_CL_VPORT_XOFFSET_0
+0x00028444 PA_CL_VPORT_YSCALE_0
+0x00028448 PA_CL_VPORT_YOFFSET_0
+0x0002844C PA_CL_VPORT_ZSCALE_0
+0x00028450 PA_CL_VPORT_ZOFFSET_0
+0x00028454 PA_CL_VPORT_XSCALE_1
+0x00028458 PA_CL_VPORT_XOFFSET_1
+0x0002845C PA_CL_VPORT_YSCALE_1
+0x00028460 PA_CL_VPORT_YOFFSET_1
+0x00028464 PA_CL_VPORT_ZSCALE_1
+0x00028468 PA_CL_VPORT_ZOFFSET_1
+0x0002846C PA_CL_VPORT_XSCALE_2
+0x00028470 PA_CL_VPORT_XOFFSET_2
+0x00028474 PA_CL_VPORT_YSCALE_2
+0x00028478 PA_CL_VPORT_YOFFSET_2
+0x0002847C PA_CL_VPORT_ZSCALE_2
+0x00028480 PA_CL_VPORT_ZOFFSET_2
+0x00028484 PA_CL_VPORT_XSCALE_3
+0x00028488 PA_CL_VPORT_XOFFSET_3
+0x0002848C PA_CL_VPORT_YSCALE_3
+0x00028490 PA_CL_VPORT_YOFFSET_3
+0x00028494 PA_CL_VPORT_ZSCALE_3
+0x00028498 PA_CL_VPORT_ZOFFSET_3
+0x0002849C PA_CL_VPORT_XSCALE_4
+0x000284A0 PA_CL_VPORT_XOFFSET_4
+0x000284A4 PA_CL_VPORT_YSCALE_4
+0x000284A8 PA_CL_VPORT_YOFFSET_4
+0x000284AC PA_CL_VPORT_ZSCALE_4
+0x000284B0 PA_CL_VPORT_ZOFFSET_4
+0x000284B4 PA_CL_VPORT_XSCALE_5
+0x000284B8 PA_CL_VPORT_XOFFSET_5
+0x000284BC PA_CL_VPORT_YSCALE_5
+0x000284C0 PA_CL_VPORT_YOFFSET_5
+0x000284C4 PA_CL_VPORT_ZSCALE_5
+0x000284C8 PA_CL_VPORT_ZOFFSET_5
+0x000284CC PA_CL_VPORT_XSCALE_6
+0x000284D0 PA_CL_VPORT_XOFFSET_6
+0x000284D4 PA_CL_VPORT_YSCALE_6
+0x000284D8 PA_CL_VPORT_YOFFSET_6
+0x000284DC PA_CL_VPORT_ZSCALE_6
+0x000284E0 PA_CL_VPORT_ZOFFSET_6
+0x000284E4 PA_CL_VPORT_XSCALE_7
+0x000284E8 PA_CL_VPORT_XOFFSET_7
+0x000284EC PA_CL_VPORT_YSCALE_7
+0x000284F0 PA_CL_VPORT_YOFFSET_7
+0x000284F4 PA_CL_VPORT_ZSCALE_7
+0x000284F8 PA_CL_VPORT_ZOFFSET_7
+0x000284FC PA_CL_VPORT_XSCALE_8
+0x00028500 PA_CL_VPORT_XOFFSET_8
+0x00028504 PA_CL_VPORT_YSCALE_8
+0x00028508 PA_CL_VPORT_YOFFSET_8
+0x0002850C PA_CL_VPORT_ZSCALE_8
+0x00028510 PA_CL_VPORT_ZOFFSET_8
+0x00028514 PA_CL_VPORT_XSCALE_9
+0x00028518 PA_CL_VPORT_XOFFSET_9
+0x0002851C PA_CL_VPORT_YSCALE_9
+0x00028520 PA_CL_VPORT_YOFFSET_9
+0x00028524 PA_CL_VPORT_ZSCALE_9
+0x00028528 PA_CL_VPORT_ZOFFSET_9
+0x0002852C PA_CL_VPORT_XSCALE_10
+0x00028530 PA_CL_VPORT_XOFFSET_10
+0x00028534 PA_CL_VPORT_YSCALE_10
+0x00028538 PA_CL_VPORT_YOFFSET_10
+0x0002853C PA_CL_VPORT_ZSCALE_10
+0x00028540 PA_CL_VPORT_ZOFFSET_10
+0x00028544 PA_CL_VPORT_XSCALE_11
+0x00028548 PA_CL_VPORT_XOFFSET_11
+0x0002854C PA_CL_VPORT_YSCALE_11
+0x00028550 PA_CL_VPORT_YOFFSET_11
+0x00028554 PA_CL_VPORT_ZSCALE_11
+0x00028558 PA_CL_VPORT_ZOFFSET_11
+0x0002855C PA_CL_VPORT_XSCALE_12
+0x00028560 PA_CL_VPORT_XOFFSET_12
+0x00028564 PA_CL_VPORT_YSCALE_12
+0x00028568 PA_CL_VPORT_YOFFSET_12
+0x0002856C PA_CL_VPORT_ZSCALE_12
+0x00028570 PA_CL_VPORT_ZOFFSET_12
+0x00028574 PA_CL_VPORT_XSCALE_13
+0x00028578 PA_CL_VPORT_XOFFSET_13
+0x0002857C PA_CL_VPORT_YSCALE_13
+0x00028580 PA_CL_VPORT_YOFFSET_13
+0x00028584 PA_CL_VPORT_ZSCALE_13
+0x00028588 PA_CL_VPORT_ZOFFSET_13
+0x0002858C PA_CL_VPORT_XSCALE_14
+0x00028590 PA_CL_VPORT_XOFFSET_14
+0x00028594 PA_CL_VPORT_YSCALE_14
+0x00028598 PA_CL_VPORT_YOFFSET_14
+0x0002859C PA_CL_VPORT_ZSCALE_14
+0x000285A0 PA_CL_VPORT_ZOFFSET_14
+0x000285A4 PA_CL_VPORT_XSCALE_15
+0x000285A8 PA_CL_VPORT_XOFFSET_15
+0x000285AC PA_CL_VPORT_YSCALE_15
+0x000285B0 PA_CL_VPORT_YOFFSET_15
+0x000285B4 PA_CL_VPORT_ZSCALE_15
+0x000285B8 PA_CL_VPORT_ZOFFSET_15
+0x000285BC PA_CL_UCP_0_X
+0x000285C0 PA_CL_UCP_0_Y
+0x000285C4 PA_CL_UCP_0_Z
+0x000285C8 PA_CL_UCP_0_W
+0x000285CC PA_CL_UCP_1_X
+0x000285D0 PA_CL_UCP_1_Y
+0x000285D4 PA_CL_UCP_1_Z
+0x000285D8 PA_CL_UCP_1_W
+0x000285DC PA_CL_UCP_2_X
+0x000285E0 PA_CL_UCP_2_Y
+0x000285E4 PA_CL_UCP_2_Z
+0x000285E8 PA_CL_UCP_2_W
+0x000285EC PA_CL_UCP_3_X
+0x000285F0 PA_CL_UCP_3_Y
+0x000285F4 PA_CL_UCP_3_Z
+0x000285F8 PA_CL_UCP_3_W
+0x000285FC PA_CL_UCP_4_X
+0x00028600 PA_CL_UCP_4_Y
+0x00028604 PA_CL_UCP_4_Z
+0x00028608 PA_CL_UCP_4_W
+0x0002860C PA_CL_UCP_5_X
+0x00028610 PA_CL_UCP_5_Y
+0x00028614 PA_CL_UCP_5_Z
+0x00028618 PA_CL_UCP_5_W
+0x0002861C SPI_VS_OUT_ID_0
+0x00028620 SPI_VS_OUT_ID_1
+0x00028624 SPI_VS_OUT_ID_2
+0x00028628 SPI_VS_OUT_ID_3
+0x0002862C SPI_VS_OUT_ID_4
+0x00028630 SPI_VS_OUT_ID_5
+0x00028634 SPI_VS_OUT_ID_6
+0x00028638 SPI_VS_OUT_ID_7
+0x0002863C SPI_VS_OUT_ID_8
+0x00028640 SPI_VS_OUT_ID_9
+0x00028644 SPI_PS_INPUT_CNTL_0
+0x00028648 SPI_PS_INPUT_CNTL_1
+0x0002864C SPI_PS_INPUT_CNTL_2
+0x00028650 SPI_PS_INPUT_CNTL_3
+0x00028654 SPI_PS_INPUT_CNTL_4
+0x00028658 SPI_PS_INPUT_CNTL_5
+0x0002865C SPI_PS_INPUT_CNTL_6
+0x00028660 SPI_PS_INPUT_CNTL_7
+0x00028664 SPI_PS_INPUT_CNTL_8
+0x00028668 SPI_PS_INPUT_CNTL_9
+0x0002866C SPI_PS_INPUT_CNTL_10
+0x00028670 SPI_PS_INPUT_CNTL_11
+0x00028674 SPI_PS_INPUT_CNTL_12
+0x00028678 SPI_PS_INPUT_CNTL_13
+0x0002867C SPI_PS_INPUT_CNTL_14
+0x00028680 SPI_PS_INPUT_CNTL_15
+0x00028684 SPI_PS_INPUT_CNTL_16
+0x00028688 SPI_PS_INPUT_CNTL_17
+0x0002868C SPI_PS_INPUT_CNTL_18
+0x00028690 SPI_PS_INPUT_CNTL_19
+0x00028694 SPI_PS_INPUT_CNTL_20
+0x00028698 SPI_PS_INPUT_CNTL_21
+0x0002869C SPI_PS_INPUT_CNTL_22
+0x000286A0 SPI_PS_INPUT_CNTL_23
+0x000286A4 SPI_PS_INPUT_CNTL_24
+0x000286A8 SPI_PS_INPUT_CNTL_25
+0x000286AC SPI_PS_INPUT_CNTL_26
+0x000286B0 SPI_PS_INPUT_CNTL_27
+0x000286B4 SPI_PS_INPUT_CNTL_28
+0x000286B8 SPI_PS_INPUT_CNTL_29
+0x000286BC SPI_PS_INPUT_CNTL_30
+0x000286C0 SPI_PS_INPUT_CNTL_31
+0x000286C4 SPI_VS_OUT_CONFIG
+0x000286C8 SPI_THREAD_GROUPING
+0x000286CC SPI_PS_IN_CONTROL_0
+0x000286D0 SPI_PS_IN_CONTROL_1
+0x000286D4 SPI_INTERP_CONTROL_0
+0x000286D8 SPI_INPUT_Z
+0x000286DC SPI_FOG_CNTL
+0x000286E0 SPI_BARYC_CNTL
+0x000286E4 SPI_PS_IN_CONTROL_2
+0x000286E8 SPI_COMPUTE_INPUT_CNTL
+0x000286EC SPI_COMPUTE_NUM_THREAD_X
+0x000286F0 SPI_COMPUTE_NUM_THREAD_Y
+0x000286F4 SPI_COMPUTE_NUM_THREAD_Z
+0x000286F8 GDS_ADDR_SIZE
+0x00028780 CB_BLEND0_CONTROL
+0x00028784 CB_BLEND1_CONTROL
+0x00028788 CB_BLEND2_CONTROL
+0x0002878C CB_BLEND3_CONTROL
+0x00028790 CB_BLEND4_CONTROL
+0x00028794 CB_BLEND5_CONTROL
+0x00028798 CB_BLEND6_CONTROL
+0x0002879C CB_BLEND7_CONTROL
+0x000287CC CS_COPY_STATE
+0x000287D0 GFX_COPY_STATE
+0x000287D4 PA_CL_POINT_X_RAD
+0x000287D8 PA_CL_POINT_Y_RAD
+0x000287DC PA_CL_POINT_SIZE
+0x000287E0 PA_CL_POINT_CULL_RAD
+0x00028808 CB_COLOR_CONTROL
+0x0002880C DB_SHADER_CONTROL
+0x00028810 PA_CL_CLIP_CNTL
+0x00028814 PA_SU_SC_MODE_CNTL
+0x00028818 PA_CL_VTE_CNTL
+0x0002881C PA_CL_VS_OUT_CNTL
+0x00028820 PA_CL_NANINF_CNTL
+0x00028824 PA_SU_LINE_STIPPLE_CNTL
+0x00028828 PA_SU_LINE_STIPPLE_SCALE
+0x0002882C PA_SU_PRIM_FILTER_CNTL
+0x00028838 SQ_DYN_GPR_RESOURCE_LIMIT_1
+0x00028844 SQ_PGM_RESOURCES_PS
+0x00028848 SQ_PGM_RESOURCES_2_PS
+0x0002884C SQ_PGM_EXPORTS_PS
+0x0002885C SQ_PGM_RESOURCES_VS
+0x00028860 SQ_PGM_RESOURCES_2_VS
+0x00028878 SQ_PGM_RESOURCES_GS
+0x0002887C SQ_PGM_RESOURCES_2_GS
+0x00028890 SQ_PGM_RESOURCES_ES
+0x00028894 SQ_PGM_RESOURCES_2_ES
+0x000288A8 SQ_PGM_RESOURCES_FS
+0x000288BC SQ_PGM_RESOURCES_HS
+0x000288C0 SQ_PGM_RESOURCES_2_HS
+0x000288D0 SQ_PGM_RESOURCES_LS
+0x000288D4 SQ_PGM_RESOURCES_2_LS
+0x000288E8 SQ_LDS_ALLOC
+0x000288EC SQ_LDS_ALLOC_PS
+0x000288F0 SQ_VTX_SEMANTIC_CLEAR
+0x00028A00 PA_SU_POINT_SIZE
+0x00028A04 PA_SU_POINT_MINMAX
+0x00028A08 PA_SU_LINE_CNTL
+0x00028A0C PA_SC_LINE_STIPPLE
+0x00028A10 VGT_OUTPUT_PATH_CNTL
+0x00028A14 VGT_HOS_CNTL
+0x00028A18 VGT_HOS_MAX_TESS_LEVEL
+0x00028A1C VGT_HOS_MIN_TESS_LEVEL
+0x00028A20 VGT_HOS_REUSE_DEPTH
+0x00028A24 VGT_GROUP_PRIM_TYPE
+0x00028A28 VGT_GROUP_FIRST_DECR
+0x00028A2C VGT_GROUP_DECR
+0x00028A30 VGT_GROUP_VECT_0_CNTL
+0x00028A34 VGT_GROUP_VECT_1_CNTL
+0x00028A38 VGT_GROUP_VECT_0_FMT_CNTL
+0x00028A3C VGT_GROUP_VECT_1_FMT_CNTL
+0x00028A40 VGT_GS_MODE
+0x00028A48 PA_SC_MODE_CNTL_0
+0x00028A4C PA_SC_MODE_CNTL_1
+0x00028A50 VGT_ENHANCE
+0x00028A54 VGT_GS_PER_ES
+0x00028A58 VGT_ES_PER_GS
+0x00028A5C VGT_GS_PER_VS
+0x00028A6C VGT_GS_OUT_PRIM_TYPE
+0x00028A84 VGT_PRIMITIVEID_EN
+0x00028A94 VGT_MULTI_PRIM_IB_RESET_EN
+0x00028AA0 VGT_INSTANCE_STEP_RATE_0
+0x00028AA4 VGT_INSTANCE_STEP_RATE_1
+0x00028AB4 VGT_REUSE_OFF
+0x00028AB8 VGT_VTX_CNT_EN
+0x00028ABC DB_HTILE_SURFACE
+0x00028AC0 DB_SRESULTS_COMPARE_STATE0
+0x00028AC4 DB_SRESULTS_COMPARE_STATE1
+0x00028AC8 DB_PRELOAD_CONTROL
+0x00028B38 VGT_GS_MAX_VERT_OUT
+0x00028B54 VGT_SHADER_STAGES_EN
+0x00028B58 VGT_LS_HS_CONFIG
+0x00028B5C VGT_LS_SIZE
+0x00028B60 VGT_HS_SIZE
+0x00028B64 VGT_LS_HS_ALLOC
+0x00028B68 VGT_HS_PATCH_CONST
+0x00028B6C VGT_TF_PARAM
+0x00028B70 DB_ALPHA_TO_MASK
+0x00028B74 VGT_DISPATCH_INITIATOR
+0x00028B78 PA_SU_POLY_OFFSET_DB_FMT_CNTL
+0x00028B7C PA_SU_POLY_OFFSET_CLAMP
+0x00028B80 PA_SU_POLY_OFFSET_FRONT_SCALE
+0x00028B84 PA_SU_POLY_OFFSET_FRONT_OFFSET
+0x00028B88 PA_SU_POLY_OFFSET_BACK_SCALE
+0x00028B8C PA_SU_POLY_OFFSET_BACK_OFFSET
+0x00028B74 VGT_GS_INSTANCE_CNT
+0x00028C00 PA_SC_LINE_CNTL
+0x00028C08 PA_SU_VTX_CNTL
+0x00028C0C PA_CL_GB_VERT_CLIP_ADJ
+0x00028C10 PA_CL_GB_VERT_DISC_ADJ
+0x00028C14 PA_CL_GB_HORZ_CLIP_ADJ
+0x00028C18 PA_CL_GB_HORZ_DISC_ADJ
+0x00028C1C PA_SC_AA_SAMPLE_LOCS_0
+0x00028C20 PA_SC_AA_SAMPLE_LOCS_1
+0x00028C24 PA_SC_AA_SAMPLE_LOCS_2
+0x00028C28 PA_SC_AA_SAMPLE_LOCS_3
+0x00028C2C PA_SC_AA_SAMPLE_LOCS_4
+0x00028C30 PA_SC_AA_SAMPLE_LOCS_5
+0x00028C34 PA_SC_AA_SAMPLE_LOCS_6
+0x00028C38 PA_SC_AA_SAMPLE_LOCS_7
+0x00028C3C PA_SC_AA_MASK
+0x00028C8C CB_COLOR0_CLEAR_WORD0
+0x00028C90 CB_COLOR0_CLEAR_WORD1
+0x00028C94 CB_COLOR0_CLEAR_WORD2
+0x00028C98 CB_COLOR0_CLEAR_WORD3
+0x00028CC8 CB_COLOR1_CLEAR_WORD0
+0x00028CCC CB_COLOR1_CLEAR_WORD1
+0x00028CD0 CB_COLOR1_CLEAR_WORD2
+0x00028CD4 CB_COLOR1_CLEAR_WORD3
+0x00028D04 CB_COLOR2_CLEAR_WORD0
+0x00028D08 CB_COLOR2_CLEAR_WORD1
+0x00028D0C CB_COLOR2_CLEAR_WORD2
+0x00028D10 CB_COLOR2_CLEAR_WORD3
+0x00028D40 CB_COLOR3_CLEAR_WORD0
+0x00028D44 CB_COLOR3_CLEAR_WORD1
+0x00028D48 CB_COLOR3_CLEAR_WORD2
+0x00028D4C CB_COLOR3_CLEAR_WORD3
+0x00028D7C CB_COLOR4_CLEAR_WORD0
+0x00028D80 CB_COLOR4_CLEAR_WORD1
+0x00028D84 CB_COLOR4_CLEAR_WORD2
+0x00028D88 CB_COLOR4_CLEAR_WORD3
+0x00028DB8 CB_COLOR5_CLEAR_WORD0
+0x00028DBC CB_COLOR5_CLEAR_WORD1
+0x00028DC0 CB_COLOR5_CLEAR_WORD2
+0x00028DC4 CB_COLOR5_CLEAR_WORD3
+0x00028DF4 CB_COLOR6_CLEAR_WORD0
+0x00028DF8 CB_COLOR6_CLEAR_WORD1
+0x00028DFC CB_COLOR6_CLEAR_WORD2
+0x00028E00 CB_COLOR6_CLEAR_WORD3
+0x00028E30 CB_COLOR7_CLEAR_WORD0
+0x00028E34 CB_COLOR7_CLEAR_WORD1
+0x00028E38 CB_COLOR7_CLEAR_WORD2
+0x00028E3C CB_COLOR7_CLEAR_WORD3
+0x00028F80 SQ_ALU_CONST_BUFFER_SIZE_HS_0
+0x00028F84 SQ_ALU_CONST_BUFFER_SIZE_HS_1
+0x00028F88 SQ_ALU_CONST_BUFFER_SIZE_HS_2
+0x00028F8C SQ_ALU_CONST_BUFFER_SIZE_HS_3
+0x00028F90 SQ_ALU_CONST_BUFFER_SIZE_HS_4
+0x00028F94 SQ_ALU_CONST_BUFFER_SIZE_HS_5
+0x00028F98 SQ_ALU_CONST_BUFFER_SIZE_HS_6
+0x00028F9C SQ_ALU_CONST_BUFFER_SIZE_HS_7
+0x00028FA0 SQ_ALU_CONST_BUFFER_SIZE_HS_8
+0x00028FA4 SQ_ALU_CONST_BUFFER_SIZE_HS_9
+0x00028FA8 SQ_ALU_CONST_BUFFER_SIZE_HS_10
+0x00028FAC SQ_ALU_CONST_BUFFER_SIZE_HS_11
+0x00028FB0 SQ_ALU_CONST_BUFFER_SIZE_HS_12
+0x00028FB4 SQ_ALU_CONST_BUFFER_SIZE_HS_13
+0x00028FB8 SQ_ALU_CONST_BUFFER_SIZE_HS_14
+0x00028FBC SQ_ALU_CONST_BUFFER_SIZE_HS_15
+0x00028FC0 SQ_ALU_CONST_BUFFER_SIZE_LS_0
+0x00028FC4 SQ_ALU_CONST_BUFFER_SIZE_LS_1
+0x00028FC8 SQ_ALU_CONST_BUFFER_SIZE_LS_2
+0x00028FCC SQ_ALU_CONST_BUFFER_SIZE_LS_3
+0x00028FD0 SQ_ALU_CONST_BUFFER_SIZE_LS_4
+0x00028FD4 SQ_ALU_CONST_BUFFER_SIZE_LS_5
+0x00028FD8 SQ_ALU_CONST_BUFFER_SIZE_LS_6
+0x00028FDC SQ_ALU_CONST_BUFFER_SIZE_LS_7
+0x00028FE0 SQ_ALU_CONST_BUFFER_SIZE_LS_8
+0x00028FE4 SQ_ALU_CONST_BUFFER_SIZE_LS_9
+0x00028FE8 SQ_ALU_CONST_BUFFER_SIZE_LS_10
+0x00028FEC SQ_ALU_CONST_BUFFER_SIZE_LS_11
+0x00028FF0 SQ_ALU_CONST_BUFFER_SIZE_LS_12
+0x00028FF4 SQ_ALU_CONST_BUFFER_SIZE_LS_13
+0x00028FF8 SQ_ALU_CONST_BUFFER_SIZE_LS_14
+0x00028FFC SQ_ALU_CONST_BUFFER_SIZE_LS_15
+0x0003CFF0 SQ_VTX_BASE_VTX_LOC
+0x0003CFF4 SQ_VTX_START_INST_LOC
+0x0003FF00 SQ_TEX_SAMPLER_CLEAR
+0x0003FF04 SQ_TEX_RESOURCE_CLEAR
+0x0003FF08 SQ_LOOP_BOOL_CLEAR
diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
index 1a41cb2..9e4240b 100644
--- a/drivers/gpu/drm/radeon/rs400.c
+++ b/drivers/gpu/drm/radeon/rs400.c
@@ -243,8 +243,6 @@ int rs400_mc_wait_for_idle(struct radeon_device *rdev)
 
 void rs400_gpu_init(struct radeon_device *rdev)
 {
-	/* FIXME: HDP same place on rs400 ? */
-	r100_hdp_reset(rdev);
 	/* FIXME: is this correct ? */
 	r420_pipes_init(rdev);
 	if (rs400_mc_wait_for_idle(rdev)) {
@@ -433,7 +431,7 @@ int rs400_resume(struct radeon_device *rdev)
 	/* setup MC before calling post tables */
 	rs400_mc_program(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -458,7 +456,6 @@ int rs400_suspend(struct radeon_device *rdev)
 
 void rs400_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -497,7 +494,7 @@ int rs400_init(struct radeon_device *rdev)
 			return r;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -509,8 +506,6 @@ int rs400_init(struct radeon_device *rdev)
 
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize memory controller */
 	rs400_mc_init(rdev);
 	/* Fence driver */
diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c
index a81bc7a..7bb4c3e 100644
--- a/drivers/gpu/drm/radeon/rs600.c
+++ b/drivers/gpu/drm/radeon/rs600.c
@@ -46,6 +46,136 @@
 void rs600_gpu_init(struct radeon_device *rdev);
 int rs600_mc_wait_for_idle(struct radeon_device *rdev);
 
+void rs600_pm_misc(struct radeon_device *rdev)
+{
+	int requested_index = rdev->pm.requested_power_state_index;
+	struct radeon_power_state *ps = &rdev->pm.power_state[requested_index];
+	struct radeon_voltage *voltage = &ps->clock_info[0].voltage;
+	u32 tmp, dyn_pwrmgt_sclk_length, dyn_sclk_vol_cntl;
+	u32 hdp_dyn_cntl, /*mc_host_dyn_cntl,*/ dyn_backbias_cntl;
+
+	if ((voltage->type == VOLTAGE_GPIO) && (voltage->gpio.valid)) {
+		if (ps->misc & ATOM_PM_MISCINFO_VOLTAGE_DROP_SUPPORT) {
+			tmp = RREG32(voltage->gpio.reg);
+			if (voltage->active_high)
+				tmp |= voltage->gpio.mask;
+			else
+				tmp &= ~(voltage->gpio.mask);
+			WREG32(voltage->gpio.reg, tmp);
+			if (voltage->delay)
+				udelay(voltage->delay);
+		} else {
+			tmp = RREG32(voltage->gpio.reg);
+			if (voltage->active_high)
+				tmp &= ~voltage->gpio.mask;
+			else
+				tmp |= voltage->gpio.mask;
+			WREG32(voltage->gpio.reg, tmp);
+			if (voltage->delay)
+				udelay(voltage->delay);
+		}
+	} else if (voltage->type == VOLTAGE_VDDC)
+		radeon_atom_set_voltage(rdev, voltage->vddc_id);
+
+	dyn_pwrmgt_sclk_length = RREG32_PLL(DYN_PWRMGT_SCLK_LENGTH);
+	dyn_pwrmgt_sclk_length &= ~REDUCED_POWER_SCLK_HILEN(0xf);
+	dyn_pwrmgt_sclk_length &= ~REDUCED_POWER_SCLK_LOLEN(0xf);
+	if (ps->misc & ATOM_PM_MISCINFO_ASIC_REDUCED_SPEED_SCLK_EN) {
+		if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_CLOCK_DIVIDER_BY_2) {
+			dyn_pwrmgt_sclk_length |= REDUCED_POWER_SCLK_HILEN(2);
+			dyn_pwrmgt_sclk_length |= REDUCED_POWER_SCLK_LOLEN(2);
+		} else if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_CLOCK_DIVIDER_BY_4) {
+			dyn_pwrmgt_sclk_length |= REDUCED_POWER_SCLK_HILEN(4);
+			dyn_pwrmgt_sclk_length |= REDUCED_POWER_SCLK_LOLEN(4);
+		}
+	} else {
+		dyn_pwrmgt_sclk_length |= REDUCED_POWER_SCLK_HILEN(1);
+		dyn_pwrmgt_sclk_length |= REDUCED_POWER_SCLK_LOLEN(1);
+	}
+	WREG32_PLL(DYN_PWRMGT_SCLK_LENGTH, dyn_pwrmgt_sclk_length);
+
+	dyn_sclk_vol_cntl = RREG32_PLL(DYN_SCLK_VOL_CNTL);
+	if (ps->misc & ATOM_PM_MISCINFO_ASIC_DYNAMIC_VOLTAGE_EN) {
+		dyn_sclk_vol_cntl |= IO_CG_VOLTAGE_DROP;
+		if (voltage->delay) {
+			dyn_sclk_vol_cntl |= VOLTAGE_DROP_SYNC;
+			dyn_sclk_vol_cntl |= VOLTAGE_DELAY_SEL(voltage->delay);
+		} else
+			dyn_sclk_vol_cntl &= ~VOLTAGE_DROP_SYNC;
+	} else
+		dyn_sclk_vol_cntl &= ~IO_CG_VOLTAGE_DROP;
+	WREG32_PLL(DYN_SCLK_VOL_CNTL, dyn_sclk_vol_cntl);
+
+	hdp_dyn_cntl = RREG32_PLL(HDP_DYN_CNTL);
+	if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_HDP_BLOCK_EN)
+		hdp_dyn_cntl &= ~HDP_FORCEON;
+	else
+		hdp_dyn_cntl |= HDP_FORCEON;
+	WREG32_PLL(HDP_DYN_CNTL, hdp_dyn_cntl);
+#if 0
+	/* mc_host_dyn seems to cause hangs from time to time */
+	mc_host_dyn_cntl = RREG32_PLL(MC_HOST_DYN_CNTL);
+	if (ps->misc & ATOM_PM_MISCINFO_DYNAMIC_MC_HOST_BLOCK_EN)
+		mc_host_dyn_cntl &= ~MC_HOST_FORCEON;
+	else
+		mc_host_dyn_cntl |= MC_HOST_FORCEON;
+	WREG32_PLL(MC_HOST_DYN_CNTL, mc_host_dyn_cntl);
+#endif
+	dyn_backbias_cntl = RREG32_PLL(DYN_BACKBIAS_CNTL);
+	if (ps->misc & ATOM_PM_MISCINFO2_DYNAMIC_BACK_BIAS_EN)
+		dyn_backbias_cntl |= IO_CG_BACKBIAS_EN;
+	else
+		dyn_backbias_cntl &= ~IO_CG_BACKBIAS_EN;
+	WREG32_PLL(DYN_BACKBIAS_CNTL, dyn_backbias_cntl);
+
+	/* set pcie lanes */
+	if ((rdev->flags & RADEON_IS_PCIE) &&
+	    !(rdev->flags & RADEON_IS_IGP) &&
+	    rdev->asic->set_pcie_lanes &&
+	    (ps->pcie_lanes !=
+	     rdev->pm.power_state[rdev->pm.current_power_state_index].pcie_lanes)) {
+		radeon_set_pcie_lanes(rdev,
+				      ps->pcie_lanes);
+		DRM_DEBUG("Setting: p: %d\n", ps->pcie_lanes);
+	}
+}
+
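
The GPIO branch of rs600_pm_misc() above reduces to one rule: when the power state requests a voltage drop the GPIO is driven to its asserted level, otherwise to its deasserted level, with active_high selecting which register value counts as asserted. A compact sketch of that rule over the same radeon_voltage fields:

static u32 voltage_gpio_apply(const struct radeon_voltage *v, u32 reg, bool drop)
{
	/* asserted means mask bits set when active_high, cleared otherwise */
	if (drop == !!v->active_high)
		return reg | v->gpio.mask;
	return reg & ~v->gpio.mask;
}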
+void rs600_pm_prepare(struct radeon_device *rdev)
+{
+	struct drm_device *ddev = rdev->ddev;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	u32 tmp;
+
+	/* disable any active CRTCs */
+	list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
+			tmp = RREG32(AVIVO_D1CRTC_CONTROL + radeon_crtc->crtc_offset);
+			tmp |= AVIVO_CRTC_DISP_READ_REQUEST_DISABLE;
+			WREG32(AVIVO_D1CRTC_CONTROL + radeon_crtc->crtc_offset, tmp);
+		}
+	}
+}
+
+void rs600_pm_finish(struct radeon_device *rdev)
+{
+	struct drm_device *ddev = rdev->ddev;
+	struct drm_crtc *crtc;
+	struct radeon_crtc *radeon_crtc;
+	u32 tmp;
+
+	/* enable any active CRTCs */
+	list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+		radeon_crtc = to_radeon_crtc(crtc);
+		if (radeon_crtc->enabled) {
+			tmp = RREG32(AVIVO_D1CRTC_CONTROL + radeon_crtc->crtc_offset);
+			tmp &= ~AVIVO_CRTC_DISP_READ_REQUEST_DISABLE;
+			WREG32(AVIVO_D1CRTC_CONTROL + radeon_crtc->crtc_offset, tmp);
+		}
+	}
+}
+
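
rs600_pm_prepare() and rs600_pm_finish() above bracket a reclock by gating CRTC memory read requests so the display does not fetch while clocks change. The intended call order, with radeon_pm_program_clocks() as a hypothetical stand-in for the actual clock switch:

static void rs600_reclock_sketch(struct radeon_device *rdev)
{
	rs600_pm_prepare(rdev);		/* stall display memory fetches */
	radeon_pm_program_clocks(rdev);	/* hypothetical: program new sclk/mclk */
	rs600_pm_finish(rdev);		/* resume display fetches */
}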
 /* hpd for digital panel detect/disconnect */
 bool rs600_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd)
 {
@@ -147,6 +277,78 @@ void rs600_hpd_fini(struct radeon_device *rdev)
 	}
 }
 
+void rs600_bm_disable(struct radeon_device *rdev)
+{
+	u32 tmp;
+
+	/* disable bus mastering */
+	pci_read_config_word(rdev->pdev, 0x4, (u16*)&tmp);
+	pci_write_config_word(rdev->pdev, 0x4, tmp & 0xFFFB);
+	mdelay(1);
+}
+
+int rs600_asic_reset(struct radeon_device *rdev)
+{
+	u32 status, tmp;
+	struct rv515_mc_save save;
+
+	/* Stops all mc clients */
+	rv515_mc_stop(rdev, &save);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	if (!G_000E40_GUI_ACTIVE(status)) {
+		return 0;
+	}
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* stop CP */
+	WREG32(RADEON_CP_CSQ_CNTL, 0);
+	tmp = RREG32(RADEON_CP_RB_CNTL);
+	WREG32(RADEON_CP_RB_CNTL, tmp | RADEON_RB_RPTR_WR_ENA);
+	WREG32(RADEON_CP_RB_RPTR_WR, 0);
+	WREG32(RADEON_CP_RB_WPTR, 0);
+	WREG32(RADEON_CP_RB_CNTL, tmp);
+	pci_save_state(rdev->pdev);
+	/* disable bus mastering */
+	rs600_bm_disable(rdev);
+	/* reset GA+VAP */
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_VAP(1) |
+					S_0000F0_SOFT_RESET_GA(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* reset CP */
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_CP(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* reset MC */
+	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_MC(1));
+	RREG32(R_0000F0_RBBM_SOFT_RESET);
+	mdelay(500);
+	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
+	mdelay(1);
+	status = RREG32(R_000E40_RBBM_STATUS);
+	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
+	/* restore PCI & busmastering */
+	pci_restore_state(rdev->pdev);
+	/* Check if GPU is idle */
+	if (G_000E40_GA_BUSY(status) || G_000E40_VAP_BUSY(status)) {
+		dev_err(rdev->dev, "failed to reset GPU\n");
+		rdev->gpu_lockup = true;
+		return -1;
+	}
+	rv515_mc_resume(rdev, &save);
+	dev_info(rdev->dev, "GPU reset succeeded\n");
+	return 0;
+}
+
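
rs600_asic_reset() above repeats the same pattern for the GA+VAP, CP and MC blocks: assert the soft-reset bits, read the register back so the write posts, wait, then deassert. That sequence could be factored as below; a sketch only, assuming the R_0000F0_RBBM_SOFT_RESET definitions from rs600d.h:

static void rs600_soft_reset_block(struct radeon_device *rdev, u32 bits)
{
	WREG32(R_0000F0_RBBM_SOFT_RESET, bits);
	(void)RREG32(R_0000F0_RBBM_SOFT_RESET);	/* read back to post the write */
	mdelay(500);
	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
	mdelay(1);
}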
 /*
  * GART.
  */
@@ -310,6 +512,9 @@ int rs600_irq_set(struct radeon_device *rdev)
 	if (rdev->irq.sw_int) {
 		tmp |= S_000040_SW_INT_EN(1);
 	}
+	if (rdev->irq.gui_idle) {
+		tmp |= S_000040_GUI_IDLE(1);
+	}
 	if (rdev->irq.crtc_vblank_int[0]) {
 		mode_int |= S_006540_D1MODE_VBLANK_INT_MASK(1);
 	}
@@ -332,9 +537,15 @@ int rs600_irq_set(struct radeon_device *rdev)
 static inline uint32_t rs600_irq_ack(struct radeon_device *rdev, u32 *r500_disp_int)
 {
 	uint32_t irqs = RREG32(R_000044_GEN_INT_STATUS);
-	uint32_t irq_mask = ~C_000044_SW_INT;
+	uint32_t irq_mask = S_000044_SW_INT(1);
 	u32 tmp;
 
+	/* the interrupt works, but the status bit is permanently asserted */
+	if (rdev->irq.gui_idle && radeon_gui_idle(rdev)) {
+		if (!rdev->irq.gui_idle_acked)
+			irq_mask |= S_000044_GUI_IDLE_STAT(1);
+	}
+
 	if (G_000044_DISPLAY_INT_STAT(irqs)) {
 		*r500_disp_int = RREG32(R_007EDC_DISP_INTERRUPT_STATUS);
 		if (G_007EDC_LB_D1_VBLANK_INTERRUPT(*r500_disp_int)) {
@@ -382,6 +593,9 @@ int rs600_irq_process(struct radeon_device *rdev)
 	uint32_t r500_disp_int;
 	bool queue_hotplug = false;
 
+	/* reset gui idle ack.  the status bit is broken */
+	rdev->irq.gui_idle_acked = false;
+
 	status = rs600_irq_ack(rdev, &r500_disp_int);
 	if (!status && !r500_disp_int) {
 		return IRQ_NONE;
@@ -390,6 +604,12 @@ int rs600_irq_process(struct radeon_device *rdev)
 		/* SW interrupt */
 		if (G_000044_SW_INT(status))
 			radeon_fence_process(rdev);
+		/* GUI idle */
+		if (G_000040_GUI_IDLE(status)) {
+			rdev->irq.gui_idle_acked = true;
+			rdev->pm.gui_idle = true;
+			wake_up(&rdev->irq.idle_queue);
+		}
 		/* Vertical blank interrupts */
 		if (G_007EDC_LB_D1_VBLANK_INTERRUPT(r500_disp_int)) {
 			drm_handle_vblank(rdev->ddev, 0);
@@ -411,6 +631,8 @@ int rs600_irq_process(struct radeon_device *rdev)
 		}
 		status = rs600_irq_ack(rdev, &r500_disp_int);
 	}
+	/* reset gui idle ack.  the status bit is broken */
+	rdev->irq.gui_idle_acked = false;
 	if (queue_hotplug)
 		queue_work(rdev->wq, &rdev->hotplug_work);
 	if (rdev->msi_enabled) {
@@ -454,7 +676,6 @@ int rs600_mc_wait_for_idle(struct radeon_device *rdev)
 
 void rs600_gpu_init(struct radeon_device *rdev)
 {
-	r100_hdp_reset(rdev);
 	r420_pipes_init(rdev);
 	/* Wait for mc idle */
 	if (rs600_mc_wait_for_idle(rdev))
@@ -601,7 +822,7 @@ int rs600_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	rv515_clock_startup(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -626,7 +847,6 @@ int rs600_suspend(struct radeon_device *rdev)
 
 void rs600_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -664,7 +884,7 @@ int rs600_init(struct radeon_device *rdev)
 		return -EINVAL;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -676,8 +896,6 @@ int rs600_init(struct radeon_device *rdev)
 
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize memory controller */
 	rs600_mc_init(rdev);
 	rs600_debugfs(rdev);
diff --git a/drivers/gpu/drm/radeon/rs600d.h b/drivers/gpu/drm/radeon/rs600d.h
index e52d269..a27c13a 100644
--- a/drivers/gpu/drm/radeon/rs600d.h
+++ b/drivers/gpu/drm/radeon/rs600d.h
@@ -178,6 +178,52 @@
 #define   S_000074_MC_IND_DATA(x)                      (((x) & 0xFFFFFFFF) << 0)
 #define   G_000074_MC_IND_DATA(x)                      (((x) >> 0) & 0xFFFFFFFF)
 #define   C_000074_MC_IND_DATA                         0x00000000
+#define R_0000F0_RBBM_SOFT_RESET                     0x0000F0
+#define   S_0000F0_SOFT_RESET_CP(x)                    (((x) & 0x1) << 0)
+#define   G_0000F0_SOFT_RESET_CP(x)                    (((x) >> 0) & 0x1)
+#define   C_0000F0_SOFT_RESET_CP                       0xFFFFFFFE
+#define   S_0000F0_SOFT_RESET_HI(x)                    (((x) & 0x1) << 1)
+#define   G_0000F0_SOFT_RESET_HI(x)                    (((x) >> 1) & 0x1)
+#define   C_0000F0_SOFT_RESET_HI                       0xFFFFFFFD
+#define   S_0000F0_SOFT_RESET_VAP(x)                   (((x) & 0x1) << 2)
+#define   G_0000F0_SOFT_RESET_VAP(x)                   (((x) >> 2) & 0x1)
+#define   C_0000F0_SOFT_RESET_VAP                      0xFFFFFFFB
+#define   S_0000F0_SOFT_RESET_RE(x)                    (((x) & 0x1) << 3)
+#define   G_0000F0_SOFT_RESET_RE(x)                    (((x) >> 3) & 0x1)
+#define   C_0000F0_SOFT_RESET_RE                       0xFFFFFFF7
+#define   S_0000F0_SOFT_RESET_PP(x)                    (((x) & 0x1) << 4)
+#define   G_0000F0_SOFT_RESET_PP(x)                    (((x) >> 4) & 0x1)
+#define   C_0000F0_SOFT_RESET_PP                       0xFFFFFFEF
+#define   S_0000F0_SOFT_RESET_E2(x)                    (((x) & 0x1) << 5)
+#define   G_0000F0_SOFT_RESET_E2(x)                    (((x) >> 5) & 0x1)
+#define   C_0000F0_SOFT_RESET_E2                       0xFFFFFFDF
+#define   S_0000F0_SOFT_RESET_RB(x)                    (((x) & 0x1) << 6)
+#define   G_0000F0_SOFT_RESET_RB(x)                    (((x) >> 6) & 0x1)
+#define   C_0000F0_SOFT_RESET_RB                       0xFFFFFFBF
+#define   S_0000F0_SOFT_RESET_HDP(x)                   (((x) & 0x1) << 7)
+#define   G_0000F0_SOFT_RESET_HDP(x)                   (((x) >> 7) & 0x1)
+#define   C_0000F0_SOFT_RESET_HDP                      0xFFFFFF7F
+#define   S_0000F0_SOFT_RESET_MC(x)                    (((x) & 0x1) << 8)
+#define   G_0000F0_SOFT_RESET_MC(x)                    (((x) >> 8) & 0x1)
+#define   C_0000F0_SOFT_RESET_MC                       0xFFFFFEFF
+#define   S_0000F0_SOFT_RESET_AIC(x)                   (((x) & 0x1) << 9)
+#define   G_0000F0_SOFT_RESET_AIC(x)                   (((x) >> 9) & 0x1)
+#define   C_0000F0_SOFT_RESET_AIC                      0xFFFFFDFF
+#define   S_0000F0_SOFT_RESET_VIP(x)                   (((x) & 0x1) << 10)
+#define   G_0000F0_SOFT_RESET_VIP(x)                   (((x) >> 10) & 0x1)
+#define   C_0000F0_SOFT_RESET_VIP                      0xFFFFFBFF
+#define   S_0000F0_SOFT_RESET_DISP(x)                  (((x) & 0x1) << 11)
+#define   G_0000F0_SOFT_RESET_DISP(x)                  (((x) >> 11) & 0x1)
+#define   C_0000F0_SOFT_RESET_DISP                     0xFFFFF7FF
+#define   S_0000F0_SOFT_RESET_CG(x)                    (((x) & 0x1) << 12)
+#define   G_0000F0_SOFT_RESET_CG(x)                    (((x) >> 12) & 0x1)
+#define   C_0000F0_SOFT_RESET_CG                       0xFFFFEFFF
+#define   S_0000F0_SOFT_RESET_GA(x)                    (((x) & 0x1) << 13)
+#define   G_0000F0_SOFT_RESET_GA(x)                    (((x) >> 13) & 0x1)
+#define   C_0000F0_SOFT_RESET_GA                       0xFFFFDFFF
+#define   S_0000F0_SOFT_RESET_IDCT(x)                  (((x) & 0x1) << 14)
+#define   G_0000F0_SOFT_RESET_IDCT(x)                  (((x) >> 14) & 0x1)
+#define   C_0000F0_SOFT_RESET_IDCT                     0xFFFFBFFF
 #define R_000134_HDP_FB_LOCATION                     0x000134
 #define   S_000134_HDP_FB_START(x)                     (((x) & 0xFFFF) << 0)
 #define   G_000134_HDP_FB_START(x)                     (((x) >> 0) & 0xFFFF)
@@ -588,4 +634,38 @@
 #define   G_006D4C_D2MODE_PRIORITY_B_FORCE_MASK(x)     (((x) >> 24) & 0x1)
 #define   C_006D4C_D2MODE_PRIORITY_B_FORCE_MASK        0xFEFFFFFF
 
+/* PLL regs */
+#define GENERAL_PWRMGT                                 0x8
+#define   GLOBAL_PWRMGT_EN                             (1 << 0)
+#define   MOBILE_SU                                    (1 << 2)
+#define DYN_PWRMGT_SCLK_LENGTH                         0xc
+#define   NORMAL_POWER_SCLK_HILEN(x)                   ((x) << 0)
+#define   NORMAL_POWER_SCLK_LOLEN(x)                   ((x) << 4)
+#define   REDUCED_POWER_SCLK_HILEN(x)                  ((x) << 8)
+#define   REDUCED_POWER_SCLK_LOLEN(x)                  ((x) << 12)
+#define   POWER_D1_SCLK_HILEN(x)                       ((x) << 16)
+#define   POWER_D1_SCLK_LOLEN(x)                       ((x) << 20)
+#define   STATIC_SCREEN_HILEN(x)                       ((x) << 24)
+#define   STATIC_SCREEN_LOLEN(x)                       ((x) << 28)
+#define DYN_SCLK_VOL_CNTL                              0xe
+#define   IO_CG_VOLTAGE_DROP                           (1 << 0)
+#define   VOLTAGE_DROP_SYNC                            (1 << 2)
+#define   VOLTAGE_DELAY_SEL(x)                         ((x) << 3)
+#define HDP_DYN_CNTL                                   0x10
+#define   HDP_FORCEON                                  (1 << 0)
+#define MC_HOST_DYN_CNTL                               0x1e
+#define   MC_HOST_FORCEON                              (1 << 0)
+#define DYN_BACKBIAS_CNTL                              0x29
+#define   IO_CG_BACKBIAS_EN                            (1 << 0)
+
+/* mmreg */
+#define DOUT_POWER_MANAGEMENT_CNTL                     0x7ee0
+#define   PWRDN_WAIT_BUSY_OFF                          (1 << 0)
+#define   PWRDN_WAIT_PWRSEQ_OFF                        (1 << 4)
+#define   PWRDN_WAIT_PPLL_OFF                          (1 << 8)
+#define   PWRUP_WAIT_PPLL_ON                           (1 << 12)
+#define   PWRUP_WAIT_MEM_INIT_DONE                     (1 << 16)
+#define   PM_ASSERT_RESET                              (1 << 20)
+#define   PM_PWRDN_PPLL                                (1 << 24)
+
 #endif
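The RBBM_SOFT_RESET definitions added above follow this header's usual convention: S_ builds a field value, G_ extracts one, and C_ is the AND-mask that clears it. A hedged usage sketch (not part of the patch; RREG32/WREG32 are the driver's register accessors):

	u32 tmp = RREG32(R_0000F0_RBBM_SOFT_RESET);
	tmp |= S_0000F0_SOFT_RESET_GA(1) | S_0000F0_SOFT_RESET_VAP(1); /* assert */
	WREG32(R_0000F0_RBBM_SOFT_RESET, tmp);
	tmp &= C_0000F0_SOFT_RESET_GA;	/* deassert GA via the clear mask */
	WREG32(R_0000F0_RBBM_SOFT_RESET, tmp);
	if (G_0000F0_SOFT_RESET_VAP(RREG32(R_0000F0_RBBM_SOFT_RESET)))
		/* VAP reset still asserted */;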
diff --git a/drivers/gpu/drm/radeon/rs690.c b/drivers/gpu/drm/radeon/rs690.c
index bbf3da7..bcc3319 100644
--- a/drivers/gpu/drm/radeon/rs690.c
+++ b/drivers/gpu/drm/radeon/rs690.c
@@ -48,8 +48,6 @@ static int rs690_mc_wait_for_idle(struct radeon_device *rdev)
 
 static void rs690_gpu_init(struct radeon_device *rdev)
 {
-	/* FIXME: HDP same place on rs690 ? */
-	r100_hdp_reset(rdev);
 	/* FIXME: is this correct ? */
 	r420_pipes_init(rdev);
 	if (rs690_mc_wait_for_idle(rdev)) {
@@ -78,59 +76,59 @@ void rs690_pm_info(struct radeon_device *rdev)
 		/* Get various system information from the BIOS */
 		switch (crev) {
 		case 1:
-			tmp.full = rfixed_const(100);
-			rdev->pm.igp_sideport_mclk.full = rfixed_const(info->info.ulBootUpMemoryClock);
-			rdev->pm.igp_sideport_mclk.full = rfixed_div(rdev->pm.igp_sideport_mclk, tmp);
-			rdev->pm.igp_system_mclk.full = rfixed_const(le16_to_cpu(info->info.usK8MemoryClock));
-			rdev->pm.igp_ht_link_clk.full = rfixed_const(le16_to_cpu(info->info.usFSBClock));
-			rdev->pm.igp_ht_link_width.full = rfixed_const(info->info.ucHTLinkWidth);
+			tmp.full = dfixed_const(100);
+			rdev->pm.igp_sideport_mclk.full = dfixed_const(info->info.ulBootUpMemoryClock);
+			rdev->pm.igp_sideport_mclk.full = dfixed_div(rdev->pm.igp_sideport_mclk, tmp);
+			rdev->pm.igp_system_mclk.full = dfixed_const(le16_to_cpu(info->info.usK8MemoryClock));
+			rdev->pm.igp_ht_link_clk.full = dfixed_const(le16_to_cpu(info->info.usFSBClock));
+			rdev->pm.igp_ht_link_width.full = dfixed_const(info->info.ucHTLinkWidth);
 			break;
 		case 2:
-			tmp.full = rfixed_const(100);
-			rdev->pm.igp_sideport_mclk.full = rfixed_const(info->info_v2.ulBootUpSidePortClock);
-			rdev->pm.igp_sideport_mclk.full = rfixed_div(rdev->pm.igp_sideport_mclk, tmp);
-			rdev->pm.igp_system_mclk.full = rfixed_const(info->info_v2.ulBootUpUMAClock);
-			rdev->pm.igp_system_mclk.full = rfixed_div(rdev->pm.igp_system_mclk, tmp);
-			rdev->pm.igp_ht_link_clk.full = rfixed_const(info->info_v2.ulHTLinkFreq);
-			rdev->pm.igp_ht_link_clk.full = rfixed_div(rdev->pm.igp_ht_link_clk, tmp);
-			rdev->pm.igp_ht_link_width.full = rfixed_const(le16_to_cpu(info->info_v2.usMinHTLinkWidth));
+			tmp.full = dfixed_const(100);
+			rdev->pm.igp_sideport_mclk.full = dfixed_const(info->info_v2.ulBootUpSidePortClock);
+			rdev->pm.igp_sideport_mclk.full = dfixed_div(rdev->pm.igp_sideport_mclk, tmp);
+			rdev->pm.igp_system_mclk.full = dfixed_const(info->info_v2.ulBootUpUMAClock);
+			rdev->pm.igp_system_mclk.full = dfixed_div(rdev->pm.igp_system_mclk, tmp);
+			rdev->pm.igp_ht_link_clk.full = dfixed_const(info->info_v2.ulHTLinkFreq);
+			rdev->pm.igp_ht_link_clk.full = dfixed_div(rdev->pm.igp_ht_link_clk, tmp);
+			rdev->pm.igp_ht_link_width.full = dfixed_const(le16_to_cpu(info->info_v2.usMinHTLinkWidth));
 			break;
 		default:
-			tmp.full = rfixed_const(100);
+			tmp.full = dfixed_const(100);
 			/* We assume the slowest possible clock, i.e. the worst case */
 			/* DDR 333MHz */
-			rdev->pm.igp_sideport_mclk.full = rfixed_const(333);
+			rdev->pm.igp_sideport_mclk.full = dfixed_const(333);
 			/* FIXME: system clock ? */
-			rdev->pm.igp_system_mclk.full = rfixed_const(100);
-			rdev->pm.igp_system_mclk.full = rfixed_div(rdev->pm.igp_system_mclk, tmp);
-			rdev->pm.igp_ht_link_clk.full = rfixed_const(200);
-			rdev->pm.igp_ht_link_width.full = rfixed_const(8);
+			rdev->pm.igp_system_mclk.full = dfixed_const(100);
+			rdev->pm.igp_system_mclk.full = dfixed_div(rdev->pm.igp_system_mclk, tmp);
+			rdev->pm.igp_ht_link_clk.full = dfixed_const(200);
+			rdev->pm.igp_ht_link_width.full = dfixed_const(8);
 			DRM_ERROR("No integrated system info for your GPU, using safe default\n");
 			break;
 		}
 	} else {
-		tmp.full = rfixed_const(100);
+		tmp.full = dfixed_const(100);
 		/* We assume the slowest possible clock, i.e. the worst case */
 		/* DDR 333MHz */
-		rdev->pm.igp_sideport_mclk.full = rfixed_const(333);
+		rdev->pm.igp_sideport_mclk.full = dfixed_const(333);
 		/* FIXME: system clock ? */
-		rdev->pm.igp_system_mclk.full = rfixed_const(100);
-		rdev->pm.igp_system_mclk.full = rfixed_div(rdev->pm.igp_system_mclk, tmp);
-		rdev->pm.igp_ht_link_clk.full = rfixed_const(200);
-		rdev->pm.igp_ht_link_width.full = rfixed_const(8);
+		rdev->pm.igp_system_mclk.full = dfixed_const(100);
+		rdev->pm.igp_system_mclk.full = dfixed_div(rdev->pm.igp_system_mclk, tmp);
+		rdev->pm.igp_ht_link_clk.full = dfixed_const(200);
+		rdev->pm.igp_ht_link_width.full = dfixed_const(8);
 		DRM_ERROR("No integrated system info for your GPU, using safe default\n");
 	}
 	/* Compute various bandwidth */
 	/* k8_bandwidth = (memory_clk / 2) * 2 * 8 * 0.5 = memory_clk * 4  */
-	tmp.full = rfixed_const(4);
-	rdev->pm.k8_bandwidth.full = rfixed_mul(rdev->pm.igp_system_mclk, tmp);
+	tmp.full = dfixed_const(4);
+	rdev->pm.k8_bandwidth.full = dfixed_mul(rdev->pm.igp_system_mclk, tmp);
 	/* ht_bandwidth = ht_clk * 2 * ht_width / 8 * 0.8
 	 *              = ht_clk * ht_width / 5
 	 */
-	tmp.full = rfixed_const(5);
-	rdev->pm.ht_bandwidth.full = rfixed_mul(rdev->pm.igp_ht_link_clk,
+	tmp.full = dfixed_const(5);
+	rdev->pm.ht_bandwidth.full = dfixed_mul(rdev->pm.igp_ht_link_clk,
 						rdev->pm.igp_ht_link_width);
-	rdev->pm.ht_bandwidth.full = rfixed_div(rdev->pm.ht_bandwidth, tmp);
+	rdev->pm.ht_bandwidth.full = dfixed_div(rdev->pm.ht_bandwidth, tmp);
 	if (tmp.full < rdev->pm.max_bandwidth.full) {
 		/* HT link is a limiting factor */
 		rdev->pm.max_bandwidth.full = tmp.full;
@@ -138,10 +136,10 @@ void rs690_pm_info(struct radeon_device *rdev)
 	/* sideport_bandwidth = (sideport_clk / 2) * 2 * 2 * 0.7
 	 *                    = (sideport_clk * 14) / 10
 	 */
-	tmp.full = rfixed_const(14);
-	rdev->pm.sideport_bandwidth.full = rfixed_mul(rdev->pm.igp_sideport_mclk, tmp);
-	tmp.full = rfixed_const(10);
-	rdev->pm.sideport_bandwidth.full = rfixed_div(rdev->pm.sideport_bandwidth, tmp);
+	tmp.full = dfixed_const(14);
+	rdev->pm.sideport_bandwidth.full = dfixed_mul(rdev->pm.igp_sideport_mclk, tmp);
+	tmp.full = dfixed_const(10);
+	rdev->pm.sideport_bandwidth.full = dfixed_div(rdev->pm.sideport_bandwidth, tmp);
 }
 
 void rs690_mc_init(struct radeon_device *rdev)
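As a sanity check on the bandwidth formulas in the comments above, plugging in the "safe default" numbers (system_mclk = 100, ht_clk = 200, ht_width = 8, sideport_mclk = 333) gives, ignoring the 20.12 fixed-point scaling:

	/* worked example, plain integers, for illustration only:
	 *   k8_bandwidth       = memory_clk * 4        = 100 * 4     = 400
	 *   ht_bandwidth       = ht_clk * ht_width / 5 = 200 * 8 / 5 = 320
	 *   sideport_bandwidth = sideport_clk * 14/10  = 333 * 14/10 = 466
	 */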
@@ -241,20 +239,20 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 		return;
 	}
 
-	if (crtc->vsc.full > rfixed_const(2))
-		wm->num_line_pair.full = rfixed_const(2);
+	if (crtc->vsc.full > dfixed_const(2))
+		wm->num_line_pair.full = dfixed_const(2);
 	else
-		wm->num_line_pair.full = rfixed_const(1);
-
-	b.full = rfixed_const(mode->crtc_hdisplay);
-	c.full = rfixed_const(256);
-	a.full = rfixed_div(b, c);
-	request_fifo_depth.full = rfixed_mul(a, wm->num_line_pair);
-	request_fifo_depth.full = rfixed_ceil(request_fifo_depth);
-	if (a.full < rfixed_const(4)) {
+		wm->num_line_pair.full = dfixed_const(1);
+
+	b.full = dfixed_const(mode->crtc_hdisplay);
+	c.full = dfixed_const(256);
+	a.full = dfixed_div(b, c);
+	request_fifo_depth.full = dfixed_mul(a, wm->num_line_pair);
+	request_fifo_depth.full = dfixed_ceil(request_fifo_depth);
+	if (a.full < dfixed_const(4)) {
 		wm->lb_request_fifo_depth = 4;
 	} else {
-		wm->lb_request_fifo_depth = rfixed_trunc(request_fifo_depth);
+		wm->lb_request_fifo_depth = dfixed_trunc(request_fifo_depth);
 	}
 
 	/* Determine consumption rate
@@ -263,23 +261,23 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 *  vsc = vertical scaling ratio, defined as source/destination
 	 *  hsc = horizontal scaling ratio, defined as source/destination
 	 */
-	a.full = rfixed_const(mode->clock);
-	b.full = rfixed_const(1000);
-	a.full = rfixed_div(a, b);
-	pclk.full = rfixed_div(b, a);
+	a.full = dfixed_const(mode->clock);
+	b.full = dfixed_const(1000);
+	a.full = dfixed_div(a, b);
+	pclk.full = dfixed_div(b, a);
 	if (crtc->rmx_type != RMX_OFF) {
-		b.full = rfixed_const(2);
+		b.full = dfixed_const(2);
 		if (crtc->vsc.full > b.full)
 			b.full = crtc->vsc.full;
-		b.full = rfixed_mul(b, crtc->hsc);
-		c.full = rfixed_const(2);
-		b.full = rfixed_div(b, c);
-		consumption_time.full = rfixed_div(pclk, b);
+		b.full = dfixed_mul(b, crtc->hsc);
+		c.full = dfixed_const(2);
+		b.full = dfixed_div(b, c);
+		consumption_time.full = dfixed_div(pclk, b);
 	} else {
 		consumption_time.full = pclk.full;
 	}
-	a.full = rfixed_const(1);
-	wm->consumption_rate.full = rfixed_div(a, consumption_time);
+	a.full = dfixed_const(1);
+	wm->consumption_rate.full = dfixed_div(a, consumption_time);
 
 
 	/* Determine line time
@@ -287,18 +285,18 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 *  LineTime = total number of horizontal pixels
 	 *  pclk = pixel clock period(ns)
 	 */
-	a.full = rfixed_const(crtc->base.mode.crtc_htotal);
-	line_time.full = rfixed_mul(a, pclk);
+	a.full = dfixed_const(crtc->base.mode.crtc_htotal);
+	line_time.full = dfixed_mul(a, pclk);
 
 	/* Determine active time
 	 *  ActiveTime = time of active region of display within one line,
 	 *  hactive = total number of horizontal active pixels
 	 *  htotal = total number of horizontal pixels
 	 */
-	a.full = rfixed_const(crtc->base.mode.crtc_htotal);
-	b.full = rfixed_const(crtc->base.mode.crtc_hdisplay);
-	wm->active_time.full = rfixed_mul(line_time, b);
-	wm->active_time.full = rfixed_div(wm->active_time, a);
+	a.full = dfixed_const(crtc->base.mode.crtc_htotal);
+	b.full = dfixed_const(crtc->base.mode.crtc_hdisplay);
+	wm->active_time.full = dfixed_mul(line_time, b);
+	wm->active_time.full = dfixed_div(wm->active_time, a);
 
 	/* Maximum bandwidth is the minimum bandwidth of all components */
 	rdev->pm.max_bandwidth = rdev->pm.core_bandwidth;
@@ -306,8 +304,8 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 		if (rdev->pm.max_bandwidth.full > rdev->pm.sideport_bandwidth.full &&
 			rdev->pm.sideport_bandwidth.full)
 			rdev->pm.max_bandwidth = rdev->pm.sideport_bandwidth;
-		read_delay_latency.full = rfixed_const(370 * 800 * 1000);
-		read_delay_latency.full = rfixed_div(read_delay_latency,
+		read_delay_latency.full = dfixed_const(370 * 800 * 1000);
+		read_delay_latency.full = dfixed_div(read_delay_latency,
 			rdev->pm.igp_sideport_mclk);
 	} else {
 		if (rdev->pm.max_bandwidth.full > rdev->pm.k8_bandwidth.full &&
@@ -316,23 +314,23 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 		if (rdev->pm.max_bandwidth.full > rdev->pm.ht_bandwidth.full &&
 			rdev->pm.ht_bandwidth.full)
 			rdev->pm.max_bandwidth = rdev->pm.ht_bandwidth;
-		read_delay_latency.full = rfixed_const(5000);
+		read_delay_latency.full = dfixed_const(5000);
 	}
 
 	/* sclk = system clocks(ns) = 1000 / max_bandwidth / 16 */
-	a.full = rfixed_const(16);
-	rdev->pm.sclk.full = rfixed_mul(rdev->pm.max_bandwidth, a);
-	a.full = rfixed_const(1000);
-	rdev->pm.sclk.full = rfixed_div(a, rdev->pm.sclk);
+	a.full = dfixed_const(16);
+	rdev->pm.sclk.full = dfixed_mul(rdev->pm.max_bandwidth, a);
+	a.full = dfixed_const(1000);
+	rdev->pm.sclk.full = dfixed_div(a, rdev->pm.sclk);
 	/* Determine chunk time
 	 * ChunkTime = the time it takes the DCP to send one chunk of data
 	 * to the LB which consists of pipeline delay and inter chunk gap
 	 * sclk = system clock(ns)
 	 */
-	a.full = rfixed_const(256 * 13);
-	chunk_time.full = rfixed_mul(rdev->pm.sclk, a);
-	a.full = rfixed_const(10);
-	chunk_time.full = rfixed_div(chunk_time, a);
+	a.full = dfixed_const(256 * 13);
+	chunk_time.full = dfixed_mul(rdev->pm.sclk, a);
+	a.full = dfixed_const(10);
+	chunk_time.full = dfixed_div(chunk_time, a);
 
 	/* Determine the worst case latency
 	 * NumLinePair = Number of line pairs to request(1=2 lines, 2=4 lines)
@@ -342,13 +340,13 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 * ChunkTime = time it takes the DCP to send one chunk of data to the LB
 	 *             which consists of pipeline delay and inter chunk gap
 	 */
-	if (rfixed_trunc(wm->num_line_pair) > 1) {
-		a.full = rfixed_const(3);
-		wm->worst_case_latency.full = rfixed_mul(a, chunk_time);
+	if (dfixed_trunc(wm->num_line_pair) > 1) {
+		a.full = dfixed_const(3);
+		wm->worst_case_latency.full = dfixed_mul(a, chunk_time);
 		wm->worst_case_latency.full += read_delay_latency.full;
 	} else {
-		a.full = rfixed_const(2);
-		wm->worst_case_latency.full = rfixed_mul(a, chunk_time);
+		a.full = dfixed_const(2);
+		wm->worst_case_latency.full = dfixed_mul(a, chunk_time);
 		wm->worst_case_latency.full += read_delay_latency.full;
 	}
 
@@ -362,34 +360,34 @@ void rs690_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 *              of data to the LB which consists of
 	 *  pipeline delay and inter chunk gap
 	 */
-	if ((2+wm->lb_request_fifo_depth) >= rfixed_trunc(request_fifo_depth)) {
+	if ((2+wm->lb_request_fifo_depth) >= dfixed_trunc(request_fifo_depth)) {
 		tolerable_latency.full = line_time.full;
 	} else {
-		tolerable_latency.full = rfixed_const(wm->lb_request_fifo_depth - 2);
+		tolerable_latency.full = dfixed_const(wm->lb_request_fifo_depth - 2);
 		tolerable_latency.full = request_fifo_depth.full - tolerable_latency.full;
-		tolerable_latency.full = rfixed_mul(tolerable_latency, chunk_time);
+		tolerable_latency.full = dfixed_mul(tolerable_latency, chunk_time);
 		tolerable_latency.full = line_time.full - tolerable_latency.full;
 	}
 	/* We assume worst case 32bits (4 bytes) */
-	wm->dbpp.full = rfixed_const(4 * 8);
+	wm->dbpp.full = dfixed_const(4 * 8);
 
 	/* Determine the maximum priority mark
 	 *  width = viewport width in pixels
 	 */
-	a.full = rfixed_const(16);
-	wm->priority_mark_max.full = rfixed_const(crtc->base.mode.crtc_hdisplay);
-	wm->priority_mark_max.full = rfixed_div(wm->priority_mark_max, a);
-	wm->priority_mark_max.full = rfixed_ceil(wm->priority_mark_max);
+	a.full = dfixed_const(16);
+	wm->priority_mark_max.full = dfixed_const(crtc->base.mode.crtc_hdisplay);
+	wm->priority_mark_max.full = dfixed_div(wm->priority_mark_max, a);
+	wm->priority_mark_max.full = dfixed_ceil(wm->priority_mark_max);
 
 	/* Determine estimated width */
 	estimated_width.full = tolerable_latency.full - wm->worst_case_latency.full;
-	estimated_width.full = rfixed_div(estimated_width, consumption_time);
-	if (rfixed_trunc(estimated_width) > crtc->base.mode.crtc_hdisplay) {
-		wm->priority_mark.full = rfixed_const(10);
+	estimated_width.full = dfixed_div(estimated_width, consumption_time);
+	if (dfixed_trunc(estimated_width) > crtc->base.mode.crtc_hdisplay) {
+		wm->priority_mark.full = dfixed_const(10);
 	} else {
-		a.full = rfixed_const(16);
-		wm->priority_mark.full = rfixed_div(estimated_width, a);
-		wm->priority_mark.full = rfixed_ceil(wm->priority_mark);
+		a.full = dfixed_const(16);
+		wm->priority_mark.full = dfixed_div(estimated_width, a);
+		wm->priority_mark.full = dfixed_ceil(wm->priority_mark);
 		wm->priority_mark.full = wm->priority_mark_max.full - wm->priority_mark.full;
 	}
 }
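The mechanical rfixed_* -> dfixed_* rename running through these hunks tracks the move of radeon's 20.12 fixed-point helpers into shared DRM code (drm_fixed.h). A simplified sketch of the assumed semantics, using hypothetical sketch_* names rather than restating the real API:

	typedef struct { u32 full; } fixed20_12; /* 20 integer bits, 12 fraction bits */

	static inline u32 sketch_const(u32 a)        { return a << 12; }
	static inline u32 sketch_trunc(fixed20_12 a) { return a.full >> 12; }
	static inline u32 sketch_mul(fixed20_12 a, fixed20_12 b)
	{
		return (u32)(((u64)a.full * b.full) >> 12);
	}
	static inline u32 sketch_div(fixed20_12 a, fixed20_12 b)
	{
		return (u32)(((u64)a.full << 12) / b.full);
	}

So dfixed_const(100) stores 100 << 12, and dividing two such values keeps 12 fractional bits, which is why integer ratios like 333/100 in the watermark math above survive with usable precision.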
@@ -441,58 +439,58 @@ void rs690_bandwidth_update(struct radeon_device *rdev)
 	WREG32(R_006D58_LB_MAX_REQ_OUTSTANDING, tmp);
 
 	if (mode0 && mode1) {
-		if (rfixed_trunc(wm0.dbpp) > 64)
-			a.full = rfixed_mul(wm0.dbpp, wm0.num_line_pair);
+		if (dfixed_trunc(wm0.dbpp) > 64)
+			a.full = dfixed_mul(wm0.dbpp, wm0.num_line_pair);
 		else
 			a.full = wm0.num_line_pair.full;
-		if (rfixed_trunc(wm1.dbpp) > 64)
-			b.full = rfixed_mul(wm1.dbpp, wm1.num_line_pair);
+		if (dfixed_trunc(wm1.dbpp) > 64)
+			b.full = dfixed_mul(wm1.dbpp, wm1.num_line_pair);
 		else
 			b.full = wm1.num_line_pair.full;
 		a.full += b.full;
-		fill_rate.full = rfixed_div(wm0.sclk, a);
+		fill_rate.full = dfixed_div(wm0.sclk, a);
 		if (wm0.consumption_rate.full > fill_rate.full) {
 			b.full = wm0.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm0.active_time);
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			b.full = dfixed_mul(b, wm0.active_time);
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
 			a.full = a.full + b.full;
-			b.full = rfixed_const(16 * 1000);
-			priority_mark02.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark02.full = dfixed_div(a, b);
 		} else {
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark02.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark02.full = dfixed_div(a, b);
 		}
 		if (wm1.consumption_rate.full > fill_rate.full) {
 			b.full = wm1.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm1.active_time);
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			b.full = dfixed_mul(b, wm1.active_time);
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
 			a.full = a.full + b.full;
-			b.full = rfixed_const(16 * 1000);
-			priority_mark12.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark12.full = dfixed_div(a, b);
 		} else {
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark12.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark12.full = dfixed_div(a, b);
 		}
 		if (wm0.priority_mark.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark.full;
-		if (rfixed_trunc(priority_mark02) < 0)
+		if (dfixed_trunc(priority_mark02) < 0)
 			priority_mark02.full = 0;
 		if (wm0.priority_mark_max.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark_max.full;
 		if (wm1.priority_mark.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark.full;
-		if (rfixed_trunc(priority_mark12) < 0)
+		if (dfixed_trunc(priority_mark12) < 0)
 			priority_mark12.full = 0;
 		if (wm1.priority_mark_max.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark_max.full;
-		d1mode_priority_a_cnt = rfixed_trunc(priority_mark02);
-		d2mode_priority_a_cnt = rfixed_trunc(priority_mark12);
+		d1mode_priority_a_cnt = dfixed_trunc(priority_mark02);
+		d2mode_priority_a_cnt = dfixed_trunc(priority_mark12);
 		if (rdev->disp_priority == 2) {
 			d1mode_priority_a_cnt |= S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(1);
 			d2mode_priority_a_cnt |= S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(1);
@@ -502,32 +500,32 @@ void rs690_bandwidth_update(struct radeon_device *rdev)
 			d2mode_priority_a_cnt |= S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(1);
 		}
 	} else if (mode0) {
-		if (rfixed_trunc(wm0.dbpp) > 64)
-			a.full = rfixed_mul(wm0.dbpp, wm0.num_line_pair);
+		if (dfixed_trunc(wm0.dbpp) > 64)
+			a.full = dfixed_mul(wm0.dbpp, wm0.num_line_pair);
 		else
 			a.full = wm0.num_line_pair.full;
-		fill_rate.full = rfixed_div(wm0.sclk, a);
+		fill_rate.full = dfixed_div(wm0.sclk, a);
 		if (wm0.consumption_rate.full > fill_rate.full) {
 			b.full = wm0.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm0.active_time);
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			b.full = dfixed_mul(b, wm0.active_time);
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
 			a.full = a.full + b.full;
-			b.full = rfixed_const(16 * 1000);
-			priority_mark02.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark02.full = dfixed_div(a, b);
 		} else {
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark02.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark02.full = dfixed_div(a, b);
 		}
 		if (wm0.priority_mark.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark.full;
-		if (rfixed_trunc(priority_mark02) < 0)
+		if (dfixed_trunc(priority_mark02) < 0)
 			priority_mark02.full = 0;
 		if (wm0.priority_mark_max.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark_max.full;
-		d1mode_priority_a_cnt = rfixed_trunc(priority_mark02);
+		d1mode_priority_a_cnt = dfixed_trunc(priority_mark02);
 		if (rdev->disp_priority == 2)
 			d1mode_priority_a_cnt |= S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(1);
 		WREG32(R_006548_D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt);
@@ -537,32 +535,32 @@ void rs690_bandwidth_update(struct radeon_device *rdev)
 		if (rdev->disp_priority == 2)
 			d1mode_priority_a_cnt |= S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(1);
 	} else if (mode1) {
-		if (rfixed_trunc(wm1.dbpp) > 64)
-			a.full = rfixed_mul(wm1.dbpp, wm1.num_line_pair);
+		if (dfixed_trunc(wm1.dbpp) > 64)
+			a.full = dfixed_mul(wm1.dbpp, wm1.num_line_pair);
 		else
 			a.full = wm1.num_line_pair.full;
-		fill_rate.full = rfixed_div(wm1.sclk, a);
+		fill_rate.full = dfixed_div(wm1.sclk, a);
 		if (wm1.consumption_rate.full > fill_rate.full) {
 			b.full = wm1.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm1.active_time);
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			b.full = dfixed_mul(b, wm1.active_time);
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
 			a.full = a.full + b.full;
-			b.full = rfixed_const(16 * 1000);
-			priority_mark12.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark12.full = dfixed_div(a, b);
 		} else {
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark12.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark12.full = dfixed_div(a, b);
 		}
 		if (wm1.priority_mark.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark.full;
-		if (rfixed_trunc(priority_mark12) < 0)
+		if (dfixed_trunc(priority_mark12) < 0)
 			priority_mark12.full = 0;
 		if (wm1.priority_mark_max.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark_max.full;
-		d2mode_priority_a_cnt = rfixed_trunc(priority_mark12);
+		d2mode_priority_a_cnt = dfixed_trunc(priority_mark12);
 		if (rdev->disp_priority == 2)
 			d2mode_priority_a_cnt |= S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(1);
 		WREG32(R_006548_D1MODE_PRIORITY_A_CNT,
@@ -653,7 +651,7 @@ int rs690_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	rv515_clock_startup(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -678,7 +676,6 @@ int rs690_suspend(struct radeon_device *rdev)
 
 void rs690_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -717,7 +714,7 @@ int rs690_init(struct radeon_device *rdev)
 		return -EINVAL;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -729,8 +726,6 @@ int rs690_init(struct radeon_device *rdev)
 
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize memory controller */
 	rs690_mc_init(rdev);
 	rv515_debugfs(rdev);
diff --git a/drivers/gpu/drm/radeon/rv515.c b/drivers/gpu/drm/radeon/rv515.c
index 9035121..7d9a7b0 100644
--- a/drivers/gpu/drm/radeon/rv515.c
+++ b/drivers/gpu/drm/radeon/rv515.c
@@ -147,16 +147,11 @@ void rv515_gpu_init(struct radeon_device *rdev)
 {
 	unsigned pipe_select_current, gb_pipe_select, tmp;
 
-	r100_hdp_reset(rdev);
-	r100_rb2d_reset(rdev);
-
 	if (r100_gui_wait_for_idle(rdev)) {
 		printk(KERN_WARNING "Failed to wait GUI idle while "
 		       "reseting GPU. Bad things might happen.\n");
 	}
-
 	rv515_vga_render_disable(rdev);
-
 	r420_pipes_init(rdev);
 	gb_pipe_select = RREG32(0x402C);
 	tmp = RREG32(0x170C);
@@ -174,91 +169,6 @@ void rv515_gpu_init(struct radeon_device *rdev)
 	}
 }
 
-int rv515_ga_reset(struct radeon_device *rdev)
-{
-	uint32_t tmp;
-	bool reinit_cp;
-	int i;
-
-	reinit_cp = rdev->cp.ready;
-	rdev->cp.ready = false;
-	for (i = 0; i < rdev->usec_timeout; i++) {
-		WREG32(CP_CSQ_MODE, 0);
-		WREG32(CP_CSQ_CNTL, 0);
-		WREG32(RBBM_SOFT_RESET, 0x32005);
-		(void)RREG32(RBBM_SOFT_RESET);
-		udelay(200);
-		WREG32(RBBM_SOFT_RESET, 0);
-		/* Wait to prevent race in RBBM_STATUS */
-		mdelay(1);
-		tmp = RREG32(RBBM_STATUS);
-		if (tmp & ((1 << 20) | (1 << 26))) {
-			DRM_ERROR("VAP & CP still busy (RBBM_STATUS=0x%08X)\n", tmp);
-			/* GA still busy soft reset it */
-			WREG32(0x429C, 0x200);
-			WREG32(VAP_PVS_STATE_FLUSH_REG, 0);
-			WREG32(0x43E0, 0);
-			WREG32(0x43E4, 0);
-			WREG32(0x24AC, 0);
-		}
-		/* Wait to prevent race in RBBM_STATUS */
-		mdelay(1);
-		tmp = RREG32(RBBM_STATUS);
-		if (!(tmp & ((1 << 20) | (1 << 26)))) {
-			break;
-		}
-	}
-	for (i = 0; i < rdev->usec_timeout; i++) {
-		tmp = RREG32(RBBM_STATUS);
-		if (!(tmp & ((1 << 20) | (1 << 26)))) {
-			DRM_INFO("GA reset succeed (RBBM_STATUS=0x%08X)\n",
-				 tmp);
-			DRM_INFO("GA_IDLE=0x%08X\n", RREG32(0x425C));
-			DRM_INFO("RB3D_RESET_STATUS=0x%08X\n", RREG32(0x46f0));
-			DRM_INFO("ISYNC_CNTL=0x%08X\n", RREG32(0x1724));
-			if (reinit_cp) {
-				return r100_cp_init(rdev, rdev->cp.ring_size);
-			}
-			return 0;
-		}
-		DRM_UDELAY(1);
-	}
-	tmp = RREG32(RBBM_STATUS);
-	DRM_ERROR("Failed to reset GA ! (RBBM_STATUS=0x%08X)\n", tmp);
-	return -1;
-}
-
-int rv515_gpu_reset(struct radeon_device *rdev)
-{
-	uint32_t status;
-
-	/* reset order likely matter */
-	status = RREG32(RBBM_STATUS);
-	/* reset HDP */
-	r100_hdp_reset(rdev);
-	/* reset rb2d */
-	if (status & ((1 << 17) | (1 << 18) | (1 << 27))) {
-		r100_rb2d_reset(rdev);
-	}
-	/* reset GA */
-	if (status & ((1 << 20) | (1 << 26))) {
-		rv515_ga_reset(rdev);
-	}
-	/* reset CP */
-	status = RREG32(RBBM_STATUS);
-	if (status & (1 << 16)) {
-		r100_cp_reset(rdev);
-	}
-	/* Check if GPU is idle */
-	status = RREG32(RBBM_STATUS);
-	if (status & (1 << 31)) {
-		DRM_ERROR("Failed to reset GPU (RBBM_STATUS=0x%08X)\n", status);
-		return -1;
-	}
-	DRM_INFO("GPU reset succeed (RBBM_STATUS=0x%08X)\n", status);
-	return 0;
-}
-
 static void rv515_vram_get_type(struct radeon_device *rdev)
 {
 	uint32_t tmp;
@@ -335,7 +245,7 @@ static int rv515_debugfs_ga_info(struct seq_file *m, void *data)
 
 	tmp = RREG32(0x2140);
 	seq_printf(m, "VAP_CNTL_STATUS 0x%08x\n", tmp);
-	radeon_gpu_reset(rdev);
+	radeon_asic_reset(rdev);
 	tmp = RREG32(0x425C);
 	seq_printf(m, "GA_IDLE 0x%08x\n", tmp);
 	return 0;
@@ -503,7 +413,7 @@ int rv515_resume(struct radeon_device *rdev)
 	/* Resume clock before doing reset */
 	rv515_clock_startup(rdev);
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
 			RREG32(R_0007C0_CP_STAT));
@@ -535,7 +445,6 @@ void rv515_set_safe_registers(struct radeon_device *rdev)
 
 void rv515_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
@@ -573,7 +482,7 @@ int rv515_init(struct radeon_device *rdev)
 		return -EINVAL;
 	}
 	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
-	if (radeon_gpu_reset(rdev)) {
+	if (radeon_asic_reset(rdev)) {
 		dev_warn(rdev->dev,
 			"GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
 			RREG32(R_000E40_RBBM_STATUS),
@@ -584,8 +493,6 @@ int rv515_init(struct radeon_device *rdev)
 		return -EINVAL;
 	/* Initialize clocks */
 	radeon_get_clock_info(rdev->ddev);
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* initialize AGP */
 	if (rdev->flags & RADEON_IS_AGP) {
 		r = radeon_agp_init(rdev);
@@ -885,20 +792,20 @@ void rv515_crtc_bandwidth_compute(struct radeon_device *rdev,
 		return;
 	}
 
-	if (crtc->vsc.full > rfixed_const(2))
-		wm->num_line_pair.full = rfixed_const(2);
+	if (crtc->vsc.full > dfixed_const(2))
+		wm->num_line_pair.full = dfixed_const(2);
 	else
-		wm->num_line_pair.full = rfixed_const(1);
-
-	b.full = rfixed_const(mode->crtc_hdisplay);
-	c.full = rfixed_const(256);
-	a.full = rfixed_div(b, c);
-	request_fifo_depth.full = rfixed_mul(a, wm->num_line_pair);
-	request_fifo_depth.full = rfixed_ceil(request_fifo_depth);
-	if (a.full < rfixed_const(4)) {
+		wm->num_line_pair.full = dfixed_const(1);
+
+	b.full = dfixed_const(mode->crtc_hdisplay);
+	c.full = dfixed_const(256);
+	a.full = dfixed_div(b, c);
+	request_fifo_depth.full = dfixed_mul(a, wm->num_line_pair);
+	request_fifo_depth.full = dfixed_ceil(request_fifo_depth);
+	if (a.full < dfixed_const(4)) {
 		wm->lb_request_fifo_depth = 4;
 	} else {
-		wm->lb_request_fifo_depth = rfixed_trunc(request_fifo_depth);
+		wm->lb_request_fifo_depth = dfixed_trunc(request_fifo_depth);
 	}
 
 	/* Determine consumption rate
@@ -907,23 +814,23 @@ void rv515_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 *  vsc = vertical scaling ratio, defined as source/destination
 	 *  hsc = horizontal scaling ratio, defined as source/destination
 	 */
-	a.full = rfixed_const(mode->clock);
-	b.full = rfixed_const(1000);
-	a.full = rfixed_div(a, b);
-	pclk.full = rfixed_div(b, a);
+	a.full = dfixed_const(mode->clock);
+	b.full = dfixed_const(1000);
+	a.full = dfixed_div(a, b);
+	pclk.full = dfixed_div(b, a);
 	if (crtc->rmx_type != RMX_OFF) {
-		b.full = rfixed_const(2);
+		b.full = dfixed_const(2);
 		if (crtc->vsc.full > b.full)
 			b.full = crtc->vsc.full;
-		b.full = rfixed_mul(b, crtc->hsc);
-		c.full = rfixed_const(2);
-		b.full = rfixed_div(b, c);
-		consumption_time.full = rfixed_div(pclk, b);
+		b.full = dfixed_mul(b, crtc->hsc);
+		c.full = dfixed_const(2);
+		b.full = dfixed_div(b, c);
+		consumption_time.full = dfixed_div(pclk, b);
 	} else {
 		consumption_time.full = pclk.full;
 	}
-	a.full = rfixed_const(1);
-	wm->consumption_rate.full = rfixed_div(a, consumption_time);
+	a.full = dfixed_const(1);
+	wm->consumption_rate.full = dfixed_div(a, consumption_time);
 
 
 	/* Determine line time
@@ -931,27 +838,27 @@ void rv515_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 *  LineTime = total number of horizontal pixels
 	 *  pclk = pixel clock period(ns)
 	 */
-	a.full = rfixed_const(crtc->base.mode.crtc_htotal);
-	line_time.full = rfixed_mul(a, pclk);
+	a.full = dfixed_const(crtc->base.mode.crtc_htotal);
+	line_time.full = dfixed_mul(a, pclk);
 
 	/* Determine active time
 	 *  ActiveTime = time of active region of display within one line,
 	 *  hactive = total number of horizontal active pixels
 	 *  htotal = total number of horizontal pixels
 	 */
-	a.full = rfixed_const(crtc->base.mode.crtc_htotal);
-	b.full = rfixed_const(crtc->base.mode.crtc_hdisplay);
-	wm->active_time.full = rfixed_mul(line_time, b);
-	wm->active_time.full = rfixed_div(wm->active_time, a);
+	a.full = dfixed_const(crtc->base.mode.crtc_htotal);
+	b.full = dfixed_const(crtc->base.mode.crtc_hdisplay);
+	wm->active_time.full = dfixed_mul(line_time, b);
+	wm->active_time.full = dfixed_div(wm->active_time, a);
 
 	/* Determine chunk time
 	 * ChunkTime = the time it takes the DCP to send one chunk of data
 	 * to the LB which consists of pipeline delay and inter chunk gap
 	 * sclk = system clock(MHz)
 	 */
-	a.full = rfixed_const(600 * 1000);
-	chunk_time.full = rfixed_div(a, rdev->pm.sclk);
-	read_delay_latency.full = rfixed_const(1000);
+	a.full = dfixed_const(600 * 1000);
+	chunk_time.full = dfixed_div(a, rdev->pm.sclk);
+	read_delay_latency.full = dfixed_const(1000);
 
 	/* Determine the worst case latency
 	 * NumLinePair = Number of line pairs to request(1=2 lines, 2=4 lines)
@@ -961,9 +868,9 @@ void rv515_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 * ChunkTime = time it takes the DCP to send one chunk of data to the LB
 	 *             which consists of pipeline delay and inter chunk gap
 	 */
-	if (rfixed_trunc(wm->num_line_pair) > 1) {
-		a.full = rfixed_const(3);
-		wm->worst_case_latency.full = rfixed_mul(a, chunk_time);
+	if (dfixed_trunc(wm->num_line_pair) > 1) {
+		a.full = dfixed_const(3);
+		wm->worst_case_latency.full = dfixed_mul(a, chunk_time);
 		wm->worst_case_latency.full += read_delay_latency.full;
 	} else {
 		wm->worst_case_latency.full = chunk_time.full + read_delay_latency.full;
@@ -979,34 +886,34 @@ void rv515_crtc_bandwidth_compute(struct radeon_device *rdev,
 	 *              of data to the LB which consists of
 	 *  pipeline delay and inter chunk gap
 	 */
-	if ((2+wm->lb_request_fifo_depth) >= rfixed_trunc(request_fifo_depth)) {
+	if ((2+wm->lb_request_fifo_depth) >= dfixed_trunc(request_fifo_depth)) {
 		tolerable_latency.full = line_time.full;
 	} else {
-		tolerable_latency.full = rfixed_const(wm->lb_request_fifo_depth - 2);
+		tolerable_latency.full = dfixed_const(wm->lb_request_fifo_depth - 2);
 		tolerable_latency.full = request_fifo_depth.full - tolerable_latency.full;
-		tolerable_latency.full = rfixed_mul(tolerable_latency, chunk_time);
+		tolerable_latency.full = dfixed_mul(tolerable_latency, chunk_time);
 		tolerable_latency.full = line_time.full - tolerable_latency.full;
 	}
 	/* We assume worst case 32bits (4 bytes) */
-	wm->dbpp.full = rfixed_const(2 * 16);
+	wm->dbpp.full = dfixed_const(2 * 16);
 
 	/* Determine the maximum priority mark
 	 *  width = viewport width in pixels
 	 */
-	a.full = rfixed_const(16);
-	wm->priority_mark_max.full = rfixed_const(crtc->base.mode.crtc_hdisplay);
-	wm->priority_mark_max.full = rfixed_div(wm->priority_mark_max, a);
-	wm->priority_mark_max.full = rfixed_ceil(wm->priority_mark_max);
+	a.full = dfixed_const(16);
+	wm->priority_mark_max.full = dfixed_const(crtc->base.mode.crtc_hdisplay);
+	wm->priority_mark_max.full = dfixed_div(wm->priority_mark_max, a);
+	wm->priority_mark_max.full = dfixed_ceil(wm->priority_mark_max);
 
 	/* Determine estimated width */
 	estimated_width.full = tolerable_latency.full - wm->worst_case_latency.full;
-	estimated_width.full = rfixed_div(estimated_width, consumption_time);
-	if (rfixed_trunc(estimated_width) > crtc->base.mode.crtc_hdisplay) {
+	estimated_width.full = dfixed_div(estimated_width, consumption_time);
+	if (dfixed_trunc(estimated_width) > crtc->base.mode.crtc_hdisplay) {
 		wm->priority_mark.full = wm->priority_mark_max.full;
 	} else {
-		a.full = rfixed_const(16);
-		wm->priority_mark.full = rfixed_div(estimated_width, a);
-		wm->priority_mark.full = rfixed_ceil(wm->priority_mark);
+		a.full = dfixed_const(16);
+		wm->priority_mark.full = dfixed_div(estimated_width, a);
+		wm->priority_mark.full = dfixed_ceil(wm->priority_mark);
 		wm->priority_mark.full = wm->priority_mark_max.full - wm->priority_mark.full;
 	}
 }
@@ -1035,58 +942,58 @@ void rv515_bandwidth_avivo_update(struct radeon_device *rdev)
 	WREG32(LB_MAX_REQ_OUTSTANDING, tmp);
 
 	if (mode0 && mode1) {
-		if (rfixed_trunc(wm0.dbpp) > 64)
-			a.full = rfixed_div(wm0.dbpp, wm0.num_line_pair);
+		if (dfixed_trunc(wm0.dbpp) > 64)
+			a.full = dfixed_div(wm0.dbpp, wm0.num_line_pair);
 		else
 			a.full = wm0.num_line_pair.full;
-		if (rfixed_trunc(wm1.dbpp) > 64)
-			b.full = rfixed_div(wm1.dbpp, wm1.num_line_pair);
+		if (dfixed_trunc(wm1.dbpp) > 64)
+			b.full = dfixed_div(wm1.dbpp, wm1.num_line_pair);
 		else
 			b.full = wm1.num_line_pair.full;
 		a.full += b.full;
-		fill_rate.full = rfixed_div(wm0.sclk, a);
+		fill_rate.full = dfixed_div(wm0.sclk, a);
 		if (wm0.consumption_rate.full > fill_rate.full) {
 			b.full = wm0.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm0.active_time);
-			a.full = rfixed_const(16);
-			b.full = rfixed_div(b, a);
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			b.full = dfixed_mul(b, wm0.active_time);
+			a.full = dfixed_const(16);
+			b.full = dfixed_div(b, a);
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
 			priority_mark02.full = a.full + b.full;
 		} else {
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark02.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark02.full = dfixed_div(a, b);
 		}
 		if (wm1.consumption_rate.full > fill_rate.full) {
 			b.full = wm1.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm1.active_time);
-			a.full = rfixed_const(16);
-			b.full = rfixed_div(b, a);
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			b.full = dfixed_mul(b, wm1.active_time);
+			a.full = dfixed_const(16);
+			b.full = dfixed_div(b, a);
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
 			priority_mark12.full = a.full + b.full;
 		} else {
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark12.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark12.full = dfixed_div(a, b);
 		}
 		if (wm0.priority_mark.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark.full;
-		if (rfixed_trunc(priority_mark02) < 0)
+		if (dfixed_trunc(priority_mark02) < 0)
 			priority_mark02.full = 0;
 		if (wm0.priority_mark_max.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark_max.full;
 		if (wm1.priority_mark.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark.full;
-		if (rfixed_trunc(priority_mark12) < 0)
+		if (dfixed_trunc(priority_mark12) < 0)
 			priority_mark12.full = 0;
 		if (wm1.priority_mark_max.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark_max.full;
-		d1mode_priority_a_cnt = rfixed_trunc(priority_mark02);
-		d2mode_priority_a_cnt = rfixed_trunc(priority_mark12);
+		d1mode_priority_a_cnt = dfixed_trunc(priority_mark02);
+		d2mode_priority_a_cnt = dfixed_trunc(priority_mark12);
 		if (rdev->disp_priority == 2) {
 			d1mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON;
 			d2mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON;
@@ -1096,32 +1003,32 @@ void rv515_bandwidth_avivo_update(struct radeon_device *rdev)
 			d2mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON;
 		}
 	} else if (mode0) {
-		if (rfixed_trunc(wm0.dbpp) > 64)
-			a.full = rfixed_div(wm0.dbpp, wm0.num_line_pair);
+		if (dfixed_trunc(wm0.dbpp) > 64)
+			a.full = dfixed_div(wm0.dbpp, wm0.num_line_pair);
 		else
 			a.full = wm0.num_line_pair.full;
-		fill_rate.full = rfixed_div(wm0.sclk, a);
+		fill_rate.full = dfixed_div(wm0.sclk, a);
 		if (wm0.consumption_rate.full > fill_rate.full) {
 			b.full = wm0.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm0.active_time);
-			a.full = rfixed_const(16);
-			b.full = rfixed_div(b, a);
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			b.full = dfixed_mul(b, wm0.active_time);
+			a.full = dfixed_const(16);
+			b.full = dfixed_div(b, a);
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
 			priority_mark02.full = a.full + b.full;
 		} else {
-			a.full = rfixed_mul(wm0.worst_case_latency,
+			a.full = dfixed_mul(wm0.worst_case_latency,
 						wm0.consumption_rate);
-			b.full = rfixed_const(16);
-			priority_mark02.full = rfixed_div(a, b);
+			b.full = dfixed_const(16);
+			priority_mark02.full = dfixed_div(a, b);
 		}
 		if (wm0.priority_mark.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark.full;
-		if (rfixed_trunc(priority_mark02) < 0)
+		if (dfixed_trunc(priority_mark02) < 0)
 			priority_mark02.full = 0;
 		if (wm0.priority_mark_max.full > priority_mark02.full)
 			priority_mark02.full = wm0.priority_mark_max.full;
-		d1mode_priority_a_cnt = rfixed_trunc(priority_mark02);
+		d1mode_priority_a_cnt = dfixed_trunc(priority_mark02);
 		if (rdev->disp_priority == 2)
 			d1mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON;
 		WREG32(D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt);
@@ -1129,32 +1036,32 @@ void rv515_bandwidth_avivo_update(struct radeon_device *rdev)
 		if (rdev->disp_priority == 2)
 			d1mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON;
 	} else if (mode1) {
-		if (rfixed_trunc(wm1.dbpp) > 64)
-			a.full = rfixed_div(wm1.dbpp, wm1.num_line_pair);
+		if (dfixed_trunc(wm1.dbpp) > 64)
+			a.full = dfixed_div(wm1.dbpp, wm1.num_line_pair);
 		else
 			a.full = wm1.num_line_pair.full;
-		fill_rate.full = rfixed_div(wm1.sclk, a);
+		fill_rate.full = dfixed_div(wm1.sclk, a);
 		if (wm1.consumption_rate.full > fill_rate.full) {
 			b.full = wm1.consumption_rate.full - fill_rate.full;
-			b.full = rfixed_mul(b, wm1.active_time);
-			a.full = rfixed_const(16);
-			b.full = rfixed_div(b, a);
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			b.full = dfixed_mul(b, wm1.active_time);
+			a.full = dfixed_const(16);
+			b.full = dfixed_div(b, a);
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
 			priority_mark12.full = a.full + b.full;
 		} else {
-			a.full = rfixed_mul(wm1.worst_case_latency,
+			a.full = dfixed_mul(wm1.worst_case_latency,
 						wm1.consumption_rate);
-			b.full = rfixed_const(16 * 1000);
-			priority_mark12.full = rfixed_div(a, b);
+			b.full = dfixed_const(16 * 1000);
+			priority_mark12.full = dfixed_div(a, b);
 		}
 		if (wm1.priority_mark.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark.full;
-		if (rfixed_trunc(priority_mark12) < 0)
+		if (dfixed_trunc(priority_mark12) < 0)
 			priority_mark12.full = 0;
 		if (wm1.priority_mark_max.full > priority_mark12.full)
 			priority_mark12.full = wm1.priority_mark_max.full;
-		d2mode_priority_a_cnt = rfixed_trunc(priority_mark12);
+		d2mode_priority_a_cnt = dfixed_trunc(priority_mark12);
 		if (rdev->disp_priority == 2)
 			d2mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON;
 		WREG32(D1MODE_PRIORITY_A_CNT, MODE_PRIORITY_OFF);
diff --git a/drivers/gpu/drm/radeon/rv515d.h b/drivers/gpu/drm/radeon/rv515d.h
index fc216e4..590309a 100644
--- a/drivers/gpu/drm/radeon/rv515d.h
+++ b/drivers/gpu/drm/radeon/rv515d.h
@@ -217,6 +217,52 @@
 #define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
 
 /* Registers */
+#define R_0000F0_RBBM_SOFT_RESET                     0x0000F0
+#define   S_0000F0_SOFT_RESET_CP(x)                    (((x) & 0x1) << 0)
+#define   G_0000F0_SOFT_RESET_CP(x)                    (((x) >> 0) & 0x1)
+#define   C_0000F0_SOFT_RESET_CP                       0xFFFFFFFE
+#define   S_0000F0_SOFT_RESET_HI(x)                    (((x) & 0x1) << 1)
+#define   G_0000F0_SOFT_RESET_HI(x)                    (((x) >> 1) & 0x1)
+#define   C_0000F0_SOFT_RESET_HI                       0xFFFFFFFD
+#define   S_0000F0_SOFT_RESET_VAP(x)                   (((x) & 0x1) << 2)
+#define   G_0000F0_SOFT_RESET_VAP(x)                   (((x) >> 2) & 0x1)
+#define   C_0000F0_SOFT_RESET_VAP                      0xFFFFFFFB
+#define   S_0000F0_SOFT_RESET_RE(x)                    (((x) & 0x1) << 3)
+#define   G_0000F0_SOFT_RESET_RE(x)                    (((x) >> 3) & 0x1)
+#define   C_0000F0_SOFT_RESET_RE                       0xFFFFFFF7
+#define   S_0000F0_SOFT_RESET_PP(x)                    (((x) & 0x1) << 4)
+#define   G_0000F0_SOFT_RESET_PP(x)                    (((x) >> 4) & 0x1)
+#define   C_0000F0_SOFT_RESET_PP                       0xFFFFFFEF
+#define   S_0000F0_SOFT_RESET_E2(x)                    (((x) & 0x1) << 5)
+#define   G_0000F0_SOFT_RESET_E2(x)                    (((x) >> 5) & 0x1)
+#define   C_0000F0_SOFT_RESET_E2                       0xFFFFFFDF
+#define   S_0000F0_SOFT_RESET_RB(x)                    (((x) & 0x1) << 6)
+#define   G_0000F0_SOFT_RESET_RB(x)                    (((x) >> 6) & 0x1)
+#define   C_0000F0_SOFT_RESET_RB                       0xFFFFFFBF
+#define   S_0000F0_SOFT_RESET_HDP(x)                   (((x) & 0x1) << 7)
+#define   G_0000F0_SOFT_RESET_HDP(x)                   (((x) >> 7) & 0x1)
+#define   C_0000F0_SOFT_RESET_HDP                      0xFFFFFF7F
+#define   S_0000F0_SOFT_RESET_MC(x)                    (((x) & 0x1) << 8)
+#define   G_0000F0_SOFT_RESET_MC(x)                    (((x) >> 8) & 0x1)
+#define   C_0000F0_SOFT_RESET_MC                       0xFFFFFEFF
+#define   S_0000F0_SOFT_RESET_AIC(x)                   (((x) & 0x1) << 9)
+#define   G_0000F0_SOFT_RESET_AIC(x)                   (((x) >> 9) & 0x1)
+#define   C_0000F0_SOFT_RESET_AIC                      0xFFFFFDFF
+#define   S_0000F0_SOFT_RESET_VIP(x)                   (((x) & 0x1) << 10)
+#define   G_0000F0_SOFT_RESET_VIP(x)                   (((x) >> 10) & 0x1)
+#define   C_0000F0_SOFT_RESET_VIP                      0xFFFFFBFF
+#define   S_0000F0_SOFT_RESET_DISP(x)                  (((x) & 0x1) << 11)
+#define   G_0000F0_SOFT_RESET_DISP(x)                  (((x) >> 11) & 0x1)
+#define   C_0000F0_SOFT_RESET_DISP                     0xFFFFF7FF
+#define   S_0000F0_SOFT_RESET_CG(x)                    (((x) & 0x1) << 12)
+#define   G_0000F0_SOFT_RESET_CG(x)                    (((x) >> 12) & 0x1)
+#define   C_0000F0_SOFT_RESET_CG                       0xFFFFEFFF
+#define   S_0000F0_SOFT_RESET_GA(x)                    (((x) & 0x1) << 13)
+#define   G_0000F0_SOFT_RESET_GA(x)                    (((x) >> 13) & 0x1)
+#define   C_0000F0_SOFT_RESET_GA                       0xFFFFDFFF
+#define   S_0000F0_SOFT_RESET_IDCT(x)                  (((x) & 0x1) << 14)
+#define   G_0000F0_SOFT_RESET_IDCT(x)                  (((x) >> 14) & 0x1)
+#define   C_0000F0_SOFT_RESET_IDCT                     0xFFFFBFFF
 #define R_0000F8_CONFIG_MEMSIZE                      0x0000F8
 #define   S_0000F8_CONFIG_MEMSIZE(x)                   (((x) & 0xFFFFFFFF) << 0)
 #define   G_0000F8_CONFIG_MEMSIZE(x)                   (((x) >> 0) & 0xFFFFFFFF)
diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
index 97958a6..cec536c 100644
--- a/drivers/gpu/drm/radeon/rv770.c
+++ b/drivers/gpu/drm/radeon/rv770.c
@@ -42,6 +42,21 @@
 static void rv770_gpu_init(struct radeon_device *rdev);
 void rv770_fini(struct radeon_device *rdev);
 
+void rv770_pm_misc(struct radeon_device *rdev)
+{
+	int req_ps_idx = rdev->pm.requested_power_state_index;
+	int req_cm_idx = rdev->pm.requested_clock_mode_index;
+	struct radeon_power_state *ps = &rdev->pm.power_state[req_ps_idx];
+	struct radeon_voltage *voltage = &ps->clock_info[req_cm_idx].voltage;
+
+	if ((voltage->type == VOLTAGE_SW) && voltage->voltage) {
+		if (voltage->voltage != rdev->pm.current_vddc) {
+			radeon_atom_set_voltage(rdev, voltage->voltage);
+			rdev->pm.current_vddc = voltage->voltage;
+			DRM_DEBUG("Setting: v: %d\n", voltage->voltage);
+		}
+	}
+}
 
 /*
  * GART
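rv770_pm_misc() above programs VDDC only when the requested clock mode carries a software-controlled (VOLTAGE_SW) voltage that differs from the cached current_vddc, so redundant ATOM calls are skipped. A hedged sketch of how the pm core is assumed to reach it (radeon_pm_misc being the per-ASIC dispatch; next_ps/next_cm are hypothetical indices chosen by the profile logic):

	rdev->pm.requested_power_state_index = next_ps;
	rdev->pm.requested_clock_mode_index = next_cm;
	radeon_pm_misc(rdev);	/* lands in rv770_pm_misc() on rv7xx parts */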
@@ -237,7 +252,6 @@ void r700_cp_stop(struct radeon_device *rdev)
 	WREG32(CP_ME_CNTL, (CP_ME_HALT | CP_PFP_HALT));
 }
 
-
 static int rv770_cp_load_microcode(struct radeon_device *rdev)
 {
 	const __be32 *fw_data;
@@ -272,6 +286,11 @@ static int rv770_cp_load_microcode(struct radeon_device *rdev)
 	return 0;
 }
 
+void r700_cp_fini(struct radeon_device *rdev)
+{
+	r700_cp_stop(rdev);
+	radeon_ring_fini(rdev);
+}
 
 /*
  * Core functions
@@ -906,23 +925,12 @@ int rv770_mc_init(struct radeon_device *rdev)
 	rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE);
 	rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE);
 	rdev->mc.visible_vram_size = rdev->mc.aper_size;
-	/* FIXME remove this once we support unmappable VRAM */
-	if (rdev->mc.mc_vram_size > rdev->mc.aper_size) {
-		rdev->mc.mc_vram_size = rdev->mc.aper_size;
-		rdev->mc.real_vram_size = rdev->mc.aper_size;
-	}
 	r600_vram_gtt_location(rdev, &rdev->mc);
 	radeon_update_bandwidth_info(rdev);
 
 	return 0;
 }
 
-int rv770_gpu_reset(struct radeon_device *rdev)
-{
-	/* FIXME: implement any rv770 specific bits */
-	return r600_gpu_reset(rdev);
-}
-
 static int rv770_startup(struct radeon_device *rdev)
 {
 	int r;
@@ -1094,8 +1102,6 @@ int rv770_init(struct radeon_device *rdev)
 	r = radeon_clocks_init(rdev);
 	if (r)
 		return r;
-	/* Initialize power management */
-	radeon_pm_init(rdev);
 	/* Fence driver */
 	r = radeon_fence_driver_init(rdev);
 	if (r)
@@ -1132,7 +1138,7 @@ int rv770_init(struct radeon_device *rdev)
 	r = rv770_startup(rdev);
 	if (r) {
 		dev_err(rdev->dev, "disabling GPU acceleration\n");
-		r600_cp_fini(rdev);
+		r700_cp_fini(rdev);
 		r600_wb_fini(rdev);
 		r600_irq_fini(rdev);
 		radeon_irq_kms_fini(rdev);
@@ -1164,9 +1170,8 @@ int rv770_init(struct radeon_device *rdev)
 
 void rv770_fini(struct radeon_device *rdev)
 {
-	radeon_pm_fini(rdev);
 	r600_blit_fini(rdev);
-	r600_cp_fini(rdev);
+	r700_cp_fini(rdev);
 	r600_wb_fini(rdev);
 	r600_irq_fini(rdev);
 	radeon_irq_kms_fini(rdev);
diff --git a/drivers/gpu/drm/savage/savage_bci.c b/drivers/gpu/drm/savage/savage_bci.c
index bff6fc2..2d0c9ca 100644
--- a/drivers/gpu/drm/savage/savage_bci.c
+++ b/drivers/gpu/drm/savage/savage_bci.c
@@ -539,11 +539,10 @@ int savage_driver_load(struct drm_device *dev, unsigned long chipset)
 {
 	drm_savage_private_t *dev_priv;
 
-	dev_priv = kmalloc(sizeof(drm_savage_private_t), GFP_KERNEL);
+	dev_priv = kzalloc(sizeof(drm_savage_private_t), GFP_KERNEL);
 	if (dev_priv == NULL)
 		return -ENOMEM;
 
-	memset(dev_priv, 0, sizeof(drm_savage_private_t));
 	dev->dev_private = (void *)dev_priv;
 
 	dev_priv->chipset = (enum savage_family)chipset;
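The savage change above is a pure simplification: kzalloc() returns already-zeroed memory, so the explicit memset() can go. The two forms are equivalent:

	/* before: allocate, then clear by hand */
	dev_priv = kmalloc(sizeof(*dev_priv), GFP_KERNEL);
	if (dev_priv)
		memset(dev_priv, 0, sizeof(*dev_priv));

	/* after: one call, same result */
	dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);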
diff --git a/drivers/gpu/drm/ttm/Makefile b/drivers/gpu/drm/ttm/Makefile
index 1e138f5..4256e20 100644
--- a/drivers/gpu/drm/ttm/Makefile
+++ b/drivers/gpu/drm/ttm/Makefile
@@ -4,6 +4,6 @@
 ccflags-y := -Iinclude/drm
 ttm-y := ttm_agp_backend.o ttm_memory.o ttm_tt.o ttm_bo.o \
 	ttm_bo_util.o ttm_bo_vm.o ttm_module.o ttm_global.o \
-	ttm_object.o ttm_lock.o ttm_execbuf_util.o
+	ttm_object.o ttm_lock.o ttm_execbuf_util.o ttm_page_alloc.o
 
 obj-$(CONFIG_DRM_TTM) += ttm.o
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 0e3754a..555ebb1 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -79,8 +79,6 @@ static void ttm_mem_type_debug(struct ttm_bo_device *bdev, int mem_type)
 	printk(KERN_ERR TTM_PFX "    use_type: %d\n", man->use_type);
 	printk(KERN_ERR TTM_PFX "    flags: 0x%08X\n", man->flags);
 	printk(KERN_ERR TTM_PFX "    gpu_offset: 0x%08lX\n", man->gpu_offset);
-	printk(KERN_ERR TTM_PFX "    io_offset: 0x%08lX\n", man->io_offset);
-	printk(KERN_ERR TTM_PFX "    io_size: %ld\n", man->io_size);
 	printk(KERN_ERR TTM_PFX "    size: %llu\n", man->size);
 	printk(KERN_ERR TTM_PFX "    available_caching: 0x%08X\n",
 		man->available_caching);
@@ -357,7 +355,8 @@ static int ttm_bo_add_ttm(struct ttm_buffer_object *bo, bool zero_alloc)
 
 static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
 				  struct ttm_mem_reg *mem,
-				  bool evict, bool interruptible, bool no_wait)
+				  bool evict, bool interruptible,
+				  bool no_wait_reserve, bool no_wait_gpu)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
 	bool old_is_pci = ttm_mem_reg_is_pci(bdev, &bo->mem);
@@ -402,12 +401,12 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
 
 	if (!(old_man->flags & TTM_MEMTYPE_FLAG_FIXED) &&
 	    !(new_man->flags & TTM_MEMTYPE_FLAG_FIXED))
-		ret = ttm_bo_move_ttm(bo, evict, no_wait, mem);
+		ret = ttm_bo_move_ttm(bo, evict, no_wait_reserve, no_wait_gpu, mem);
 	else if (bdev->driver->move)
 		ret = bdev->driver->move(bo, evict, interruptible,
-					 no_wait, mem);
+					 no_wait_reserve, no_wait_gpu, mem);
 	else
-		ret = ttm_bo_move_memcpy(bo, evict, no_wait, mem);
+		ret = ttm_bo_move_memcpy(bo, evict, no_wait_reserve, no_wait_gpu, mem);
 
 	if (ret)
 		goto out_err;
@@ -605,8 +604,22 @@ void ttm_bo_unref(struct ttm_buffer_object **p_bo)
 }
 EXPORT_SYMBOL(ttm_bo_unref);
 
+int ttm_bo_lock_delayed_workqueue(struct ttm_bo_device *bdev)
+{
+	return cancel_delayed_work_sync(&bdev->wq);
+}
+EXPORT_SYMBOL(ttm_bo_lock_delayed_workqueue);
+
+void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev, int resched)
+{
+	if (resched)
+		schedule_delayed_work(&bdev->wq,
+				      ((HZ / 100) < 1) ? 1 : HZ / 100);
+}
+EXPORT_SYMBOL(ttm_bo_unlock_delayed_workqueue);
+
 static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
-			bool no_wait)
+			bool no_wait_reserve, bool no_wait_gpu)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_bo_global *glob = bo->glob;
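The ttm_bo_lock/unlock_delayed_workqueue() pair added above lets a driver quiesce TTM's delayed-destroy worker around operations such as suspend or GPU reset: cancel_delayed_work_sync() reports whether work was pending, and the unlock side uses that to reschedule it. A hedged usage sketch:

	int resched = ttm_bo_lock_delayed_workqueue(bdev);
	/* ... reset or suspend work that must not race the worker ... */
	ttm_bo_unlock_delayed_workqueue(bdev, resched);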
@@ -615,7 +628,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
 	int ret = 0;
 
 	spin_lock(&bo->lock);
-	ret = ttm_bo_wait(bo, false, interruptible, no_wait);
+	ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
 	spin_unlock(&bo->lock);
 
 	if (unlikely(ret != 0)) {
@@ -631,6 +644,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
 
 	evict_mem = bo->mem;
 	evict_mem.mm_node = NULL;
+	evict_mem.bus.io_reserved = false;
 
 	placement.fpfn = 0;
 	placement.lpfn = 0;
@@ -638,7 +652,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
 	placement.num_busy_placement = 0;
 	bdev->driver->evict_flags(bo, &placement);
 	ret = ttm_bo_mem_space(bo, &placement, &evict_mem, interruptible,
-				no_wait);
+				no_wait_reserve, no_wait_gpu);
 	if (ret) {
 		if (ret != -ERESTARTSYS) {
 			printk(KERN_ERR TTM_PFX
@@ -650,7 +664,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
 	}
 
 	ret = ttm_bo_handle_move_mem(bo, &evict_mem, true, interruptible,
-				     no_wait);
+				     no_wait_reserve, no_wait_gpu);
 	if (ret) {
 		if (ret != -ERESTARTSYS)
 			printk(KERN_ERR TTM_PFX "Buffer eviction failed\n");
@@ -670,7 +684,8 @@ out:
 
 static int ttm_mem_evict_first(struct ttm_bo_device *bdev,
 				uint32_t mem_type,
-				bool interruptible, bool no_wait)
+				bool interruptible, bool no_wait_reserve,
+				bool no_wait_gpu)
 {
 	struct ttm_bo_global *glob = bdev->glob;
 	struct ttm_mem_type_manager *man = &bdev->man[mem_type];
@@ -687,11 +702,11 @@ retry:
 	bo = list_first_entry(&man->lru, struct ttm_buffer_object, lru);
 	kref_get(&bo->list_kref);
 
-	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+	ret = ttm_bo_reserve_locked(bo, false, no_wait_reserve, false, 0);
 
 	if (unlikely(ret == -EBUSY)) {
 		spin_unlock(&glob->lru_lock);
-		if (likely(!no_wait))
+		if (likely(!no_wait_gpu))
 			ret = ttm_bo_wait_unreserved(bo, interruptible);
 
 		kref_put(&bo->list_kref, ttm_bo_release_list);
@@ -713,7 +728,7 @@ retry:
 	while (put_count--)
 		kref_put(&bo->list_kref, ttm_bo_ref_bug);
 
-	ret = ttm_bo_evict(bo, interruptible, no_wait);
+	ret = ttm_bo_evict(bo, interruptible, no_wait_reserve, no_wait_gpu);
 	ttm_bo_unreserve(bo);
 
 	kref_put(&bo->list_kref, ttm_bo_release_list);
@@ -764,7 +779,9 @@ static int ttm_bo_mem_force_space(struct ttm_buffer_object *bo,
 					uint32_t mem_type,
 					struct ttm_placement *placement,
 					struct ttm_mem_reg *mem,
-					bool interruptible, bool no_wait)
+					bool interruptible,
+					bool no_wait_reserve,
+					bool no_wait_gpu)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_bo_global *glob = bdev->glob;
@@ -785,7 +802,7 @@ static int ttm_bo_mem_force_space(struct ttm_buffer_object *bo,
 		}
 		spin_unlock(&glob->lru_lock);
 		ret = ttm_mem_evict_first(bdev, mem_type, interruptible,
-						no_wait);
+						no_wait_reserve, no_wait_gpu);
 		if (unlikely(ret != 0))
 			return ret;
 	} while (1);
@@ -855,7 +872,8 @@ static bool ttm_bo_mt_compatible(struct ttm_mem_type_manager *man,
 int ttm_bo_mem_space(struct ttm_buffer_object *bo,
 			struct ttm_placement *placement,
 			struct ttm_mem_reg *mem,
-			bool interruptible, bool no_wait)
+			bool interruptible, bool no_wait_reserve,
+			bool no_wait_gpu)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_mem_type_manager *man;
@@ -952,7 +970,7 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
 		}
 
 		ret = ttm_bo_mem_force_space(bo, mem_type, placement, mem,
-						interruptible, no_wait);
+						interruptible, no_wait_reserve, no_wait_gpu);
 		if (ret == 0 && mem->mm_node) {
 			mem->placement = cur_flags;
 			mem->mm_node->private = bo;
@@ -978,7 +996,8 @@ EXPORT_SYMBOL(ttm_bo_wait_cpu);
 
 int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
 			struct ttm_placement *placement,
-			bool interruptible, bool no_wait)
+			bool interruptible, bool no_wait_reserve,
+			bool no_wait_gpu)
 {
 	struct ttm_bo_global *glob = bo->glob;
 	int ret = 0;
@@ -992,20 +1011,21 @@ int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
 	 * instead of doing it here.
 	 */
 	spin_lock(&bo->lock);
-	ret = ttm_bo_wait(bo, false, interruptible, no_wait);
+	ret = ttm_bo_wait(bo, false, interruptible, no_wait_gpu);
 	spin_unlock(&bo->lock);
 	if (ret)
 		return ret;
 	mem.num_pages = bo->num_pages;
 	mem.size = mem.num_pages << PAGE_SHIFT;
 	mem.page_alignment = bo->mem.page_alignment;
+	mem.bus.io_reserved = false;
 	/*
 	 * Determine where to move the buffer.
 	 */
-	ret = ttm_bo_mem_space(bo, placement, &mem, interruptible, no_wait);
+	ret = ttm_bo_mem_space(bo, placement, &mem, interruptible, no_wait_reserve, no_wait_gpu);
 	if (ret)
 		goto out_unlock;
-	ret = ttm_bo_handle_move_mem(bo, &mem, false, interruptible, no_wait);
+	ret = ttm_bo_handle_move_mem(bo, &mem, false, interruptible, no_wait_reserve, no_wait_gpu);
 out_unlock:
 	if (ret && mem.mm_node) {
 		spin_lock(&glob->lru_lock);
@@ -1039,7 +1059,8 @@ static int ttm_bo_mem_compat(struct ttm_placement *placement,
 
 int ttm_bo_validate(struct ttm_buffer_object *bo,
 			struct ttm_placement *placement,
-			bool interruptible, bool no_wait)
+			bool interruptible, bool no_wait_reserve,
+			bool no_wait_gpu)
 {
 	int ret;
 
@@ -1054,7 +1075,7 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
 	 */
 	ret = ttm_bo_mem_compat(placement, &bo->mem);
 	if (ret < 0) {
-		ret = ttm_bo_move_buffer(bo, placement, interruptible, no_wait);
+		ret = ttm_bo_move_buffer(bo, placement, interruptible, no_wait_reserve, no_wait_gpu);
 		if (ret)
 			return ret;
 	} else {
@@ -1153,6 +1174,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 	bo->mem.num_pages = bo->num_pages;
 	bo->mem.mm_node = NULL;
 	bo->mem.page_alignment = page_alignment;
+	bo->mem.bus.io_reserved = false;
 	bo->buffer_start = buffer_start & PAGE_MASK;
 	bo->priv_flags = 0;
 	bo->mem.placement = (TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED);
@@ -1175,7 +1197,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 			goto out_err;
 	}
 
-	ret = ttm_bo_validate(bo, placement, interruptible, false);
+	ret = ttm_bo_validate(bo, placement, interruptible, false, false);
 	if (ret)
 		goto out_err;
 
@@ -1249,7 +1271,7 @@ static int ttm_bo_force_list_clean(struct ttm_bo_device *bdev,
 	spin_lock(&glob->lru_lock);
 	while (!list_empty(&man->lru)) {
 		spin_unlock(&glob->lru_lock);
-		ret = ttm_mem_evict_first(bdev, mem_type, false, false);
+		ret = ttm_mem_evict_first(bdev, mem_type, false, false, false);
 		if (ret) {
 			if (allow_errors) {
 				return ret;
@@ -1553,26 +1575,6 @@ bool ttm_mem_reg_is_pci(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
 	return true;
 }
 
-int ttm_bo_pci_offset(struct ttm_bo_device *bdev,
-		      struct ttm_mem_reg *mem,
-		      unsigned long *bus_base,
-		      unsigned long *bus_offset, unsigned long *bus_size)
-{
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-
-	*bus_size = 0;
-	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
-		return -EINVAL;
-
-	if (ttm_mem_reg_is_pci(bdev, mem)) {
-		*bus_offset = mem->mm_node->start << PAGE_SHIFT;
-		*bus_size = mem->num_pages << PAGE_SHIFT;
-		*bus_base = man->io_offset;
-	}
-
-	return 0;
-}
-
 void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
@@ -1581,8 +1583,8 @@ void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo)
 
 	if (!bdev->dev_mapping)
 		return;
-
 	unmap_mapping_range(bdev->dev_mapping, offset, holelen, 1);
+	ttm_mem_io_free(bdev, &bo->mem);
 }
 EXPORT_SYMBOL(ttm_bo_unmap_virtual);
 
@@ -1811,7 +1813,7 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
 		evict_mem.mem_type = TTM_PL_SYSTEM;
 
 		ret = ttm_bo_handle_move_mem(bo, &evict_mem, true,
-					     false, false);
+					     false, false, false);
 		if (unlikely(ret != 0))
 			goto out;
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index d764e82..13012a1 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -50,7 +50,8 @@ void ttm_bo_free_old_node(struct ttm_buffer_object *bo)
 }
 
 int ttm_bo_move_ttm(struct ttm_buffer_object *bo,
-		    bool evict, bool no_wait, struct ttm_mem_reg *new_mem)
+		    bool evict, bool no_wait_reserve,
+		    bool no_wait_gpu, struct ttm_mem_reg *new_mem)
 {
 	struct ttm_tt *ttm = bo->ttm;
 	struct ttm_mem_reg *old_mem = &bo->mem;
@@ -81,30 +82,51 @@ int ttm_bo_move_ttm(struct ttm_buffer_object *bo,
 }
 EXPORT_SYMBOL(ttm_bo_move_ttm);
 
+int ttm_mem_io_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	int ret;
+
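+	/* Call the driver's io_mem_reserve() at most once per reservation;
+	 * ttm_mem_io_free() clears the flag again. */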
+	if (!mem->bus.io_reserved) {
+		mem->bus.io_reserved = true;
+		ret = bdev->driver->io_mem_reserve(bdev, mem);
+		if (unlikely(ret != 0))
+			return ret;
+	}
+	return 0;
+}
+
+void ttm_mem_io_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	if (bdev->driver->io_mem_reserve) {
+		if (mem->bus.io_reserved) {
+			mem->bus.io_reserved = false;
+			bdev->driver->io_mem_free(bdev, mem);
+		}
+	}
+}
+
 int ttm_mem_reg_ioremap(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
 			void **virtual)
 {
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-	unsigned long bus_offset;
-	unsigned long bus_size;
-	unsigned long bus_base;
 	int ret;
 	void *addr;
 
 	*virtual = NULL;
-	ret = ttm_bo_pci_offset(bdev, mem, &bus_base, &bus_offset, &bus_size);
-	if (ret || bus_size == 0)
+	ret = ttm_mem_io_reserve(bdev, mem);
+	if (ret || !mem->bus.is_iomem)
 		return ret;
 
-	if (!(man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP))
-		addr = (void *)(((u8 *) man->io_addr) + bus_offset);
-	else {
+	if (mem->bus.addr) {
+		addr = mem->bus.addr;
+	} else {
 		if (mem->placement & TTM_PL_FLAG_WC)
-			addr = ioremap_wc(bus_base + bus_offset, bus_size);
+			addr = ioremap_wc(mem->bus.base + mem->bus.offset, mem->bus.size);
 		else
-			addr = ioremap_nocache(bus_base + bus_offset, bus_size);
-		if (!addr)
+			addr = ioremap_nocache(mem->bus.base + mem->bus.offset, mem->bus.size);
+		if (!addr) {
+			ttm_mem_io_free(bdev, mem);
 			return -ENOMEM;
+		}
 	}
 	*virtual = addr;
 	return 0;
@@ -117,8 +139,9 @@ void ttm_mem_reg_iounmap(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
 
 	man = &bdev->man[mem->mem_type];
 
-	if (virtual && (man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP))
+	if (virtual && mem->bus.addr == NULL)
 		iounmap(virtual);
+	ttm_mem_io_free(bdev, mem);
 }
 
 static int ttm_copy_io_page(void *dst, void *src, unsigned long page)
@@ -208,7 +231,8 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void *dst,
 }
 
 int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
-		       bool evict, bool no_wait, struct ttm_mem_reg *new_mem)
+		       bool evict, bool no_wait_reserve, bool no_wait_gpu,
+		       struct ttm_mem_reg *new_mem)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_mem_type_manager *man = &bdev->man[new_mem->mem_type];
@@ -369,26 +393,23 @@ pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
 EXPORT_SYMBOL(ttm_io_prot);
 
 static int ttm_bo_ioremap(struct ttm_buffer_object *bo,
-			  unsigned long bus_base,
-			  unsigned long bus_offset,
-			  unsigned long bus_size,
+			  unsigned long offset,
+			  unsigned long size,
 			  struct ttm_bo_kmap_obj *map)
 {
-	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_mem_reg *mem = &bo->mem;
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
 
-	if (!(man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP)) {
+	if (bo->mem.bus.addr) {
 		map->bo_kmap_type = ttm_bo_map_premapped;
-		map->virtual = (void *)(((u8 *) man->io_addr) + bus_offset);
+		map->virtual = (void *)(((u8 *)bo->mem.bus.addr) + offset);
 	} else {
 		map->bo_kmap_type = ttm_bo_map_iomap;
 		if (mem->placement & TTM_PL_FLAG_WC)
-			map->virtual = ioremap_wc(bus_base + bus_offset,
-						  bus_size);
+			map->virtual = ioremap_wc(bo->mem.bus.base + bo->mem.bus.offset + offset,
+						  size);
 		else
-			map->virtual = ioremap_nocache(bus_base + bus_offset,
-						       bus_size);
+			map->virtual = ioremap_nocache(bo->mem.bus.base + bo->mem.bus.offset + offset,
+						       size);
 	}
 	return (!map->virtual) ? -ENOMEM : 0;
 }
@@ -441,13 +462,12 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 		unsigned long start_page, unsigned long num_pages,
 		struct ttm_bo_kmap_obj *map)
 {
+	unsigned long offset, size;
 	int ret;
-	unsigned long bus_base;
-	unsigned long bus_offset;
-	unsigned long bus_size;
 
 	BUG_ON(!list_empty(&bo->swap));
 	map->virtual = NULL;
+	map->bo = bo;
 	if (num_pages > bo->num_pages)
 		return -EINVAL;
 	if (start_page > bo->num_pages)
@@ -456,16 +476,15 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 	if (num_pages > 1 && !DRM_SUSER(DRM_CURPROC))
 		return -EPERM;
 #endif
-	ret = ttm_bo_pci_offset(bo->bdev, &bo->mem, &bus_base,
-				&bus_offset, &bus_size);
+	ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);
 	if (ret)
 		return ret;
-	if (bus_size == 0) {
+	if (!bo->mem.bus.is_iomem) {
 		return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
 	} else {
-		bus_offset += start_page << PAGE_SHIFT;
-		bus_size = num_pages << PAGE_SHIFT;
-		return ttm_bo_ioremap(bo, bus_base, bus_offset, bus_size, map);
+		offset = start_page << PAGE_SHIFT;
+		size = num_pages << PAGE_SHIFT;
+		return ttm_bo_ioremap(bo, offset, size, map);
 	}
 }
 EXPORT_SYMBOL(ttm_bo_kmap);
@@ -477,6 +496,7 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 	switch (map->bo_kmap_type) {
 	case ttm_bo_map_iomap:
 		iounmap(map->virtual);
+		ttm_mem_io_free(map->bo->bdev, &map->bo->mem);
 		break;
 	case ttm_bo_map_vmap:
 		vunmap(map->virtual);
@@ -494,39 +514,11 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 }
 EXPORT_SYMBOL(ttm_bo_kunmap);
 
-int ttm_bo_pfn_prot(struct ttm_buffer_object *bo,
-		    unsigned long dst_offset,
-		    unsigned long *pfn, pgprot_t *prot)
-{
-	struct ttm_mem_reg *mem = &bo->mem;
-	struct ttm_bo_device *bdev = bo->bdev;
-	unsigned long bus_offset;
-	unsigned long bus_size;
-	unsigned long bus_base;
-	int ret;
-	ret = ttm_bo_pci_offset(bdev, mem, &bus_base, &bus_offset,
-			&bus_size);
-	if (ret)
-		return -EINVAL;
-	if (bus_size != 0)
-		*pfn = (bus_base + bus_offset + dst_offset) >> PAGE_SHIFT;
-	else
-		if (!bo->ttm)
-			return -EINVAL;
-		else
-			*pfn = page_to_pfn(ttm_tt_get_page(bo->ttm,
-							   dst_offset >>
-							   PAGE_SHIFT));
-	*prot = (mem->placement & TTM_PL_FLAG_CACHED) ?
-		PAGE_KERNEL : ttm_io_prot(mem->placement, PAGE_KERNEL);
-
-	return 0;
-}
-
 int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
 			      void *sync_obj,
 			      void *sync_obj_arg,
-			      bool evict, bool no_wait,
+			      bool evict, bool no_wait_reserve,
+			      bool no_wait_gpu,
 			      struct ttm_mem_reg *new_mem)
 {
 	struct ttm_bo_device *bdev = bo->bdev;
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 668dbe8..fe6cb77 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -74,9 +74,6 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
 	    vma->vm_private_data;
 	struct ttm_bo_device *bdev = bo->bdev;
-	unsigned long bus_base;
-	unsigned long bus_offset;
-	unsigned long bus_size;
 	unsigned long page_offset;
 	unsigned long page_last;
 	unsigned long pfn;
@@ -84,7 +81,6 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	struct page *page;
 	int ret;
 	int i;
-	bool is_iomem;
 	unsigned long address = (unsigned long)vmf->virtual_address;
 	int retval = VM_FAULT_NOPAGE;
 
@@ -101,8 +97,21 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		return VM_FAULT_NOPAGE;
 	}
 
-	if (bdev->driver->fault_reserve_notify)
-		bdev->driver->fault_reserve_notify(bo);
+	if (bdev->driver->fault_reserve_notify) {
+		ret = bdev->driver->fault_reserve_notify(bo);
+		switch (ret) {
+		case 0:
+			break;
+		case -EBUSY:
+			set_need_resched();
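+			/* fall through */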
+		case -ERESTARTSYS:
+			retval = VM_FAULT_NOPAGE;
+			goto out_unlock;
+		default:
+			retval = VM_FAULT_SIGBUS;
+			goto out_unlock;
+		}
+	}
 
 	/*
 	 * Wait for buffer data in transit, due to a pipelined
@@ -122,15 +131,12 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		spin_unlock(&bo->lock);
 
 
-	ret = ttm_bo_pci_offset(bdev, &bo->mem, &bus_base, &bus_offset,
-				&bus_size);
-	if (unlikely(ret != 0)) {
+	ret = ttm_mem_io_reserve(bdev, &bo->mem);
+	if (ret) {
 		retval = VM_FAULT_SIGBUS;
 		goto out_unlock;
 	}
 
-	is_iomem = (bus_size != 0);
-
 	page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) +
 	    bo->vm_node->start - vma->vm_pgoff;
 	page_last = ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) +
@@ -154,8 +160,7 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	 * vma->vm_page_prot when the object changes caching policy, with
 	 * the correct locks held.
 	 */
-
-	if (is_iomem) {
+	if (bo->mem.bus.is_iomem) {
 		vma->vm_page_prot = ttm_io_prot(bo->mem.placement,
 						vma->vm_page_prot);
 	} else {
@@ -171,10 +176,8 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	 */
 
 	for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
-
-		if (is_iomem)
-			pfn = ((bus_base + bus_offset) >> PAGE_SHIFT) +
-			    page_offset;
+		if (bo->mem.bus.is_iomem)
+			pfn = ((bo->mem.bus.base + bo->mem.bus.offset) >> PAGE_SHIFT) + page_offset;
 		else {
 			page = ttm_tt_get_page(ttm, page_offset);
 			if (unlikely(!page && i == 0)) {
@@ -198,7 +201,6 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 			retval =
 			    (ret == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS;
 			goto out_unlock;
-
 		}
 
 		address += PAGE_SIZE;
@@ -221,8 +223,7 @@ static void ttm_bo_vm_open(struct vm_area_struct *vma)
 
 static void ttm_bo_vm_close(struct vm_area_struct *vma)
 {
-	struct ttm_buffer_object *bo =
-	    (struct ttm_buffer_object *)vma->vm_private_data;
+	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)vma->vm_private_data;
 
 	ttm_bo_unref(&bo);
 	vma->vm_private_data = NULL;
diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
index 801b702..e70ddd8 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/ttm/ttm_memory.c
@@ -27,6 +27,7 @@
 
 #include "ttm/ttm_memory.h"
 #include "ttm/ttm_module.h"
+#include "ttm/ttm_page_alloc.h"
 #include <linux/spinlock.h>
 #include <linux/sched.h>
 #include <linux/wait.h>
@@ -393,6 +394,7 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 		       "Zone %7s: Available graphics memory: %llu kiB.\n",
 		       zone->name, (unsigned long long) zone->max_mem >> 10);
 	}
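+	/* Allow the page pools to hold at most half of the kernel zone's memory. */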
+	ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
 	return 0;
 out_no_zone:
 	ttm_mem_global_release(glob);
@@ -405,6 +407,9 @@ void ttm_mem_global_release(struct ttm_mem_global *glob)
 	unsigned int i;
 	struct ttm_mem_zone *zone;
 
+	/* let the page allocator first stop the shrink work. */
+	ttm_page_alloc_fini();
+
 	flush_workqueue(glob->swap_queue);
 	destroy_workqueue(glob->swap_queue);
 	glob->swap_queue = NULL;
@@ -412,7 +417,7 @@ void ttm_mem_global_release(struct ttm_mem_global *glob)
 		zone = glob->zones[i];
 		kobject_del(&zone->kobj);
 		kobject_put(&zone->kobj);
 	}
 	kobject_del(&glob->kobj);
 	kobject_put(&glob->kobj);
 }
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
new file mode 100644
index 0000000..ef91069
--- /dev/null
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -0,0 +1,855 @@
+/*
+ * Copyright (c) Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sub license,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Dave Airlie <airlied@redhat.com>
+ *          Jerome Glisse <jglisse@redhat.com>
+ *          Pauli Nieminen <suokkos@gmail.com>
+ */
+
+/* Simple list-based uncached page pool.
+ * - Pool collects recently freed pages for reuse.
+ * - Uses page->lru to keep a free list.
+ * - Doesn't track pages currently in use.
+ */
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/highmem.h>
+#include <linux/mm_types.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/seq_file.h> /* for seq_printf */
+#include <linux/slab.h>
+
+#include <asm/atomic.h>
+#include <asm/agp.h>
+
+#include "ttm/ttm_bo_driver.h"
+#include "ttm/ttm_page_alloc.h"
+
+
+#define NUM_PAGES_TO_ALLOC		(PAGE_SIZE/sizeof(struct page *))
+#define SMALL_ALLOCATION		16
+#define FREE_ALL_PAGES			(~0U)
+/* times are in msecs */
+#define PAGE_FREE_INTERVAL		1000
+
+/**
+ * struct ttm_page_pool - Pool to reuse recently allocated uc/wc pages.
+ *
+ * @lock: Protects the shared pool from concurrent access. Must be used with
+ * irqsave/irqrestore variants because the pool allocator may be called from
+ * delayed work.
+ * @fill_lock: Prevent concurrent calls to fill.
+ * @list: Pool of free uc/wc pages for fast reuse.
+ * @gfp_flags: Flags to pass for alloc_page.
+ * @npages: Number of pages in pool.
+ * @name: Pool name, printed in debugfs output.
+ * @nfrees: Statistics counter of pages freed from this pool.
+ * @nrefills: Statistics counter of pool refills.
+ */
+struct ttm_page_pool {
+	spinlock_t		lock;
+	bool			fill_lock;
+	struct list_head	list;
+	int			gfp_flags;
+	unsigned		npages;
+	char			*name;
+	unsigned long		nfrees;
+	unsigned long		nrefills;
+};
+
+/**
+ * Limits for the pool. They are handled without locks because the only place
+ * where they may change is the sysfs store. They won't have an immediate
+ * effect anyway, so forcing serialization to access them is pointless.
+ */
+
+struct ttm_pool_opts {
+	unsigned	alloc_size;
+	unsigned	max_size;
+	unsigned	small;
+};
+
+#define NUM_POOLS 4
+
+/**
+ * struct ttm_pool_manager - Holds memory pools for fast allocation
+ *
+ * The manager is a read-only object for the pool code, so it doesn't need
+ * locking.
+ *
+ * @kobj: sysfs object exposing the pool tunables.
+ * @mm_shrink: shrinker callback the mm uses to reclaim pooled pages.
+ * @page_alloc_inited: reference counting for pool allocator initialization.
+ * @options: tunable limits; see struct ttm_pool_opts.
+ * @pools: All pool objects in use.
+ **/
+struct ttm_pool_manager {
+	struct kobject		kobj;
+	struct shrinker		mm_shrink;
+	atomic_t		page_alloc_inited;
+	struct ttm_pool_opts	options;
+
+	union {
+		struct ttm_page_pool	pools[NUM_POOLS];
+		struct {
+			struct ttm_page_pool	wc_pool;
+			struct ttm_page_pool	uc_pool;
+			struct ttm_page_pool	wc_pool_dma32;
+			struct ttm_page_pool	uc_pool_dma32;
+		};
+	};
+};
+
+static struct attribute ttm_page_pool_max = {
+	.name = "pool_max_size",
+	.mode = S_IRUGO | S_IWUSR
+};
+static struct attribute ttm_page_pool_small = {
+	.name = "pool_small_allocation",
+	.mode = S_IRUGO | S_IWUSR
+};
+static struct attribute ttm_page_pool_alloc_size = {
+	.name = "pool_allocation_size",
+	.mode = S_IRUGO | S_IWUSR
+};
+
+static struct attribute *ttm_pool_attrs[] = {
+	&ttm_page_pool_max,
+	&ttm_page_pool_small,
+	&ttm_page_pool_alloc_size,
+	NULL
+};
+
+static void ttm_pool_kobj_release(struct kobject *kobj)
+{
+	struct ttm_pool_manager *m =
+		container_of(kobj, struct ttm_pool_manager, kobj);
+	(void)m;
+}
+
+static ssize_t ttm_pool_store(struct kobject *kobj,
+		struct attribute *attr, const char *buffer, size_t size)
+{
+	struct ttm_pool_manager *m =
+		container_of(kobj, struct ttm_pool_manager, kobj);
+	int chars;
+	unsigned val;
+	chars = sscanf(buffer, "%u", &val);
+	if (chars == 0)
+		return size;
+
+	/* Convert kb to number of pages */
+	val = val / (PAGE_SIZE >> 10);
+
+	if (attr == &ttm_page_pool_max)
+		m->options.max_size = val;
+	else if (attr == &ttm_page_pool_small)
+		m->options.small = val;
+	else if (attr == &ttm_page_pool_alloc_size) {
+		if (val > NUM_PAGES_TO_ALLOC*8) {
+			printk(KERN_ERR TTM_PFX
+			       "Setting allocation size to %lu "
+			       "is not allowed. Recommended size is "
+			       "%lu\n",
+			       NUM_PAGES_TO_ALLOC*(PAGE_SIZE >> 7),
+			       NUM_PAGES_TO_ALLOC*(PAGE_SIZE >> 10));
+			return size;
+		} else if (val > NUM_PAGES_TO_ALLOC) {
+			printk(KERN_WARNING TTM_PFX
+			       "Setting allocation size to "
+			       "larger than %lu is not recommended.\n",
+			       NUM_PAGES_TO_ALLOC*(PAGE_SIZE >> 10));
+		}
+		m->options.alloc_size = val;
+	}
+
+	return size;
+}
+
+static ssize_t ttm_pool_show(struct kobject *kobj,
+		struct attribute *attr, char *buffer)
+{
+	struct ttm_pool_manager *m =
+		container_of(kobj, struct ttm_pool_manager, kobj);
+	unsigned val = 0;
+
+	if (attr == &ttm_page_pool_max)
+		val = m->options.max_size;
+	else if (attr == &ttm_page_pool_small)
+		val = m->options.small;
+	else if (attr == &ttm_page_pool_alloc_size)
+		val = m->options.alloc_size;
+
+	val = val * (PAGE_SIZE >> 10);
+
+	return snprintf(buffer, PAGE_SIZE, "%u\n", val);
+}
+
+static const struct sysfs_ops ttm_pool_sysfs_ops = {
+	.show = &ttm_pool_show,
+	.store = &ttm_pool_store,
+};
+
+static struct kobj_type ttm_pool_kobj_type = {
+	.release = &ttm_pool_kobj_release,
+	.sysfs_ops = &ttm_pool_sysfs_ops,
+	.default_attrs = ttm_pool_attrs,
+};
+
+static struct ttm_pool_manager _manager = {
+	.page_alloc_inited	= ATOMIC_INIT(0)
+};
+
+#ifndef CONFIG_X86
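+/* On non-x86, fall back to the AGP mapping helpers to change page caching. */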
+static int set_pages_array_wb(struct page **pages, int addrinarray)
+{
+#ifdef TTM_HAS_AGP
+	int i;
+
+	for (i = 0; i < addrinarray; i++)
+		unmap_page_from_agp(pages[i]);
+#endif
+	return 0;
+}
+
+static int set_pages_array_wc(struct page **pages, int addrinarray)
+{
+#ifdef TTM_HAS_AGP
+	int i;
+
+	for (i = 0; i < addrinarray; i++)
+		map_page_into_agp(pages[i]);
+#endif
+	return 0;
+}
+
+static int set_pages_array_uc(struct page **pages, int addrinarray)
+{
+#ifdef TTM_HAS_AGP
+	int i;
+
+	for (i = 0; i < addrinarray; i++)
+		map_page_into_agp(pages[i]);
+#endif
+	return 0;
+}
+#endif
+
+/**
+ * Select the right pool for the requested caching state and ttm flags.
+ */
+static struct ttm_page_pool *ttm_get_pool(int flags,
+		enum ttm_caching_state cstate)
+{
+	int pool_index;
+
+	if (cstate == tt_cached)
+		return NULL;
+
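+	/* bit 0 selects uc over wc, bit 1 selects the dma32 variant */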
+	if (cstate == tt_wc)
+		pool_index = 0x0;
+	else
+		pool_index = 0x1;
+
+	if (flags & TTM_PAGE_FLAG_DMA32)
+		pool_index |= 0x2;
+
+	return &_manager.pools[pool_index];
+}
+
+/* set memory back to wb and free the pages. */
+static void ttm_pages_put(struct page *pages[], unsigned npages)
+{
+	unsigned i;
+	if (set_pages_array_wb(pages, npages))
+		printk(KERN_ERR TTM_PFX "Failed to set %d pages to wb!\n",
+				npages);
+	for (i = 0; i < npages; ++i)
+		__free_page(pages[i]);
+}
+
+static void ttm_pool_update_free_locked(struct ttm_page_pool *pool,
+		unsigned freed_pages)
+{
+	pool->npages -= freed_pages;
+	pool->nfrees += freed_pages;
+}
+
+/**
+ * Free pages from pool.
+ *
+ * To prevent hogging the ttm_swap process we only free NUM_PAGES_TO_ALLOC
+ * pages in one go.
+ *
+ * @pool: pool to free the pages from
+ * @nr_free: number of pages to free; FREE_ALL_PAGES frees every page in
+ * the pool
+ **/
+static int ttm_page_pool_free(struct ttm_page_pool *pool, unsigned nr_free)
+{
+	unsigned long irq_flags;
+	struct page *p;
+	struct page **pages_to_free;
+	unsigned freed_pages = 0,
+		 npages_to_free = nr_free;
+
+	if (NUM_PAGES_TO_ALLOC < nr_free)
+		npages_to_free = NUM_PAGES_TO_ALLOC;
+
+	pages_to_free = kmalloc(npages_to_free * sizeof(struct page *),
+			GFP_KERNEL);
+	if (!pages_to_free) {
+		printk(KERN_ERR TTM_PFX
+		       "Failed to allocate memory for pool free operation.\n");
+		return 0;
+	}
+
+restart:
+	spin_lock_irqsave(&pool->lock, irq_flags);
+
+	list_for_each_entry_reverse(p, &pool->list, lru) {
+		if (freed_pages >= npages_to_free)
+			break;
+
+		pages_to_free[freed_pages++] = p;
+		/* We can only remove NUM_PAGES_TO_ALLOC at a time. */
+		if (freed_pages >= NUM_PAGES_TO_ALLOC) {
+			/* remove range of pages from the pool */
+			__list_del(p->lru.prev, &pool->list);
+
+			ttm_pool_update_free_locked(pool, freed_pages);
+			/**
+			 * Because changing page caching is costly
+			 * we unlock the pool to prevent stalling.
+			 */
+			spin_unlock_irqrestore(&pool->lock, irq_flags);
+
+			ttm_pages_put(pages_to_free, freed_pages);
+			if (likely(nr_free != FREE_ALL_PAGES))
+				nr_free -= freed_pages;
+
+			if (NUM_PAGES_TO_ALLOC >= nr_free)
+				npages_to_free = nr_free;
+			else
+				npages_to_free = NUM_PAGES_TO_ALLOC;
+
+			freed_pages = 0;
+
+			/* free all so restart the processing */
+			if (nr_free)
+				goto restart;
+
+			/* Not allowed to fall through or break, because
+			 * the code following the loop runs with the
+			 * spinlock held, while we have dropped it here.
+			 */
+			goto out;
+
+		}
+	}
+
+	/* remove range of pages from the pool */
+	if (freed_pages) {
+		__list_del(&p->lru, &pool->list);
+
+		ttm_pool_update_free_locked(pool, freed_pages);
+		nr_free -= freed_pages;
+	}
+
+	spin_unlock_irqrestore(&pool->lock, irq_flags);
+
+	if (freed_pages)
+		ttm_pages_put(pages_to_free, freed_pages);
+out:
+	kfree(pages_to_free);
+	return nr_free;
+}
+
+/* Get a good estimate of how many pages are free in the pools. */
+static int ttm_pool_get_num_unused_pages(void)
+{
+	unsigned i;
+	int total = 0;
+	for (i = 0; i < NUM_POOLS; ++i)
+		total += _manager.pools[i].npages;
+
+	return total;
+}
+
+/**
+ * Callback for the mm to request that the pools reduce the number of pages held.
+ */
+static int ttm_pool_mm_shrink(int shrink_pages, gfp_t gfp_mask)
+{
+	static atomic_t start_pool = ATOMIC_INIT(0);
+	unsigned i;
+	unsigned pool_offset = atomic_add_return(1, &start_pool);
+	struct ttm_page_pool *pool;
+
+	pool_offset = pool_offset % NUM_POOLS;
+	/* select start pool in round robin fashion */
+	for (i = 0; i < NUM_POOLS; ++i) {
+		unsigned nr_free = shrink_pages;
+		if (shrink_pages == 0)
+			break;
+		pool = &_manager.pools[(i + pool_offset)%NUM_POOLS];
+		shrink_pages = ttm_page_pool_free(pool, nr_free);
+	}
+	/* return estimated number of unused pages in pool */
+	return ttm_pool_get_num_unused_pages();
+}
+
+static void ttm_pool_mm_shrink_init(struct ttm_pool_manager *manager)
+{
+	manager->mm_shrink.shrink = &ttm_pool_mm_shrink;
+	manager->mm_shrink.seeks = 1;
+	register_shrinker(&manager->mm_shrink);
+}
+
+static void ttm_pool_mm_shrink_fini(struct ttm_pool_manager *manager)
+{
+	unregister_shrinker(&manager->mm_shrink);
+}
+
+static int ttm_set_pages_caching(struct page **pages,
+		enum ttm_caching_state cstate, unsigned cpages)
+{
+	int r = 0;
+	/* Set page caching */
+	switch (cstate) {
+	case tt_uncached:
+		r = set_pages_array_uc(pages, cpages);
+		if (r)
+			printk(KERN_ERR TTM_PFX
+			       "Failed to set %d pages to uc!\n",
+			       cpages);
+		break;
+	case tt_wc:
+		r = set_pages_array_wc(pages, cpages);
+		if (r)
+			printk(KERN_ERR TTM_PFX
+			       "Failed to set %d pages to wc!\n",
+			       cpages);
+		break;
+	default:
+		break;
+	}
+	return r;
+}
+
+/**
+ * Free the pages for which changing the caching state failed; they are
+ * removed from the pages list and released back to the system.
+ */
+static void ttm_handle_caching_state_failure(struct list_head *pages,
+		int ttm_flags, enum ttm_caching_state cstate,
+		struct page **failed_pages, unsigned cpages)
+{
+	unsigned i;
+	/* Failed pages have to be freed */
+	for (i = 0; i < cpages; ++i) {
+		list_del(&failed_pages[i]->lru);
+		__free_page(failed_pages[i]);
+	}
+}
+
+/**
+ * Allocate new pages with correct caching.
+ *
+ * This function is reentrant if the caller updates 'count' based on the
+ * number of pages returned in the pages list.
+ */
+static int ttm_alloc_new_pages(struct list_head *pages, int gfp_flags,
+		int ttm_flags, enum ttm_caching_state cstate, unsigned count)
+{
+	struct page **caching_array;
+	struct page *p;
+	int r = 0;
+	unsigned i, cpages;
+	unsigned max_cpages = min(count,
+			(unsigned)(PAGE_SIZE/sizeof(struct page *)));
+
+	/* allocate array for page caching change */
+	caching_array = kmalloc(max_cpages*sizeof(struct page *), GFP_KERNEL);
+
+	if (!caching_array) {
+		printk(KERN_ERR TTM_PFX
+		       "Unable to allocate table for new pages.");
+		return -ENOMEM;
+	}
+
+	for (i = 0, cpages = 0; i < count; ++i) {
+		p = alloc_page(gfp_flags);
+
+		if (!p) {
+			printk(KERN_ERR TTM_PFX "Unable to get page %u.\n", i);
+
+			/* store already allocated pages in the pool after
+			 * setting the caching state */
+			if (cpages) {
+				r = ttm_set_pages_caching(caching_array,
+							  cstate, cpages);
+				if (r)
+					ttm_handle_caching_state_failure(pages,
+						ttm_flags, cstate,
+						caching_array, cpages);
+			}
+			r = -ENOMEM;
+			goto out;
+		}
+
+#ifdef CONFIG_HIGHMEM
+		/* gfp flags of a highmem page should never be dma32, so
+		 * we should be fine in that case
+		 */
+		if (!PageHighMem(p))
+#endif
+		{
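+			/* highmem pages have no kernel linear mapping whose
+			 * caching attributes could be changed */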
+			caching_array[cpages++] = p;
+			if (cpages == max_cpages) {
+
+				r = ttm_set_pages_caching(caching_array,
+						cstate, cpages);
+				if (r) {
+					ttm_handle_caching_state_failure(pages,
+						ttm_flags, cstate,
+						caching_array, cpages);
+					goto out;
+				}
+				cpages = 0;
+			}
+		}
+
+		list_add(&p->lru, pages);
+	}
+
+	if (cpages) {
+		r = ttm_set_pages_caching(caching_array, cstate, cpages);
+		if (r)
+			ttm_handle_caching_state_failure(pages,
+					ttm_flags, cstate,
+					caching_array, cpages);
+	}
+out:
+	kfree(caching_array);
+
+	return r;
+}
+
+/**
+ * Fill the given pool if it doesn't hold enough pages and the requested
+ * number of pages is small.
+ */
+static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
+		int ttm_flags, enum ttm_caching_state cstate, unsigned count,
+		unsigned long *irq_flags)
+{
+	struct page *p;
+	int r;
+	unsigned cpages = 0;
+	/**
+	 * Only allow one pool fill operation at a time.
+	 * If the pool doesn't have enough pages for the allocation, new
+	 * pages are allocated outside of the pool instead.
+	 */
+	if (pool->fill_lock)
+		return;
+
+	pool->fill_lock = true;
+
+	/* If the allocation request is small and there are not enough
+	 * pages in the pool, fill the pool first */
+	if (count < _manager.options.small
+		&& count > pool->npages) {
+		struct list_head new_pages;
+		unsigned alloc_size = _manager.options.alloc_size;
+
+		/**
+		 * Can't change page caching if in irqsave context. We have to
+		 * drop the pool->lock.
+		 */
+		spin_unlock_irqrestore(&pool->lock, *irq_flags);
+
+		INIT_LIST_HEAD(&new_pages);
+		r = ttm_alloc_new_pages(&new_pages, pool->gfp_flags, ttm_flags,
+				cstate,	alloc_size);
+		spin_lock_irqsave(&pool->lock, *irq_flags);
+
+		if (!r) {
+			list_splice(&new_pages, &pool->list);
+			++pool->nrefills;
+			pool->npages += alloc_size;
+		} else {
+			printk(KERN_ERR TTM_PFX
+			       "Failed to fill pool (%p).", pool);
+			/* If we have any pages left put them to the pool. */
+			list_for_each_entry(p, &pool->list, lru) {
+				++cpages;
+			}
+			list_splice(&new_pages, &pool->list);
+			pool->npages += cpages;
+		}
+
+	}
+	pool->fill_lock = false;
+}
+
+/**
+ * Cut 'count' pages from the pool and put them on the return list.
+ *
+ * @return count of pages still to allocate to fill the request.
+ */
+static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
+		struct list_head *pages, int ttm_flags,
+		enum ttm_caching_state cstate, unsigned count)
+{
+	unsigned long irq_flags;
+	struct list_head *p;
+	unsigned i;
+
+	spin_lock_irqsave(&pool->lock, irq_flags);
+	ttm_page_pool_fill_locked(pool, ttm_flags, cstate, count, &irq_flags);
+
+	if (count >= pool->npages) {
+		/* take all pages from the pool */
+		list_splice_init(&pool->list, pages);
+		count -= pool->npages;
+		pool->npages = 0;
+		goto out;
+	}
+	/* Find the last page to include for the requested number of pages;
+	 * walk from whichever end of the list is nearer to halve the search. */
+	if (count <= pool->npages/2) {
+		i = 0;
+		list_for_each(p, &pool->list) {
+			if (++i == count)
+				break;
+		}
+	} else {
+		i = pool->npages + 1;
+		list_for_each_prev(p, &pool->list) {
+			if (--i == count)
+				break;
+		}
+	}
+	/* Cut count number of pages from pool */
+	list_cut_position(pages, &pool->list, p);
+	pool->npages -= count;
+	count = 0;
+out:
+	spin_unlock_irqrestore(&pool->lock, irq_flags);
+	return count;
+}
+
+/*
+ * On success the pages list will hold 'count' correctly
+ * cached pages.
+ */
+int ttm_get_pages(struct list_head *pages, int flags,
+		enum ttm_caching_state cstate, unsigned count)
+{
+	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
+	struct page *p = NULL;
+	int gfp_flags = 0;
+	int r;
+
+	/* set zero flag for page allocation if required */
+	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC)
+		gfp_flags |= __GFP_ZERO;
+
+	/* No pool for cached pages */
+	if (pool == NULL) {
+		if (flags & TTM_PAGE_FLAG_DMA32)
+			gfp_flags |= GFP_DMA32;
+		else
+			gfp_flags |= GFP_HIGHUSER;
+
+		for (r = 0; r < count; ++r) {
+			p = alloc_page(gfp_flags);
+			if (!p) {
+
+				printk(KERN_ERR TTM_PFX
+				       "Unable to allocate page.");
+				return -ENOMEM;
+			}
+
+			list_add(&p->lru, pages);
+		}
+		return 0;
+	}
+
+
+	/* combine zero flag to pool flags */
+	gfp_flags |= pool->gfp_flags;
+
+	/* First we take pages from the pool */
+	count = ttm_page_pool_get_pages(pool, pages, flags, cstate, count);
+
+	/* clear the pages coming from the pool if requested */
+	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
+		list_for_each_entry(p, pages, lru) {
+			clear_page(page_address(p));
+		}
+	}
+
+	/* If the pool didn't have enough pages, allocate new ones. */
+	if (count > 0) {
+		/* ttm_alloc_new_pages doesn't reference pool so we can run
+		 * multiple requests in parallel.
+		 **/
+		r = ttm_alloc_new_pages(pages, gfp_flags, flags, cstate, count);
+		if (r) {
+			/* If there are any pages in the list, put them back
+			 * into the pool. */
+			printk(KERN_ERR TTM_PFX
+			       "Failed to allocate extra pages "
+			       "for large request.");
+			ttm_put_pages(pages, 0, flags, cstate);
+			return r;
+		}
+	}
+
+
+	return 0;
+}
+
+/* Put all pages in the pages list into the correct pool to await reuse */
+void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+		enum ttm_caching_state cstate)
+{
+	unsigned long irq_flags;
+	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
+	struct page *p, *tmp;
+
+	if (pool == NULL) {
+		/* No pool for this memory type so free the pages */
+
+		list_for_each_entry_safe(p, tmp, pages, lru) {
+			__free_page(p);
+		}
+		/* Make the pages list empty */
+		INIT_LIST_HEAD(pages);
+		return;
+	}
+	if (page_count == 0) {
+		list_for_each_entry_safe(p, tmp, pages, lru) {
+			++page_count;
+		}
+	}
+
+	spin_lock_irqsave(&pool->lock, irq_flags);
+	list_splice_init(pages, &pool->list);
+	pool->npages += page_count;
+	/* Check that we don't go over the pool limit */
+	page_count = 0;
+	if (pool->npages > _manager.options.max_size) {
+		page_count = pool->npages - _manager.options.max_size;
+		/* free at least NUM_PAGES_TO_ALLOC number of pages
+		 * to reduce calls to set_memory_wb */
+		if (page_count < NUM_PAGES_TO_ALLOC)
+			page_count = NUM_PAGES_TO_ALLOC;
+	}
+	spin_unlock_irqrestore(&pool->lock, irq_flags);
+	if (page_count)
+		ttm_page_pool_free(pool, page_count);
+}
+
+static void ttm_page_pool_init_locked(struct ttm_page_pool *pool, int flags,
+		char *name)
+{
+	spin_lock_init(&pool->lock);
+	pool->fill_lock = false;
+	INIT_LIST_HEAD(&pool->list);
+	pool->npages = pool->nfrees = 0;
+	pool->gfp_flags = flags;
+	pool->name = name;
+}
+
+int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
+{
+	int ret;
+	if (atomic_add_return(1, &_manager.page_alloc_inited) > 1)
+		return 0;
+
+	printk(KERN_INFO TTM_PFX "Initializing pool allocator.\n");
+
+	ttm_page_pool_init_locked(&_manager.wc_pool, GFP_HIGHUSER, "wc");
+
+	ttm_page_pool_init_locked(&_manager.uc_pool, GFP_HIGHUSER, "uc");
+
+	ttm_page_pool_init_locked(&_manager.wc_pool_dma32, GFP_USER | GFP_DMA32,
+			"wc dma");
+
+	ttm_page_pool_init_locked(&_manager.uc_pool_dma32, GFP_USER | GFP_DMA32,
+			"uc dma");
+
+	_manager.options.max_size = max_pages;
+	_manager.options.small = SMALL_ALLOCATION;
+	_manager.options.alloc_size = NUM_PAGES_TO_ALLOC;
+
+	kobject_init(&_manager.kobj, &ttm_pool_kobj_type);
+	ret = kobject_add(&_manager.kobj, &glob->kobj, "pool");
+	if (unlikely(ret != 0)) {
+		kobject_put(&_manager.kobj);
+		return ret;
+	}
+
+	ttm_pool_mm_shrink_init(&_manager);
+
+	return 0;
+}
+
+void ttm_page_alloc_fini(void)
+{
+	int i;
+
+	if (atomic_sub_return(1, &_manager.page_alloc_inited) > 0)
+		return;
+
+	printk(KERN_INFO TTM_PFX "Finalizing pool allocator.\n");
+	ttm_pool_mm_shrink_fini(&_manager);
+
+	for (i = 0; i < NUM_POOLS; ++i)
+		ttm_page_pool_free(&_manager.pools[i], FREE_ALL_PAGES);
+
+	kobject_put(&_manager.kobj);
+}
+
+int ttm_page_alloc_debugfs(struct seq_file *m, void *data)
+{
+	struct ttm_page_pool *p;
+	unsigned i;
+	char *h[] = {"pool", "refills", "pages freed", "size"};
+	if (atomic_read(&_manager.page_alloc_inited) == 0) {
+		seq_printf(m, "No pool allocator running.\n");
+		return 0;
+	}
+	seq_printf(m, "%6s %12s %13s %8s\n",
+			h[0], h[1], h[2], h[3]);
+	for (i = 0; i < NUM_POOLS; ++i) {
+		p = &_manager.pools[i];
+
+		seq_printf(m, "%6s %12lu %13lu %8u\n",
+				p->name, p->nrefills,
+				p->nfrees, p->npages);
+	}
+	return 0;
+}
+EXPORT_SYMBOL(ttm_page_alloc_debugfs);
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index d5fd5b8..a7bab87 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -39,6 +39,7 @@
 #include "ttm/ttm_module.h"
 #include "ttm/ttm_bo_driver.h"
 #include "ttm/ttm_placement.h"
+#include "ttm/ttm_page_alloc.h"
 
 static int ttm_tt_swapin(struct ttm_tt *ttm);
 
@@ -56,21 +57,6 @@ static void ttm_tt_free_page_directory(struct ttm_tt *ttm)
 	ttm->pages = NULL;
 }
 
-static struct page *ttm_tt_alloc_page(unsigned page_flags)
-{
-	gfp_t gfp_flags = GFP_USER;
-
-	if (page_flags & TTM_PAGE_FLAG_ZERO_ALLOC)
-		gfp_flags |= __GFP_ZERO;
-
-	if (page_flags & TTM_PAGE_FLAG_DMA32)
-		gfp_flags |= __GFP_DMA32;
-	else
-		gfp_flags |= __GFP_HIGHMEM;
-
-	return alloc_page(gfp_flags);
-}
-
 static void ttm_tt_free_user_pages(struct ttm_tt *ttm)
 {
 	int write;
@@ -111,15 +97,21 @@ static void ttm_tt_free_user_pages(struct ttm_tt *ttm)
 static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 {
 	struct page *p;
+	struct list_head h;
 	struct ttm_mem_global *mem_glob = ttm->glob->mem_glob;
 	int ret;
 
 	while (NULL == (p = ttm->pages[index])) {
-		p = ttm_tt_alloc_page(ttm->page_flags);
 
-		if (!p)
+		INIT_LIST_HEAD(&h);
+
+		ret = ttm_get_pages(&h, ttm->page_flags, ttm->caching_state, 1);
+
+		if (ret != 0)
 			return NULL;
 
+		p = list_first_entry(&h, struct page, lru);
+
 		ret = ttm_mem_global_alloc_page(mem_glob, p, false, false);
 		if (unlikely(ret != 0))
 			goto out_err;
@@ -228,10 +220,10 @@ static int ttm_tt_set_caching(struct ttm_tt *ttm,
 	if (ttm->caching_state == c_state)
 		return 0;
 
-	if (c_state != tt_cached) {
-		ret = ttm_tt_populate(ttm);
-		if (unlikely(ret != 0))
-			return ret;
+	if (ttm->state == tt_unpopulated) {
+		/* Change caching but don't populate */
+		ttm->caching_state = c_state;
+		return 0;
 	}
 
 	if (ttm->caching_state == tt_cached)
@@ -282,13 +274,17 @@ EXPORT_SYMBOL(ttm_tt_set_placement_caching);
 static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm)
 {
 	int i;
+	unsigned count = 0;
+	struct list_head h;
 	struct page *cur_page;
 	struct ttm_backend *be = ttm->be;
 
+	INIT_LIST_HEAD(&h);
+
 	if (be)
 		be->func->clear(be);
-	(void)ttm_tt_set_caching(ttm, tt_cached);
 	for (i = 0; i < ttm->num_pages; ++i) {
+
 		cur_page = ttm->pages[i];
 		ttm->pages[i] = NULL;
 		if (cur_page) {
@@ -298,9 +294,11 @@ static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm)
 				       "Leaking pages.\n");
 			ttm_mem_global_free_page(ttm->glob->mem_glob,
 						 cur_page);
-			__free_page(cur_page);
+			list_add(&cur_page->lru, &h);
+			count++;
 		}
 	}
+	ttm_put_pages(&h, count, ttm->page_flags, ttm->caching_state);
 	ttm->state = tt_unpopulated;
 	ttm->first_himem_page = ttm->num_pages;
 	ttm->last_lomem_page = -1;
diff --git a/drivers/gpu/drm/vmwgfx/Makefile b/drivers/gpu/drm/vmwgfx/Makefile
index 1a3cb68..4505e17 100644
--- a/drivers/gpu/drm/vmwgfx/Makefile
+++ b/drivers/gpu/drm/vmwgfx/Makefile
@@ -4,6 +4,6 @@ ccflags-y := -Iinclude/drm
 vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \
 	    vmwgfx_fb.o vmwgfx_ioctl.o vmwgfx_resource.o vmwgfx_buffer.o \
 	    vmwgfx_fifo.o vmwgfx_irq.o vmwgfx_ldu.o vmwgfx_ttm_glue.o \
-	    vmwgfx_overlay.o
+	    vmwgfx_overlay.o vmwgfx_fence.o
 
 obj-$(CONFIG_DRM_VMWGFX) := vmwgfx.o
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c
index 825ebe3..c4f5114 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c
@@ -137,9 +137,6 @@ int vmw_invalidate_caches(struct ttm_bo_device *bdev, uint32_t flags)
 int vmw_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 		      struct ttm_mem_type_manager *man)
 {
-	struct vmw_private *dev_priv =
-	    container_of(bdev, struct vmw_private, bdev);
-
 	switch (type) {
 	case TTM_PL_SYSTEM:
 		/* System memory */
@@ -151,11 +148,7 @@ int vmw_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 	case TTM_PL_VRAM:
 		/* "On-card" video ram */
 		man->gpu_offset = 0;
-		man->io_offset = dev_priv->vram_start;
-		man->io_size = dev_priv->vram_size;
-		man->flags = TTM_MEMTYPE_FLAG_FIXED |
-		    TTM_MEMTYPE_FLAG_NEEDS_IOREMAP | TTM_MEMTYPE_FLAG_MAPPABLE;
-		man->io_addr = NULL;
+		man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
 		man->available_caching = TTM_PL_MASK_CACHING;
 		man->default_caching = TTM_PL_FLAG_WC;
 		break;
@@ -193,6 +186,42 @@ static void vmw_swap_notify(struct ttm_buffer_object *bo)
 	vmw_dmabuf_gmr_unbind(bo);
 }
 
+static int vmw_ttm_io_mem_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
+	struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev);
+
+	mem->bus.addr = NULL;
+	mem->bus.is_iomem = false;
+	mem->bus.offset = 0;
+	mem->bus.size = mem->num_pages << PAGE_SHIFT;
+	mem->bus.base = 0;
+	if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
+		return -EINVAL;
+	switch (mem->mem_type) {
+	case TTM_PL_SYSTEM:
+		/* System memory */
+		return 0;
+	case TTM_PL_VRAM:
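+		/* CPU physical address of the BO is bus.base + bus.offset */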
+		mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
+		mem->bus.base = dev_priv->vram_start;
+		mem->bus.is_iomem = true;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static void vmw_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+}
+
+static int vmw_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
+{
+	return 0;
+}
+
 /**
  * FIXME: We're using the old vmware polling method to sync.
  * Do this with fences instead.
@@ -248,5 +277,8 @@ struct ttm_bo_driver vmw_bo_driver = {
 	.sync_obj_unref = vmw_sync_obj_unref,
 	.sync_obj_ref = vmw_sync_obj_ref,
 	.move_notify = vmw_move_notify,
-	.swap_notify = vmw_swap_notify
+	.swap_notify = vmw_swap_notify,
+	.fault_reserve_notify = &vmw_ttm_fault_reserve_notify,
+	.io_mem_reserve = &vmw_ttm_io_mem_reserve,
+	.io_mem_free = &vmw_ttm_io_mem_free,
 };
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 0c9c081..b793c8c 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -88,6 +88,9 @@
 #define DRM_IOCTL_VMW_FENCE_WAIT				\
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VMW_FENCE_WAIT,		\
 		 struct drm_vmw_fence_wait_arg)
+#define DRM_IOCTL_VMW_UPDATE_LAYOUT				\
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VMW_UPDATE_LAYOUT,	\
+		 struct drm_vmw_update_layout_arg)
 
 
 /**
@@ -135,7 +138,9 @@ static struct drm_ioctl_desc vmw_ioctls[] = {
 	VMW_IOCTL_DEF(DRM_IOCTL_VMW_FIFO_DEBUG, vmw_fifo_debug_ioctl,
 		      DRM_AUTH | DRM_ROOT_ONLY | DRM_MASTER | DRM_UNLOCKED),
 	VMW_IOCTL_DEF(DRM_IOCTL_VMW_FENCE_WAIT, vmw_fence_wait_ioctl,
-		      DRM_AUTH | DRM_UNLOCKED)
+		      DRM_AUTH | DRM_UNLOCKED),
+	VMW_IOCTL_DEF(DRM_IOCTL_VMW_UPDATE_LAYOUT, vmw_kms_update_layout_ioctl,
+		      DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED)
 };
 
 static struct pci_device_id vmw_pci_id_list[] = {
@@ -318,6 +323,15 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
 		goto out_err3;
 	}
 
+	/* Need mmio memory to check for fifo pitchlock cap. */
+	if (!(dev_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY) &&
+	    !(dev_priv->capabilities & SVGA_CAP_PITCHLOCK) &&
+	    !vmw_fifo_have_pitchlock(dev_priv)) {
+		ret = -ENOSYS;
+		DRM_ERROR("Hardware has no pitchlock\n");
+		goto out_err4;
+	}
+
 	dev_priv->tdev = ttm_object_device_init
 	    (dev_priv->mem_global_ref.object, 12);
 
@@ -399,8 +413,6 @@ static int vmw_driver_unload(struct drm_device *dev)
 {
 	struct vmw_private *dev_priv = vmw_priv(dev);
 
-	DRM_INFO(VMWGFX_DRIVER_NAME " unload.\n");
-
 	unregister_pm_notifier(&dev_priv->pm_nb);
 
 	vmw_fb_close(dev_priv);
@@ -546,7 +558,6 @@ static int vmw_master_create(struct drm_device *dev,
 {
 	struct vmw_master *vmaster;
 
-	DRM_INFO("Master create.\n");
 	vmaster = kzalloc(sizeof(*vmaster), GFP_KERNEL);
 	if (unlikely(vmaster == NULL))
 		return -ENOMEM;
@@ -563,7 +574,6 @@ static void vmw_master_destroy(struct drm_device *dev,
 {
 	struct vmw_master *vmaster = vmw_master(master);
 
-	DRM_INFO("Master destroy.\n");
 	master->driver_priv = NULL;
 	kfree(vmaster);
 }
@@ -579,8 +589,6 @@ static int vmw_master_set(struct drm_device *dev,
 	struct vmw_master *vmaster = vmw_master(file_priv->master);
 	int ret = 0;
 
-	DRM_INFO("Master set.\n");
-
 	if (active) {
 		BUG_ON(active != &dev_priv->fbdev_master);
 		ret = ttm_vt_lock(&active->lock, false, vmw_fp->tfile);
@@ -622,8 +630,6 @@ static void vmw_master_drop(struct drm_device *dev,
 	struct vmw_master *vmaster = vmw_master(file_priv->master);
 	int ret;
 
-	DRM_INFO("Master drop.\n");
-
 	/**
 	 * Make sure the master doesn't disappear while we have
 	 * it locked.
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
index 356dc93..eaad520 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
@@ -41,12 +41,13 @@
 
 #define VMWGFX_DRIVER_DATE "20100209"
 #define VMWGFX_DRIVER_MAJOR 1
-#define VMWGFX_DRIVER_MINOR 0
+#define VMWGFX_DRIVER_MINOR 2
 #define VMWGFX_DRIVER_PATCHLEVEL 0
 #define VMWGFX_FILE_PAGE_OFFSET 0x00100000
 #define VMWGFX_FIFO_STATIC_SIZE (1024*1024)
 #define VMWGFX_MAX_RELOCATIONS 2048
 #define VMWGFX_MAX_GMRS 2048
+#define VMWGFX_MAX_DISPLAYS 16
 
 struct vmw_fpriv {
 	struct drm_master *locked_master;
@@ -102,6 +103,13 @@ struct vmw_surface {
 	struct vmw_cursor_snooper snooper;
 };
 
+struct vmw_fence_queue {
+	struct list_head head;
+	struct timespec lag;
+	struct timespec lag_time;
+	spinlock_t lock;
+};
+
 struct vmw_fifo_state {
 	unsigned long reserved_size;
 	__le32 *dynamic_buffer;
@@ -115,6 +123,7 @@ struct vmw_fifo_state {
 	uint32_t capabilities;
 	struct mutex fifo_mutex;
 	struct rw_semaphore rwsem;
+	struct vmw_fence_queue fence_queue;
 };
 
 struct vmw_relocation {
@@ -144,6 +153,14 @@ struct vmw_master {
 	struct ttm_lock lock;
 };
 
+struct vmw_vga_topology_state {
+	uint32_t width;
+	uint32_t height;
+	uint32_t primary;
+	uint32_t pos_x;
+	uint32_t pos_y;
+};
+
 struct vmw_private {
 	struct ttm_bo_device bdev;
 	struct ttm_bo_global_ref bo_global_ref;
@@ -171,14 +188,19 @@ struct vmw_private {
 	 * VGA registers.
 	 */
 
+	struct vmw_vga_topology_state vga_save[VMWGFX_MAX_DISPLAYS];
 	uint32_t vga_width;
 	uint32_t vga_height;
 	uint32_t vga_depth;
 	uint32_t vga_bpp;
 	uint32_t vga_pseudo;
 	uint32_t vga_red_mask;
-	uint32_t vga_blue_mask;
 	uint32_t vga_green_mask;
+	uint32_t vga_blue_mask;
+	uint32_t vga_bpl;
+	uint32_t vga_pitchlock;
+
+	uint32_t num_displays;
 
 	/*
 	 * Framebuffer info.
@@ -393,6 +415,7 @@ extern int vmw_fifo_send_fence(struct vmw_private *dev_priv,
 extern void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason);
 extern int vmw_fifo_mmap(struct file *filp, struct vm_area_struct *vma);
 extern bool vmw_fifo_have_3d(struct vmw_private *dev_priv);
+extern bool vmw_fifo_have_pitchlock(struct vmw_private *dev_priv);
 
 /**
  * TTM glue - vmwgfx_ttm_glue.c
@@ -441,6 +464,23 @@ extern int vmw_fallback_wait(struct vmw_private *dev_priv,
 			     uint32_t sequence,
 			     bool interruptible,
 			     unsigned long timeout);
+extern void vmw_update_sequence(struct vmw_private *dev_priv,
+				struct vmw_fifo_state *fifo_state);
+
+
+/**
+ * Rudimentary fence objects currently used only for throttling -
+ * vmwgfx_fence.c
+ */
+
+extern void vmw_fence_queue_init(struct vmw_fence_queue *queue);
+extern void vmw_fence_queue_takedown(struct vmw_fence_queue *queue);
+extern int vmw_fence_push(struct vmw_fence_queue *queue,
+			  uint32_t sequence);
+extern int vmw_fence_pull(struct vmw_fence_queue *queue,
+			  uint32_t signaled_sequence);
+extern int vmw_wait_lag(struct vmw_private *dev_priv,
+			struct vmw_fence_queue *queue, uint32_t us);
 
 /**
  * Kernel framebuffer - vmwgfx_fb.c
@@ -466,6 +506,11 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf,
 			  struct ttm_object_file *tfile,
 			  struct ttm_buffer_object *bo,
 			  SVGA3dCmdHeader *header);
+void vmw_kms_write_svga(struct vmw_private *vmw_priv,
+			unsigned width, unsigned height, unsigned pitch,
+			unsigned bbp, unsigned depth);
+int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *file_priv);
 
 /**
  * Overlay control - vmwgfx_overlay.c
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
index 0897359..8e39685 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -570,7 +570,7 @@ static int vmw_validate_single_buffer(struct vmw_private *dev_priv,
 	 * Put BO in VRAM, only if there is space.
 	 */
 
-	ret = ttm_bo_validate(bo, &vmw_vram_sys_placement, true, false);
+	ret = ttm_bo_validate(bo, &vmw_vram_sys_placement, true, false, false);
 	if (unlikely(ret == -ERESTARTSYS))
 		return ret;
 
@@ -590,7 +590,7 @@ static int vmw_validate_single_buffer(struct vmw_private *dev_priv,
 	 * previous contents.
 	 */
 
-	ret = ttm_bo_validate(bo, &vmw_vram_placement, true, false);
+	ret = ttm_bo_validate(bo, &vmw_vram_placement, true, false, false);
 	return ret;
 }
 
@@ -644,6 +644,7 @@ int vmw_execbuf_ioctl(struct drm_device *dev, void *data,
 	ret = copy_from_user(cmd, user_cmd, arg->command_size);
 
 	if (unlikely(ret != 0)) {
+		ret = -EFAULT;
 		DRM_ERROR("Failed copying commands.\n");
 		goto out_commit;
 	}
@@ -669,6 +670,15 @@ int vmw_execbuf_ioctl(struct drm_device *dev, void *data,
 		goto out_err;
 
 	vmw_apply_relocations(sw_context);
+
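+	/* Optionally throttle: wait until the command-stream lag drops
+	 * below throttle_us before committing. */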
+	if (arg->throttle_us) {
+		ret = vmw_wait_lag(dev_priv, &dev_priv->fifo.fence_queue,
+				   arg->throttle_us);
+
+		if (unlikely(ret != 0))
+			goto out_err;
+	}
+
 	vmw_fifo_commit(dev_priv, arg->command_size);
 
 	ret = vmw_fifo_send_fence(dev_priv, &sequence);
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
index a933670..b0866f0 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
@@ -132,16 +132,14 @@ static int vmw_fb_check_var(struct fb_var_screeninfo *var,
 		return -EINVAL;
 	}
 
-	/* without multimon its hard to resize */
-	if (!(vmw_priv->capabilities & SVGA_CAP_MULTIMON) &&
-	    (var->xres != par->max_width ||
-	     var->yres != par->max_height)) {
-		DRM_ERROR("Tried to resize, but we don't have multimon\n");
+	if (!(vmw_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY) &&
+	    (var->xoffset != 0 || var->yoffset != 0)) {
+		DRM_ERROR("Can not handle panning without display topology\n");
 		return -EINVAL;
 	}
 
-	if (var->xres > par->max_width ||
-	    var->yres > par->max_height) {
+	if ((var->xoffset + var->xres) > par->max_width ||
+	    (var->yoffset + var->yres) > par->max_height) {
 		DRM_ERROR("Requested geom can not fit in framebuffer\n");
 		return -EINVAL;
 	}
@@ -154,27 +152,11 @@ static int vmw_fb_set_par(struct fb_info *info)
 	struct vmw_fb_par *par = info->par;
 	struct vmw_private *vmw_priv = par->vmw_priv;
 
-	if (vmw_priv->capabilities & SVGA_CAP_MULTIMON) {
-		vmw_write(vmw_priv, SVGA_REG_NUM_GUEST_DISPLAYS, 1);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_IS_PRIMARY, true);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_X, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
-
-		vmw_write(vmw_priv, SVGA_REG_ENABLE, 1);
-		vmw_write(vmw_priv, SVGA_REG_WIDTH, par->max_width);
-		vmw_write(vmw_priv, SVGA_REG_HEIGHT, par->max_height);
-		vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, par->bpp);
-		vmw_write(vmw_priv, SVGA_REG_DEPTH, par->depth);
-		vmw_write(vmw_priv, SVGA_REG_RED_MASK, 0x00ff0000);
-		vmw_write(vmw_priv, SVGA_REG_GREEN_MASK, 0x0000ff00);
-		vmw_write(vmw_priv, SVGA_REG_BLUE_MASK, 0x000000ff);
-
+	vmw_kms_write_svga(vmw_priv, info->var.xres, info->var.yres,
+			   info->fix.line_length,
+			   par->bpp, par->depth);
+	if (vmw_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY) {
 		/* TODO check if pitch and offset changes */
-
 		vmw_write(vmw_priv, SVGA_REG_NUM_GUEST_DISPLAYS, 1);
 		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, 0);
 		vmw_write(vmw_priv, SVGA_REG_DISPLAY_IS_PRIMARY, true);
@@ -183,13 +165,13 @@ static int vmw_fb_set_par(struct fb_info *info)
 		vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, info->var.xres);
 		vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, info->var.yres);
 		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
-	} else {
-		vmw_write(vmw_priv, SVGA_REG_WIDTH, info->var.xres);
-		vmw_write(vmw_priv, SVGA_REG_HEIGHT, info->var.yres);
-
-		/* TODO check if pitch and offset changes */
 	}
 
+	/* This check is really helpful: if it fails, the user
+	 * probably cannot see anything on the screen.
+	 */
+	WARN_ON(vmw_read(vmw_priv, SVGA_REG_FB_OFFSET) != 0);
+
 	return 0;
 }
 
@@ -416,48 +398,23 @@ int vmw_fb_init(struct vmw_private *vmw_priv)
 	unsigned fb_bbp, fb_depth, fb_offset, fb_pitch, fb_size;
 	int ret;
 
+	/* XXX These shouldn't be hardcoded. */
 	initial_width = 800;
 	initial_height = 600;
 
 	fb_bbp = 32;
 	fb_depth = 24;
 
-	if (vmw_priv->capabilities & SVGA_CAP_MULTIMON) {
-		fb_width = min(vmw_priv->fb_max_width, (unsigned)2048);
-		fb_height = min(vmw_priv->fb_max_height, (unsigned)2048);
-	} else {
-		fb_width = min(vmw_priv->fb_max_width, initial_width);
-		fb_height = min(vmw_priv->fb_max_height, initial_height);
-	}
+	/* XXX Neither should these be hardcoded. */
+	fb_width = min(vmw_priv->fb_max_width, (unsigned)2048);
+	fb_height = min(vmw_priv->fb_max_height, (unsigned)2048);
 
 	initial_width = min(fb_width, initial_width);
 	initial_height = min(fb_height, initial_height);
 
-	vmw_write(vmw_priv, SVGA_REG_WIDTH, fb_width);
-	vmw_write(vmw_priv, SVGA_REG_HEIGHT, fb_height);
-	vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, fb_bbp);
-	vmw_write(vmw_priv, SVGA_REG_DEPTH, fb_depth);
-	vmw_write(vmw_priv, SVGA_REG_RED_MASK, 0x00ff0000);
-	vmw_write(vmw_priv, SVGA_REG_GREEN_MASK, 0x0000ff00);
-	vmw_write(vmw_priv, SVGA_REG_BLUE_MASK, 0x000000ff);
-
-	fb_size = vmw_read(vmw_priv, SVGA_REG_FB_SIZE);
+	fb_pitch = fb_width * fb_bbp / 8;
+	fb_size = fb_pitch * fb_height;
 	fb_offset = vmw_read(vmw_priv, SVGA_REG_FB_OFFSET);
-	fb_pitch = vmw_read(vmw_priv, SVGA_REG_BYTES_PER_LINE);
-
-	DRM_DEBUG("width  %u\n", vmw_read(vmw_priv, SVGA_REG_MAX_WIDTH));
-	DRM_DEBUG("height %u\n", vmw_read(vmw_priv, SVGA_REG_MAX_HEIGHT));
-	DRM_DEBUG("width  %u\n", vmw_read(vmw_priv, SVGA_REG_WIDTH));
-	DRM_DEBUG("height %u\n", vmw_read(vmw_priv, SVGA_REG_HEIGHT));
-	DRM_DEBUG("bpp    %u\n", vmw_read(vmw_priv, SVGA_REG_BITS_PER_PIXEL));
-	DRM_DEBUG("depth  %u\n", vmw_read(vmw_priv, SVGA_REG_DEPTH));
-	DRM_DEBUG("bpl    %u\n", vmw_read(vmw_priv, SVGA_REG_BYTES_PER_LINE));
-	DRM_DEBUG("r mask %08x\n", vmw_read(vmw_priv, SVGA_REG_RED_MASK));
-	DRM_DEBUG("g mask %08x\n", vmw_read(vmw_priv, SVGA_REG_GREEN_MASK));
-	DRM_DEBUG("b mask %08x\n", vmw_read(vmw_priv, SVGA_REG_BLUE_MASK));
-	DRM_DEBUG("fb_offset 0x%08x\n", fb_offset);
-	DRM_DEBUG("fb_pitch  %u\n", fb_pitch);
-	DRM_DEBUG("fb_size   %u kiB\n", fb_size / 1024);
 
 	info = framebuffer_alloc(sizeof(*par), device);
 	if (!info)
@@ -559,8 +516,13 @@ int vmw_fb_init(struct vmw_private *vmw_priv)
 	info->pixmap.scan_align = 1;
 #endif
 
-	info->aperture_base = vmw_priv->vram_start;
-	info->aperture_size = vmw_priv->vram_size;
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures) {
+		ret = -ENOMEM;
+		goto err_aper;
+	}
+	info->apertures->ranges[0].base = vmw_priv->vram_start;
+	info->apertures->ranges[0].size = vmw_priv->vram_size;
 
 	/*
 	 * Dirty & Deferred IO
@@ -580,6 +542,7 @@ int vmw_fb_init(struct vmw_private *vmw_priv)
 
 err_defio:
 	fb_deferred_io_cleanup(info);
+err_aper:
 	ttm_bo_kunmap(&par->map);
 err_unref:
 	ttm_bo_unref((struct ttm_buffer_object **)&par->vmw_bo);
@@ -628,7 +591,7 @@ int vmw_dmabuf_from_vram(struct vmw_private *vmw_priv,
 	if (unlikely(ret != 0))
 		return ret;
 
-	ret = ttm_bo_validate(bo, &vmw_sys_placement, false, false);
+	ret = ttm_bo_validate(bo, &vmw_sys_placement, false, false, false);
 	ttm_bo_unreserve(bo);
 
 	return ret;
@@ -652,7 +615,11 @@ int vmw_dmabuf_to_start_of_vram(struct vmw_private *vmw_priv,
 	if (unlikely(ret != 0))
 		goto err_unlock;
 
-	ret = ttm_bo_validate(bo, &ne_placement, false, false);
+	ret = ttm_bo_validate(bo, &ne_placement, false, false, false);
+
+	/* Could probably BUG_ON() here instead. */
+	WARN_ON(bo->offset != 0);
+
 	ttm_bo_unreserve(bo);
 err_unlock:
 	ttm_write_unlock(&vmw_priv->active_master->lock);
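
The extra boolean threaded through every ttm_bo_validate() call in this patch follows a TTM interface change in this kernel series: the single no_wait flag was split into separate reserve and GPU waits. For reference, the callers here assume a signature along these lines (sketch):

    int ttm_bo_validate(struct ttm_buffer_object *bo,
                        struct ttm_placement *placement,
                        bool interruptible,
                        bool no_wait_reserve, bool no_wait_gpu);
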
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
new file mode 100644
index 0000000..61eacc1
--- /dev/null
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
@@ -0,0 +1,173 @@
+/**************************************************************************
+ *
+ * Copyright (C) 2010 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+
+
+#include "vmwgfx_drv.h"
+
+struct vmw_fence {
+	struct list_head head;
+	uint32_t sequence;
+	struct timespec submitted;
+};
+
+void vmw_fence_queue_init(struct vmw_fence_queue *queue)
+{
+	INIT_LIST_HEAD(&queue->head);
+	queue->lag = ns_to_timespec(0);
+	getrawmonotonic(&queue->lag_time);
+	spin_lock_init(&queue->lock);
+}
+
+void vmw_fence_queue_takedown(struct vmw_fence_queue *queue)
+{
+	struct vmw_fence *fence, *next;
+
+	spin_lock(&queue->lock);
+	list_for_each_entry_safe(fence, next, &queue->head, head) {
+		kfree(fence);
+	}
+	spin_unlock(&queue->lock);
+}
+
+int vmw_fence_push(struct vmw_fence_queue *queue,
+		   uint32_t sequence)
+{
+	struct vmw_fence *fence = kmalloc(sizeof(*fence), GFP_KERNEL);
+
+	if (unlikely(!fence))
+		return -ENOMEM;
+
+	fence->sequence = sequence;
+	getrawmonotonic(&fence->submitted);
+	spin_lock(&queue->lock);
+	list_add_tail(&fence->head, &queue->head);
+	spin_unlock(&queue->lock);
+
+	return 0;
+}
+
+int vmw_fence_pull(struct vmw_fence_queue *queue,
+		   uint32_t signaled_sequence)
+{
+	struct vmw_fence *fence, *next;
+	struct timespec now;
+	bool updated = false;
+
+	spin_lock(&queue->lock);
+	getrawmonotonic(&now);
+
+	if (list_empty(&queue->head)) {
+		queue->lag = ns_to_timespec(0);
+		queue->lag_time = now;
+		updated = true;
+		goto out_unlock;
+	}
+
+	list_for_each_entry_safe(fence, next, &queue->head, head) {
+		if (signaled_sequence - fence->sequence > (1 << 30))
+			continue;
+
+		queue->lag = timespec_sub(now, fence->submitted);
+		queue->lag_time = now;
+		updated = true;
+		list_del(&fence->head);
+		kfree(fence);
+	}
+
+out_unlock:
+	spin_unlock(&queue->lock);
+
+	return (updated) ? 0 : -EBUSY;
+}
+
+static struct timespec vmw_timespec_add(struct timespec t1,
+					struct timespec t2)
+{
+	t1.tv_sec += t2.tv_sec;
+	t1.tv_nsec += t2.tv_nsec;
+	if (t1.tv_nsec >= 1000000000L) {
+		t1.tv_sec += 1;
+		t1.tv_nsec -= 1000000000L;
+	}
+
+	return t1;
+}
+
+static struct timespec vmw_fifo_lag(struct vmw_fence_queue *queue)
+{
+	struct timespec now;
+
+	spin_lock(&queue->lock);
+	getrawmonotonic(&now);
+	queue->lag = vmw_timespec_add(queue->lag,
+				      timespec_sub(now, queue->lag_time));
+	queue->lag_time = now;
+	spin_unlock(&queue->lock);
+	return queue->lag;
+}
+
+
+static bool vmw_lag_lt(struct vmw_fence_queue *queue,
+		       uint32_t us)
+{
+	struct timespec lag, cond;
+
+	cond = ns_to_timespec((s64) us * 1000);
+	lag = vmw_fifo_lag(queue);
+	return (timespec_compare(&lag, &cond) < 1);
+}
+
+int vmw_wait_lag(struct vmw_private *dev_priv,
+		 struct vmw_fence_queue *queue, uint32_t us)
+{
+	struct vmw_fence *fence;
+	uint32_t sequence;
+	int ret;
+
+	while (!vmw_lag_lt(queue, us)) {
+		spin_lock(&queue->lock);
+		if (list_empty(&queue->head))
+			sequence = atomic_read(&dev_priv->fence_seq);
+		else {
+			fence = list_first_entry(&queue->head,
+						 struct vmw_fence, head);
+			sequence = fence->sequence;
+		}
+		spin_unlock(&queue->lock);
+
+		ret = vmw_wait_fence(dev_priv, false, sequence, true,
+				     3*HZ);
+
+		if (unlikely(ret != 0))
+			return ret;
+
+		(void) vmw_fence_pull(queue, sequence);
+	}
+	return 0;
+}
+
+
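
Taken together, this new file implements command-stream throttling: vmw_fifo_send_fence() pushes each fence with its submission timestamp, the IRQ path pulls signaled fences to update the measured lag, and vmw_wait_lag() blocks submitters until the lag drops below their threshold. A condensed sketch of the flow as wired up by the rest of this patch:

    /* Condensed sketch: execbuf throttling via the fence queue. */
    uint32_t seq;

    if (arg->throttle_us)       /* cap how far the FIFO may lag */
            vmw_wait_lag(dev_priv, &dev_priv->fifo.fence_queue,
                         arg->throttle_us);
    vmw_fifo_commit(dev_priv, arg->command_size);
    vmw_fifo_send_fence(dev_priv, &seq);  /* does vmw_fence_push(seq) */
    /* ...later, from the IRQ/update path: */
    vmw_fence_pull(&dev_priv->fifo.fence_queue, seq);
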
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
index 39d43a0..e6a1eb7 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
@@ -34,6 +34,9 @@ bool vmw_fifo_have_3d(struct vmw_private *dev_priv)
 	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
 	uint32_t fifo_min, hwversion;
 
+	if (!(dev_priv->capabilities & SVGA_CAP_EXTENDED_FIFO))
+		return false;
+
 	fifo_min = ioread32(fifo_mem  + SVGA_FIFO_MIN);
 	if (fifo_min <= SVGA_FIFO_3D_HWVERSION * sizeof(unsigned int))
 		return false;
@@ -48,6 +51,21 @@ bool vmw_fifo_have_3d(struct vmw_private *dev_priv)
 	return true;
 }
 
+bool vmw_fifo_have_pitchlock(struct vmw_private *dev_priv)
+{
+	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
+	uint32_t caps;
+
+	if (!(dev_priv->capabilities & SVGA_CAP_EXTENDED_FIFO))
+		return false;
+
+	caps = ioread32(fifo_mem + SVGA_FIFO_CAPABILITIES);
+	if (caps & SVGA_FIFO_CAP_PITCHLOCK)
+		return true;
+
+	return false;
+}
+
 int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo)
 {
 	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
@@ -120,7 +138,7 @@ int vmw_fifo_init(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo)
 
 	atomic_set(&dev_priv->fence_seq, dev_priv->last_read_sequence);
 	iowrite32(dev_priv->last_read_sequence, fifo_mem + SVGA_FIFO_FENCE);
-
+	vmw_fence_queue_init(&fifo->fence_queue);
 	return vmw_fifo_send_fence(dev_priv, &dummy);
 out_err:
 	vfree(fifo->static_buffer);
@@ -159,6 +177,7 @@ void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo)
 		  dev_priv->enable_state);
 
 	mutex_unlock(&dev_priv->hw_mutex);
+	vmw_fence_queue_takedown(&fifo->fence_queue);
 
 	if (likely(fifo->last_buffer != NULL)) {
 		vfree(fifo->last_buffer);
@@ -484,6 +503,8 @@ int vmw_fifo_send_fence(struct vmw_private *dev_priv, uint32_t *sequence)
 	fifo_state->last_buffer_add = true;
 	vmw_fifo_commit(dev_priv, bytes);
 	fifo_state->last_buffer_add = false;
+	(void) vmw_fence_push(&fifo_state->fence_queue, *sequence);
+	vmw_update_sequence(dev_priv, fifo_state);
 
 out_err:
 	return ret;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
index 4d7cb53..e92298a 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
@@ -64,22 +64,33 @@ static bool vmw_fifo_idle(struct vmw_private *dev_priv, uint32_t sequence)
 	return (busy == 0);
 }
 
+void vmw_update_sequence(struct vmw_private *dev_priv,
+			 struct vmw_fifo_state *fifo_state)
+{
+	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
+
+	uint32_t sequence = ioread32(fifo_mem + SVGA_FIFO_FENCE);
+
+	if (dev_priv->last_read_sequence != sequence) {
+		dev_priv->last_read_sequence = sequence;
+		vmw_fence_pull(&fifo_state->fence_queue, sequence);
+	}
+}
 
 bool vmw_fence_signaled(struct vmw_private *dev_priv,
 			uint32_t sequence)
 {
-	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
 	struct vmw_fifo_state *fifo_state;
 	bool ret;
 
 	if (likely(dev_priv->last_read_sequence - sequence < VMW_FENCE_WRAP))
 		return true;
 
-	dev_priv->last_read_sequence = ioread32(fifo_mem + SVGA_FIFO_FENCE);
+	fifo_state = &dev_priv->fifo;
+	vmw_update_sequence(dev_priv, fifo_state);
 	if (likely(dev_priv->last_read_sequence - sequence < VMW_FENCE_WRAP))
 		return true;
 
-	fifo_state = &dev_priv->fifo;
 	if (!(fifo_state->capabilities & SVGA_FIFO_CAP_FENCE) &&
 	    vmw_fifo_idle(dev_priv, sequence))
 		return true;
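
The VMW_FENCE_WRAP comparisons above rely on unsigned wrap-around arithmetic: a fence counts as signaled when the 32-bit difference between the last read sequence and the fence sequence is small, and that stays correct even after the counter wraps. A standalone illustration (the window below is arbitrary; the driver uses VMW_FENCE_WRAP):

    /* Sketch: wrap-safe "has seq been reached?" for 32-bit counters. */
    static bool seq_passed(uint32_t last_read, uint32_t seq)
    {
            /* true when seq is at or behind last_read, modulo 2^32 */
            return (uint32_t)(last_read - seq) < (1u << 31);
    }
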
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
index 31f9afe..f1d6261 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
@@ -30,6 +30,8 @@
 /* Might need an hrtimer here? */
 #define VMWGFX_PRESENT_RATE ((HZ / 60 > 0) ? HZ / 60 : 1)
 
+static int vmw_surface_dmabuf_pin(struct vmw_framebuffer *vfb);
+static int vmw_surface_dmabuf_unpin(struct vmw_framebuffer *vfb);
 
 void vmw_display_unit_cleanup(struct vmw_display_unit *du)
 {
@@ -326,6 +328,7 @@ int vmw_framebuffer_create_handle(struct drm_framebuffer *fb,
 struct vmw_framebuffer_surface {
 	struct vmw_framebuffer base;
 	struct vmw_surface *surface;
+	struct vmw_dma_buffer *buffer;
 	struct delayed_work d_work;
 	struct mutex work_lock;
 	bool present_fs;
@@ -500,8 +503,8 @@ int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
 	vfbs->base.base.depth = 24;
 	vfbs->base.base.width = width;
 	vfbs->base.base.height = height;
-	vfbs->base.pin = NULL;
-	vfbs->base.unpin = NULL;
+	vfbs->base.pin = &vmw_surface_dmabuf_pin;
+	vfbs->base.unpin = &vmw_surface_dmabuf_unpin;
 	vfbs->surface = surface;
 	mutex_init(&vfbs->work_lock);
 	INIT_DELAYED_WORK(&vfbs->d_work, &vmw_framebuffer_present_fs_callback);
@@ -589,6 +592,40 @@ static struct drm_framebuffer_funcs vmw_framebuffer_dmabuf_funcs = {
 	.create_handle = vmw_framebuffer_create_handle,
 };
 
+static int vmw_surface_dmabuf_pin(struct vmw_framebuffer *vfb)
+{
+	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
+	struct vmw_framebuffer_surface *vfbs =
+		vmw_framebuffer_to_vfbs(&vfb->base);
+	unsigned long size = vfbs->base.base.pitch * vfbs->base.base.height;
+	int ret;
+
+	vfbs->buffer = kzalloc(sizeof(*vfbs->buffer), GFP_KERNEL);
+	if (unlikely(vfbs->buffer == NULL))
+		return -ENOMEM;
+
+	vmw_overlay_pause_all(dev_priv);
+	ret = vmw_dmabuf_init(dev_priv, vfbs->buffer, size,
+			       &vmw_vram_ne_placement,
+			       false, &vmw_dmabuf_bo_free);
+	vmw_overlay_resume_all(dev_priv);
+
+	return ret;
+}
+
+static int vmw_surface_dmabuf_unpin(struct vmw_framebuffer *vfb)
+{
+	struct ttm_buffer_object *bo;
+	struct vmw_framebuffer_surface *vfbs =
+		vmw_framebuffer_to_vfbs(&vfb->base);
+
+	bo = &vfbs->buffer->base;
+	ttm_bo_unref(&bo);
+	vfbs->buffer = NULL;
+
+	return 0;
+}
+
 static int vmw_framebuffer_dmabuf_pin(struct vmw_framebuffer *vfb)
 {
 	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
@@ -596,33 +633,15 @@ static int vmw_framebuffer_dmabuf_pin(struct vmw_framebuffer *vfb)
 		vmw_framebuffer_to_vfbd(&vfb->base);
 	int ret;
 
 	vmw_overlay_pause_all(dev_priv);
 
 	ret = vmw_dmabuf_to_start_of_vram(dev_priv, vfbd->buffer);
 
-	if (dev_priv->capabilities & SVGA_CAP_MULTIMON) {
-		vmw_write(dev_priv, SVGA_REG_NUM_GUEST_DISPLAYS, 1);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_ID, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_IS_PRIMARY, true);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_POSITION_X, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_POSITION_Y, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_WIDTH, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_HEIGHT, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
-
-		vmw_write(dev_priv, SVGA_REG_ENABLE, 1);
-		vmw_write(dev_priv, SVGA_REG_WIDTH, vfb->base.width);
-		vmw_write(dev_priv, SVGA_REG_HEIGHT, vfb->base.height);
-		vmw_write(dev_priv, SVGA_REG_BITS_PER_PIXEL, vfb->base.bits_per_pixel);
-		vmw_write(dev_priv, SVGA_REG_DEPTH, vfb->base.depth);
-		vmw_write(dev_priv, SVGA_REG_RED_MASK, 0x00ff0000);
-		vmw_write(dev_priv, SVGA_REG_GREEN_MASK, 0x0000ff00);
-		vmw_write(dev_priv, SVGA_REG_BLUE_MASK, 0x000000ff);
-	} else
-		WARN_ON(true);
-
 	vmw_overlay_resume_all(dev_priv);
 
+	WARN_ON(ret != 0);
+
 	return 0;
 }
 
@@ -668,7 +687,7 @@ int vmw_kms_new_framebuffer_dmabuf(struct vmw_private *dev_priv,
 
 	/* XXX get the first 3 from the surface info */
 	vfbd->base.base.bits_per_pixel = 32;
-	vfbd->base.base.pitch = width * 32 / 4;
+	vfbd->base.base.pitch = width * vfbd->base.base.bits_per_pixel / 8;
 	vfbd->base.base.depth = 24;
 	vfbd->base.base.width = width;
 	vfbd->base.base.height = height;
@@ -752,14 +771,8 @@ err_not_scanout:
 	return NULL;
 }
 
-static int vmw_kms_fb_changed(struct drm_device *dev)
-{
-	return 0;
-}
-
 static struct drm_mode_config_funcs vmw_kms_funcs = {
 	.fb_create = vmw_kms_fb_create,
-	.fb_changed = vmw_kms_fb_changed,
 };
 
 int vmw_kms_init(struct vmw_private *dev_priv)
@@ -771,8 +784,9 @@ int vmw_kms_init(struct vmw_private *dev_priv)
 	dev->mode_config.funcs = &vmw_kms_funcs;
 	dev->mode_config.min_width = 1;
 	dev->mode_config.min_height = 1;
-	dev->mode_config.max_width = dev_priv->fb_max_width;
-	dev->mode_config.max_height = dev_priv->fb_max_height;
+	/* assumed largest fb size */
+	dev->mode_config.max_width = 8192;
+	dev->mode_config.max_height = 8192;
 
 	ret = vmw_kms_init_legacy_display_system(dev_priv);
 
@@ -832,49 +846,140 @@ out:
 	return ret;
 }
 
+void vmw_kms_write_svga(struct vmw_private *vmw_priv,
+			unsigned width, unsigned height, unsigned pitch,
+			unsigned bbp, unsigned depth)
+{
+	if (vmw_priv->capabilities & SVGA_CAP_PITCHLOCK)
+		vmw_write(vmw_priv, SVGA_REG_PITCHLOCK, pitch);
+	else if (vmw_fifo_have_pitchlock(vmw_priv))
+		iowrite32(pitch, vmw_priv->mmio_virt + SVGA_FIFO_PITCHLOCK);
+	vmw_write(vmw_priv, SVGA_REG_WIDTH, width);
+	vmw_write(vmw_priv, SVGA_REG_HEIGHT, height);
+	vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, bbp);
+	vmw_write(vmw_priv, SVGA_REG_DEPTH, depth);
+	vmw_write(vmw_priv, SVGA_REG_RED_MASK, 0x00ff0000);
+	vmw_write(vmw_priv, SVGA_REG_GREEN_MASK, 0x0000ff00);
+	vmw_write(vmw_priv, SVGA_REG_BLUE_MASK, 0x000000ff);
+}
+
 int vmw_kms_save_vga(struct vmw_private *vmw_priv)
 {
-	/*
-	 * setup a single multimon monitor with the size
-	 * of 0x0, this stops the UI from resizing when we
-	 * change the framebuffer size
-	 */
-	if (vmw_priv->capabilities & SVGA_CAP_MULTIMON) {
-		vmw_write(vmw_priv, SVGA_REG_NUM_GUEST_DISPLAYS, 1);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_IS_PRIMARY, true);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_X, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, 0);
-		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
-	}
+	struct vmw_vga_topology_state *save;
+	uint32_t i;
 
 	vmw_priv->vga_width = vmw_read(vmw_priv, SVGA_REG_WIDTH);
 	vmw_priv->vga_height = vmw_read(vmw_priv, SVGA_REG_HEIGHT);
-	vmw_priv->vga_bpp = vmw_read(vmw_priv, SVGA_REG_BITS_PER_PIXEL);
 	vmw_priv->vga_depth = vmw_read(vmw_priv, SVGA_REG_DEPTH);
+	vmw_priv->vga_bpp = vmw_read(vmw_priv, SVGA_REG_BITS_PER_PIXEL);
 	vmw_priv->vga_pseudo = vmw_read(vmw_priv, SVGA_REG_PSEUDOCOLOR);
 	vmw_priv->vga_red_mask = vmw_read(vmw_priv, SVGA_REG_RED_MASK);
-	vmw_priv->vga_green_mask = vmw_read(vmw_priv, SVGA_REG_GREEN_MASK);
 	vmw_priv->vga_blue_mask = vmw_read(vmw_priv, SVGA_REG_BLUE_MASK);
+	vmw_priv->vga_green_mask = vmw_read(vmw_priv, SVGA_REG_GREEN_MASK);
+	if (vmw_priv->capabilities & SVGA_CAP_PITCHLOCK)
+		vmw_priv->vga_pitchlock =
+		  vmw_read(vmw_priv, SVGA_REG_PITCHLOCK);
+	else if (vmw_fifo_have_pitchlock(vmw_priv))
+		vmw_priv->vga_pitchlock = ioread32(vmw_priv->mmio_virt +
+						       SVGA_FIFO_PITCHLOCK);
+
+	if (!(vmw_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY))
+		return 0;
 
+	vmw_priv->num_displays = vmw_read(vmw_priv,
+					  SVGA_REG_NUM_GUEST_DISPLAYS);
+
+	for (i = 0; i < vmw_priv->num_displays; ++i) {
+		save = &vmw_priv->vga_save[i];
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, i);
+		save->primary = vmw_read(vmw_priv, SVGA_REG_DISPLAY_IS_PRIMARY);
+		save->pos_x = vmw_read(vmw_priv, SVGA_REG_DISPLAY_POSITION_X);
+		save->pos_y = vmw_read(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y);
+		save->width = vmw_read(vmw_priv, SVGA_REG_DISPLAY_WIDTH);
+		save->height = vmw_read(vmw_priv, SVGA_REG_DISPLAY_HEIGHT);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
+	}
 	return 0;
 }
 
 int vmw_kms_restore_vga(struct vmw_private *vmw_priv)
 {
+	struct vmw_vga_topology_state *save;
+	uint32_t i;
+
 	vmw_write(vmw_priv, SVGA_REG_WIDTH, vmw_priv->vga_width);
 	vmw_write(vmw_priv, SVGA_REG_HEIGHT, vmw_priv->vga_height);
-	vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, vmw_priv->vga_bpp);
 	vmw_write(vmw_priv, SVGA_REG_DEPTH, vmw_priv->vga_depth);
+	vmw_write(vmw_priv, SVGA_REG_BITS_PER_PIXEL, vmw_priv->vga_bpp);
 	vmw_write(vmw_priv, SVGA_REG_PSEUDOCOLOR, vmw_priv->vga_pseudo);
 	vmw_write(vmw_priv, SVGA_REG_RED_MASK, vmw_priv->vga_red_mask);
 	vmw_write(vmw_priv, SVGA_REG_GREEN_MASK, vmw_priv->vga_green_mask);
 	vmw_write(vmw_priv, SVGA_REG_BLUE_MASK, vmw_priv->vga_blue_mask);
+	if (vmw_priv->capabilities & SVGA_CAP_PITCHLOCK)
+		vmw_write(vmw_priv, SVGA_REG_PITCHLOCK,
+			  vmw_priv->vga_pitchlock);
+	else if (vmw_fifo_have_pitchlock(vmw_priv))
+		iowrite32(vmw_priv->vga_pitchlock,
+			  vmw_priv->mmio_virt + SVGA_FIFO_PITCHLOCK);
+
+	if (!(vmw_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY))
+		return 0;
 
-	/* TODO check for multimon */
-	vmw_write(vmw_priv, SVGA_REG_NUM_GUEST_DISPLAYS, 0);
+	for (i = 0; i < vmw_priv->num_displays; ++i) {
+		save = &vmw_priv->vga_save[i];
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, i);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_IS_PRIMARY, save->primary);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_X, save->pos_x);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_POSITION_Y, save->pos_y);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_WIDTH, save->width);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_HEIGHT, save->height);
+		vmw_write(vmw_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
+	}
 
 	return 0;
 }
+
+int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *file_priv)
+{
+	struct vmw_private *dev_priv = vmw_priv(dev);
+	struct drm_vmw_update_layout_arg *arg =
+		(struct drm_vmw_update_layout_arg *)data;
+	struct vmw_master *vmaster = vmw_master(file_priv->master);
+	void __user *user_rects;
+	struct drm_vmw_rect *rects;
+	unsigned rects_size;
+	int ret;
+
+	ret = ttm_read_lock(&vmaster->lock, true);
+	if (unlikely(ret != 0))
+		return ret;
+
+	if (!arg->num_outputs) {
+		struct drm_vmw_rect def_rect = {0, 0, 800, 600};
+		vmw_kms_ldu_update_layout(dev_priv, 1, &def_rect);
+		goto out_unlock;
+	}
+
+	rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect);
+	rects = kzalloc(rects_size, GFP_KERNEL);
+	if (unlikely(!rects)) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	user_rects = (void __user *)(unsigned long)arg->rects;
+	ret = copy_from_user(rects, user_rects, rects_size);
+	if (unlikely(ret != 0)) {
+		DRM_ERROR("Failed to get rects.\n");
+		ret = -EFAULT;
+		goto out_free;
+	}
+
+	vmw_kms_ldu_update_layout(dev_priv, arg->num_outputs, rects);
+
+out_free:
+	kfree(rects);
+out_unlock:
+	ttm_read_unlock(&vmaster->lock);
+	return ret;
+}
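
For context, this ioctl lets the guest tools push the host's monitor layout into the driver. A hypothetical userspace sketch, assuming the DRM_VMW_UPDATE_LAYOUT command number and argument struct introduced elsewhere in this series (illustrative only):

    /* Hypothetical sketch: advertise one 1024x768 output at (0,0). */
    struct drm_vmw_rect rect = { .x = 0, .y = 0, .w = 1024, .h = 768 };
    struct drm_vmw_update_layout_arg arg = {
            .num_outputs = 1,
            .rects = (unsigned long)&rect,  /* user pointer passed as u64 */
    };
    drmCommandWrite(fd, DRM_VMW_UPDATE_LAYOUT, &arg, sizeof(arg));
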
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
index 8b95249..8a398a0 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
@@ -94,9 +94,11 @@ int vmw_du_crtc_cursor_set(struct drm_crtc *crtc, struct drm_file *file_priv,
 int vmw_du_crtc_cursor_move(struct drm_crtc *crtc, int x, int y);
 
 /*
- * Legacy display unit functions - vmwgfx_ldu.h
+ * Legacy display unit functions - vmwgfx_ldu.c
  */
 int vmw_kms_init_legacy_display_system(struct vmw_private *dev_priv);
 int vmw_kms_close_legacy_display_system(struct vmw_private *dev_priv);
+int vmw_kms_ldu_update_layout(struct vmw_private *dev_priv, unsigned num,
+			      struct drm_vmw_rect *rects);
 
 #endif
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
index 9089159..cfaf690 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
@@ -38,6 +38,7 @@ struct vmw_legacy_display {
 	struct list_head active;
 
 	unsigned num_active;
+	unsigned last_num_active;
 
 	struct vmw_framebuffer *fb;
 };
@@ -48,9 +49,12 @@ struct vmw_legacy_display {
 struct vmw_legacy_display_unit {
 	struct vmw_display_unit base;
 
-	struct list_head active;
+	unsigned pref_width;
+	unsigned pref_height;
+	bool pref_active;
+	struct drm_display_mode *pref_mode;
 
-	unsigned unit;
+	struct list_head active;
 };
 
 static void vmw_ldu_destroy(struct vmw_legacy_display_unit *ldu)
@@ -88,23 +92,44 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
 {
 	struct vmw_legacy_display *lds = dev_priv->ldu_priv;
 	struct vmw_legacy_display_unit *entry;
-	struct drm_crtc *crtc;
+	struct drm_framebuffer *fb = NULL;
+	struct drm_crtc *crtc = NULL;
 	int i = 0;
 
-	/* to stop the screen from changing size on resize */
-	vmw_write(dev_priv, SVGA_REG_NUM_GUEST_DISPLAYS, 0);
-	for (i = 0; i < lds->num_active; i++) {
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_ID, i);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_IS_PRIMARY, !i);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_POSITION_X, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_POSITION_Y, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_WIDTH, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_HEIGHT, 0);
-		vmw_write(dev_priv, SVGA_REG_DISPLAY_ID, SVGA_ID_INVALID);
+	/* If there is no display topology the host just assumes
+	 * that the guest will set the same layout as the host.
+	 */
+	if (!(dev_priv->capabilities & SVGA_CAP_DISPLAY_TOPOLOGY)) {
+		int w = 0, h = 0;
+		list_for_each_entry(entry, &lds->active, active) {
+			crtc = &entry->base.crtc;
+			w = max(w, crtc->x + crtc->mode.hdisplay);
+			h = max(h, crtc->y + crtc->mode.vdisplay);
+			i++;
+		}
+
+		if (crtc == NULL)
+			return 0;
+		fb = entry->base.crtc.fb;
+
+		vmw_kms_write_svga(dev_priv, w, h, fb->pitch,
+				   fb->bits_per_pixel, fb->depth);
+
+		return 0;
 	}
 
-	/* Now set the mode */
-	vmw_write(dev_priv, SVGA_REG_NUM_GUEST_DISPLAYS, lds->num_active);
+	if (!list_empty(&lds->active)) {
+		entry = list_entry(lds->active.next, typeof(*entry), active);
+		fb = entry->base.crtc.fb;
+
+		vmw_kms_write_svga(dev_priv, fb->width, fb->height, fb->pitch,
+				   fb->bits_per_pixel, fb->depth);
+	}
+
+	/* Make sure we always show something. */
+	vmw_write(dev_priv, SVGA_REG_NUM_GUEST_DISPLAYS,
+		  lds->num_active ? lds->num_active : 1);
+
 	i = 0;
 	list_for_each_entry(entry, &lds->active, active) {
 		crtc = &entry->base.crtc;
@@ -120,6 +145,10 @@ static int vmw_ldu_commit_list(struct vmw_private *dev_priv)
 		i++;
 	}
 
+	BUG_ON(i != lds->num_active);
+
+	lds->last_num_active = lds->num_active;
+
 	return 0;
 }
 
@@ -130,6 +159,7 @@ static int vmw_ldu_del_active(struct vmw_private *vmw_priv,
 	if (list_empty(&ldu->active))
 		return 0;
 
+	/* Must use list_del_init() so list_empty(&ldu->active) keeps working. */
 	list_del_init(&ldu->active);
 	if (--(ld->num_active) == 0) {
 		BUG_ON(!ld->fb);
@@ -149,24 +179,29 @@ static int vmw_ldu_add_active(struct vmw_private *vmw_priv,
 	struct vmw_legacy_display_unit *entry;
 	struct list_head *at;
 
+	BUG_ON(!ld->num_active && ld->fb);
+	if (vfb != ld->fb) {
+		if (ld->fb && ld->fb->unpin)
+			ld->fb->unpin(ld->fb);
+		if (vfb->pin)
+			vfb->pin(vfb);
+		ld->fb = vfb;
+	}
+
 	if (!list_empty(&ldu->active))
 		return 0;
 
 	at = &ld->active;
 	list_for_each_entry(entry, &ld->active, active) {
-		if (entry->unit > ldu->unit)
+		if (entry->base.unit > ldu->base.unit)
 			break;
 
 		at = &entry->active;
 	}
 
 	list_add(&ldu->active, at);
-	if (ld->num_active++ == 0) {
-		BUG_ON(ld->fb);
-		if (vfb->pin)
-			vfb->pin(vfb);
-		ld->fb = vfb;
-	}
+
+	ld->num_active++;
 
 	return 0;
 }
@@ -208,6 +243,8 @@ static int vmw_ldu_crtc_set_config(struct drm_mode_set *set)
 
 	/* ldu only supports one fb active at a time */
 	if (dev_priv->ldu_priv->fb && vfb &&
+	    !(dev_priv->ldu_priv->num_active == 1 &&
+	      !list_empty(&ldu->active)) &&
 	    dev_priv->ldu_priv->fb != vfb) {
 		DRM_ERROR("Multiple framebuffers not supported\n");
 		return -EINVAL;
@@ -300,8 +337,7 @@ static void vmw_ldu_connector_restore(struct drm_connector *connector)
 static enum drm_connector_status
 	vmw_ldu_connector_detect(struct drm_connector *connector)
 {
-	/* XXX vmwctrl should control connection status */
-	if (vmw_connector_to_ldu(connector)->base.unit == 0)
+	if (vmw_connector_to_ldu(connector)->pref_active)
 		return connector_status_connected;
 	return connector_status_disconnected;
 }
@@ -312,10 +348,9 @@ static struct drm_display_mode vmw_ldu_connector_builtin[] = {
 		   752, 800, 0, 480, 489, 492, 525, 0,
 		   DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC) },
 	/* 800x600@60Hz */
-	{ DRM_MODE("800x600",
-		   DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
-		   40000, 800, 840, 968, 1056, 0, 600, 601, 605, 628,
-		   0, DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
+	{ DRM_MODE("800x600", DRM_MODE_TYPE_DRIVER, 40000, 800, 840,
+		   968, 1056, 0, 600, 601, 605, 628, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC) },
 	/* 1024x768@60Hz */
 	{ DRM_MODE("1024x768", DRM_MODE_TYPE_DRIVER, 65000, 1024, 1048,
 		   1184, 1344, 0, 768, 771, 777, 806, 0,
@@ -387,10 +422,34 @@ static struct drm_display_mode vmw_ldu_connector_builtin[] = {
 static int vmw_ldu_connector_fill_modes(struct drm_connector *connector,
 					uint32_t max_width, uint32_t max_height)
 {
+	struct vmw_legacy_display_unit *ldu = vmw_connector_to_ldu(connector);
 	struct drm_device *dev = connector->dev;
 	struct drm_display_mode *mode = NULL;
+	struct drm_display_mode prefmode = { DRM_MODE("preferred",
+		DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
+		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+		DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
+	};
 	int i;
 
+	/* Add preferred mode */
+	{
+		mode = drm_mode_duplicate(dev, &prefmode);
+		if (!mode)
+			return 0;
+		mode->hdisplay = ldu->pref_width;
+		mode->vdisplay = ldu->pref_height;
+		mode->vrefresh = drm_mode_vrefresh(mode);
+		drm_mode_probed_add(connector, mode);
+
+		if (ldu->pref_mode) {
+			list_del_init(&ldu->pref_mode->head);
+			drm_mode_destroy(dev, ldu->pref_mode);
+		}
+
+		ldu->pref_mode = mode;
+	}
+
 	for (i = 0; vmw_ldu_connector_builtin[i].type != 0; i++) {
 		if (vmw_ldu_connector_builtin[i].hdisplay > max_width ||
 		    vmw_ldu_connector_builtin[i].vdisplay > max_height)
@@ -443,18 +502,21 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit)
 	if (!ldu)
 		return -ENOMEM;
 
-	ldu->unit = unit;
+	ldu->base.unit = unit;
 	crtc = &ldu->base.crtc;
 	encoder = &ldu->base.encoder;
 	connector = &ldu->base.connector;
 
+	INIT_LIST_HEAD(&ldu->active);
+
+	ldu->pref_active = (unit == 0);
+	ldu->pref_width = 800;
+	ldu->pref_height = 600;
+	ldu->pref_mode = NULL;
+
 	drm_connector_init(dev, connector, &vmw_legacy_connector_funcs,
 			   DRM_MODE_CONNECTOR_LVDS);
-	/* Initial status */
-	if (unit == 0)
-		connector->status = connector_status_connected;
-	else
-		connector->status = connector_status_disconnected;
+	connector->status = vmw_ldu_connector_detect(connector);
 
 	drm_encoder_init(dev, encoder, &vmw_legacy_encoder_funcs,
 			 DRM_MODE_ENCODER_LVDS);
@@ -462,8 +524,6 @@ static int vmw_ldu_init(struct vmw_private *dev_priv, unsigned unit)
 	encoder->possible_crtcs = (1 << unit);
 	encoder->possible_clones = 0;
 
-	INIT_LIST_HEAD(&ldu->active);
-
 	drm_crtc_init(dev, crtc, &vmw_legacy_crtc_funcs);
 
 	drm_connector_attach_property(connector,
@@ -487,18 +547,22 @@ int vmw_kms_init_legacy_display_system(struct vmw_private *dev_priv)
 
 	INIT_LIST_HEAD(&dev_priv->ldu_priv->active);
 	dev_priv->ldu_priv->num_active = 0;
+	dev_priv->ldu_priv->last_num_active = 0;
 	dev_priv->ldu_priv->fb = NULL;
 
 	drm_mode_create_dirty_info_property(dev_priv->dev);
 
 	vmw_ldu_init(dev_priv, 0);
-	vmw_ldu_init(dev_priv, 1);
-	vmw_ldu_init(dev_priv, 2);
-	vmw_ldu_init(dev_priv, 3);
-	vmw_ldu_init(dev_priv, 4);
-	vmw_ldu_init(dev_priv, 5);
-	vmw_ldu_init(dev_priv, 6);
-	vmw_ldu_init(dev_priv, 7);
+	/* For old hardware without multimon, enable only one display. */
+	if (dev_priv->capabilities & SVGA_CAP_MULTIMON) {
+		vmw_ldu_init(dev_priv, 1);
+		vmw_ldu_init(dev_priv, 2);
+		vmw_ldu_init(dev_priv, 3);
+		vmw_ldu_init(dev_priv, 4);
+		vmw_ldu_init(dev_priv, 5);
+		vmw_ldu_init(dev_priv, 6);
+		vmw_ldu_init(dev_priv, 7);
+	}
 
 	return 0;
 }
@@ -514,3 +578,42 @@ int vmw_kms_close_legacy_display_system(struct vmw_private *dev_priv)
 
 	return 0;
 }
+
+int vmw_kms_ldu_update_layout(struct vmw_private *dev_priv, unsigned num,
+			      struct drm_vmw_rect *rects)
+{
+	struct drm_device *dev = dev_priv->dev;
+	struct vmw_legacy_display_unit *ldu;
+	struct drm_connector *con;
+	int i;
+
+	mutex_lock(&dev->mode_config.mutex);
+
+#if 0
+	DRM_INFO("%s: new layout ", __func__);
+	for (i = 0; i < (int)num; i++)
+		DRM_INFO("(%i, %i %ux%u) ", rects[i].x, rects[i].y,
+			 rects[i].w, rects[i].h);
+	DRM_INFO("\n");
+#else
+	(void)i;
+#endif
+
+	list_for_each_entry(con, &dev->mode_config.connector_list, head) {
+		ldu = vmw_connector_to_ldu(con);
+		if (num > ldu->base.unit) {
+			ldu->pref_width = rects[ldu->base.unit].w;
+			ldu->pref_height = rects[ldu->base.unit].h;
+			ldu->pref_active = true;
+		} else {
+			ldu->pref_width = 800;
+			ldu->pref_height = 600;
+			ldu->pref_active = false;
+		}
+		con->status = vmw_ldu_connector_detect(con);
+	}
+
+	mutex_unlock(&dev->mode_config.mutex);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
index 5b6eabe..df2036e 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
@@ -118,7 +118,7 @@ static int vmw_dmabuf_pin_in_vram(struct vmw_private *dev_priv,
 	if (pin)
 		overlay_placement = &vmw_vram_ne_placement;
 
-	ret = ttm_bo_validate(bo, overlay_placement, interruptible, false);
+	ret = ttm_bo_validate(bo, overlay_placement, interruptible, false, false);
 
 	ttm_bo_unreserve(bo);
 
@@ -358,6 +358,8 @@ static int vmw_overlay_update_stream(struct vmw_private *dev_priv,
 	if (stream->buf != buf)
 		stream->buf = vmw_dmabuf_reference(buf);
 	stream->saved = *arg;
+	/* stream is no longer stopped/paused */
+	stream->paused = false;
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
index f8fbbc6..8612378 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
@@ -597,8 +597,10 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
 
 	ret = copy_from_user(srf->sizes, user_sizes,
 			     srf->num_sizes * sizeof(*srf->sizes));
-	if (unlikely(ret != 0))
+	if (unlikely(ret != 0)) {
+		ret = -EFAULT;
 		goto out_err1;
+	}
 
 	if (srf->scanout &&
 	    srf->num_sizes == 1 &&
@@ -697,9 +699,11 @@ int vmw_surface_reference_ioctl(struct drm_device *dev, void *data,
 	if (user_sizes)
 		ret = copy_to_user(user_sizes, srf->sizes,
 				   srf->num_sizes * sizeof(*srf->sizes));
-	if (unlikely(ret != 0))
+	if (unlikely(ret != 0)) {
 		DRM_ERROR("copy_to_user failed %p %u\n",
 			  user_sizes, srf->num_sizes);
+		ret = -EFAULT;
+	}
 out_bad_resource:
 out_no_reference:
 	ttm_base_object_unref(&base);
diff --git a/drivers/gpu/vga/Kconfig b/drivers/gpu/vga/Kconfig
index 61ab4da..8d0e31a 100644
--- a/drivers/gpu/vga/Kconfig
+++ b/drivers/gpu/vga/Kconfig
@@ -18,12 +18,12 @@ config VGA_ARB_MAX_GPUS
 	  multiple GPUS.  The overhead for each GPU is very small.
 
 config VGA_SWITCHEROO
-	bool "Laptop Hybrid Grapics - GPU switching support"
+	bool "Laptop Hybrid Graphics - GPU switching support"
 	depends on X86
 	depends on ACPI
 	help
-	  Many laptops released in 2008/9/10 have two gpus with a multiplxer
+	  Many laptops released in 2008/9/10 have two GPUs with a multiplexer
 	  to switch between them. This adds support for dynamic switching when
           X isn't running and delayed switching until the next logoff. This
-	  features is called hybrid graphics, ATI PowerXpress, and Nvidia
+	  feature is called hybrid graphics, ATI PowerXpress, and Nvidia
 	  HybridPower.
diff --git a/drivers/gpu/vga/vgaarb.c b/drivers/gpu/vga/vgaarb.c
index 441e38c..b87569e 100644
--- a/drivers/gpu/vga/vgaarb.c
+++ b/drivers/gpu/vga/vgaarb.c
@@ -1,12 +1,32 @@
 /*
- * vgaarb.c
+ * vgaarb.c: Implements the VGA arbitration. For details refer to
+ * Documentation/vgaarbiter.txt
+ *
  *
  * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
  * (C) Copyright 2007 Paulo R. Zanoni <przanoni@gmail.com>
  * (C) Copyright 2007, 2009 Tiago Vignatti <vignatti@freedesktop.org>
  *
- * Implements the VGA arbitration. For details refer to
- * Documentation/vgaarbiter.txt
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
  */
 
 #include <linux/module.h>
@@ -155,8 +175,8 @@ static struct vga_device *__vga_tryget(struct vga_device *vgadev,
 	    (vgadev->decodes & VGA_RSRC_LEGACY_MEM))
 		rsrc |= VGA_RSRC_LEGACY_MEM;
 
-	pr_devel("%s: %d\n", __func__, rsrc);
-	pr_devel("%s: owns: %d\n", __func__, vgadev->owns);
+	pr_debug("%s: %d\n", __func__, rsrc);
+	pr_debug("%s: owns: %d\n", __func__, vgadev->owns);
 
 	/* Check what resources we need to acquire */
 	wants = rsrc & ~vgadev->owns;
@@ -268,7 +288,7 @@ static void __vga_put(struct vga_device *vgadev, unsigned int rsrc)
 {
 	unsigned int old_locks = vgadev->locks;
 
-	pr_devel("%s\n", __func__);
+	pr_debug("%s\n", __func__);
 
 	/* Update our counters, and account for equivalent legacy resources
 	 * if we decode them
@@ -575,6 +595,7 @@ static inline void vga_update_device_decodes(struct vga_device *vgadev,
 		else
 			vga_decode_count--;
 	}
+	pr_debug("vgaarb: decoding count now is: %d\n", vga_decode_count);
 }
 
 void __vga_set_legacy_decoding(struct pci_dev *pdev, unsigned int decodes, bool userspace)
@@ -831,7 +852,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 		curr_pos += 5;
 		remaining -= 5;
 
-		pr_devel("client 0x%p called 'lock'\n", priv);
+		pr_debug("client 0x%p called 'lock'\n", priv);
 
 		if (!vga_str_to_iostate(curr_pos, remaining, &io_state)) {
 			ret_val = -EPROTO;
@@ -867,7 +888,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 		curr_pos += 7;
 		remaining -= 7;
 
-		pr_devel("client 0x%p called 'unlock'\n", priv);
+		pr_debug("client 0x%p called 'unlock'\n", priv);
 
 		if (strncmp(curr_pos, "all", 3) == 0)
 			io_state = VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM;
@@ -917,7 +938,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 		curr_pos += 8;
 		remaining -= 8;
 
-		pr_devel("client 0x%p called 'trylock'\n", priv);
+		pr_debug("client 0x%p called 'trylock'\n", priv);
 
 		if (!vga_str_to_iostate(curr_pos, remaining, &io_state)) {
 			ret_val = -EPROTO;
@@ -961,7 +982,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 
 		curr_pos += 7;
 		remaining -= 7;
-		pr_devel("client 0x%p called 'target'\n", priv);
+		pr_debug("client 0x%p called 'target'\n", priv);
 		/* if target is default */
 		if (!strncmp(curr_pos, "default", 7))
 			pdev = pci_dev_get(vga_default_device());
@@ -971,11 +992,11 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 				ret_val = -EPROTO;
 				goto done;
 			}
-			pr_devel("vgaarb: %s ==> %x:%x:%x.%x\n", curr_pos,
+			pr_debug("vgaarb: %s ==> %x:%x:%x.%x\n", curr_pos,
 				domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
 
 			pbus = pci_find_bus(domain, bus);
-			pr_devel("vgaarb: pbus %p\n", pbus);
+			pr_debug("vgaarb: pbus %p\n", pbus);
 			if (pbus == NULL) {
 				pr_err("vgaarb: invalid PCI domain and/or bus address %x:%x\n",
 					domain, bus);
@@ -983,7 +1004,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 				goto done;
 			}
 			pdev = pci_get_slot(pbus, devfn);
-			pr_devel("vgaarb: pdev %p\n", pdev);
+			pr_debug("vgaarb: pdev %p\n", pdev);
 			if (!pdev) {
 				pr_err("vgaarb: invalid PCI address %x:%x\n",
 					bus, devfn);
@@ -993,7 +1014,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 		}
 
 		vgadev = vgadev_find(pdev);
-		pr_devel("vgaarb: vgadev %p\n", vgadev);
+		pr_debug("vgaarb: vgadev %p\n", vgadev);
 		if (vgadev == NULL) {
 			pr_err("vgaarb: this pci device is not a vga device\n");
 			pci_dev_put(pdev);
@@ -1029,7 +1050,7 @@ static ssize_t vga_arb_write(struct file *file, const char __user * buf,
 	} else if (strncmp(curr_pos, "decodes ", 8) == 0) {
 		curr_pos += 8;
 		remaining -= 8;
-		pr_devel("vgaarb: client 0x%p called 'decodes'\n", priv);
+		pr_debug("vgaarb: client 0x%p called 'decodes'\n", priv);
 
 		if (!vga_str_to_iostate(curr_pos, remaining, &io_state)) {
 			ret_val = -EPROTO;
@@ -1058,7 +1079,7 @@ static unsigned int vga_arb_fpoll(struct file *file, poll_table * wait)
 {
 	struct vga_arb_private *priv = file->private_data;
 
-	pr_devel("%s\n", __func__);
+	pr_debug("%s\n", __func__);
 
 	if (priv == NULL)
 		return -ENODEV;
@@ -1071,7 +1092,7 @@ static int vga_arb_open(struct inode *inode, struct file *file)
 	struct vga_arb_private *priv;
 	unsigned long flags;
 
-	pr_devel("%s\n", __func__);
+	pr_debug("%s\n", __func__);
 
 	priv = kmalloc(sizeof(struct vga_arb_private), GFP_KERNEL);
 	if (priv == NULL)
@@ -1101,7 +1122,7 @@ static int vga_arb_release(struct inode *inode, struct file *file)
 	unsigned long flags;
 	int i;
 
-	pr_devel("%s\n", __func__);
+	pr_debug("%s\n", __func__);
 
 	if (priv == NULL)
 		return -ENODEV;
@@ -1112,7 +1133,7 @@ static int vga_arb_release(struct inode *inode, struct file *file)
 		uc = &priv->cards[i];
 		if (uc->pdev == NULL)
 			continue;
-		pr_devel("uc->io_cnt == %d, uc->mem_cnt == %d\n",
+		pr_debug("uc->io_cnt == %d, uc->mem_cnt == %d\n",
 			 uc->io_cnt, uc->mem_cnt);
 		while (uc->io_cnt--)
 			vga_put(uc->pdev, VGA_RSRC_LEGACY_IO);
@@ -1165,7 +1186,7 @@ static int pci_notify(struct notifier_block *nb, unsigned long action,
 	struct pci_dev *pdev = to_pci_dev(dev);
 	bool notify = false;
 
-	pr_devel("%s\n", __func__);
+	pr_debug("%s\n", __func__);
 
 	/* For now we're only interested in devices added and removed. I didn't
 	 * test this thing here, so someone needs to double check for the
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 7696a66..82cb8ff 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -91,8 +91,6 @@ source "drivers/staging/line6/Kconfig"
 
 source "drivers/gpu/drm/vmwgfx/Kconfig"
 
-source "drivers/gpu/drm/nouveau/Kconfig"
-
 source "drivers/staging/octeon/Kconfig"
 
 source "drivers/staging/serqt_usb2/Kconfig"
diff --git a/drivers/video/efifb.c b/drivers/video/efifb.c
index ecf4055..4a56f46 100644
--- a/drivers/video/efifb.c
+++ b/drivers/video/efifb.c
@@ -168,7 +168,7 @@ static void efifb_destroy(struct fb_info *info)
 {
 	if (info->screen_base)
 		iounmap(info->screen_base);
-	release_mem_region(info->aperture_base, info->aperture_size);
+	release_mem_region(info->apertures->ranges[0].base, info->apertures->ranges[0].size);
 	framebuffer_release(info);
 }
 
@@ -292,8 +292,13 @@ static int __devinit efifb_probe(struct platform_device *dev)
 	info->pseudo_palette = info->par;
 	info->par = NULL;
 
-	info->aperture_base = efifb_fix.smem_start;
-	info->aperture_size = size_remap;
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures) {
+		err = -ENOMEM;
+		goto err_release_fb;
+	}
+	info->apertures->ranges[0].base = efifb_fix.smem_start;
+	info->apertures->ranges[0].size = size_remap;
 
 	info->screen_base = ioremap(efifb_fix.smem_start, efifb_fix.smem_len);
 	if (!info->screen_base) {
diff --git a/drivers/video/fbmem.c b/drivers/video/fbmem.c
index a15b44e..e08b7b5 100644
--- a/drivers/video/fbmem.c
+++ b/drivers/video/fbmem.c
@@ -1468,16 +1468,67 @@ static int fb_check_foreignness(struct fb_info *fi)
 	return 0;
 }
 
-static bool fb_do_apertures_overlap(struct fb_info *gen, struct fb_info *hw)
+static bool apertures_overlap(struct aperture *gen, struct aperture *hw)
 {
 	/* is the generic aperture base the same as the HW one */
-	if (gen->aperture_base == hw->aperture_base)
+	if (gen->base == hw->base)
 		return true;
 	/* is the generic aperture base inside the hw range (base, base+size] */
-	if (gen->aperture_base > hw->aperture_base && gen->aperture_base <= hw->aperture_base + hw->aperture_size)
+	if (gen->base > hw->base && gen->base <= hw->base + hw->size)
 		return true;
 	return false;
 }
+
+static bool fb_do_apertures_overlap(struct apertures_struct *gena,
+				    struct apertures_struct *hwa)
+{
+	int i, j;
+	if (!hwa || !gena)
+		return false;
+
+	for (i = 0; i < hwa->count; ++i) {
+		struct aperture *h = &hwa->ranges[i];
+		for (j = 0; j < gena->count; ++j) {
+			struct aperture *g = &gena->ranges[j];
+			printk(KERN_DEBUG "checking generic (%llx %llx) vs hw (%llx %llx)\n",
+				g->base, g->size, h->base, h->size);
+			if (apertures_overlap(g, h))
+				return true;
+		}
+	}
+
+	return false;
+}
+
+#define VGA_FB_PHYS 0xA0000
+void remove_conflicting_framebuffers(struct apertures_struct *a,
+				     const char *name, bool primary)
+{
+	int i;
+
+	/* check all firmware fbs and kick them out if the base addr overlaps */
+	for (i = 0 ; i < FB_MAX; i++) {
+		struct apertures_struct *gen_aper;
+		if (!registered_fb[i])
+			continue;
+
+		if (!(registered_fb[i]->flags & FBINFO_MISC_FIRMWARE))
+			continue;
+
+		gen_aper = registered_fb[i]->apertures;
+		if (fb_do_apertures_overlap(gen_aper, a) ||
+			(primary && gen_aper && gen_aper->count &&
+			 gen_aper->ranges[0].base == VGA_FB_PHYS)) {
+
+			printk(KERN_ERR "fb: conflicting fb hw usage "
+			       "%s vs %s - removing generic driver\n",
+			       name, registered_fb[i]->fix.id);
+			unregister_framebuffer(registered_fb[i]);
+		}
+	}
+}
+EXPORT_SYMBOL(remove_conflicting_framebuffers);
+
 /**
  *	register_framebuffer - registers a frame buffer device
  *	@fb_info: frame buffer info structure
@@ -1501,21 +1552,8 @@ register_framebuffer(struct fb_info *fb_info)
 	if (fb_check_foreignness(fb_info))
 		return -ENOSYS;
 
-	/* check all firmware fbs and kick off if the base addr overlaps */
-	for (i = 0 ; i < FB_MAX; i++) {
-		if (!registered_fb[i])
-			continue;
-
-		if (registered_fb[i]->flags & FBINFO_MISC_FIRMWARE) {
-			if (fb_do_apertures_overlap(registered_fb[i], fb_info)) {
-				printk(KERN_ERR "fb: conflicting fb hw usage "
-				       "%s vs %s - removing generic driver\n",
-				       fb_info->fix.id,
-				       registered_fb[i]->fix.id);
-				unregister_framebuffer(registered_fb[i]);
-			}
-		}
-	}
+	remove_conflicting_framebuffers(fb_info->apertures, fb_info->fix.id,
+					 fb_is_primary_device(fb_info));
 
 	num_registered_fb++;
 	for (i = 0 ; i < FB_MAX; i++)
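
The new helper gives native drivers a way to evict firmware framebuffers (efifb, vesafb, offb) whose apertures overlap theirs before taking over the hardware. A hedged sketch of a driver-side caller (hypothetical driver name; the pattern mirrors how register_framebuffer() now calls the helper internally):

    /* Sketch: a KMS driver kicking out a conflicting firmware fb. */
    struct apertures_struct *ap = alloc_apertures(1);
    if (!ap)
            return -ENOMEM;
    ap->ranges[0].base = pci_resource_start(pdev, 0); /* VRAM BAR */
    ap->ranges[0].size = pci_resource_len(pdev, 0);
    remove_conflicting_framebuffers(ap, "mydrmfb", false);
    kfree(ap);
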
diff --git a/drivers/video/fbsysfs.c b/drivers/video/fbsysfs.c
index 81aa312..0a08f13 100644
--- a/drivers/video/fbsysfs.c
+++ b/drivers/video/fbsysfs.c
@@ -80,6 +80,7 @@ EXPORT_SYMBOL(framebuffer_alloc);
  */
 void framebuffer_release(struct fb_info *info)
 {
+	kfree(info->apertures);
 	kfree(info);
 }
 EXPORT_SYMBOL(framebuffer_release);
diff --git a/drivers/video/offb.c b/drivers/video/offb.c
index 61f8b8f..46dda7d 100644
--- a/drivers/video/offb.c
+++ b/drivers/video/offb.c
@@ -285,7 +285,7 @@ static void offb_destroy(struct fb_info *info)
 {
 	if (info->screen_base)
 		iounmap(info->screen_base);
-	release_mem_region(info->aperture_base, info->aperture_size);
+	release_mem_region(info->apertures->ranges[0].base, info->apertures->ranges[0].size);
 	framebuffer_release(info);
 }
 
@@ -491,8 +491,11 @@ static void __init offb_init_fb(const char *name, const char *full_name,
 	var->vmode = FB_VMODE_NONINTERLACED;
 
 	/* set offb aperture size for generic probing */
-	info->aperture_base = address;
-	info->aperture_size = fix->smem_len;
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures)
+		goto out_aper;
+	info->apertures->ranges[0].base = address;
+	info->apertures->ranges[0].size = fix->smem_len;
 
 	info->fbops = &offb_ops;
 	info->screen_base = ioremap(address, fix->smem_len);
@@ -501,17 +504,20 @@ static void __init offb_init_fb(const char *name, const char *full_name,
 
 	fb_alloc_cmap(&info->cmap, 256, 0);
 
-	if (register_framebuffer(info) < 0) {
-		iounmap(par->cmap_adr);
-		par->cmap_adr = NULL;
-		iounmap(info->screen_base);
-		framebuffer_release(info);
-		release_mem_region(res_start, res_size);
-		return;
-	}
+	if (register_framebuffer(info) < 0)
+		goto out_err;
 
 	printk(KERN_INFO "fb%d: Open Firmware frame buffer device on %s\n",
 	       info->node, full_name);
+	return;
+
+out_err:
+	iounmap(info->screen_base);
+out_aper:
+	iounmap(par->cmap_adr);
+	par->cmap_adr = NULL;
+	framebuffer_release(info);
+	release_mem_region(res_start, res_size);
 }
 
 
diff --git a/drivers/video/vesafb.c b/drivers/video/vesafb.c
index 0cadf7a..090aa1a 100644
--- a/drivers/video/vesafb.c
+++ b/drivers/video/vesafb.c
@@ -177,7 +177,7 @@ static void vesafb_destroy(struct fb_info *info)
 {
 	if (info->screen_base)
 		iounmap(info->screen_base);
-	release_mem_region(info->aperture_base, info->aperture_size);
+	release_mem_region(info->apertures->ranges[0].base, info->apertures->ranges[0].size);
 	framebuffer_release(info);
 }
 
@@ -295,8 +295,13 @@ static int __init vesafb_probe(struct platform_device *dev)
 	info->par = NULL;
 
 	/* set vesafb aperture size for generic probing */
-	info->aperture_base = screen_info.lfb_base;
-	info->aperture_size = size_total;
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures) {
+		err = -ENOMEM;
+		goto err;
+	}
+	info->apertures->ranges[0].base = screen_info.lfb_base;
+	info->apertures->ranges[0].size = size_total;
 
 	info->screen_base = ioremap(vesafb_fix.smem_start, vesafb_fix.smem_len);
 	if (!info->screen_base) {
diff --git a/drivers/video/vga16fb.c b/drivers/video/vga16fb.c
index bf638a4..149c47a 100644
--- a/drivers/video/vga16fb.c
+++ b/drivers/video/vga16fb.c
@@ -1263,10 +1263,19 @@ static void vga16fb_imageblit(struct fb_info *info, const struct fb_image *image
 		vga_imageblit_color(info, image);
 }
 
+static void vga16fb_destroy(struct fb_info *info)
+{
+	iounmap(info->screen_base);
+	fb_dealloc_cmap(&info->cmap);
+	/* XXX unshare VGA regions */
+	framebuffer_release(info);
+}
+
 static struct fb_ops vga16fb_ops = {
 	.owner		= THIS_MODULE,
 	.fb_open        = vga16fb_open,
 	.fb_release     = vga16fb_release,
+	.fb_destroy	= vga16fb_destroy,
 	.fb_check_var	= vga16fb_check_var,
 	.fb_set_par	= vga16fb_set_par,
 	.fb_setcolreg 	= vga16fb_setcolreg,
@@ -1306,6 +1315,11 @@ static int __devinit vga16fb_probe(struct platform_device *dev)
 		ret = -ENOMEM;
 		goto err_fb_alloc;
 	}
+	info->apertures = alloc_apertures(1);
+	if (!info->apertures) {
+		ret = -ENOMEM;
+		goto err_ioremap;
+	}
 
 	/* XXX share VGA_FB_PHYS and I/O region with vgacon and others */
 	info->screen_base = (void __iomem *)VGA_MAP_MEM(VGA_FB_PHYS, 0);
@@ -1335,7 +1349,7 @@ static int __devinit vga16fb_probe(struct platform_device *dev)
 	info->fix = vga16fb_fix;
 	/* supports rectangles with widths of multiples of 8 */
 	info->pixmap.blit_x = 1 << 7 | 1 << 15 | 1 << 23 | 1 << 31;
-	info->flags = FBINFO_FLAG_DEFAULT |
+	info->flags = FBINFO_FLAG_DEFAULT | FBINFO_MISC_FIRMWARE |
 		FBINFO_HWACCEL_YPAN;
 
 	i = (info->var.bits_per_pixel == 8) ? 256 : 16;
@@ -1354,6 +1368,9 @@ static int __devinit vga16fb_probe(struct platform_device *dev)
 
 	vga16fb_update_fix(info);
 
+	info->apertures->ranges[0].base = VGA_FB_PHYS;
+	info->apertures->ranges[0].size = VGA_FB_PHYS_LEN;
+
 	if (register_framebuffer(info) < 0) {
 		printk(KERN_ERR "vga16fb: unable to register framebuffer\n");
 		ret = -EINVAL;
@@ -1380,13 +1397,8 @@ static int vga16fb_remove(struct platform_device *dev)
 {
 	struct fb_info *info = platform_get_drvdata(dev);
 
-	if (info) {
+	if (info)
 		unregister_framebuffer(info);
-		iounmap(info->screen_base);
-		fb_dealloc_cmap(&info->cmap);
-	/* XXX unshare VGA regions */
-		framebuffer_release(info);
-	}
 
 	return 0;
 }
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 2f3b3a0..c1b9871 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1428,10 +1428,13 @@ extern void drm_sysfs_connector_remove(struct drm_connector *connector);
 /* Graphics Execution Manager library functions (drm_gem.c) */
 int drm_gem_init(struct drm_device *dev);
 void drm_gem_destroy(struct drm_device *dev);
+void drm_gem_object_release(struct drm_gem_object *obj);
 void drm_gem_object_free(struct kref *kref);
 void drm_gem_object_free_unlocked(struct kref *kref);
 struct drm_gem_object *drm_gem_object_alloc(struct drm_device *dev,
 					    size_t size);
+int drm_gem_object_init(struct drm_device *dev,
+			struct drm_gem_object *obj, size_t size);
 void drm_gem_object_handle_free(struct kref *kref);
 void drm_gem_vm_open(struct vm_area_struct *vma);
 void drm_gem_vm_close(struct vm_area_struct *vma);
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index 1347524..93a1a31 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -31,6 +31,7 @@
 #include <linux/idr.h>
 
 #include <linux/fb.h>
+#include <linux/slow-work.h>
 
 struct drm_device;
 struct drm_mode_set;
@@ -271,8 +272,6 @@ struct drm_framebuffer {
 	unsigned int depth;
 	int bits_per_pixel;
 	int flags;
-	struct fb_info *fbdev;
-	u32 pseudo_palette[17];
 	struct list_head filp_head;
 	/* if you are using the helper */
 	void *helper_private;
@@ -369,9 +368,6 @@ struct drm_crtc_funcs {
  * @enabled: is this CRTC enabled?
  * @x: x position on screen
  * @y: y position on screen
- * @desired_mode: new desired mode
- * @desired_x: desired x for desired_mode
- * @desired_y: desired y for desired_mode
  * @funcs: CRTC control functions
  *
  * Each CRTC may have one or more connectors associated with it.  This structure
@@ -391,8 +387,6 @@ struct drm_crtc {
 	struct drm_display_mode mode;
 
 	int x, y;
-	struct drm_display_mode *desired_mode;
-	int desired_x, desired_y;
 	const struct drm_crtc_funcs *funcs;
 
 	/* CRTC gamma size for reporting to userspace */
@@ -467,6 +461,15 @@ enum drm_connector_force {
 	DRM_FORCE_ON_DIGITAL, /* for DVI-I use digital connector */
 };
 
+/* should we poll this connector for connects and disconnects */
+/* hot plug detectable */
+#define DRM_CONNECTOR_POLL_HPD (1 << 0)
+/* poll for connections */
+#define DRM_CONNECTOR_POLL_CONNECT (1 << 1)
+/* can cleanly poll for disconnections without flickering the screen */
+/* DACs should rarely do this without a lot of testing */
+#define DRM_CONNECTOR_POLL_DISCONNECT (1 << 2)
+
 /**
  * drm_connector - central DRM connector control structure
  * @crtc: CRTC this connector is currently connected to, NULL if none
@@ -511,6 +514,8 @@ struct drm_connector {
 	u32 property_ids[DRM_CONNECTOR_MAX_PROPERTY];
 	uint64_t property_values[DRM_CONNECTOR_MAX_PROPERTY];
 
+	uint8_t polled; /* DRM_CONNECTOR_POLL_* */
+
 	/* requested DPMS state */
 	int dpms;
 
@@ -521,7 +526,6 @@ struct drm_connector {
 	uint32_t encoder_ids[DRM_CONNECTOR_MAX_ENCODER];
 	uint32_t force_encoder_id;
 	struct drm_encoder *encoder; /* currently active encoder */
-	void *fb_helper_private;
 };
 
 /**
@@ -548,16 +552,10 @@ struct drm_mode_set {
 
 /**
  * struct drm_mode_config_funcs - configure CRTCs for a given screen layout
- * @resize: adjust CRTCs as necessary for the proposed layout
- *
- * Currently only a resize hook is available.  DRM will call back into the
- * driver with a new screen width and height.  If the driver can't support
- * the proposed size, it can return false.  Otherwise it should adjust
- * the CRTC<->connector mappings as needed and update its view of the screen.
  */
 struct drm_mode_config_funcs {
 	struct drm_framebuffer *(*fb_create)(struct drm_device *dev, struct drm_file *file_priv, struct drm_mode_fb_cmd *mode_cmd);
-	int (*fb_changed)(struct drm_device *dev);
+	void (*output_poll_changed)(struct drm_device *dev);
 };
 
 struct drm_mode_group {
@@ -590,14 +588,15 @@ struct drm_mode_config {
 
 	struct list_head property_list;
 
-	/* in-kernel framebuffers - hung of filp_head in drm_framebuffer */
-	struct list_head fb_kernel_list;
-
 	int min_width, min_height;
 	int max_width, max_height;
 	struct drm_mode_config_funcs *funcs;
 	resource_size_t fb_base;
 
+	/* output poll support */
+	bool poll_enabled;
+	struct delayed_slow_work output_poll_slow_work;
+
 	/* pointers to standard properties */
 	struct list_head property_blob_list;
 	struct drm_property *edid_property;
@@ -666,8 +665,6 @@ extern void drm_fb_release(struct drm_file *file_priv);
 extern int drm_mode_group_init_legacy_group(struct drm_device *dev, struct drm_mode_group *group);
 extern struct edid *drm_get_edid(struct drm_connector *connector,
 				 struct i2c_adapter *adapter);
-extern int drm_do_probe_ddc_edid(struct i2c_adapter *adapter,
-				 unsigned char *buf, int len);
 extern int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid);
 extern void drm_mode_probed_add(struct drm_connector *connector, struct drm_display_mode *mode);
 extern void drm_mode_remove(struct drm_connector *connector, struct drm_display_mode *mode);
@@ -799,8 +796,14 @@ extern struct drm_display_mode *drm_cvt_mode(struct drm_device *dev,
 extern struct drm_display_mode *drm_gtf_mode(struct drm_device *dev,
 				int hdisplay, int vdisplay, int vrefresh,
 				bool interlaced, int margins);
+extern struct drm_display_mode *drm_gtf_mode_complex(struct drm_device *dev,
+				int hdisplay, int vdisplay, int vrefresh,
+				bool interlaced, int margins, int GTF_M,
+				int GTF_2C, int GTF_K, int GTF_2J);
 extern int drm_add_modes_noedid(struct drm_connector *connector,
 				int hdisplay, int vdisplay);
 
 extern bool drm_edid_is_valid(struct edid *edid);
+struct drm_display_mode *drm_mode_find_dmt(struct drm_device *dev,
+					   int hsize, int vsize, int fresh);
 #endif /* __DRM_CRTC_H__ */
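
To show where the new polled field and the DRM_CONNECTOR_POLL_* flags plug in, a hedged sketch of connector setup; example_has_hpd() is hypothetical and stands in for whatever hotplug capability check a driver has:

static void example_setup_connector(struct drm_connector *connector)
{
	if (example_has_hpd(connector)) {
		/* hotplug interrupt available: no need to poll */
		connector->polled = DRM_CONNECTOR_POLL_HPD;
	} else {
		/* analog output: poll for connects only, since
		 * disconnect polling via load detection can flicker */
		connector->polled = DRM_CONNECTOR_POLL_CONNECT;
	}
}
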
diff --git a/include/drm/drm_crtc_helper.h b/include/drm/drm_crtc_helper.h
index b29e201..1121f77 100644
--- a/include/drm/drm_crtc_helper.h
+++ b/include/drm/drm_crtc_helper.h
@@ -39,7 +39,6 @@
 
 #include <linux/fb.h>
 
-#include "drm_fb_helper.h"
 struct drm_crtc_helper_funcs {
 	/*
 	 * Control power levels on the CRTC.  If the mode passed in is
@@ -96,8 +95,6 @@ struct drm_connector_helper_funcs {
 
 extern int drm_helper_probe_single_connector_modes(struct drm_connector *connector, uint32_t maxX, uint32_t maxY);
 extern void drm_helper_disable_unused_functions(struct drm_device *dev);
-extern int drm_helper_hotplug_stage_two(struct drm_device *dev);
-extern bool drm_helper_initial_config(struct drm_device *dev);
 extern int drm_crtc_helper_set_config(struct drm_mode_set *set);
 extern bool drm_crtc_helper_set_mode(struct drm_crtc *crtc,
 				     struct drm_display_mode *mode,
@@ -123,12 +120,17 @@ static inline void drm_encoder_helper_add(struct drm_encoder *encoder,
 	encoder->helper_private = (void *)funcs;
 }
 
-static inline int drm_connector_helper_add(struct drm_connector *connector,
+static inline void drm_connector_helper_add(struct drm_connector *connector,
 					    const struct drm_connector_helper_funcs *funcs)
 {
 	connector->helper_private = (void *)funcs;
-	return drm_fb_helper_add_connector(connector);
 }
 
 extern int drm_helper_resume_force_mode(struct drm_device *dev);
+extern void drm_kms_helper_poll_init(struct drm_device *dev);
+extern void drm_kms_helper_poll_fini(struct drm_device *dev);
+extern void drm_helper_hpd_irq_event(struct drm_device *dev);
+
+extern void drm_kms_helper_poll_disable(struct drm_device *dev);
+extern void drm_kms_helper_poll_enable(struct drm_device *dev);
 #endif
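
The helper additions above pair up as an init/fini lifecycle plus an IRQ-time hook. A sketch of how a KMS driver might wire them, with the example_* names being illustrative only:

#include <linux/interrupt.h>
#include "drm_crtc_helper.h"

static int example_load(struct drm_device *dev, unsigned long flags)
{
	/* ... create CRTCs, encoders and connectors first ... */
	drm_kms_helper_poll_init(dev);	/* start slow-work output polling */
	return 0;
}

static int example_unload(struct drm_device *dev)
{
	drm_kms_helper_poll_fini(dev);	/* stop polling before teardown */
	return 0;
}

static irqreturn_t example_irq_handler(int irq, void *arg)
{
	struct drm_device *dev = arg;

	/* on a hotplug interrupt, reprobe connectors and notify userspace */
	drm_helper_hpd_irq_event(dev);
	return IRQ_HANDLED;
}
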
diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
index b420989..39e2cc5 100644
--- a/include/drm/drm_edid.h
+++ b/include/drm/drm_edid.h
@@ -120,7 +120,7 @@ struct detailed_non_pixel {
 		struct detailed_data_string str;
 		struct detailed_data_monitor_range range;
 		struct detailed_data_wpindex color;
-		struct std_timing timings[5];
+		struct std_timing timings[6];
 		struct cvt_timing cvt[4];
 	} data;
 } __attribute__((packed));
@@ -201,7 +201,4 @@ struct edid {
 
 #define EDID_PRODUCT_ID(e) ((e)->prod_code[0] | ((e)->prod_code[1] << 8))
 
-/* define the number of Extension EDID block */
-#define DRM_MAX_EDID_EXT_NUM 4
-
 #endif /* __DRM_EDID_H__ */
diff --git a/include/drm/drm_fb_helper.h b/include/drm/drm_fb_helper.h
index 58c892a..f0a6afc 100644
--- a/include/drm/drm_fb_helper.h
+++ b/include/drm/drm_fb_helper.h
@@ -30,17 +30,12 @@
 #ifndef DRM_FB_HELPER_H
 #define DRM_FB_HELPER_H
 
+struct drm_fb_helper;
+
 struct drm_fb_helper_crtc {
 	uint32_t crtc_id;
 	struct drm_mode_set mode_set;
-};
-
-
-struct drm_fb_helper_funcs {
-	void (*gamma_set)(struct drm_crtc *crtc, u16 red, u16 green,
-			  u16 blue, int regno);
-	void (*gamma_get)(struct drm_crtc *crtc, u16 *red, u16 *green,
-			  u16 *blue, int regno);
+	struct drm_display_mode *desired_mode;
 };
 
 /* mode specified on the command line */
@@ -57,8 +52,28 @@ struct drm_fb_helper_cmdline_mode {
 	bool margins;
 };
 
+struct drm_fb_helper_surface_size {
+	u32 fb_width;
+	u32 fb_height;
+	u32 surface_width;
+	u32 surface_height;
+	u32 surface_bpp;
+	u32 surface_depth;
+};
+
+struct drm_fb_helper_funcs {
+	void (*gamma_set)(struct drm_crtc *crtc, u16 red, u16 green,
+			  u16 blue, int regno);
+	void (*gamma_get)(struct drm_crtc *crtc, u16 *red, u16 *green,
+			  u16 *blue, int regno);
+
+	int (*fb_probe)(struct drm_fb_helper *helper,
+			struct drm_fb_helper_surface_size *sizes);
+};
+
 struct drm_fb_helper_connector {
 	struct drm_fb_helper_cmdline_mode cmdline_mode;
+	struct drm_connector *connector;
 };
 
 struct drm_fb_helper {
@@ -67,24 +82,26 @@ struct drm_fb_helper {
 	struct drm_display_mode *mode;
 	int crtc_count;
 	struct drm_fb_helper_crtc *crtc_info;
+	int connector_count;
+	struct drm_fb_helper_connector **connector_info;
 	struct drm_fb_helper_funcs *funcs;
 	int conn_limit;
+	struct fb_info *fbdev;
+	u32 pseudo_palette[17];
 	struct list_head kernel_fb_list;
+
+	/* we got a hotplug but fbdev wasn't running the console;
+	   delay until next set_par */
+	bool delayed_hotplug;
 };
 
-int drm_fb_helper_single_fb_probe(struct drm_device *dev,
-				  int preferred_bpp,
-				  int (*fb_create)(struct drm_device *dev,
-						   uint32_t fb_width,
-						   uint32_t fb_height,
-						   uint32_t surface_width,
-						   uint32_t surface_height,
-						   uint32_t surface_depth,
-						   uint32_t surface_bpp,
-						   struct drm_framebuffer **fb_ptr));
-int drm_fb_helper_init_crtc_count(struct drm_fb_helper *helper, int crtc_count,
-				  int max_conn);
-void drm_fb_helper_free(struct drm_fb_helper *helper);
+int drm_fb_helper_single_fb_probe(struct drm_fb_helper *helper,
+				  int preferred_bpp);
+
+int drm_fb_helper_init(struct drm_device *dev,
+		       struct drm_fb_helper *helper, int crtc_count,
+		       int max_conn);
+void drm_fb_helper_fini(struct drm_fb_helper *helper);
 int drm_fb_helper_blank(int blank, struct fb_info *info);
 int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
 			      struct fb_info *info);
@@ -99,13 +116,15 @@ int drm_fb_helper_setcolreg(unsigned regno,
 			    struct fb_info *info);
 
 void drm_fb_helper_restore(void);
-void drm_fb_helper_fill_var(struct fb_info *info, struct drm_framebuffer *fb,
+void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helper,
 			    uint32_t fb_width, uint32_t fb_height);
 void drm_fb_helper_fill_fix(struct fb_info *info, uint32_t pitch,
 			    uint32_t depth);
 
-int drm_fb_helper_add_connector(struct drm_connector *connector);
-int drm_fb_helper_parse_command_line(struct drm_device *dev);
 int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info);
 
+bool drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper);
+bool drm_fb_helper_initial_config(struct drm_fb_helper *fb_helper, int bpp_sel);
+int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper);
+
 #endif
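
Taken together, the reworked drm_fb_helper API replaces init_crtc_count/free with init/fini and moves surface allocation behind the new fb_probe hook. A hedged bring-up sketch; the example_* callbacks are hypothetical driver functions:

static struct drm_fb_helper_funcs example_fb_funcs = {
	.gamma_set = example_gamma_set,
	.gamma_get = example_gamma_get,
	.fb_probe = example_fb_probe,	/* allocates the fb for the given sizes */
};

static int example_fbdev_init(struct drm_device *dev,
			      struct drm_fb_helper *helper)
{
	int ret;

	helper->funcs = &example_fb_funcs;
	ret = drm_fb_helper_init(dev, helper, 2 /* crtc_count */,
				 4 /* max_conn */);
	if (ret)
		return ret;
	drm_fb_helper_single_add_all_connectors(helper);
	drm_fb_helper_initial_config(helper, 32 /* preferred bpp */);
	return 0;
}
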
diff --git a/include/drm/drm_fixed.h b/include/drm/drm_fixed.h
new file mode 100644
index 0000000..4a08a66
--- /dev/null
+++ b/include/drm/drm_fixed.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright 2009 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Dave Airlie
+ */
+#ifndef DRM_FIXED_H
+#define DRM_FIXED_H
+
+typedef union dfixed {
+	u32 full;
+} fixed20_12;
+
+
+#define dfixed_const(A) (u32)(((A) << 12))/*  + ((B + 0.000122)*4096)) */
+#define dfixed_const_half(A) (u32)(((A) << 12) + 2048)
+#define dfixed_const_666(A) (u32)(((A) << 12) + 2731)
+#define dfixed_const_8(A) (u32)(((A) << 12) + 3277)
+#define dfixed_mul(A, B) ((u64)((u64)(A).full * (B).full + 2048) >> 12)
+#define dfixed_init(A) { .full = dfixed_const((A)) }
+#define dfixed_init_half(A) { .full = dfixed_const_half((A)) }
+#define dfixed_trunc(A) ((A).full >> 12)
+
+static inline u32 dfixed_floor(fixed20_12 A)
+{
+	u32 non_frac = dfixed_trunc(A);
+
+	return dfixed_const(non_frac);
+}
+
+static inline u32 dfixed_ceil(fixed20_12 A)
+{
+	u32 non_frac = dfixed_trunc(A);
+
+	if (A.full > dfixed_const(non_frac))
+		return dfixed_const(non_frac + 1);
+	else
+		return dfixed_const(non_frac);
+}
+
+static inline u32 dfixed_div(fixed20_12 A, fixed20_12 B)
+{
+	u64 tmp = ((u64)A.full << 13);
+
+	do_div(tmp, B.full);
+	tmp += 1;
+	tmp /= 2;
+	return lower_32_bits(tmp);
+}
+#endif
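
The 20.12 format keeps 12 fractional bits, so dfixed_const(1) is 4096 and dfixed_mul/dfixed_div round at the half-LSB. A worked sketch of scaling an integer by a rational factor, in the style of the radeon bandwidth code:

static u32 example_scale(u32 value, u32 num, u32 den)
{
	fixed20_12 a, b;

	a.full = dfixed_const(num);	/* num << 12 */
	b.full = dfixed_const(den);	/* den << 12 */
	a.full = dfixed_div(a, b);	/* num/den in 20.12, rounded */
	b.full = dfixed_const(value);
	a.full = dfixed_mul(a, b);	/* value * num/den */
	return dfixed_trunc(a);		/* drop the 12 fraction bits */
}

With these helpers, example_scale(100, 3, 2) evaluates to 150: 3/2 becomes 6144 (1.5 in 20.12), the multiply yields 614400, and the truncation shifts the 12 fraction bits back out.
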
diff --git a/include/drm/i915_drm.h b/include/drm/i915_drm.h
index b64a8d7..7f0028e 100644
--- a/include/drm/i915_drm.h
+++ b/include/drm/i915_drm.h
@@ -275,6 +275,7 @@ typedef struct drm_i915_irq_wait {
 #define I915_PARAM_HAS_OVERLAY           7
 #define I915_PARAM_HAS_PAGEFLIPPING	 8
 #define I915_PARAM_HAS_EXECBUF2          9
+#define I915_PARAM_HAS_BSD		 10
 
 typedef struct drm_i915_getparam {
 	int param;
@@ -616,7 +617,9 @@ struct drm_i915_gem_execbuffer2 {
 	__u32 num_cliprects;
 	/** This is a struct drm_clip_rect *cliprects */
 	__u64 cliprects_ptr;
-	__u64 flags; /* currently unused */
+#define I915_EXEC_RENDER                 (1<<0)
+#define I915_EXEC_BSD                    (1<<1)
+	__u64 flags;
 	__u64 rsvd1;
 	__u64 rsvd2;
 };
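
From userspace, the previously unused flags field now selects the ring: I915_EXEC_BSD routes the batch to the BSD video ring, and availability can be checked via the matching I915_PARAM_HAS_BSD getparam. A hedged sketch with buffer setup and error handling elided:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int example_exec_on_bsd(int fd, struct drm_i915_gem_exec_object2 *objs,
			       uint32_t count, uint32_t batch_len)
{
	struct drm_i915_gem_execbuffer2 execbuf;

	memset(&execbuf, 0, sizeof(execbuf));
	execbuf.buffers_ptr = (uintptr_t)objs;
	execbuf.buffer_count = count;
	execbuf.batch_len = batch_len;
	execbuf.flags = I915_EXEC_BSD;	/* steer the batch to the BSD ring */
	return ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
}
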
diff --git a/include/drm/nouveau_drm.h b/include/drm/nouveau_drm.h
index a6a9f4a..fe917de 100644
--- a/include/drm/nouveau_drm.h
+++ b/include/drm/nouveau_drm.h
@@ -79,6 +79,7 @@ struct drm_nouveau_gpuobj_free {
 #define NOUVEAU_GETPARAM_CHIPSET_ID      11
 #define NOUVEAU_GETPARAM_VM_VRAM_BASE    12
 #define NOUVEAU_GETPARAM_GRAPH_UNITS     13
+#define NOUVEAU_GETPARAM_PTIMER_TIME     14
 struct drm_nouveau_getparam {
 	uint64_t param;
 	uint64_t value;
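
The new NOUVEAU_GETPARAM_PTIMER_TIME parameter reads the GPU's PTIMER clock. A hedged userspace sketch using libdrm's command wrapper (DRM_NOUVEAU_GETPARAM is the ioctl index defined alongside this header):

#include <stdint.h>
#include <xf86drm.h>
#include <drm/nouveau_drm.h>

static int example_read_ptimer(int fd, uint64_t *time)
{
	struct drm_nouveau_getparam gp = {
		.param = NOUVEAU_GETPARAM_PTIMER_TIME,
	};
	int ret = drmCommandWriteRead(fd, DRM_NOUVEAU_GETPARAM,
				      &gp, sizeof(gp));

	if (ret == 0)
		*time = gp.value;
	return ret;
}
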
diff --git a/include/drm/radeon_drm.h b/include/drm/radeon_drm.h
index 81e614b..5347063 100644
--- a/include/drm/radeon_drm.h
+++ b/include/drm/radeon_drm.h
@@ -902,6 +902,8 @@ struct drm_radeon_cs {
 #define RADEON_INFO_NUM_GB_PIPES	0x01
 #define RADEON_INFO_NUM_Z_PIPES 	0x02
 #define RADEON_INFO_ACCEL_WORKING	0x03
+#define RADEON_INFO_CRTC_FROM_ID	0x04
+#define RADEON_INFO_ACCEL_WORKING2	0x05
 
 struct drm_radeon_info {
 	uint32_t		request;
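
Both new requests go through the same DRM_RADEON_INFO ioctl, with value carrying a user pointer that the kernel writes the answer through. A hedged sketch for RADEON_INFO_ACCEL_WORKING2:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <drm/radeon_drm.h>

static int example_accel_working2(int fd, uint32_t *result)
{
	struct drm_radeon_info info;

	memset(&info, 0, sizeof(info));
	info.request = RADEON_INFO_ACCEL_WORKING2;
	info.value = (uintptr_t)result;	/* kernel writes the answer here */
	return drmCommandWriteRead(fd, DRM_RADEON_INFO, &info, sizeof(info));
}
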
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 81eb9f4..267a86c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -66,6 +66,26 @@ struct ttm_placement {
 	const uint32_t	*busy_placement;
 };
 
+/**
+ * struct ttm_bus_placement
+ *
+ * @addr:		mapped virtual address
+ * @base:		bus base address
+ * @is_iomem:		is this io memory?
+ * @size:		size in bytes
+ * @offset:		offset from the base address
+ *
+ * Structure indicating the bus placement of an object.
+ */
+struct ttm_bus_placement {
+	void		*addr;
+	unsigned long	base;
+	unsigned long	size;
+	unsigned long	offset;
+	bool		is_iomem;
+	bool		io_reserved;
+};
+
 
 /**
  * struct ttm_mem_reg
@@ -75,6 +95,7 @@ struct ttm_placement {
  * @num_pages: Actual size of memory region in pages.
  * @page_alignment: Page alignment.
  * @placement: Placement flags.
+ * @bus: Placement on io bus accessible to the CPU
  *
  * Structure indicating the placement and space resources used by a
  * buffer object.
@@ -87,6 +108,7 @@ struct ttm_mem_reg {
 	uint32_t page_alignment;
 	uint32_t mem_type;
 	uint32_t placement;
+	struct ttm_bus_placement bus;
 };
 
 /**
@@ -274,6 +296,7 @@ struct ttm_bo_kmap_obj {
 		ttm_bo_map_kmap         = 3,
 		ttm_bo_map_premapped    = 4 | TTM_BO_MAP_IOMEM_MASK,
 	} bo_kmap_type;
+	struct ttm_buffer_object *bo;
 };
 
 /**
@@ -313,7 +336,8 @@ extern int ttm_bo_wait(struct ttm_buffer_object *bo, bool lazy,
  * @bo: The buffer object.
  * @placement: Proposed placement for the buffer object.
  * @interruptible: Sleep interruptible if sleeping.
- * @no_wait: Return immediately if the buffer is busy.
+ * @no_wait_reserve: Return immediately if other buffers are busy.
+ * @no_wait_gpu: Return immediately if the GPU is busy.
  *
  * Changes placement and caching policy of the buffer object
  * according proposed placement.
@@ -325,7 +349,8 @@ extern int ttm_bo_wait(struct ttm_buffer_object *bo, bool lazy,
  */
 extern int ttm_bo_validate(struct ttm_buffer_object *bo,
 				struct ttm_placement *placement,
-				bool interruptible, bool no_wait);
+				bool interruptible, bool no_wait_reserve,
+				bool no_wait_gpu);
 
 /**
  * ttm_bo_unref
@@ -337,6 +362,23 @@ extern int ttm_bo_validate(struct ttm_buffer_object *bo,
 extern void ttm_bo_unref(struct ttm_buffer_object **bo);
 
 /**
+ * ttm_bo_lock_delayed_workqueue
+ *
+ * Prevent the delayed workqueue from running.
+ * Returns:
+ * true if the workqueue was queued at the time
+ */
+extern int ttm_bo_lock_delayed_workqueue(struct ttm_bo_device *bdev);
+
+/**
+ * ttm_bo_unlock_delayed_workqueue
+ *
+ * Allows the delayed workqueue to run.
+ */
+extern void ttm_bo_unlock_delayed_workqueue(struct ttm_bo_device *bdev,
+					    int resched);
+
+/**
  * ttm_bo_synccpu_write_grab
  *
  * @bo: The buffer object:
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 6b9db91..0ea602d 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -176,8 +176,6 @@ struct ttm_tt {
 
 #define TTM_MEMTYPE_FLAG_FIXED         (1 << 0)	/* Fixed (on-card) PCI memory */
 #define TTM_MEMTYPE_FLAG_MAPPABLE      (1 << 1)	/* Memory mappable */
-#define TTM_MEMTYPE_FLAG_NEEDS_IOREMAP (1 << 2)	/* Fixed memory needs ioremap
-						   before kernel access. */
 #define TTM_MEMTYPE_FLAG_CMA           (1 << 3)	/* Can't map aperture */
 
 /**
@@ -189,13 +187,6 @@ struct ttm_tt {
  * managed by this memory type.
  * @gpu_offset: If used, the GPU offset of the first managed page of
  * fixed memory or the first managed location in an aperture.
- * @io_offset: The io_offset of the first managed page of IO memory or
- * the first managed location in an aperture. For TTM_MEMTYPE_FLAG_CMA
- * memory, this should be set to NULL.
- * @io_size: The size of a managed IO region (fixed memory or aperture).
- * @io_addr: Virtual kernel address if the io region is pre-mapped. For
- * TTM_MEMTYPE_FLAG_NEEDS_IOREMAP there is no pre-mapped io map and
- * @io_addr should be set to NULL.
  * @size: Size of the managed region.
  * @available_caching: A mask of available caching types, TTM_PL_FLAG_XX,
  * as defined in ttm_placement_common.h
@@ -221,9 +212,6 @@ struct ttm_mem_type_manager {
 	bool use_type;
 	uint32_t flags;
 	unsigned long gpu_offset;
-	unsigned long io_offset;
-	unsigned long io_size;
-	void *io_addr;
 	uint64_t size;
 	uint32_t available_caching;
 	uint32_t default_caching;
@@ -311,7 +299,8 @@ struct ttm_bo_driver {
 	 */
 	int (*move) (struct ttm_buffer_object *bo,
 		     bool evict, bool interruptible,
-		     bool no_wait, struct ttm_mem_reg *new_mem);
+		     bool no_wait_reserve, bool no_wait_gpu,
+		     struct ttm_mem_reg *new_mem);
 
 	/**
 	 * struct ttm_bo_driver_member verify_access
@@ -351,12 +340,21 @@ struct ttm_bo_driver {
 			    struct ttm_mem_reg *new_mem);
 	/* notify the driver we are taking a fault on this BO
 	 * and have reserved it */
-	void (*fault_reserve_notify)(struct ttm_buffer_object *bo);
+	int (*fault_reserve_notify)(struct ttm_buffer_object *bo);
 
 	/**
 	 * notify the driver that we're about to swap out this bo
 	 */
 	void (*swap_notify) (struct ttm_buffer_object *bo);
+
+	/**
+	 * Driver callback on when mapping io memory (for bo_move_memcpy
+	 * Driver callback for when io memory is mapped (by bo_move_memcpy,
+	 * for instance). TTM will take care to call io_mem_free whenever
+	 * the mapping is no longer in use. io_mem_reserve & io_mem_free
+	 */
+	int (*io_mem_reserve)(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem);
+	void (*io_mem_free)(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem);
 };
 
 /**
@@ -633,7 +631,8 @@ extern bool ttm_mem_reg_is_pci(struct ttm_bo_device *bdev,
  * @proposed_placement: Proposed new placement for the buffer object.
  * @mem: A struct ttm_mem_reg.
  * @interruptible: Sleep interruptible when sleeping.
- * @no_wait: Don't sleep waiting for space to become available.
+ * @no_wait_reserve: Return immediately if other buffers are busy.
+ * @no_wait_gpu: Return immediately if the GPU is busy.
  *
  * Allocate memory space for the buffer object pointed to by @bo, using
  * the placement flags in @mem, potentially evicting other idle buffer objects.
@@ -647,7 +646,8 @@ extern bool ttm_mem_reg_is_pci(struct ttm_bo_device *bdev,
 extern int ttm_bo_mem_space(struct ttm_buffer_object *bo,
 				struct ttm_placement *placement,
 				struct ttm_mem_reg *mem,
-				bool interruptible, bool no_wait);
+				bool interruptible,
+				bool no_wait_reserve, bool no_wait_gpu);
 /**
  * ttm_bo_wait_for_cpu
  *
@@ -682,6 +682,11 @@ extern int ttm_bo_pci_offset(struct ttm_bo_device *bdev,
 			     unsigned long *bus_offset,
 			     unsigned long *bus_size);
 
+extern int ttm_mem_io_reserve(struct ttm_bo_device *bdev,
+				struct ttm_mem_reg *mem);
+extern void ttm_mem_io_free(struct ttm_bo_device *bdev,
+				struct ttm_mem_reg *mem);
+
 extern void ttm_bo_global_release(struct ttm_global_reference *ref);
 extern int ttm_bo_global_init(struct ttm_global_reference *ref);
 
@@ -798,7 +803,8 @@ extern int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo,
  *
  * @bo: A pointer to a struct ttm_buffer_object.
  * @evict: 1: This is an eviction. Don't try to pipeline.
- * @no_wait: Never sleep, but rather return with -EBUSY.
+ * @no_wait_reserve: Return immediately if other buffers are busy.
+ * @no_wait_gpu: Return immediately if the GPU is busy.
  * @new_mem: struct ttm_mem_reg indicating where to move.
  *
  * Optimized move function for a buffer object with both old and
@@ -812,15 +818,16 @@ extern int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo,
  */
 
 extern int ttm_bo_move_ttm(struct ttm_buffer_object *bo,
-			   bool evict, bool no_wait,
-			   struct ttm_mem_reg *new_mem);
+			   bool evict, bool no_wait_reserve,
+			   bool no_wait_gpu, struct ttm_mem_reg *new_mem);
 
 /**
  * ttm_bo_move_memcpy
  *
  * @bo: A pointer to a struct ttm_buffer_object.
  * @evict: 1: This is an eviction. Don't try to pipeline.
- * @no_wait: Never sleep, but rather return with -EBUSY.
+ * @no_wait_reserve: Return immediately if other buffers are busy.
+ * @no_wait_gpu: Return immediately if the GPU is busy.
  * @new_mem: struct ttm_mem_reg indicating where to move.
  *
  * Fallback move function for a mappable buffer object in mappable memory.
@@ -834,8 +841,8 @@ extern int ttm_bo_move_ttm(struct ttm_buffer_object *bo,
  */
 
 extern int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
-			      bool evict,
-			      bool no_wait, struct ttm_mem_reg *new_mem);
+			      bool evict, bool no_wait_reserve,
+			      bool no_wait_gpu, struct ttm_mem_reg *new_mem);
 
 /**
  * ttm_bo_free_old_node
@@ -854,7 +861,8 @@ extern void ttm_bo_free_old_node(struct ttm_buffer_object *bo);
  * @sync_obj_arg: An argument to pass to the sync object idle / wait
  * functions.
  * @evict: This is an evict move. Don't return until the buffer is idle.
- * @no_wait: Never sleep, but rather return with -EBUSY.
+ * @no_wait_reserve: Return immediately if other buffers are busy.
+ * @no_wait_gpu: Return immediately if the GPU is busy.
  * @new_mem: struct ttm_mem_reg indicating where to move.
  *
  * Accelerated move function to be called when an accelerated move
@@ -868,7 +876,8 @@ extern void ttm_bo_free_old_node(struct ttm_buffer_object *bo);
 extern int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
 				     void *sync_obj,
 				     void *sync_obj_arg,
-				     bool evict, bool no_wait,
+				     bool evict, bool no_wait_reserve,
+				     bool no_wait_gpu,
 				     struct ttm_mem_reg *new_mem);
 /**
  * ttm_io_prot
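
The recurring signature change through ttm_bo_api.h and ttm_bo_driver.h splits the old no_wait flag in two, so callers can bail out separately on reservation contention and on GPU busyness. A hedged sketch of a driver move hook forwarding both flags to the helpers above; example_can_flip() is hypothetical:

static int example_bo_move(struct ttm_buffer_object *bo,
			   bool evict, bool interruptible,
			   bool no_wait_reserve, bool no_wait_gpu,
			   struct ttm_mem_reg *new_mem)
{
	/* cheap path when only the TTM binding changes */
	if (example_can_flip(bo, new_mem))
		return ttm_bo_move_ttm(bo, evict, no_wait_reserve,
				       no_wait_gpu, new_mem);
	/* fallback: CPU copy through the driver-reserved io mapping */
	return ttm_bo_move_memcpy(bo, evict, no_wait_reserve,
				  no_wait_gpu, new_mem);
}
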
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
new file mode 100644
index 0000000..8bb4de5
--- /dev/null
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -0,0 +1,74 @@
+/*
+ * Copyright (c) Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sub license,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: Dave Airlie <airlied@redhat.com>
+ *          Jerome Glisse <jglisse@redhat.com>
+ */
+#ifndef TTM_PAGE_ALLOC
+#define TTM_PAGE_ALLOC
+
+#include "ttm_bo_driver.h"
+#include "ttm_memory.h"
+
+/**
+ * Get @count pages from the pool into the @pages list.
+ *
+ * @pages: head of an empty linked list to be filled with pages.
+ * @flags: ttm flags for page allocation.
+ * @cstate: ttm caching state for the page.
+ * @count: number of pages to allocate.
+ */
+int ttm_get_pages(struct list_head *pages,
+		  int flags,
+		  enum ttm_caching_state cstate,
+		  unsigned count);
+/**
+ * Return a linked list of pages to the pool.
+ *
+ * @pages: list of pages to free.
+ * @page_count: number of pages in the list. Zero can be passed for unknown
+ * count.
+ * @flags: ttm flags for page allocation.
+ * @cstate: ttm caching state.
+ */
+void ttm_put_pages(struct list_head *pages,
+		   unsigned page_count,
+		   int flags,
+		   enum ttm_caching_state cstate);
+/**
+ * Initialize pool allocator.
+ *
+ * The pool allocator is internally reference counted, so it can be initialized
+ * multiple times, but ttm_page_alloc_fini has to be called the same number of
+ * times.
+ */
+int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages);
+/**
+ * Free pool allocator.
+ */
+void ttm_page_alloc_fini(void);
+
+/**
+ * Output the state of the pools to a debugfs file.
+ */
+extern int ttm_page_alloc_debugfs(struct seq_file *m, void *data);
+#endif
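
Usage of the new pool follows a get/put pattern around a caller-provided list head, with the pool itself set up once via ttm_page_alloc_init(). A hedged sketch of a backend populate path, using the page_flags and caching_state fields that struct ttm_tt already carries:

static int example_populate(struct ttm_tt *ttm, unsigned count)
{
	struct list_head pages;
	int ret;

	INIT_LIST_HEAD(&pages);
	ret = ttm_get_pages(&pages, ttm->page_flags, ttm->caching_state,
			    count);
	if (ret)
		return ret;
	/* ... hand the pages over to the backend here ... */

	/* for illustration only: immediately return the pages to the pool */
	ttm_put_pages(&pages, count, ttm->page_flags, ttm->caching_state);
	return 0;
}
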
diff --git a/include/drm/vmwgfx_drm.h b/include/drm/vmwgfx_drm.h
index c7645f4..4d08423 100644
--- a/include/drm/vmwgfx_drm.h
+++ b/include/drm/vmwgfx_drm.h
@@ -50,6 +50,8 @@
 #define DRM_VMW_EXECBUF              12
 #define DRM_VMW_FIFO_DEBUG           13
 #define DRM_VMW_FENCE_WAIT           14
+/* guarded by minor version >= 2 */
+#define DRM_VMW_UPDATE_LAYOUT        15
 
 
 /*************************************************************************/
@@ -585,4 +587,28 @@ struct drm_vmw_stream_arg {
  * sure that the stream has been stopped.
  */
 
+/*************************************************************************/
+/**
+ * DRM_VMW_UPDATE_LAYOUT - Update layout
+ *
+ * Updates the preferred modes and connection status for connectors. The
+ * command consists of one drm_vmw_update_layout_arg pointing out an array
+ * of num_outputs drm_vmw_rect's.
+ */
+
+/**
+ * struct drm_vmw_update_layout_arg
+ *
+ * @num_outputs: number of active outputs
+ * @rects: pointer to array of drm_vmw_rect
+ *
+ * Input argument to the DRM_VMW_UPDATE_LAYOUT Ioctl.
+ */
+
+struct drm_vmw_update_layout_arg {
+	uint32_t num_outputs;
+	uint32_t pad64;
+	uint64_t rects;
+};
+
 #endif
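
The rects member is a user pointer packed into a uint64_t, pointing at num_outputs drm_vmw_rect entries. A hedged userspace sketch of a two-output side-by-side layout (requires minor version >= 2, per the guard above):

#include <stdint.h>
#include <xf86drm.h>
#include <drm/vmwgfx_drm.h>

static int example_update_layout(int fd)
{
	struct drm_vmw_rect rects[2] = {
		{ .x = 0,    .y = 0, .w = 1024, .h = 768 },
		{ .x = 1024, .y = 0, .w = 1024, .h = 768 },
	};
	struct drm_vmw_update_layout_arg arg = {
		.num_outputs = 2,
		.rects = (uintptr_t)rects,	/* user pointer as u64 */
	};

	return drmCommandWrite(fd, DRM_VMW_UPDATE_LAYOUT, &arg, sizeof(arg));
}
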
diff --git a/include/linux/fb.h b/include/linux/fb.h
index c10163b..1296af4 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -403,6 +403,7 @@ struct fb_cursor {
 #include <linux/notifier.h>
 #include <linux/list.h>
 #include <linux/backlight.h>
+#include <linux/slab.h>
 #include <asm/io.h>
 
 struct vm_area_struct;
@@ -862,10 +863,22 @@ struct fb_info {
 	/* we need the PCI or similar aperture base/size not
 	   smem_start/size as smem_start may just be an object
 	   allocated inside the aperture so may not actually overlap */
-	resource_size_t aperture_base;
-	resource_size_t aperture_size;
+	struct apertures_struct {
+		unsigned int count;
+		struct aperture {
+			resource_size_t base;
+			resource_size_t size;
+		} ranges[0];
+	} *apertures;
 };
 
+static inline struct apertures_struct *alloc_apertures(unsigned int max_num) {
+	struct apertures_struct *a = kzalloc(sizeof(struct apertures_struct)
+			+ max_num * sizeof(struct aperture), GFP_KERNEL);
+	if (!a)
+		return NULL;
+	a->count = max_num;
+	return a;
+}
+
 #ifdef MODULE
 #define FBINFO_DEFAULT	FBINFO_MODULE
 #else
@@ -958,6 +971,8 @@ extern ssize_t fb_sys_write(struct fb_info *info, const char __user *buf,
 /* drivers/video/fbmem.c */
 extern int register_framebuffer(struct fb_info *fb_info);
 extern int unregister_framebuffer(struct fb_info *fb_info);
+extern void remove_conflicting_framebuffers(struct apertures_struct *a,
+				const char *name, bool primary);
 extern int fb_prepare_logo(struct fb_info *fb_info, int rotate);
 extern int fb_show_logo(struct fb_info *fb_info, int rotate);
 extern char* fb_get_buffer_offset(struct fb_info *info, struct fb_pixmap *buf, u32 size);
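
With aperture_base/size replaced by a variable-length apertures list, a KMS driver hands its ranges to the new remove_conflicting_framebuffers() to evict a firmware fbdev (vesafb, efifb) before taking over the hardware. A hedged sketch assuming a single VRAM BAR; the driver name string is illustrative:

static int example_kick_firmware_fb(resource_size_t base,
				    resource_size_t size)
{
	struct apertures_struct *ap = alloc_apertures(1);

	if (!ap)
		return -ENOMEM;
	ap->ranges[0].base = base;
	ap->ranges[0].size = size;
	/* "true" marks this device as the primary display */
	remove_conflicting_framebuffers(ap, "exampledrmfb", true);
	kfree(ap);
	return 0;
}
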
diff --git a/include/linux/vgaarb.h b/include/linux/vgaarb.h
index 2dfaa29..c9a9759 100644
--- a/include/linux/vgaarb.h
+++ b/include/linux/vgaarb.h
@@ -5,6 +5,27 @@
  * (C) Copyright 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
  * (C) Copyright 2007 Paulo R. Zanoni <przanoni@gmail.com>
  * (C) Copyright 2007, 2009 Tiago Vignatti <vignatti@freedesktop.org>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS
+ * IN THE SOFTWARE.
+ *
  */
 
 #ifndef LINUX_VGA_H