The Following 2 Users Say Thank You to wmarone For This Useful Post:
2011-04-12, 06:01 | #2852
The Following 2 Users Say Thank You to H3llb0und For This Useful Post:
2011-04-12, 10:31 | #2853
The Following 2 Users Say Thank You to ysss For This Useful Post:
2011-04-12, 12:52 | #2854
I find this introspection about size very interesting, especially coming from someone who has experience with a range of differently sized devices.
While I could see purchasing more than one device, at the rate these things are evolving I would hate to have to upgrade three at once!
The Following 2 Users Say Thank You to Rebski For This Useful Post:
2011-04-12, 13:11 | #2855
Check this out and go 'WTF'
http://www.youtube.com/watch?v=bBQQE...layer_embedded
The Following 2 Users Say Thank You to daperl For This Useful Post:
2011-04-12, 14:00 | #2856
Very simple. That's zero lines of extra code inside the OpenGL render loop. When the program starts, it assumes the pad is perpendicular to your line of sight. An accelerometer/gyro thread fills in the line-of-sight delta coordinates for the transformation matrix.
Cool, but very far from magic.
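The scheme described above can be sketched in a few lines of Python (entirely my sketch; the names and numbers are hypothetical, and the real demo is presumably native code): a sensor thread writes line-of-sight deltas, and the render loop only reads a matrix built from them.

```python
import threading

class LineOfSight:
    """Shared line-of-sight offset: written by the accel/gyro thread,
    read once per frame by the render loop. Starts at (0, 0), i.e. the
    pad is assumed perpendicular to the viewer's line of sight."""
    def __init__(self):
        self._lock = threading.Lock()
        self._dx = 0.0
        self._dy = 0.0

    def update(self, dx, dy):
        # Called from the sensor thread with the latest deltas.
        with self._lock:
            self._dx, self._dy = dx, dy

    def view_matrix(self):
        # Called from the render loop: a plain 4x4 translation matrix,
        # the equivalent of glTranslatef(-dx, -dy, 0).
        with self._lock:
            dx, dy = self._dx, self._dy
        return [[1.0, 0.0, 0.0, -dx],
                [0.0, 1.0, 0.0, -dy],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]]

los = LineOfSight()

def sensor_thread():
    # Stand-in for real accelerometer/gyro readings.
    los.update(0.3, -0.1)

t = threading.Thread(target=sensor_thread)
t.start()
t.join()
m = los.view_matrix()
print(m[0][3], m[1][3])  # -0.3 0.1
```

The render loop itself never changes; only the matrix it is handed does, which is the point being made above.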
2011-04-12, 15:15 | #2858
The 'head tracking' in the title may imply that there's more to this technique than relative device movement. Even the first shot shows an individual's head being tracked quite smoothly.
If they are using head tracking, I suspect it's a combination of positional adjustments (e.g. gyro) with facial detection via the camera. The resulting animation looks exceptionally smooth, and although I'm quite sure that modern SoCs have DSPs that can handle these sorts of tasks, I am flabbergasted at how smooth it truly is. Perhaps a clever use of GPU shaders?
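One plausible way to combine the two sensors (purely my sketch; nothing here is taken from the video) is a complementary filter: trust the gyro short-term for smoothness, and let the camera's absolute face position remove drift long-term.

```python
def complementary_filter(cam_angle, gyro_rate, prev_angle, dt, alpha=0.98):
    """Blend a fast-but-drifting gyro integration with a slower but
    absolute camera (face-detection) measurement. alpha close to 1
    trusts the gyro short-term; the camera term removes drift."""
    gyro_estimate = prev_angle + gyro_rate * dt
    return alpha * gyro_estimate + (1.0 - alpha) * cam_angle

# Gyro reports no rotation while the camera sees the head at 10 degrees:
# the estimate converges to the camera's value instead of drifting.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(cam_angle=10.0, gyro_rate=0.0,
                                 prev_angle=angle, dt=1 / 60)
print(round(angle, 2))  # converging toward the camera's 10.0
```

This kind of per-sample blend is cheap enough to run on a phone-class CPU or DSP, which fits the division of labour suggested later in the thread.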
The Following 2 Users Say Thank You to daperl For This Useful Post:
2011-04-12, 15:15 | #2859
The Following 2 Users Say Thank You to ysss For This Useful Post:
2011-04-12, 15:31 | #2860
You're right, they're doing head tracking. But again, the head tracking thread is just setting the variables for the matrix transformation. That's it. There's no extra or special code. And if you'll notice, there's no complex lighting whatsoever; the pixel shader code is basically byte copies. You should not be flabbergasted; this is why GPUs were developed in the first place. Think of the pad movement as a joystick movement for a stationary camera.
head tracking thread: CPU and DSP
most everything else: GPU
A dual-core CPU and that monster GPU make this a simple task for an iPad 2.
EDIT:
And actually, this would be simple for an N900, or any similarly configured device.
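The joystick-for-a-stationary-camera analogy can be made concrete in a tiny sketch (hypothetical function name and sensitivity value, my assumptions only): head displacement maps to yaw/pitch exactly as a stick deflection would, and the camera itself never moves.

```python
import math

def view_from_head(head_x, head_y, sensitivity=0.5):
    """Map head (or pad) displacement to camera yaw/pitch, the same way
    a joystick deflection would be mapped for a camera that never moves."""
    yaw = math.atan(head_x * sensitivity)
    pitch = math.atan(head_y * sensitivity)
    return yaw, pitch

# Head moves to the right: the view yaws, the camera never translates,
# and nothing downstream of the matrix (shaders, geometry) changes.
yaw, pitch = view_from_head(1.0, 0.0)
print(round(math.degrees(yaw), 1))  # 26.6
```

Everything after this angle calculation is ordinary rendering work, which is why it lands on the GPU as described above.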
The Following User Says Thank You to Capt'n Corrupt For This Useful Post:
Tags: android envy, buzz..buzz buzz, core failure, crapdroid, galaxy fap, galaxy tab, ipad killer, samsung, tab trolls, tablet envy
Last edited by wmarone; 2011-04-12 at 02:28.