Mobile System on Chip Thread
Coming This Fall. The roadmap has been leaked:
http://armdevices.net/wp-content/upl...11-roadmap.jpg Charbax of armdevices.net has the scoop: http://armdevices.net/2011/01/24/teg...oadmap-leaked/ It looks like Nvidia is looking to bring its blistering performance and timely innovation to the low-power SoC market. Quad core? 3x better performance than the T20? Able to drive a 1900x1200 display? Wow. This chip should give ATOM a serious run for its money, all while being deliverable in devices with a smartphone/tablet form factor.
Re: Tegra 3
Very interesting.
I also found this on xda; if legit, the OMAP 4440 is a beast! http://forum.xda-developers.com/show...php?p=10822670
Re: Tegra 3
Cool. FYI, the new ATOM chips ("Medfield") are going to be pretty competitive with these specs, if Intel actually delivers. We are supposed to see a single-core hyperthreaded CPU at about 1.6 GHz, with graphics performance 5x faster than what we have now. Well, maybe not that competitive, but it should beat the Tegra 3 to market by a few months...
Re: Tegra 3
In my opinion it all seems a bit TOO powerful for a small device like a phone. Smartphones don't really need all this power; that's what a computer is for. :)
Re: Tegra 3
Yeah, everyone has different preferences for the ideal device. More power is great to have available, but a phone still needs to be efficient. I don't see much point in a CPU that kills the battery in 2 hours at 100% usage; the risk seems too high that you'll accidentally drain it. These new CPUs require a very solid foundation of power-saving and efficiency in the operating system, which is why Intel joined MeeGo rather than just releasing its processors. Maybe it's a mistake by Nvidia not to join too; ARM was quite efficient, but a quad core has a high potential for power drain.
Re: Tegra 3
Some games would be nice to play on these Cortex chips.
No point having all this power with no f*****g games.
Re: Tegra 3
EDIT: And as cool as quad A9s are, I hate the marketing spin (1.5 GHz, 13800 MIPS). There is a reason people call MIPS a Meaningless Indication of Processor Speed.
Re: Tegra 3
Nvidia seems pretty confident that heat and battery life won't be an issue with these Tegra chips in real-life scenarios - let's hope they're right.
Re: Tegra 3
But if there is one chip I'm waiting to get my hands on, it has to be the ST-Ericsson U8500 SoC. That is one hell of a chip...
Re: Tegra 3
Are those quads still going to be the most castrated and crippled A9s, like in the current Tegra?
Re: Tegra 3
According to the "Benefits of multi-core CPUs in mobile devices" whitepaper released by Nvidia, multi-core CPUs might actually be more power efficient.
More cores make multiprocessing more seamless, not only for running multiple applications simultaneously but also within a single application - one thread can run a lag-free GUI while other threads do the time-intensive data processing and computation in the background. All of this is easier when you have multiple cores.
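The GUI-thread/worker-thread split described above can be sketched in a few lines. This is a minimal illustration in plain Python, not Android code; `process_data` and the "frames drawn" loop are invented for the example, and CPython threads interleave on one core rather than truly running in parallel, but the structure is exactly what extra cores exploit:

```python
import threading, queue

results = queue.Queue()

def process_data(items):
    # Time-intensive computation runs off the "GUI" thread.
    results.put(sum(x * x for x in items))

worker = threading.Thread(target=process_data, args=(range(1000),))
worker.start()

# Meanwhile the main ("GUI") thread stays free to handle events.
frames_drawn = 0
while worker.is_alive() or frames_drawn == 0:
    frames_drawn += 1           # stand-in for redrawing the UI
    worker.join(timeout=0.001)  # yield briefly, like an event-loop tick

worker_result = results.get()
print("worker result:", worker_result)
# -> worker result: 332833500
```

With a second core, the worker's computation no longer steals cycles from the UI loop at all, which is the "lag-free GUI" point the post is making.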
Re: Tegra 3
Nvidia has released a whitepaper for its ULP GeForce GPU line, citing advanced features and outstanding performance.
There is no mention of Tegra 3 (AFAICT), but this should give some insight into their future direction, as well as a high-level perspective on the architecture design. http://www.nvidia.com/content/PDF/te...ld_Devices.pdf The Tegra 3 will be a force to be reckoned with. I fully expect Nvidia to hold the lead, as they are planning yearly releases! :eek:
Re: Tegra 3
I'd also guess that the more cores are available, the less one has to worry about threads contending for a single core (like a background worker thread starving the GUI thread of CPU time, causing the GUI to lag).
Re: Tegra 3
I'm not sure what you mean by 'downplay', but Android uses explicit events to separate process-level multitasking concerns and gives the OS the ability to choose what to do with a process once it is backgrounded, based on its configuration. This is certainly more sophisticated than the brute-force approach of simply having the kernel handle task scheduling while treating all processes as more-or-less equal, and it in no way 'downplays' multitasking. It also gives developers explicit control over what parts (if any) are put to sleep when the app loses focus, and allows them to run their own code during these events. This is a boon to battery life and performance; upon losing focus, only key code keeps running in the background while everything else is shut down. Consider how long our laptops would run if such event separation had been included in applications from the get-go!

<offtopic> But arguably the largest benefit of such a system is not having to explicitly close applications (well, the ones that are coded properly, anyway) -- one of the great features of the PalmOS of old. When the OS needs resources, it can save state and dump idle processes. For apps that are intended to be backgrounded, the developers simply need to structure the application accordingly, and it will continue running in the background. But the problem is as you say: many developers (including Google's own) seem not to use these facilities properly, leading to bad resource utilization and the need for task killers. It seems, as always, that an additional layer of complexity introduces more chances to err. </offtopic>
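The event-driven lifecycle described above can be sketched as a toy class. This is a hypothetical stand-in in plain Python, not Android's actual API (Android's real hooks are `Activity.onPause()`/`onResume()`); the class and method names here are invented for illustration:

```python
import threading

class BackgroundAwareApp:
    """Toy model of an app whose work respects focus/lifecycle events."""

    def __init__(self):
        self._running = threading.Event()
        self._ticks = 0

    def on_resume(self):
        # Framework says we gained focus: resume normal work.
        self._running.set()

    def on_pause(self):
        # Framework says we lost focus: only key code should keep running,
        # so ordinary work is suspended rather than burning battery.
        self._running.clear()

    def tick(self):
        # Periodic work that checks the lifecycle state before doing anything.
        if self._running.is_set():
            self._ticks += 1
        return self._ticks

app = BackgroundAwareApp()
app.on_resume()
app.tick(); app.tick()   # foreground: work proceeds
app.on_pause()
app.tick()               # backgrounded: the tick is skipped
print("work units done:", app.tick())
# -> work units done: 2
```

The point of the pattern is that the *app's own code* decides what survives backgrounding, which is exactly the control (and the opportunity to get it wrong) the post describes.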
Re: Tegra 3
What the Sony NGP has pwns the Tegra3!
XD
Re: Tegra 3
I've done a bit of reading on NEON and I'm curious how often it's used. Certainly it seems useful, but even ARM seems to be heavily pushing the media encode/decode capabilities of this SIMD implementation, which is something that may very well be handled by a DSP (though in a much less generalized way). But is it generally used outside of this?
Does anyone have links to Nvidia's rationale for leaving NEON out (beyond space savings)?
Re: Tegra 3
http://www.ubergizmo.com/2011/01/tegra-2-benchmark/
Space, cost, and they probably have a pretty good GPU, so it will be fine for graphics without NEON, including video decoding for supported codecs. Not sure what happens if you try to do some post-processing or use an unsupported codec, though...
Re: Nvidia Tegra
Nvidia shows its 'Kal-El' quad-core A9 at GDC 2011.
Wow. Nvidia is aggressive, and it looks as if the next Tegra is going to be a monster. http://www.youtube.com/watch?v=41eF43ianK4
Re: Nvidia Tegra
Mexican friend = aMeeGo
.... that post was before Elopcopalypse! |
Re: Nvidia Tegra
Here is Anandtech's take on the next Tegra!
It seems that devices may have this as early as August of this year! http://images.anandtech.com/reviews/.../kal-el_sm.jpg http://www.anandtech.com/show/4181/n...lets-this-year Yep, in only 5 months we may be seeing an ultra-mobile quad-core SoC with performance comparable to a Core 2 Duo (according to Nvidia)! :eek: If true, this will certainly offer performance comparable to ATOM and usher in a new wave of laptops and tablet apps capable of replacing legacy x86. I don't know about you, but an ultra-thin laptop with 12-hour battery life sounds pretty enticing! Even a tablet with a keyboard dock, or a BT keyboard/mouse, will be enticing. Expect Android, Ubuntu, and possibly MeeGo to lead the charge in this radical productivity paradigm shift. While Android allows for mouse/keyboard, traditional Linux has (I'm assuming) better productivity support. But with the fast development of Android, and with many services being pushed through the web, an Android tablet may be quite competent for productivity. Oh, brave new world!
Re: Nvidia Tegra
Get ready for a new tool to benchmark 3D performance! GLBenchmark 3.0 has launched!
http://www.glbenchmark.com/images/glb3splash.png http://www.glbenchmark.com/ VIDEO: http://www.glbenchmark.com/glb3_video3.mov
Re: Nvidia Tegra
I'm thinking of changing this thread to be a general discussion about ultra-mobile SoCs. What do you think?
Re: Nvidia Tegra
This is *very* interesting. It seems that the ODROID-A development tablet (Exynos 4210 w/ Mali-400) is getting a healthy 36.6 and 45 fps on GLBenchmark 2.0/PRO @ 1366x768.
http://www.glbenchmark.com/phonedeta...ernel+ODROID-A This is very encouraging, and suggests that performance is still tightly coupled to resolution. Running at 1024x768, these numbers would increase yet again. This may begin to explain the disparity between the XOOM's and the iPad 2's numbers. Apple was wise to keep the resolution at 1024x768 from a gaming-performance perspective. I wouldn't be surprised if the clock rate was also hiked up a bit. It would be nice if these benchmarks could be run at different resolutions, so that we might get a better feel for chip/system performance rather than trying to compare across wildly different setups.
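A back-of-envelope way to test the "performance is coupled to resolution" idea: assume the benchmark is purely fill-rate bound, so fps scales inversely with pixel count. That assumption is mine, not GLBenchmark's -- real runs are partly geometry- and CPU-bound, so treat the estimate as an optimistic upper bound:

```python
# Estimate fps at a different resolution, assuming a purely
# fill-rate-bound workload (fps inversely proportional to pixel count).
def estimated_fps(measured_fps, measured_res, target_res):
    mw, mh = measured_res
    tw, th = target_res
    return measured_fps * (mw * mh) / (tw * th)

# ODROID-A figure from the post: 36.6 fps at 1366x768.
fps_1024 = estimated_fps(36.6, (1366, 768), (1024, 768))
print(f"predicted at 1024x768: {fps_1024:.1f} fps")
# -> predicted at 1024x768: 48.8 fps
```

That ~33% jump just from dropping to iPad resolution is roughly the kind of gap seen between the XOOM and iPad 2 numbers, which supports the point above.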
Re: Nvidia Tegra
Of course, this may just be marketing fluff, but it's exciting to think that it may be true. Even with Atom-level performance, the world of legacy productivity will be opened more fully to these devices. I would ditch my laptop in a New York minute if I knew I could actually work from a thin/light tablet (thank goodness for Ubuntu's ARM support!).
Re: Mobile System on Chip Thread
It looks like the Archos Gen9 tablet may have the fastest SoC yet!
http://armdevices.net/2011/03/18/arc...unced-in-june/ It's using a modified OMAP4440 clocked at a blistering 1.6GHz! This is a dual-core CPU with an SGX540 (who knows how high that is clocked). I'd love to see how this CPU compares to something like Atom running at the same frequency on a few benchmarks. We need some tablets that accept Mini-PCIe adapters. It would be nice to be able to choose 3G for whichever network with a cheap $50 card.
Re: Mobile System on Chip Thread
You are a very excitable man, Captain Corrupt.
Re: Mobile System on Chip Thread
I suspect if I used fewer exclamation points, I would seem more sober... |
Re: Mobile System on Chip Thread
Here's an interesting comparison between a 1st-gen ATOM and a 500MHz Cortex-A9.
http://www.geek.com/articles/chips/a...rried-2010016/ The test shows web rendering speeds on the different units with the same browser, OS, and connection. The A9 performs more than admirably and compares quite well with the ATOM, with most loads being only a fraction slower. Not exactly scientific, but interesting nonetheless. ARM went on to claim that a Cortex-A9 @ 1GHz loads pages faster than ATOM! http://netbooked.net/blog/arm-vs-ato...s-performance/ It seems that the current-gen A9 is more than capable of handling less demanding productivity applications. The next generation is sure to improve on this greatly.
Re: Mobile System on Chip Thread
Hey CC!
I was also wondering the same thing: how long until ARM is more powerful than x86 mobile chips? And I just love your post, because it reminded me of a graph I drew back in January 2010 (that's 14 months ago!). Shockingly, it still seems accurate! http://img21.imageshack.us/i/73554987.png/ It looks like the OMAP4440 is equivalent in power to an Intel SU3500, which is a great solution for Windows 7 netbooks/tablets. What's even more shocking: back then, Cortex-A9 had yet to surface in any device, and dual-core Atoms weren't even announced! edit: I just remembered what those lines meant; they illustrated the minimum level of performance needed for a responsive Win7 experience, and the minimum battery life for Win7 netbooks/ultraportables/tablets. My insight freaked the daylights out of me!