Intel will ship computers with an experimental 48-core processor to researchers by the end of the second quarter as the company tries to reshape its future chips, PC World reports. Limited quantities of the processor will be sent primarily to academic institutions, said Sean Koehl, technology evangelist with Intel Labs, during an event in New York on Wednesday. The chip may not become commercially available, as it is part of a research project, but features from the processor could be implemented in future chips. Development of the processor is part of Intel's terascale computing research program. A focus area of the program is to put more cores in a single processor to enable faster computing in devices ranging from mobiles to servers. The 48-core chip operates at about the clock speed of Atom-based chips, said Christopher Anderson, an engineer with Intel Labs. Intel's latest Atom chips are power-efficient, are targeted at netbooks and small desktops, and run at clock speeds between 1.66GHz and 1.83GHz. The 48-core processor, built on a mesh architecture, could lead to a massive performance boost when all the cores communicate with each other, Anderson said. Adding cores to processors is considered a power-efficient way of boosting chip performance. The traditional way of boosting performance was to crank up CPU clock speed, but that led to excessive heat dissipation and power consumption.
When I first saw this in the latest posts section all I saw was "Intel to Ship Samples of...", and at first I thought it'd say "Intel to Ship Samples of Anthrax to AMD users". But, heh, 48 cores eh? Sounds pretty cool for heavily threaded computing. This time they might be the first to do something! They got beaten out of the park on dual and quad core as well as 64-bit support. In reality though, I can't see these CPUs being very usable in a home environment; most people don't run programs that are thread-aware, and when they do, maybe a max of 4 threads are used by that program. These CPUs are more aimed at servers, I believe; being able to run 48 server sessions at once without scheduling is pretty badass for a server. Of course they could also carve these up for VPS hosting, one core per customer, and it would save VPS and shell providers tonnes of money!
Maybe the programs on home computers don't take advantage of more than 2-4 cores, but imagine software that can use all 48, like rendering software or the Dolphin emu, which relies more or less only on the CPU.
Must agree with Titcher: in a virtual server environment I can see processors like that being a great investment. But for apps and games in a home environment, I've not seen many that really support more than two cores. Even the Dolphin emulator mentioned above won't use more than two; apparently more would have little benefit to the overall speed of the emulator.
Perhaps optimizing the programming itself would help? It's actually amazing to see how many of today's digital tasks can be done on an '80s computer like the Commodore 64, and thus done at 1 MHz. The problem is that those computers aren't multimedia machines like today's, and actually doing something useful on them requires a lot of hand-tuned programming. After all, the hunt for faster clock speeds is what always drove the PC market forward, and that doesn't optimize the actual programming in any way.
No amount of optimisation will help, realistically. Using multiple cores comes at a cost in CPU power, as the cores have to be able to synchronise with each other. For the things people do at home this really isn't practical, as most home software needs to be very responsive and generally relies on tasks that MUST be synchronised. For things like media encoding and rendering, on the other hand, a whole bunch of cores is ideal, as those sorts of tasks don't need much in the way of status checks; the CPU just hands out the work and lets each core get on with it. Maybe I can think of a simple analogy. Yeah, I've got one: think of an average program as building a house, with each core as a worker. Each worker can be left alone to some degree, but the workers must know what every other worker has completed and so on, otherwise you'll get things like the roof being built before the walls. Whereas most server-type programs and rendering programs don't rely so much on the other cores being finished.
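To make the "workers who don't need each other" case concrete, here's a minimal sketch using Python's standard multiprocessing module; render_tile() is a made-up stand-in for any independent chunk of work (a frame, a tile, a file), not real rendering code.

```python
# Each "tile" is independent: no shared state, no waiting on other tiles,
# so the work scales with the number of cores you throw at it.
from multiprocessing import Pool, cpu_count

def render_tile(tile_id):
    # Stand-in for CPU-heavy, independent work.
    return sum(i * i for i in range(100_000)) + tile_id

if __name__ == "__main__":
    tiles = range(48)                       # one job per "tile"
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(render_tile, tiles)
    print(f"rendered {len(results)} tiles using {cpu_count()} cores")
```

An interactive home program is the opposite case: its steps depend on each other (the roof needs the walls), so the cores spend their time waiting and synchronising instead of working.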
Instead of all the cores needing to work together, it is possible to adjust that by opening up Windows Task Manager and setting the affinity, which lets you choose which programs use which cores and how many cores each one gets. The cores wouldn't need to work together when set up that way, but it could still be useful to have certain cores reserved for certain programs, with however many cores you wanted per program. Imagine having 16 cores and being able to run lots of different programs on their own cores (Dolphin on cores 0 and 1, Photoshop on core 2, Filezilla on core 3, etc.). As long as you have the RAM for it, I couldn't imagine there being a problem with using this method on just a regular home-based PC, and I must say it would be damn fast... (maybe I just took it out of context and went off in a direction that wasn't even being discussed..)
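For what it's worth, the same pinning can be done in code instead of clicking through Task Manager. This is only a sketch assuming the third-party psutil package (its cpu_affinity() call works on Windows and Linux); the process names and core assignments are just examples, and on Windows you'd likely need to run it as administrator.

```python
# Sketch of pinning running programs to specific cores, roughly the
# "Dolphin on cores 0-1, Photoshop on core 2" layout described above.
# Assumes the third-party psutil package (pip install psutil).
import psutil

# Example-only plan: process name -> cores it may run on.
affinity_plan = {
    "Dolphin.exe":   [0, 1],
    "Photoshop.exe": [2],
    "filezilla.exe": [3],
}

for proc in psutil.process_iter(["name"]):
    cores = affinity_plan.get(proc.info["name"])
    if cores:
        try:
            proc.cpu_affinity(cores)  # restrict this process to the listed cores
            print(f"pinned {proc.info['name']} to cores {cores}")
        except psutil.AccessDenied:
            print(f"no permission to pin {proc.info['name']}")
```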
You're right that multi-tasking would be improved drastically. But we were talking about the merits of multi-core-aware programs. Just in case you were curious.
I feel Intel has something awesome in store for us in the processor segment, whether for desktops or servers, by the end of this year or early next year, after the Core i7 and i5 series. So, let's see.