
Friday, August 22, 2008

OLPC - a lesson in product development

a couple of months ago i wrote about why microsoft needed to kill the OLPC. this time, i felt like writing about why the OLPC project nearly failed from a product development viewpoint.

they had a near feature-complete product that was ready to be manufactured and shipped. the product should have been in design lockdown after the first prototype was spec'd out. the sales agreements should have been put down in writing and then followed through on (i.e. made, boxed and shipped).

so what happened? a big pile of $$ was dangled in front of their noses, which let intel smear Negroponte with the "you can't sell the classmate" PR disaster. and then they let microsoft into the party. again, losing sight of the road map. a brand spankin' new OS implementation should have been version 2.0 material, after the hardware platform was feature locked and signed off.


there is something to be said about being ahead of your time.
OLPC Founder Negroponte Wanted to Make Multitouch XO-2 Laptop 20 Years Ago [Olpc]
june 2 2008

but you should not compromise your release cycle by getting distracted with the bells and whistles of what could be. i mean, there were all of these dual touchscreen articles popping up all over the place, like this one:
V12 Design Delivering Dual Touchscreen Laptop Within Two Years [Laptops]
july 9 2008

again, they lost focus on getting their first product out the door. with all of these add-ons, did they honestly think they were going to keep the costs down by adding two full touchscreens?


when all of this was happening, a whole heap of cheap "netbooks" started to pop up from all kinds of manufacturers. the ASUS eeePC is an excellent example. they've managed to release version after version, all aiming at the same market the OLPC was in. the eeePC 700 was the linux book, the 701 had XP, the 900A a faster processor, the 1000 a larger screen. there is such a giant table of all the different models that it's mind boggling trying to keep track of them. and since there are so many of these eeePCs out there, people have started to mod them with things like touchscreen capabilities.



finally, an interesting read on how the OLPC journey unfolded:

How the OLPC Changed Laptops Forever: The Untold Story [Origins]
aug 14, 2008
and what's also curious is that this article link used to be found through gizmodo's pages:
even their twitter feed has it:
don't they normally mark an article as inaccurate or strike out the lines to "update" their findings? i have never seen them pull down a post like this. and now it's:
maybe it wasn't popular enough for people to take interest in it; it seems to only have 18 diggs, so no one will miss it...

update: it seems that a new article has been posted on gizmodo:
Secret Origin of the OLPC: Genius, Hubris and the Birth of the Netbook
aug 26, 2008

update: here's another fancy article that bears a similar headline to my original article:
Why Microsoft and Intel tried to kill the XO $100 laptop
aug 10, 2008

Friday, August 15, 2008

real time ray tracing, huh...

there've been a lot of graphics technology goodies piling up, too good for me to pass up on sharing some more of my thoughts.

Intel takes another jab at Nvidia during its research day
june 12, 2008

"[ray tracing] doesn't waste time drawing things that are hidden and, according to Intel, it is best done on the CPU."
i think the über managers don't realize the complexity of ray tracing. there is no such thing as a "hidden" object in ray tracing. in fact, it's quite the opposite: you need all of the objects in the scene to calculate reflection, refraction, scattering, and chromatic aberration (wiki). the quality of a rendered scene depends on the number of times a ray is bounced off (reflection) or passed through (refraction) an object. hidden (off screen) or not, the object needs to be in the scene no matter what.
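to make that concrete, here's a minimal toy ray tracer sketch in python (entirely my own illustration, not intel's or anyone's renderer): the second sphere below sits way off to the side, outside any sensible camera frustum, yet trace() still has to intersection-test it for every bounced ray.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def reflect(d, n):
    # mirror direction d about the unit surface normal n
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

class Sphere:
    def __init__(self, center, radius, color):
        self.center, self.radius, self.color = center, radius, color

    def hit(self, origin, direction):
        # distance along a normalized ray to this sphere, or None on a miss
        oc = tuple(o - c for o, c in zip(origin, self.center))
        b = 2 * sum(a * d for a, d in zip(oc, direction))
        c = sum(a * a for a in oc) - self.radius ** 2
        disc = b * b - 4 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 1e-4 else None

def trace(origin, direction, scene, depth):
    # every object in `scene` gets intersection-tested, on screen or not --
    # a reflected ray doesn't care what the camera can see directly
    hits = [(s.hit(origin, direction), s) for s in scene]
    hits = [(t, s) for t, s in hits if t is not None]
    if not hits:
        return (0.0, 0.0, 0.0)  # background
    t, obj = min(hits, key=lambda h: h[0])
    if depth == 0:
        return obj.color
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(p - c for p, c in zip(point, obj.center)))
    bounced = trace(point, reflect(direction, normal), scene, depth - 1)
    # mix the object's own color with whatever the reflected ray hit
    return tuple(0.7 * c + 0.3 * b for c, b in zip(obj.color, bounced))

scene = [Sphere((0, 0, -5), 1.0, (0.2, 0.2, 0.9)),   # in front of the camera
         Sphere((40, 0, -5), 1.0, (0.9, 0.1, 0.1))]  # off screen, but trace() still tests it
print(trace((0, 0, 0), normalize((0, 0, -1)), scene, depth=3))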
"16-core, four-chip monster CPU, running last fall’s game, “Quake Wars: Enemy Territory.” It ran at 16 frames per second, not the usual 60 frames per second that is possible on a good graphics chip."
wow, that's a lot of hardware for 16 frames. granted, this is ray tracing, but still. i would throw my computer out the window if i couldn't get my frag on at silky smooth frame rates.
Nvidia video: No quad-core chip needed for extreme PC
june 20, 2008

from intel, again:
"...programmers will like ray tracing better because they can do many complex tasks with simple one-line programs."
huh? did i just read that right? what the hell are functions and macros again?
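(to be fair to my own snark, here's the trivial counter-example in python, completely made up: any pile of complexity becomes a "simple one-line program" the moment you wrap it in a function.)

def render_city_at_dusk(scene):
    # imagine hundreds of lines of lighting, shading and post-processing here
    return [0.5 * pixel for pixel in scene]

image = render_city_at_dusk([0.1, 0.8, 1.0])  # and there's the "one-liner"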

this guy has a much better take on these issues.
Ray tracing for PCs-- a bad idea whose time has come
june 12, 2008
and now, back to more graphics goodies. to achieve some pretty good results, there's been a lot of work going into multi-core chip solutions.
AMD, Nvidia graphics chip designs diverge
june 16, 2008

Intel aims x86 at GPU market
aug 04, 2008
and, it seems to be kicking ass so far:
AMD ATI Radeon HD 4870 X2 Preview
july 14, 2008


finally, john dvorak said this on his podcast:
Save Your Phone Books
06/18/2008 06:35 PM @3:55 mins:secs
"... it would be difficult for a CPU company to make a GPU as it is for a GPU company to make a CPU... mindset and marketing style are different..."
i think that's only true going from CPU to GPU. going the other way should be much easier for the GPU maker. a CPU works in a much simpler, linear fashion, with instruction sets that are more in tune with logic processing. the GPU, on the other hand, has a lot of pipelining efficiency designs that need to be taken into consideration, with data that is normally processed/passed over multiple times.
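a rough analogy in python (my own, not dvorak's; numpy's array ops are only standing in for the GPU's wide, multi-pass way of chewing through data):

import numpy as np

pixels = np.random.rand(100_000).astype(np.float32)

# cpu-ish: one long linear walk over the data, branching element by element
def brighten_cpu_style(data, gain, floor):
    out = []
    for p in data:
        v = p * gain
        if v < floor:
            v = floor
        out.append(v)
    return out

# gpu-ish: the same work expressed as a couple of uniform passes
# over the whole array at once
def brighten_gpu_style(data, gain, floor):
    v = data * gain              # pass 1: one wide multiply
    return np.maximum(v, floor)  # pass 2: one wide clamp

bright = brighten_gpu_style(pixels, gain=1.2, floor=0.1)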

plus, it's funny how all the major CPU makers are into low power processors:
AMD may be planning its own CPU to compete with [VIA] Nano [and Intel] Atom
june 19, 2008

and the lone GPU maker:
NVIDIA pays Transmeta $25 million for LongRun technology
aug 8 2008


but i hope there's more work like this being done by all the graphics makers:
AMD Cinema 2.0 tech demo: real-time photo-realistic human models
aug 12 2008

there's nothing more outstanding than showing what you are working on and what your graphics chip can do. nvidia used to always showcase their cards by constantly displaying all of these cool demos, samples and apps. it seems that they are kind of slipping as of late.

Saturday, April 12, 2008

the GPU is the new CPU

there seems to be a lot of smack talk with the nVidia versus Intel shenanigans in the feeds lately.

Nvidia CEO "Not Afraid" of CPU-GPU Hybrids

CE-Oh no he didn't! Part LV: NVIDIA boss says "We're going to open a can of whoop-ass" on Intel


NVIDIA continues to hate on Intel, promises sub-$45 integrated chipset


basically, the VIA Isaiah + nVidia GeForce(5?) combo is pitted against the Intel Celeron + G945. and really, the performance they are talking about is akin to my solar powered calculator.

however, i do like the idea of putting a full-fledged GPU on the motherboard (and not just an IGP).

i predict that motherboard manufacturers will one day make mobos built around GPU-specific chipsets, just like how they are currently CPU centric. zif sockets for the GPU on the motherboard are an evolution that needs to happen and is long overdue. the dedicated memory chips on a speedy bus found on all modern graphics cards are probably going to prevent this from happening. but hopefully, as the xbox consoles have demonstrated, a unified memory architecture will satisfy that limitation / requirement.

to offset the loss of dedicated RAM + a speedy bus, programmers will need to learn to make their code go "one-way" in a "fire and forget" style of coding. set top boxes, portable media players, the playstation 1, 2 & 3, etc., all perform very well when you push data through one end of the hardware accelerated "blackbox" and just let it do its thing, spitting stuff out on the other side (normally the end destination as well).
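a sketch of what i mean in python (the command list and submit function are hypothetical stand-ins, not any real api): queue the work up front, push it through one way, and never reach back into the blackbox mid-frame.

# one-way, fire-and-forget style: commands only ever flow toward the hardware
frame_commands = []

def draw(mesh, transform, texture):
    # queue the work; no return value, no poking at device state afterwards
    frame_commands.append(("draw", mesh, transform, texture))

def end_frame(submit):
    # hand the whole batch to the blackbox in one shot and forget about it
    submit(list(frame_commands))
    frame_commands.clear()

draw("terrain", "identity", "grass.png")
draw("player", "spawn_point", "armor.png")
end_frame(submit=print)  # print stands in for the real hardware submit call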


but, what's most likely going to happen is:
the CPU will gobble up the GPU

in terms of raw number crunching, the CPU will become less important in pushing the performance bar while the GPU will dominate the clock cycles for painting and positioning pretty pixels. and i'm not talking about office or other desktop apps, but about a graphics / multimedia combo system.

in light of all of this, i don't know why AMD/ATI hasn't made a run at a mobo with separate sockets for their CPU and GPU. it would have been quicker for them to push this out into the market instead of trying to complete their CPU/GPU combo first. and who knows how much that's going to cost.

just as the CPU gobbled up the once-separate math co-processor, it shouldn't be a surprise to see the GPU and CPU collide. hopefully, the GPU will win this time, 'cause the plain box PC isn't exciting anymore. the cool stuff is all of the eye candy you see nowadays in stuff like the latest OSX and compiz goodness. what? vista?!? you're insane...

i, the original human-powered GPU, who bitch-slapped the CPU into submission.