Old 08-03-2006, 08:01 PM   #1
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default Official PS3 thread. Pre-order at GameStop/EB Games OCT 10!

Yes, it's very similar to the PSP's interface but with a few twists. Check out the 3-minute-long video.

http://www.aeropause.com/archives/20...le_interfa.php

Also, what is interesting is how they compare the PS3's interface to the 360's. Keep reading toward the bottom for that.

*EDIT*

If it's all right with everyone, I made this the official PS3 thread. I've got a lot of stuff I want to post and I don't wanna make like 10 million new threads for each thing. BTW, I will add a lot of things tomorrow and as I get more time.
__________________
http://gthotspot.blogspot.com/
Old 08-03-2006, 08:37 PM   #2
evoWalo
Regular User
 
Join Date: May 2004
Location: Naples, FL
Posts: 1,407
Default

nice 3d accelerated GUI

Old 08-03-2006, 10:44 PM   #3
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

PS3 Hardware:
The Playstation 3 is a gaming console (or computer system) that utilizes a Cell processor with 7 operational SPEs and access to 256MB of XDR RAM, an RSX graphics chip with 256MB of GDDR3 RAM and access to the Cell's main memory, a Blu-ray drive for gaming and movie playback, and a 2.5" hard disk drive. Other components of the system are Bluetooth support used for the wireless motion-sensing controllers, 4 USB ports, and a gigabit Ethernet port. The more expensive version of the Playstation 3 also has a Sony Memory Stick reader, Compact Flash reader, SD card reader, WiFi support (basically an extra network interface which is wireless), and an HDMI output.

The Playstation 3 is capable of outputting 1080p signals through all of its outputs, though with Blu-ray movie playback an Image Constraint Token (ICT) can be present which forces down-sampling of 1080p to 540p if the signal goes through a non-certified (non-HDMI) interface.

The Playstation 3's audio will be handled by the Cell processor. There are many supported codecs representing high quality formats for digital entertainment, but since decoding is done on the Cell processor, game developers are at leisure to output any format they wish. This means 6.1, 7.1, 8.1, or beyond audio is possible unless a later change actually restricts what can be output.

The Cell Processor:
The Cell inside the Playstation 3 is an 8-core asymmetrical CPU. It consists of one Power Processing Element (PPE) and eight Synergistic Processing Elements (SPEs), seven of which are enabled in the Playstation 3. Each of these elements is clocked at 3.2GHz and they are connected by a 4-ring Element Interconnect Bus (EIB) capable of a peak performance of ~204.8GB/s. Every processing element on the bus has its own memory flow controller and direct memory access (DMA) controller. Other elements on the bus are the memory controller to the 256MB of XDR RAM, and the FlexIO I/O controller (Rambus' FlexPhase-based interface).

The FlexIO bus is capable of ~60GB/s of bandwidth. A massive chunk of this bandwidth is allocated to communication with the RSX graphics chip, and the remaining bandwidth is where the southbridge elements lie, such as sound, optical media (Blu-ray/DVD/CD), the network interface card, hard drive, USB, memory card readers, Bluetooth devices (controllers), and WiFi. This may sound like a lot to share with the RSX, but consider that aside from the RSX, the other components are using bandwidth on the MB/s scale, not GB/s, so even if you add all of them up there is still plenty of bandwidth left.

I actually recommend you skip down to the Xbox360 hardware comparison and look at the Cell and Playstation 3 hardware diagrams before you continue reading so you get a better idea of how things come together on the system as I explain it.

Power Processing Element:
The PPE is based on IBM's POWER architecture. It is a general purpose RISC (reduced instruction set computing) core clocked at 3.2GHz, with 32KB of L1 instruction cache, 32KB of L1 data cache, and a 512KB L2 cache. It is a 64-bit processor with the ability to fetch four instructions and issue two in one clock cycle. It is also able to handle two hardware threads. It comes with a VMX vector unit with 32 128-bit registers. The PPE is an in-order processor with delayed execution and limited out-of-order support for load instructions.

PPE Design Goals:
The PPE is designed to handle the general purpose workload for the Cell processor. While the SPEs are capable of executing general purpose code, they are not the best suited to do so. Compared to Intel/AMD chips, the PPE isn't as fast for general purpose computing considering its in-order architecture and comparably less complex branch prediction hardware. This will likely prevent the Cell from replacing or competing with Intel/AMD chips on desktops, but in the console and multimedia world, the PPE is more than capable of keeping up with the general purpose code used in games and household devices. The Playstation 3 will not be running MS Word.

The PPE is also simplified to save space and improve power efficiency with less heat dissipation. This also allows the processor to be clocked at higher rates. To compensate for some of the hardware shortcomings of the PPE, IBM is putting effort into improving compiler-generated code to make better use of instruction level parallelism. This would reduce the penalties of in-order execution.

The VMX unit on the PPE is actually a SIMD unit. This gives the PPE some vector processing ability, but as you'll read in the next section, the SPEs are better equipped for vector processing tasks. The vector unit on the PPE is probably there for the case where a task that is better run on the PPE needs some vector computations, but wouldn't perform better overall if it were done on an SPE, or where handing that specific chunk of work off to an SPE would bring in more communication overhead than it is worth.

Synergistic Processing Element and the SIMD paradigm:
The SPEs are the computing powerhouses of the Cell processor. They are independent vector processors running at 3.2GHz. A vector processor is also known as a single instruction, multiple data (SIMD) processor. This means that a single instruction, let's say addition, can be performed in one cycle on more than one set of operands, effectively adding pairs (or quadruples) of numbers in one cycle instead of taking up 4 cycles in sequence. Here is an example of the different approaches to the problem of adding the numbers 1 and 2 together, 3 and 4, 5 and 6, and 7 and 8 to produce 4 different sums.
On a traditional desktop CPU (scalar), the instructions are handled sequentially.
Code:

1. Do 1 + 2 -> Store result somewhere
2. Do 3 + 4 -> Store result somewhere
3. Do 5 + 6 -> Store result somewhere
4. Do 7 + 8 -> Store result somewhere

On a vector/SIMD CPU, the instruction is issued once and executed simultaneously for all operands.
Code:

1. Do [1, 3, 5, 7] + [2, 4, 6, 8] -> Store result vector [3, 7, 11, 15] somewhere

You can see how SIMD processors can outdo scalar processors by an order of magnitude when computations are parallel. The situation does change when the task isn't parallel, as in the case of adding a chain of numbers like 1 + 2 + 3. Quite simply, a processor has to get the result of 1 + 2 before adding 3 to it, and nothing can avoid the fact that this operation will take 2 instructions that cannot occur simultaneously. Just to get your mind a bit deeper into this paradigm, consider 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8. On the surface, you might count 7 operations as necessary to accomplish this problem, assuming the sums have to be calculated before moving forward. However, if you try to SIMD-ize it, you would realize that this is actually still only 3 operations. Allow me to walk you through it:
Code:

1. Do [1, 3, 5, 7] + [2, 4, 6, 8] -> Store the results in two vectors [SUM1, SUM2, 0, 0] and [SUM3, SUM4, 0, 0]
2. Do [SUM1, SUM2, 0, 0] + [SUM3, SUM4, 0, 0] -> Store the results in two vectors [SUM5, 0, 0, 0] and [SUM6, 0, 0, 0]
3. Do [SUM5, 0, 0, 0] + [SUM6, 0, 0, 0] -> Store the final result in a vector
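
To make that walk-through concrete, here is a rough C sketch of the same pairwise reduction. This is my own illustration, not SPE code – the vec4 type and vec4_add function are just stand-ins for a real 128-bit register and a SIMD add:

Code:

#include <stdio.h>

/* Toy stand-in for a 128-bit register holding 4 floats. */
typedef struct { float e[4]; } vec4;

/* Element-wise add: on a real SIMD unit this is one instruction. */
static vec4 vec4_add(vec4 a, vec4 b) {
    vec4 r;
    for (int i = 0; i < 4; i++) r.e[i] = a.e[i] + b.e[i];
    return r;
}

int main(void) {
    vec4 a = {{1, 3, 5, 7}};
    vec4 b = {{2, 4, 6, 8}};

    /* Step 1: four pairwise sums in one "SIMD" add -> [3, 7, 11, 15]. */
    vec4 sums = vec4_add(a, b);

    /* Step 2: fold the halves together -> [14, 22, x, x]. */
    vec4 hi   = {{sums.e[2], sums.e[3], 0, 0}};
    vec4 fold = vec4_add(sums, hi);

    /* Step 3: one last add -> 36, i.e. 1 + 2 + ... + 8. */
    printf("total = %g\n", fold.e[0] + fold.e[1]);
    return 0;
}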

Careful inspection of the previous solution shows two flaws. One is the optimization issue of parts of the vector not being used for the operation. Those unused parts of the vector could have been used to perform operations useful for other parts of the program. It would be a huge investment of time if developers tried to solve this problem manually by filling vectors where their code isn't already plainly vector based. That is the type of thing IBM is placing on compilers: being able to look into the code for parallelism – specifically instruction level parallelism (ILP).

The other huge problem (which I know is there but know less about) is the fact that vector processors naturally store results in a single vector. It would require some interesting misaligned calculations, shifts and/or copies of data to place the results in a position where they are ready for the next step. I am not too well versed in how this can be accomplished, or whether the SPEs have the ability to do something like this, so I'll leave it up to further discussion. Upon further research, “vector permute/alignment” seems to be the topic that addresses this problem. It seems the SPE instruction set does allow for operations between elements within a single vector; dot products are one example.

The SPE inside the Playstation 3 sports a 128×128-bit register file (128 registers, 128 bits each), which is a lot of room to unroll loops and avoid branching. At 128 bits per register, an SPE is able to perform operations on 4 operands of 32 bits each. Single precision floating point numbers are 32 bits, which explains why the Playstation 3 sports such high single precision floating point performance. Double precision floating point numbers are 64 bits long and slow the processing down by an order of magnitude, because only 2 operands fit inside a vector, and I'm pretty sure it also breaks the SIMD processing ability since no execution unit can work on 2 double precision floating point numbers at the same time, meaning that the SPE performs double precision computations in a scalar fashion.

Quote:
“An SPE can operate on 16 8-bit integers, 8 16-bit integers, 4 32-bit integers, or 4 single precision floating-point numbers in a single clock cycle.”
– Cell microprocessor wiki. That matches up with my prediction pretty well, but I haven't been able to find any other sources that state this. It is a very logical explanation.

The important thing to note is that vector processing and vector processors are synonymous with SIMD architectures. Vectorized code is best run on a SIMD architecture, and general purpose CPUs will perform much worse on these types of tasks.

SIMD Applications:
Digital signal processing (DSP) is one of the areas where vector processors are used. I only bring that up because *you know who* would like to claim that it is the only practical application for SIMD architectures.

3D graphics are also a huge application for SIMD processing. A vertex/vector (the terms are used interchangeably in 3D graphics) is a 3D position, usually stored with 4 elements: X, Y, Z, and W. I won't explain the W because I don't even remember exactly how it's used myself, but it is there in 3D graphics. Processing many vertices would be very slow on a traditional CPU, which would have to individually process each element of the vector instead of processing the whole thing simultaneously. Needless to say, GPUs most definitely have many SIMD units (possibly even MIMD), which is why they vastly outperform CPUs in this respect. Operations done on the individual components of a vector are independent, which makes the SIMD paradigm an optimal way to operate on them.
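
As a simplified illustration of why this matters, here is the classic transform of a 4-element vertex by a 4×4 matrix written in plain C. On a scalar CPU each vertex costs 16 multiplies and 12 adds done one at a time; a 4-wide SIMD unit can treat each row as one wide multiply-add. The types and names here are my own, not any console's actual API:

Code:

#include <stdio.h>

typedef struct { float x, y, z, w; } vertex;   /* one 4-element vector */
typedef struct { float m[4][4];   } matrix4;   /* 4x4 transform matrix */

/* Transform one vertex: 16 multiplies + 12 adds on a scalar CPU,
   but only 4 wide multiply-adds on a 4-way SIMD unit. */
static vertex transform(matrix4 t, vertex v) {
    vertex r;
    r.x = t.m[0][0]*v.x + t.m[0][1]*v.y + t.m[0][2]*v.z + t.m[0][3]*v.w;
    r.y = t.m[1][0]*v.x + t.m[1][1]*v.y + t.m[1][2]*v.z + t.m[1][3]*v.w;
    r.z = t.m[2][0]*v.x + t.m[2][1]*v.y + t.m[2][2]*v.z + t.m[2][3]*v.w;
    r.w = t.m[3][0]*v.x + t.m[3][1]*v.y + t.m[3][2]*v.z + t.m[3][3]*v.w;
    return r;
}

int main(void) {
    /* Translate by (10, 0, 0): the W component is what makes this possible. */
    matrix4 move = {{{1,0,0,10}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}}};
    vertex  v    = {1, 2, 3, 1};
    vertex  out  = transform(move, v);
    printf("(%g, %g, %g, %g)\n", out.x, out.y, out.z, out.w);   /* (11, 2, 3, 1) */
    return 0;
}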

To put this in context, some of you may remember 3D computer gaming on low end vs. high end computers between 1995 and 2000. Although graphics accelerators were out, some of them didn't have “Hardware T&L” (transform and lighting). If you recall games that had the option to turn this on or off (assuming you had it in hardware), you could see the huge speed difference between doing it in hardware and not. The software version still looked worse after developers generally tried to hide the speed difference by using less accurate algorithms/models. It is this type of workload that the Cell is actually equipped to do relatively well, and that traditional scalar CPUs would still perform vastly worse at.

It is worthwhile to note that “hardware” in the case of 3D graphics generally refers to things done on the GPU, and “software” just means it is running on the CPU – even though they are both pieces of hardware executing the commands in the end. Software just refers to the part that is controlled by the software the programmer writes.

There are image filter algorithms in applications like Adobe Photoshop which are better executed by vector processors too. Many simulations that run on supercomputers are also better suited to the SPEs (toned down to an accuracy appropriate for gaming). Some of these simulations include cloth simulation, terrain generation, physics, and particle effects.

SPE Design Goals – no cache, such small memory, branch prediction?
The SPEs don't have a cache in the traditional sense of it being under hardware control. Each SPE uses 256KB of on-chip, software controlled SRAM. It reeks of the acronym “RAM” but offers latency similar to that of a cache, and in fact some caches are implemented using the exact same hardware – for all practical purposes, this memory is a manually controlled cache.

Having this memory under software control places the work on the compiler tools, or on programmers, to control the flow of memory in and out of the local store. For games programming, this is actually generally the better approach if performance is a high priority. Traditional caches have the downside of non-deterministic access times. If a program tries to access memory that is found in the cache (a cache hit), the latency is only around 5-20 cycles and not much time is lost. If the memory is not found in the cache (a cache miss), the latency is in the hundreds of cycles. This variance in performance is very undesirable in games, as steady frame rates are much more visually pleasing than variable ones.

IBM is placing importance on compiler technology to manage the local storage well, unless the application wishes to take explicit control of this memory itself (which higher end games will probably end up doing). If it is accomplished by compilers, then to a programmer that local storage behaves like a cache anyway, since they don't have to do anything to manage it.

The local storage is the location for both code and data for an SPE. This does make the size seem extremely limited, but rest assured that code size is generally small, especially with SIMD architectures where the data size is going to be much larger. Additionally, the SPEs are all connected to the other elements at extremely high speeds through the EIB, so the idea is that even though the memory is small, data will be updated very quickly and flow in and out of it. To better handle that, the SPE can also dual-issue instructions to an execution pipe and to a load/store pipe. Basically, this means the SPE can simultaneously perform computations on data while loading new data and moving out processed data.
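
One common way to exploit that overlap is double buffering: process one chunk in the local store while the next chunk is already streaming in. Here is a rough, self-contained sketch of the pattern; dma_get and dma_put are hypothetical stand-ins (implemented here with plain memcpy) for whatever asynchronous transfer mechanism actually moves data between main memory and the local store:

Code:

#include <stdio.h>
#include <string.h>

#define CHUNK  4
#define CHUNKS 3

/* Pretend "main memory" living outside the SPE's local store. */
static float main_mem[CHUNKS][CHUNK] = {
    {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}
};

/* Stand-ins for asynchronous DMA: here they just copy immediately. */
static void dma_get(float *ls, int idx)       { memcpy(ls, main_mem[idx], sizeof(float) * CHUNK); }
static void dma_put(const float *ls, int idx) { memcpy(main_mem[idx], ls, sizeof(float) * CHUNK); }

/* The computation done inside local store: double every element. */
static void process(float *buf) { for (int i = 0; i < CHUNK; i++) buf[i] *= 2.0f; }

int main(void) {
    static float ls[2][CHUNK];        /* two local-store buffers */
    int cur = 0;

    dma_get(ls[cur], 0);              /* prime buffer 0 */
    for (int i = 0; i < CHUNKS; i++) {
        if (i + 1 < CHUNKS)
            dma_get(ls[cur ^ 1], i + 1);  /* start fetching the next chunk */
        process(ls[cur]);             /* compute while the "DMA" proceeds  */
        dma_put(ls[cur], i);          /* stream results back out           */
        cur ^= 1;                     /* flip to the other buffer          */
    }
    printf("first element after processing: %g\n", main_mem[0][0]);
    return 0;
}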

The SPEs have no branch prediction hardware except for a branch-target buffer, coupled with numerous branch hint instructions to avoid the penalties of branching through software controlled mechanisms. Just to be clear right here – this information comes from the Cell BE Programming Handbook itself and thus overrides the numerous sources that have generally said “SPEs have no branch prediction hardware.” It's there, but it is very limited and is controlled by software rather than hardware, similar to how the local storage is controlled by software and is thus not called a “cache” in the traditional sense.
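
Another way software copes with weak branch hardware is to get rid of branches altogether. Below is a small, generic sketch of a branchless select in C: a compare produces a mask, and the mask blends two candidate values, which is exactly the shape that SIMD compare-and-select instructions take. This is a general technique, not code from the Cell handbook:

Code:

#include <stdio.h>
#include <stdint.h>

/* Clamp every element to a ceiling without any per-element branch.
   The compare produces an all-ones or all-zeros mask, and the mask
   blends the two candidate values -- the same thing SIMD select does. */
static void clamp_branchless(int32_t *v, int n, int32_t ceiling) {
    for (int i = 0; i < n; i++) {
        int32_t mask = -(int32_t)(v[i] > ceiling);   /* 0xFFFFFFFF or 0 */
        v[i] = (ceiling & mask) | (v[i] & ~mask);    /* pick one value  */
    }
}

int main(void) {
    int32_t data[4] = {3, 42, 7, 99};
    clamp_branchless(data, 4, 10);
    printf("%d %d %d %d\n", data[0], data[1], data[2], data[3]);  /* 3 10 7 10 */
    return 0;
}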

How the Cell “Works”:
This could get very detailed if I really wanted to explain every little thing about the inner workings of the Cell. In the interest of time, I will only mention some of the key aspects so you may get a better understanding of what is and isn’t possible on the Cell.

There are 11 major elements connected to the EIB in the Cell: 1 PPE, 8 SPEs, 1 FlexIO controller, and 1 memory controller. In the setup for the Playstation 3, one SPE is disabled, so there are only 10 operational elements on it. When any of these elements needs to send data or commands to another element on the bus, it sends a request to an arbiter that manages the EIB. The arbiter decides what ring to put the data on, and when, to efficiently distribute resources and avoid contention. With the exception of the memory controller (connected to RAM), any of the elements on the EIB can make requests to read or write data from other elements on the EIB. IBM has actually filed quite a number of patents on how the EIB alone works to make the most efficient use of its bandwidth. The bandwidth allocation scheme gets very detailed; in general, I/O requests are handled with the highest priority.

Each processing element on the Cell has its own memory flow controller. For the PPE, this is transparent since it is the general purpose processor. A load/store instruction executed on the PPE will go through the L2 cache and ultimately make changes to main system memory without further intervention. Under the hood though, the PPE's memory flow controller sets up a request to the arbiter of the EIB to send its data to the controller for system memory. This is transparent to the load/store instruction on the PPE, so RAM acts as its main memory. The SPEs are under a different mode of operation. To an SPE, a load/store instruction works on its local storage. The SPE has its own memory flow controller to access system RAM just like the PPE, but it is under software control. This means that programs written for the SPE have to set up requests manually to read or write the system memory that the PPE primarily uses. The same messaging can also be used to send data or commands to another element on the EIB.

This is important to remember because it means that all of the elements on the EIB have equal access to any of the hardware connected to the Cell in the Playstation 3. Rendering commands could come from the PPE or an SPE, seeing as they both ultimately have to send commands and/or data to the I/O controller, which is where the RSX is connected. By the same idea, if any I/O device connected through FlexIO has a need to read or write system memory, it can also send messages directly to the XDR memory controller, or send a signal to the PPE or an SPE instead.

The communication system between elements on the Cell processor is highly advanced and carefully planned out, and probably constitutes a huge portion, if not most, of the research budget for the Cell processor. It allows for extreme performance and flexibility for whoever develops any kind of software for the Cell. There are several new patents IBM has submitted that relate just to transfers over the EIB and how they are set up. After all, as execution gets faster and faster, the general problem is keeping memory up to speed.

Note: This section is extremely scaled down and simplified. It is to the point where, if you read the Cell BE Handbook, you could say I'm wrong in many places if I implied or suggested that only one method of communication is possible, or if you use my literal word choice against theirs. If you are wondering how something would or should be accomplished on the Cell, you'd have to dive deeper into the problem to figure out which method is the best to use. The messaging system between elements on the EIB is extremely complex and detailed in nature and just can't be explained in a compact form.

Multithreading?
Threading is simply a word used to describe a sequence of execution. Technically, a single core CPU can handle an arbitrary number of threads; the issue is that performance drops at a certain point depending on what the individual tasks are doing. The PPE runs two hardware threads on the same core. This makes communication between these two threads easy since they are using the exact same memory resources. Sharing data between these threads is only a matter of using the same variables in code and keeping the threads synchronized – much of which has been done and thoroughly studied.
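
For the PPE's two threads, that really is the whole story: touch the same variables and lock around them. A minimal POSIX-threads sketch of the idea (just an analogy for two hardware threads sharing memory, not PS3 system code):

Code:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Both threads see the exact same memory; only synchronization is needed. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                      /* shared variable, no copying */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 */
    return 0;
}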

On the other hand, the SPEs are more isolated execution cores whose primary memory is their local store. Sharing data between SPEs and the PPE means putting data on the EIB, which means that one of the messaging methods has to be used to get it there. There are various options for this depending on what needs to be transferred and how both ends are using the data. Needless to say, synchronization between code running on SPEs and the PPE is a harder problem. It is better to think of the code running on separate SPEs as separate programs rather than threads, to scale the synchronization and communication issues appropriately.

That being said, it isn't a problem that hasn't been seen before, as it is pretty much the same as inter-process communication between programs running on an operating system. Each application individually thinks it has exclusive access to the hardware. If it becomes aware of other programs running, it has to consider how to send and receive data from the other application too. The only added considerations on the Cell are the hardware implementation details of the various transfers, to maximize performance even if more than one method works.
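
A tiny sketch of what that inter-process-style exchange looks like: data crosses between the two sides only by being copied through an explicit “mailbox,” never by sharing a variable. This is purely conceptual and is not the Cell's actual mailbox or DMA interface:

Code:

#include <stdio.h>
#include <string.h>

/* A toy "mailbox": the only way data crosses between the two sides is by
   being copied through it, mimicking how isolated SPE programs exchange
   data over the EIB rather than sharing variables. */
typedef struct {
    int   full;            /* 1 when a message is waiting     */
    int   tag;             /* what kind of work/data this is  */
    float payload[4];      /* the data itself travels by value */
} mailbox;

static void send_msg(mailbox *mb, int tag, const float *data) {
    while (mb->full) { /* a real system would block or poll here */ }
    mb->tag = tag;
    memcpy(mb->payload, data, sizeof mb->payload);
    mb->full = 1;
}

static void recv_msg(mailbox *mb, int *tag, float *data) {
    while (!mb->full) { /* wait for something to arrive */ }
    *tag = mb->tag;
    memcpy(data, mb->payload, sizeof mb->payload);
    mb->full = 0;
}

int main(void) {
    mailbox mb = {0};
    float out[4] = {1, 2, 3, 4}, in[4];
    int tag;

    send_msg(&mb, 7, out);      /* "PPE" posts work           */
    recv_msg(&mb, &tag, in);    /* "SPE" picks it up, by copy */
    printf("tag %d, first value %g\n", tag, in[0]);
    return 0;
}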

Programming Paradigms/Approaches for the Cell:
Honestly, the most important thing to mention here is that the Cell is not bound to any paradigm. Any developer should assess what the Cell hardware offers and find a paradigm that will either execute fastest, or sacrifice speed for ease of development and settle on a solution that's just easy to implement. That being said, here are some common paradigms that come up in various sources:

PPE task management, SPEs task handling:
This seems the most logical to many, due to the SPEs being the computational powerhouses inside the Cell while the PPE is the general purpose core. The keyword is computational, which should indicate that the SPEs are good for computing tasks, but not all tasks. Tasks of a general purpose nature perform better on the PPE since it has a cache and branch prediction hardware – making coding for it much easier without having to manage those issues by hand. Limiting the PPE to dictating tasks is stupid if the entire task is general purpose in nature. If the PPE can handle it alone, it should do so and not spend time handing off tasks to other elements. However, if the PPE is overloaded with general purpose tasks, or needs certain computations which the SPEs are better suited for, it should hand them off to an SPE, as the gain in doing so is worthwhile compared to being bogged down running multiple jobs that could be divided up more efficiently.

Having the PPE fill a task manager role may also mean that all SPEs report or send their data back to the PPE. This has a negative impact on achievable bandwidth, as the EIB doesn't perform as well when massive amounts of data are all going to a single destination element inside the Cell. This might not happen if the tasks the elements are running talk to other elements, including external hardware devices, main memory, or other SPEs.

SPE Chaining:
This solution basically uses the SPEs in sequence to accomplish the steps of a task such as decoding audio/video. Basically, an SPE sucks in data continuously, processes it continuously, and spits it out to the next SPE continuously. The chain can utilize any number of SPEs available and necessary to complete the task. This setup is considered largely because the EIB on the Cell can support massive bandwidth, and because the SPEs can be treated as an array of processors.

This setup doesn't make sense for everything, as dependencies may require that data revisit certain stages more than once instead of simply passing through once and being done. Sometimes, due to dependencies, a certain amount of data has to be received before processing can actually be completed. Lastly, various elements may not produce output that a strict “next” element needs: some of it may be needed by one element, and more by another.
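
For what it's worth, here is a toy sketch of the chaining idea with each stage standing in for one SPE. In this simplified version the stages are just functions called in sequence; on the real hardware each stage would run on its own SPE and every buffer hand-off would be a streaming transfer over the EIB. The stage names are invented for illustration:

Code:

#include <stdio.h>

#define N 8

/* Stage 1: "decode" -- scale the raw input. */
static void stage_decode(const float *in, float *out) {
    for (int i = 0; i < N; i++) out[i] = in[i] * 0.5f;
}

/* Stage 2: "filter" -- smooth neighbouring samples. */
static void stage_filter(const float *in, float *out) {
    for (int i = 0; i < N; i++) {
        float prev = (i > 0) ? in[i - 1] : in[i];
        out[i] = (prev + in[i]) * 0.5f;
    }
}

/* Stage 3: "output" -- hand the finished chunk onward. */
static void stage_output(const float *in) {
    for (int i = 0; i < N; i++) printf("%g ", in[i]);
    printf("\n");
}

int main(void) {
    float raw[N] = {2, 4, 6, 8, 10, 12, 14, 16};
    float a[N], b[N];

    /* Each buffer hand-off below is where an EIB transfer would sit. */
    stage_decode(raw, a);
    stage_filter(a, b);
    stage_output(b);
    return 0;
}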

CPU cooking up a storm before throwing it over the wall:
This honestly was an approach I thought about independently early into my research on the details of the Cell processor. It's not really a paradigm, but rather an approach/thought process. Even the Warhawk designer/producer mentioned an approach like this. The Cell is a really powerful chip and can do a lot of computational work very fast inside the processor. The problem is that bandwidth to components outside of the chip brings in communication overheads and bottlenecks as well. It seems like a less optimal use of computing resources for the PPE to write output to memory and have the SPEs pick up work from there, when the PPE can directly send data to the SPEs, removing the bottleneck of them all sharing the 25.6GB/s bandwidth to system memory. It appears to make the most sense to let the Cell load and process the game objects as much as possible before handing them off to the RSX or writing back to memory.

This approach does make sense, but it is by no means a restriction if a game has serious uses and demands for a tight relationship between the Cell and the RSX or other off-chip elements throughout the game loop.

Where does the operating system go?
Some sources propose that an operational SPE will be reserved by Sony for the operating system while games are running. As far as I have researched, I have found nothing official to support this being the case with the PS3, other than Ken Kutaragi saying an OS could run on an SPE, and IBM's papers suggesting various Cell operating system configurations.

The specific configuration of running an OS (kernel only) on an SPE makes sense from a security perspective. I will not explain it in this post, but the Cell does have a security architecture which can enable an SPE to be secured through hardware mechanisms. Given this ability, if Sony wanted an easy method to protect its operating system from games and homebrew, then they would probably resort to running a kernel with light OS features on an SPE.

Otherwise, the short answer is that the OS could run as a tiny thread on the PPE, or on an SPE. Sony will do what has the least impact on gaming and still delivers on the functional requirements of the OS.

The RSX Graphics Chip:
The RSX specs are largely undefined and unknown at this point, and I will refrain from analyzing the clearly unknown aspects too deeply. The only information available has been around since E3 2005 and is likely to have changed since then. Various statements have been made since that point comparing the RSX to other graphics chips nVidia has made. Some press sources have used these statements to analyze the RSX as if they actually knew what it was, or in a speculative manner, but readers should not forget that they simply do not know for sure. I have read a lot of those sources, and I am throwing out specific execution speed numbers and focusing on the more likely final aspects of the RSX specs.

The only things that can be said with a pretty high degree of certainty are that the RSX will have 256MB of GDDR3 video memory, access to the Cell's 256MB of XDR memory, and a fixed function shader pipeline – meaning dedicated vertex shader pipelines and dedicated pixel shader pipelines, as opposed to the unified shader architecture that the Xenos in the Xbox360 has. The RSX will also be connected to the Cell through the FlexIO interface.

Due to the nature of the SPEs on the Cell, there is quite an overlap in function with the vertex processors on the RSX. It would be up to the programmer to decide where to accomplish those tasks depending on the flexibility they need and what resources they have available. The Cell could also handle some post-processing (pixel) effects if the bandwidth is there and each pass through the RSX is relatively quick to process, but this will most likely not happen, since pixel shading occurs late in the rendering pipeline and the data would have to be taken out of the pipeline only to be put back in again.
__________________
http://gthotspot.blogspot.com/
Old 08-03-2006, 10:44 PM   #4
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

What can the PS3 do for gaming? What can’t it do?
I challenge you to answer that question mostly by yourself. Mostly, but here is my view on it:

To me, it seems as if the Cell is a simulation monster. “Supercomputer on a chip” isn't entirely far from the truth. If developers fall into a computational mindset for accomplishing tasks on the Playstation 3, the Cell's advantage with SIMD and parallelism will be utilized, and it could bring some truly impressive physics, graphics, and sound to the table. These things will not be done to the level of accuracy of supercomputers, since those are usually fully dedicated to one such task at a time, but the accuracy would be reduced to a level realistic enough for game play visuals or mechanics. Basic/static AI routines are probably better done on general purpose processors, but I can see certain routines being developed with a computational approach in mind. I wouldn't expect any “oh sh*z” moments from Playstation 3 game AI anytime soon though, unless major advancements are made in the field entirely.

It is sad to say, but most game play elements aren't technically deep and are a breeze to run on processors. Consider that fun games with many varying elements of game play have been available since the 16-bit era or earlier, and have only expanded due to features related to external devices like networking and controllers. Don't expect games to necessarily be “more fun” in the game play aspect just because the hardware is more powerful.

Powerful is also by no means a linear term for CPUs. There are different dimensions of power, and for general purpose code, Intel and AMD processors are still considerably more powerful on the general purpose axis. Comparisons that propose the Cell may be able to outperform those processors are generally considering the places where the Cell would pick up slack that a general purpose processor would lag in. General purpose processing is somewhat of a unified axis of everything that has to be done, and anything the Cell does better technically does raise it on that axis too. Additionally, claims about the Cell being used for general purpose execution probably also assume that the developer will put substantial effort into getting general purpose code up to speed on the SPEs – meaning they'll be in control of the cache, they'll have to manage shared memory, and all that other good stuff no application developer would want to do. Unless tools make general purpose programming for the SPEs acceptable, don't expect the Cell to really step in and take some kind of lead in general purpose computing.

What can and can’t be upgraded?
Honestly, many people do not realize how close the lower price point comes to the more expensive version of the Playstation 3. In short, HDMI is the only real functionality that you'd completely miss if you didn't get the $600 version. There is reasonable assurance that for the next 4-5 years, ICT won't be turned on, which would allow 1080p signals through component video, which the $500 version supports. As for the other options the $500 version lacks:

USB Compact Flash/SD/Memory Stick Pro Duo readers are sold in computer stores and online shops like newegg.com. They cost anywhere from 10-50 bucks depending on how many formats you want to be able to read. Will the Playstation 3 work with them? There's a very high chance the PS3 implements standard USB protocols that will allow USB hubs/devices to be connected transparently. The difference is that the memory card device wouldn't be distinguishable from the PS3's viewpoint if it were connected through the USB interface as opposed to the pre-installed readers – i.e. it wouldn't know if it was an SD card, Memory Stick Pro Duo, or Compact Flash drive. It would just see “USB Storage Device.”

WiFi can be made up for easily by buying a wireless router with Ethernet ports on it. Simply connect the PS3 to the Ethernet, and any other devices on the router's wireless network can be talked to. This would not be transparent to the Playstation 3, because the more expensive version has two separate network interfaces instead of one. If a feature were implemented that only looks for wireless devices through the wireless network interface built into the $600 Playstation 3, the $500 model would never find them and would never check whether the same devices exist on the network its Ethernet card is connected to. If, however, a feature were implemented such that it attempted to communicate with devices through any available network, it would find the Ethernet NIC on the $500 PS3 and attempt to search for devices – wireless or not – through that interface. It's kind of up in the air whether Sony's developers will be smart enough to realize this. Sony has also said that purchasing a wireless network interface would allow the PS3 to perform wireless communication too. Supporting this would require more work on Sony's end, as they would have to implement drivers for USB network cards.

Are developers in for a nightmare?
I would LOL if someone seriously asked me this, but it is a reasonable question that I’m sure people have on their minds.

Some developers would piss their pants upon looking at the Cell and realizing what they have to do to get it to accomplish certain things. The amount of mathematical, scientific, and computer science talent and knowledge needed to tackle the whole setup of the Playstation 3 is astounding. While there are many things the Cell naturally excels at, some of these problem sets aren't as obvious, and they require a deeper understanding of the base problem area – which may be sound, physics, graphics, or AI – just to understand the many possible ways of solving the problem. Then, in addition to understanding the problem better, the developer must figure out the most efficient way to implement it on the Playstation 3 and have the skills to actually write it in code. This is a very high bar for a games programmer.

Other developers wouldn't piss their pants, but would be confused about what SIMD actually means for them. They might be too stuck in their old ways to see how SIMD processors can drastically increase game performance, and only consider the general purpose abilities of the SPEs, scoffing at them for not having a cache. If that is how they look at this type of computing power, they would think the PS3 is a piece of crap to program for and would clearly measure the Xbox360 to be superior or closely matched, with its 3 cores and much easier to use development tools.

Undoubtedly, there are developers who don't already have the knowledge to implement efficient SIMD solutions to games processing problems. Thankfully, the nature of the Playstation 2's Emotion Engine has already exposed developers to SIMD processing, as the VU0 and VU1 units were vector processors which developers had to make good use of to push the system to its limits – and they did. Unlike the days of the Playstation 2, they now have an extreme amount of SIMD processing power coming out of the SPEs, so there is far more developers can do on the CPU. They could actually render 3D worlds entirely in real time on the Cell alone if they wanted to ignore the RSX. That being said, they wouldn't do this, because they wouldn't be able to show much else for the game, and it would waste an entire component in the PS3.

Look for the development studios that pushed the PS2 to its limits to do something similar with the Playstation 3. Multiplatform titles are probably not going to do much justice to the Playstation 3's hardware, as the transition between SIMD and non-SIMD processing presents a huge performance gap, and developers don't want to alienate either end of the spectrum.

The important thing with technology advancements is that certain steps are taken at the right time. Ten years ago, a processor like the Cell would have fallen flat on its face due to complexity and an industry that couldn't support games with budgets as high as they are today. But it isn't ten years ago, and game budgets are high. Some of those budgets still aren't enough to support the Playstation 3; others are. As time goes on and the knowledge becomes more widespread, developing for the Playstation 3 will become cheaper as more people have experience working with it.

The Future for Playstation 3:
Playstation 3 is a console built with the future in mind. Playstation 1 had a very long life, and Playstation 2 is still going strong. Considering the length of time people will continue to play these consoles, it is important that they are not outdone by future advancements. The best a console can do is look to the horizon to see what’s coming, and that is what Sony is doing with the Playstation 3.

Blu-ray may not win as the next generation movie format. If it does, that's all the more reason to buy one. If not, the vast space is still there for games, should something come up that motivates increasing the size of games.

HDMI is future proof in the sense that even though the Image Constraint Token (ICT) may not be used until 2010, if it ever comes into play the $600 Playstation 3 can still support it. The fact that it will support the newest HDMI 1.3 spec, which TVs don't even support yet, also shows that once these things become mainstream, the Playstation 3 will be right there to utilize them.

Gigabit Ethernet may not be commonplace today, but in 2-5 years, I'm positive that gigabit Ethernet home routers (really just switches running IP NAT) will be down to the price of 100Mbps routers today. Although the internet will not be moving that fast because of ISP limitations, at least your internal network will be, and LAN features could take advantage of this bandwidth for something like HD video streaming.

Playstation 3 and Xbox360 – Comparing and Contrasting:
Before I compare and contrast with the Xbox360 hardware, here are some quick facts about the Xbox360 hardware:
Xbox360 Quick Hardware Summary:
The Xbox360 has a tri-core symmetrical CPU. Each of the cores is based on the POWER architecture like the PPE inside the Cell, and is clocked at 3.2GHz. Each core has 32KB of L1 instruction cache and 32KB of L1 data cache, and the three cores share a 1MB L2 cache. Each core also sports an enhanced version of the VMX-128 instruction set and execution units. This enhanced version expands the register file from 32 128-bit registers to a pair of 128 128-bit register files – with one execution unit per core. Each of these cores can also dual-issue instructions and handles two hardware threads, bringing the Xbox360 total to 6 hardware threads. The CPU and GPU share 512MB of GDDR3 RAM. The Xbox360's GPU, codenamed “Xenos,” is designed by ATI and sports 48 shader pipelines using a unified shader architecture. The Xbox360 GPU also has 10MB of eDRAM for the frame buffer, with over 200GB/s of bandwidth between this eDRAM and a simple logic unit, for a limited set of 3D processing effects such as anti-aliasing and z-buffering.

The system sports a DVD9 optical drive from which games are loaded, a controller with rumble features, and 100Mbps Ethernet.

Head To Head:
General Architecture Differences:
One thing I think is important when looking at CPU architecture is visuals. In the world of computing, physical distance between parts of a computer system generally corresponds with the speed (latency-wise) of their communication. A diagram also shows the flow of memory, outlining where bottlenecks might exist for certain components accessing large amounts of data from specific areas of memory.

IMAGES UPLOADED BY ME TO IMAGESHACK. CLICK TO ENLARGE!

Here are two diagrams of the major components on the Xbox360 motherboard:



Here are two diagrams of the Xenon CPU:


Comparably it is harder to find verbose diagrams of PS3 hardware but here is one I found on AnandTech:

This diagram has a likely discrepancy in showing the southbridge (I/O) connected through the RSX. It is more likely the southbridge connects to the Cell directly via FlexIO, given the large bandwidth available through that interface and the GPU not being a recipient of I/O.

There are plenty of other Cell diagrams on the internet and here are two of them:


Bandwidth Assessment:
I recall an article IGN released during or shortly after E3 2005 comparing the Playstation 3 and Xbox360. Microsoft analyzed their total system bandwidth in the Xbox360 and came up with some outrageous numbers compared to the Playstation 3. One of the big reasons for this total number being higher is the 256GB/s bandwidth between the daughter die and parent die in the Xenos (graphics chip). I will explain the use of the eDRAM memory later, but it is important to know that the logic performed between those two components with 256GB/s of bandwidth hardly constitutes a system component where game processing takes place. Additionally, Microsoft added up bandwidths that weren't relevant to major component destinations such as “to CPU” or “to GPU.” Context like that matters a lot, because bandwidth between any two elements is only as fast as the slowest memory bus in between. The only bandwidth figures that make sense to add together are those on separate buses to the end destination.

The biggest ugly (and this really is a big one) in the Xbox360 diagram is the location of the CPU relative to main system memory: it has to be accessed through the GPU's memory controller. The Xbox360 GPU has 22.4GB/s of bandwidth to the system's unified memory, and this bandwidth is split between the GPU's needs and the CPU's. A simple investigation shows that if the Xenon (Xbox360 CPU) were using its full 21.6GB/s bandwidth to system memory, there would be 800MB/s left for the GPU. If the GPU were using its full bandwidth to this memory, none would be left for anything else. Additionally, the southbridge (I/O devices) is connected through the GPU as well, and all of that traffic is actually destined for the CPU unless sound for the Xbox360 is done on the Xenos. The impact of this is considerably less, since I/O devices probably won't exceed more than a few hundred MB/s during a game, and that traffic isn't counted against the GPU's 22.4GB/s access to main memory – but it is still going through the same bus that the CPU uses to access RAM.

Looking at the diagram of the Playstation 3, you can see that the RSX has a dedicated 22.4GB/s to its video memory, and the Cell has a dedicated 25.6GB/s to its main memory. Additionally, if you wanted to find the bandwidth the RSX could use from the Cell's main memory, it would go through the 35GB/s link between the Cell and itself, then through the Cell's FlexIO controller, onto the EIB, and to the Cell's memory controller, which is the gatekeeper to RAM. The slowest link in the chain is the bandwidth the XDR memory controller provides, which is 25.6GB/s. If the RSX uses this extra bandwidth, it is being shared with the Cell. In general though, the major components in the Playstation 3 have their own memory to work with, which provides maximum bandwidth.

In terms of peak performance, if both the GPU and CPU of each console were pulling the maximum bandwidth from their respective memory banks, the total for the Xbox360 would be 22.4GB/s, and the total for the Playstation 3 would be 48GB/s. I believe this to be the most important bandwidth measure, as both of these elements are the major programmable elements of a gaming machine. They will be processing game data or graphics data independently, and need fast access and high bandwidth to what they are working on.

While the Xbox360's shared bandwidth is a big downside in the grand scheme of things considering potential, Microsoft probably allowed it because a game loop often doesn't involve both the CPU and GPU needing high bandwidth simultaneously. Overall, during a game loop, the Xbox360 will probably use its 22.4GB/s of bandwidth almost constantly, with the CPU using it heavily for one part of the game loop and the GPU using extreme bandwidth during another part. A Playstation 3 game using a typical game loop design would show the CPU using high bandwidth to its memory for half of the frame time and leaving it mostly unused for the other half, and the same for the GPU's use of video RAM. That isn't a disadvantage on the Playstation 3's part, but it is a failure to use its full potential. A modified game loop that kept both rendering and CPU processing high would fare far better on the Playstation 3's bandwidth and design than on the Xbox360's.

In the worst case scenario for the Playstation 3, if the GPU literally only used bandwidth for half of the game loop, over time you could consider its bandwidth to be half of its peak. The same thing applied to the Cell and XDR RAM would yield 12.8GB/s of bandwidth if it only used XDR half of the time. Although the Playstation 3 is not to be outdone – if a game loop really looks like this, the RSX might as well take the XDR RAM bandwidth while the CPU is idling and increase its total bandwidth to 48GB/s.
__________________
http://gthotspot.blogspot.com/
Old 08-03-2006, 10:44 PM   #5
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

Xbox360 “Xenon” compared to Playstation 3’s “Cell” – the CPUs:
Inter-core communication speed:
Another mystery with the Xbox360 (at least in my view) is the inter-core communication on the Xenon CPU between its cores. IBM clearly documents the Cell's inter-core communication mechanism, physically and in terms of how it is implemented in hardware and software. This bandwidth needs to be extremely high if separate cores are to communicate and share data effectively. The EIB on the Cell is documented at a peak performance of 204GB/s, with an observed rate of 197GB/s. The major factors that affect this rate are the direction, source, and destination of data flow between the SPEs and the PPE on the Cell. I tried to find the equivalent piece of hardware inside the Xenon CPU and haven't found a direct answer.

Looking at the second architectural diagram of the Xenon, it seems that the fastest method the cores can use to talk to each other is through the L2 cache. Granted, the Xenon only has 3 cores, but game modules are usually highly interdependent and will need to talk to each other frequently. I might be jumping the gun a bit, but given that the L2 cache and FSB run at half of the core speed, as opposed to the Playstation 3's EIB which runs at the same clock speed as the cores, I'm pretty sure using the L2 cache to communicate is not going to be very fast. It seems that independent threads are really what Microsoft was aiming for with the Xbox360 CPU design, and games are not optimally implemented on it if they have massive streaming transfers to hand off to other cores. What would suggest that the Xbox360 cores can communicate quickly and with high bandwidth would be evidence that the reads and writes to the L2 cache are in larger segments than the writes to the EIB, compensating for the lower clock speed. Additionally, just writing to memory isn't enough, as the receiver needs some sort of notification that it has new data unless it is a permanent buffer. If anyone wants to do research on the topic, please add it to the discussion and include links to your sources.

Enhanced VMX-128 instruction set:
This is one of the features Microsoft boasts about to claim they have a better gaming machine than Sony. They focus on the fact that their enhancements support a single cycle dot product instruction, and on the larger register file. The problem with this boast is that it compares against the PPE's VMX unit, which only has 1 set of 32 128-bit registers and presumably fewer instructions. If the code requires 128 128-bit registers, or more complex instructions, then the code is most definitely vector processing heavy and should be run on an SPE, which sports the exact same register file size and includes a superset of the VMX instructions in terms of functionality (it is not a superset in terms of being binary compatible).

While each core in the Xbox360 has two VMX-128 register sets, this is done to better support the dual-threaded nature of the cores; it doesn't actually have two vector execution units. Each core only has one VMX-128 execution unit, meaning that even though there are two sets of registers per core, two threads that are both using vector code have to share this single execution unit.

Comparably, the Cell's PPE has the limited file of 32 128-bit registers with a single VMX vector unit. This is what Microsoft usually singles out when they compare the Playstation 3 to the Xbox360's CPU. They forget (purposefully) that the Cell has 7 SPEs running at 3.2GHz, which is far greater SIMD performance than their 3 enhanced VMX-128 execution units. For vector based computations, the Playstation 3 undeniably outdoes the Xbox360 by an order of magnitude.

The dot product instruction claim is matched, at least on the SPEs, through a simple multiply-add. For those of you who aren't mathematically inclined, a dot product is basically a measure of how parallel or perpendicular two vectors are. Calculating a dot product means multiplying each pair of corresponding components together, and then adding all of those products up. Take two vectors <2, 3, 4> and <6, 7, 8>. The dot product would be: 2*6 + 3*7 + 4*8 = 65. If you read the earlier section in this post covering the SPEs and SIMD architectures, you should remember that, at the very least, an SPE can do all of the multiplying in one cycle, and all that needs to be done is a follow-up add between the elements of the result vector. I do know that the SPEs have a few multiply-add instructions, but the bit of haziness is whether the multiply can be an element-wise operation between two separate vectors while the add then works horizontally between the elements of the resulting vector. Sony claims that the dot product can be done in one cycle on an SPE, and it is very reasonable that this is the case, as there are vector permute/shuffle/shift instructions in the SPE instruction set. There just isn't an instruction labeled “dot product” in the SPE instruction set – but an intelligent programmer should find what he needs.

I found the multiply-add instruction in the Cell BE Handbook. It takes four registers: one is definitely the result, two are operands, and the third parameter, named 'rc', I think represents a control register that dictates how to perform operations between and within vectors. That would mean the multiply-add instruction operates on only two input vectors, with the control register able to dictate an add between the result components of the multiply.
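
To make the two-step idea concrete, here is the dot product from above written as an element-wise multiply followed by a horizontal add, in plain C. The multiply loop is the part a SIMD unit would do across the whole register at once; summing the products is the within-vector step being discussed:

Code:

#include <stdio.h>

/* Dot product of <2,3,4> and <6,7,8>, padded to 4-wide vectors. */
int main(void) {
    float a[4] = {2, 3, 4, 0};
    float b[4] = {6, 7, 8, 0};
    float prod[4];

    /* Step 1: element-wise multiply. A SIMD unit does all four at once. */
    for (int i = 0; i < 4; i++) prod[i] = a[i] * b[i];

    /* Step 2: horizontal add across the result vector. */
    float dot = prod[0] + prod[1] + prod[2] + prod[3];

    printf("dot = %g\n", dot);   /* 2*6 + 3*7 + 4*8 = 65 */
    return 0;
}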

Symmetrical Cores?:
Symmetrical cores means identical cores. The appeal of this setup is entirely for developers. It represents no actual horsepower advantage over asymmetric cores, since code running on any of the cores will run exactly the same as it would if it were on another core. Relocating code to different cores has absolutely no performance gain or loss unless it means something with respect to how the 3 cores talk to each other. It should be noted, though, that thread relocation does matter between the cores, as a thread might not co-exist well with another thread that is trying to use hardware that isn't duplicated on the core. In that case, the thread would be better located on a core where that execution resource is free or less used. The only case of this I can think of is the VMX-128 execution unit. I think most other hardware is duplicated on the cores in the 360 to allow two threads to co-exist with almost no problem.

The Cell chip has asymmetrical cores, which means they are not all identical. That being said, the SPEs are all symmetrical with each other, and code that runs on one SPE can be relocated to any other SPE in the Cell. While the execution speed local to each SPE is the same, there are performance issues related to the bandwidth the SPE is using and who it's talking to on the EIB. Developers should look at where their SPE code is executing to ensure optimal bandwidth is being achieved on the EIB, but once they find an optimal location to execute the code, they can just put it there without rewriting anything. If a task was running on the PPE or the PPE's VMX unit, then it would have to be recompiled, and probably rewritten if hardware-specific instructions are in the code (C intrinsics or ASM), before it moves to an SPE, and the same applies in reverse. Good design and architecture should immediately let developers know what should run on the PPE and what should run on the SPEs, eliminating the chance of rewriting code if they see something better fit to run on an SPE later in development.

Is general purpose needed?:
Another one of Microsoft's claims for the Xbox360's superiority in gaming is a general purpose processing advantage, since they have 3 general purpose cores instead of 1.

To say “most of the code is general purpose” probably refers to code size, not execution time. First, it should be clarified that “general purpose code” is only a label for the garden variety of instructions that may be given to hardware. On the hardware end, this code falls into various classifications such as arithmetic, load/store, SIMD, floating point, and possibly more. General purpose applications are programs made up of general purpose code on the scale that one function might be arithmetically heavy and another might be memory bound. Good examples of this are MS Word, a web browser, or an entire operating system. With MS Word there is a lot of string processing, which involves some arithmetic, comparison, a lot of branching, and memory operations. When you click import or export and save to various file formats, it is an I/O heavy operation. Applications like these tend not to execute the same code over and over, and have many different functions that can operate on a relatively small set of data depending on what the user does. These functions can vary from being very I/O bound (saving to disk), to string processing intensive (spelling/grammar check), to floating point intensive (an embedded Flash mini-game or resizing an image). Ultimately, there is a large amount of code written to handle a small set of data, and most of it never gets executed.

Games are not general purpose programs. Any basic game programming book will introduce you to the concept of a game loop. This loop contains all of the functionality a game performs each frame and handles all of the events that can occur in the game. An important principle in a game loop is to avoid unnecessary branches, as they slow down execution and make the code extremely and unnecessarily long. A good example of this is the Cohen-Sutherland line clipping algorithm. Instead of writing lengthy and complicated branches to check which of the 9 regions a point lies in, the code performs 4 simpler checks and computes a region code which can easily be used.
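
For reference, here is roughly what that region-code trick looks like in C: four cheap comparisons build a 4-bit outcode per endpoint, and bitwise tests on the two outcodes answer the trivial accept/reject questions without a pile of nested branches. This is the standard textbook formulation, not any particular engine's code:

Code:

#include <stdio.h>

/* Cohen-Sutherland outcodes: one bit per side of the clipping window. */
enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

static const float XMIN = 0, XMAX = 100, YMIN = 0, YMAX = 100;

static int outcode(float x, float y) {
    int code = INSIDE;
    if (x < XMIN)      code |= LEFT;
    else if (x > XMAX) code |= RIGHT;
    if (y < YMIN)      code |= BOTTOM;
    else if (y > YMAX) code |= TOP;
    return code;
}

int main(void) {
    /* A line from (-10, 50) to (50, 50). */
    int c0 = outcode(-10, 50), c1 = outcode(50, 50);

    if ((c0 | c1) == 0)
        printf("trivially accepted: both endpoints inside\n");
    else if (c0 & c1)
        printf("trivially rejected: both endpoints outside the same edge\n");
    else
        printf("needs clipping against the window edges\n");
    return 0;
}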

This automatic and repetitive processing has to occur for many game objects, which represents a massive amount of data with a relatively small code size. This is the opposite of the general purpose paradigm, which typically has a small set of data (a Word document or HTML page) and performs many different functions on it, representing a large code size. Games processing has a large data size but a much smaller code size. Game objects also tend to be very parallel in nature, as they are typically independent until they interact (collision) – which means they can be processed well on SIMD architectures if they are well thought out.

The whole integer advantage claim for the Xbox360 CPU doesn’t hold much weight considering that SIMD architectures can operate on 4 32-bit integers at the same time, and the integer processing demands of games are not the bottleneck of 3D games processing.
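
For a rough idea of what 4-wide integer SIMD looks like, here is a small sketch using ordinary x86 SSE2 intrinsics (the consoles’ VMX/SPE units have their own equivalents – this is only to illustrate the concept, not console code):

#include <emmintrin.h>  // SSE2 intrinsics

// Add two arrays of 32-bit integers four elements per instruction.
// Assumes count is a multiple of 4 and the pointers are 16-byte aligned.
void addInts4Wide(const int* a, const int* b, int* out, int count)
{
    for (int i = 0; i < count; i += 4)
    {
        __m128i va = _mm_load_si128(reinterpret_cast<const __m128i*>(a + i));
        __m128i vb = _mm_load_si128(reinterpret_cast<const __m128i*>(b + i));
        _mm_store_si128(reinterpret_cast<__m128i*>(out + i), _mm_add_epi32(va, vb));
    }
}

The same one-instruction, four-element pattern applies to 32-bit floats, which is where games spend most of their math.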

What this general purpose power does grant Xbox360 owners over Playstation 3 is the ability to run general purpose applications faster. If the Xbox360 had a web browser (official or not), such an application would work better on general purpose CPU(s). That said, it’s too bad the Xbox360 doesn’t come with one, and web browsers don’t put the highest demand on general purpose processors to begin with. Most general purpose applications remain idle until the user actually gives input. The application then processes the task and completes it before sitting idle again.

AI routines that navigate through large game trees are probably another area where general purpose processing power might be better utilized, since this code tends to be more branch laden and varies depending on the task the AI is actually trying to accomplish. The plus side for the Playstation 3 is the generation of these game trees, which is also time consuming. Generating a game tree is a more computationally oriented task, and is likely to be executed faster by SIMD architectures. I am speaking largely speculatively here, based on my computer science knowledge. Anyone who knows more or has done more research on AI algorithms is welcome to add to the discussion in this area.
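
As a purely illustrative sketch of why tree search is so branch heavy, here is a plain minimax walk in C++; the State type, expand(), and evaluate() are hypothetical stand-ins for real game logic, not anything from an actual engine:

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical game state; the members stand in for real game logic.
struct State {
    int score;
    std::vector<State> children;

    const std::vector<State>& expand() const { return children; }  // tree generation
    int evaluate() const { return score; }                          // leaf evaluation
};

// Plain minimax walk: recursion and branching dominate, with little data reuse.
int minimax(const State& s, int depth, bool maximizing)
{
    const std::vector<State>& kids = s.expand();
    if (depth == 0 || kids.empty())
        return s.evaluate();

    int best = maximizing ? -1000000 : 1000000;
    for (std::size_t i = 0; i < kids.size(); ++i)
    {
        int v = minimax(kids[i], depth - 1, !maximizing);
        best = maximizing ? std::max(best, v) : std::min(best, v);
    }
    return best;
}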

The only case where I can really see the general purpose computing power of the Xbox360 cores manifesting as a true advantage over the Playstation 3 is if Windows or a similar OS were put on an Xbox360, with multiple applications running simultaneously along with some background services. Ironically, the Playstation 3 is more likely to end up with a general purpose operating system running on it than the Xbox360, even though it would perform worse at such a task.

XDR vs GDDR3 – System Memory Latency:
XDR stands for eXtreme Data Rate while GDDR3 stands for Graphics Double Data Rate version 3. XDR RAM is a next generation RAM technology from the folks at Rambus, who brought out the extremely high bandwidth RDRAM back at the onset of the Pentium 4 processors. DDR was released soon after and offered comparable bandwidth at a much lower cost. RDRAM also had increased latency and a few other drawbacks, which ultimately led to it being dropped very quickly by Intel. Take note that DDR RAM is not the same as GDDR RAM.

Anyway, it is hard to make a good assessment of the exact nature of the performance difference between these two RAM architectures, but from what I gathered, GDDR3 is primarily meant to serve GPUs, which means bandwidth is the goal of the architecture at the cost of increased latency. For GPUs this is acceptable since large streaming chunks of data are being worked on instead of small random accesses. In the case of CPU main memory, when more general purpose tasks are being performed, latency has increased importance for memory access times because data will be accessed at random more frequently than a GPU would.

That being said, the Xbox360 CPU’s bandwidth to RAM tops out at 21.6GB/s while the Cell processor has more bandwidth to its RAM at 25.6GB/s. XDR RAM also does this without incurring high latency, and I’m almost positive its latency is lower than GDDR3, which is generally considered to have high latency. Games are not going to be performing a lot of general purpose tasks, so the latency advantage for the Playstation 3 might not be that large, but the CPU will still be performing more random accesses to memory than a GPU would. The Xbox360 CPU’s latency may be made worse than the already inherent GDDR3 latency because the CPU is separated from main memory by the GPU.
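
To see why latency matters more on the CPU side, compare a streaming pass over memory with a dependent pointer chase. The sketch below (plain C++, purely illustrative) shows the two access patterns; the second pays the full memory latency on every step:

#include <cstddef>
#include <vector>

// Sequential (streaming) walk: bandwidth-bound, the pattern a GPU mostly sees.
long long sumSequential(const std::vector<int>& data)
{
    long long total = 0;
    for (std::size_t i = 0; i < data.size(); ++i)
        total += data[i];
    return total;
}

// Dependent pointer chase: each load must complete before the next address is
// known, so the full memory latency is paid on every step -- much closer to
// the random accesses a CPU makes doing general purpose work.
std::size_t chase(const std::vector<std::size_t>& next, std::size_t steps)
{
    std::size_t idx = 0;
    for (std::size_t s = 0; s < steps; ++s)
        idx = next[idx];
    return idx;
}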
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote
Old 08-03-2006, 10:45 PM   #6
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

Xbox360 “Xenos” compared to Playstation 3’s “RSX” – the GPUs:
Since the specs on the RSX are not fully known, I’ll only make comparisons on the solid aspects of the RSX that are unlikely to change from what Sony has reported at E3 2005 (unless they change for the better).

Unified Shaders vs Fixed Function Pipelined Shaders – the GPUs:
The general move to unified shaders came after examining the hardware differences between the vertex and pixel shader pipelines. There was enough duplicate and similar hardware that unified shaders were favored: the pipeline differences were consolidated into one design and the total number of pipelines increased.

The general trend in computing hardware is that the more variety of code types the hardware has to handle, the more complex it gets, and the slower it runs. This holds true for the pipelines of the RSX compared to the pipelines in the Xenos. A pixel shader pipeline in the RSX, at a one to one ratio with a unified pipeline in the Xenos, would perform faster, and the same goes for the vertex shader pipeline. How much faster are the RSX fixed function pipelines individually when compared to a single pipeline in the Xenos performing a specific task? I really don’t know, and it depends on that answer to say which chip has more shader horsepower.

It should also be noted that ATI’s current highest end video card still sports a fixed function pipeline. This strongly suggests that unified shaders are not the way to go.
(The above statement is under review and is likely invalid. Further discussion on this topic will follow in this thread, and a revision may be made later.)

Xenos’ eDRAM:
On the Xbox360’s GPU, there are 10MB of eDRAM which provide an assortment of “free” frame buffer effects such as anti-aliasing, alpha blending, and z-buffering. This daughter die is connected to the parent die with 32GB/s of bandwidth, and there is 256GB/s of bandwidth between the eDRAM and the logic that performs the aforementioned operations. These operations are considered “free” with respect to bandwidth since they are performed by hardware and memory that isn’t shared with the rest of the GPU or the CPU.

The exact nature of the AA advantage is 4xMSAA or 2xFSAA at 720p. Any larger frame buffer or higher resolution and the 10 megabytes become insufficient to accomplish these tasks. The basic premise is that any operation requiring a frame buffer of over 10MB makes this eDRAM unavailable unless a tiling method is used for rendering. Typical things that increase frame buffer size are certain types of HDR rendering and higher resolutions.
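
A rough back-of-the-envelope calculation makes the 10MB limit clearer. This sketch assumes 4 bytes of color and 4 bytes of depth/stencil per sample; real hardware layouts differ, so treat it as an estimate only:

#include <cstdio>

// Rough frame buffer footprint: assumes 4 bytes of color and 4 bytes of
// depth/stencil per sample. Real hardware layouts differ; estimate only.
double framebufferMB(int width, int height, int samplesPerPixel)
{
    const double bytesPerSample = 4.0 /*color*/ + 4.0 /*depth + stencil*/;
    return width * height * samplesPerPixel * bytesPerSample / (1024.0 * 1024.0);
}

int main()
{
    std::printf("720p, no AA  : %.1f MB\n", framebufferMB(1280, 720, 1));   // ~7.0 MB
    std::printf("720p, 2xMSAA : %.1f MB\n", framebufferMB(1280, 720, 2));   // ~14.1 MB
    std::printf("1080p, no AA : %.1f MB\n", framebufferMB(1920, 1080, 1));  // ~15.8 MB
    return 0;
}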

The RSX doesn’t have anything comparable to this free bandwidth for anti-aliasing and other effects, but I don’t think Playstation 3 fans have to worry too much, for a few reasons. First, PC cards don’t sport eDRAM either and AA is still accomplished even with other effects enabled. Additionally, games can step up to 1080p on the Playstation 3 to lower the need for anti-aliasing. Lastly, this eDRAM is probably in the Xenos as a necessity rather than a luxury, since the main memory bandwidth on the Xbox360 is shared between the GPU and CPU. The RSX and standard PC cards have dedicated bandwidth to video memory, which is where the frame buffer resides.

The Cell Advantage:
The Cell will not, and should not, be performing all rendering operations like the E3 2005 demos displayed. It is still interesting that the Cell performs well at those types of operations, since rendering on a CPU offers more flexibility than vertex and pixel shader programs. It is unlikely the Cell would be processing the latter type of shader operation, since it would involve the RSX handing over an almost finished frame, only for the Cell to send it back to continue down the graphics pipeline again with almost no work left to be done.

Granted, 3D pipelines are configurable and you can speed up processing by disabling unnecessary stages whose work you have already accomplished on the CPU. It is likely that developers will do some basic/macro level 3D operations on geometry before passing it off to the RSX to do the more time consuming, fine detailed processing.

The Xbox360 CPU could do the same thing and aid in rendering tasks, but general purpose computing power doesn’t lend itself well to the types of operations it would have to perform, and the vector processing capabilities of the Cell greatly outperform the Xenon in this respect.

Other Peripherals:
Hard Disc Drive:
In the case of the Xbox360, a 20GB hard drive is included in the premium version, and it is an optional add-on for the core version. Playstation 3 offers a 20GB hard drive on its “core” version and a 60GB hard drive on its premium version. The advantages of a hard drive are generally well known to anyone who has a PC and has ever played a game on it. With both systems having a hard drive available, there is not much to speak of except that you can get a bigger hard drive for the Playstation 3 if you are looking to store and play back larger amounts of media. It is likely both Microsoft and Sony will provide upgrades in the future.

The issue here is that the hard drive is non-standard on the Xbox360. Some people get really defensive when this comes up, but it is an issue that will and should be brought up: on the Xbox360, developers may not build a hard drive feature if they don’t feel enough consumers will be able to see and enjoy it. With the Playstation 3, developers know every consumer will have a hard drive and will see the benefits of the feature they implement.

It isn’t quite clear at this point, though, whether or not Sony is using a standard 2.5” SATA drive. If they are, then you could upgrade a PS3 hard drive with any consumer 2.5” SATA drive on the market.

Optical Media Drive:
You knew it was going to come up – Blu-ray vs DVD9. This isn’t really a fair matchup; Blu-ray is superior to DVD9 in every respect except one: data read speed. The 2x BD read speed is considerably slower than the 12x DVD read speed. The difference is roughly 72Mbit/s vs ~130Mbit/s, which in terms of common data rates in the computer world is about 8.6MB/s vs 15.4MB/s. Should PS3 fans worry about their load times? I don’t think so, as this is still faster than the Playstation 2’s read speed, and since the hard drive is standard on Playstation 3, it is a large motivation for developers to use hard drive caching as a standard practice – not merely as a feature.
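
For anyone who wants to check those numbers, the conversion is just bits-to-bytes division; a tiny sketch, using the approximate drive figures quoted above:

#include <cstdio>

// Convert an optical drive's read speed from megabits per second to
// binary megabytes per second (8 bits per byte, 1MB = 1,048,576 bytes).
double mbitToMBps(double megabitsPerSecond)
{
    return megabitsPerSecond * 1000000.0 / 8.0 / (1024.0 * 1024.0);
}

int main()
{
    std::printf("2x Blu-ray (~72 Mbit/s) : %.1f MB/s\n", mbitToMBps(72.0));   // ~8.6
    std::printf("12x DVD   (~130 Mbit/s) : %.1f MB/s\n", mbitToMBps(130.0));  // ~15.5
    return 0;
}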

The clear advantage of Blu-ray is capacity and the possibility of playing the next generation standard for HD movie content. Blu-ray is looking good to become the next generation standard for movies, as Hollywood has far more support behind Blu-ray than HD-DVD. If movie fans go where the movies are (which they will), then it will be Blu-ray decisively. The Playstation 3 is playing a part in lining consumers up with the studios by sporting a Blu-ray player, and it will probably account for the majority of Blu-ray player sales this year, perhaps even into 2007. That said, it isn’t set in stone just yet, so don’t hold your breath…

Capacity for games is where the bigger debate still exists between Blu-ray and DVD9 with respect to the console wars. Will Blu-ray be needed this generation? I can’t say it will be needed by any genre except games that decide to include HD FMV sequences on the disc. But that is based on how things look now; in a few years, or five, that could all change and the space on Blu-ray media may be needed or wanted. Right now you can’t make too strong an argument for Blu-ray being needed for game capacity, but it is an advantage.

Controllers:
Both consoles now sport pretty much the exact same button layout. All “who copied who” arguments aside, Playstation 3’s controller has motion sensitivity for better primary control in some game types, and a very real possibility of improving secondary control in all genres (i.e. tilting your head around corners in an FPS, camera control, etc.). Xbox360 has rumble feedback, which was much enjoyed last generation, and which PS3 fans will miss if it doesn’t come back (which it likely won’t). Another significant difference is the pressure sensitivity of the face buttons. Playstation 2 had this, and Playstation 3 will most likely include the same (it’s hard to confirm whether it really is there or not). Xbox360, surprisingly, doesn’t do this even though the original Xbox controller did. Functionally, the major difference is that the PS3’s controller has motion sensing.

Xbox360 supports 4 RF (radio frequency) wireless controllers. Playstation 3 supports up to 7 wireless Bluetooth devices – note the keyword “devices”, as it means Sony isn’t limiting it to controllers only. Bluetooth notably has a shorter battery life due to its increased bandwidth capability, although this shouldn’t be an issue as Sony’s controller appears to use a built-in rechargeable battery that charges over USB. Looking at player number support, Playstation 3 has jumped to the lead over all other consoles this generation out of the box. Will you do 7 player multiplayer? Probably not split screen – 4 players is a comfortable maximum there – but for multiplayer games where all players share the same screen, 7 players is definitely feasible.

Bluetooth:
In reference to the last section – Playstation 3’s Bluetooth support is labeled with the word “device” to make clear that it is not limited to controllers. This means the Playstation 3 could utilize other Bluetooth devices on the market such as mice and keyboards. Bluetooth is basically aiming to be the wireless USB for computer equipment, since RF devices are typically proprietary end to end.

The Final Verdict?:
The Playstation 3 really does have a considerable hardware lead when it comes to games processing power. Despite Microsoft’s claims of the Xbox360 having more bandwidth, their evaluation brings in numbers that make no sense to add up in the context of the “system”, and adds together figures that shouldn’t be combined because the buses are connected in series. Vector/SIMD/stream processing is very relevant and needed in games programming to achieve a lot of the high end calculations that occur in games today.

Consider why a number of PC games in the past year or two have been tapping into GPU hardware to accomplish a few things. Consider why research has shown that GPUs are much faster than CPUs at performing many tasks that people thought desktop CPUs dominated in. Consider why Ageia is proposing a major new piece of PC hardware to aid in processing physics in games. The answer is clear: a certain type of processing is needed, and it is not found in traditional desktop CPUs with general purpose processing power. Desktop CPUs are not heading in a direction to compensate for these deficiencies either. If this post isn’t enough to convince you, you can go out and do research on the various topics yourself.

Microsoft has nice tools to help developers get the most out of the Xbox360, which is a noble and needed effort for developing better games. But in the end, the Xbox360 has a lower ceiling than the Playstation 3, and over time the lead will show and be undeniable. Taste in games is purely subjective so I won’t say Playstation 3 will have better games as a whole, but they will be technically superior over time.

Playstation 3 and PC – Comparing and Contrasting:
Unlike consoles, PCs are not static and evolve over time – or rather, the components of a PC evolve over time. In the case of a PC, the CPU and GPU are the fastest evolving parts and the most relevant to games processing. The downside to a PC is that it is not purely a gaming platform, and its CPUs are more general purpose in nature to handle code coming from an operating system running many applications at once. It has to perform integer math, floating point math, memory loads and stores, and branching all at an acceptable level of performance such that no area noticeably slows down processing. The other downside to PCs is that motherboards do not advance as rapidly, and they represent some significant bottlenecks for PC games today. Here is a quick rundown of what is inside a PC as it relates to game processing.

PC Architecture Summary:
PC Motherboard – AGP/PCI-E:
Motherboards dictate the baseline functionality and limits you can get out of a PC. A motherboard is where you connect your CPU to the GPU, RAM, and other peripherals. Because this is where you connect these components, it effectively sets the rate at which these parts can talk to the CPU*. If a motherboard uses AGP 4x, an AGP 8x card will be capped to communicating with the CPU at 4x speeds, and the same goes for PCI-Express.

To put some numbers on the speeds of these buses, AGP 8x runs at roughly 2GB/s peak bandwidth, and PCI-E x8 runs at around the same speed. PCI-E is, however, commonly available at x16, which puts the speed at around 4GB/s per direction. If the graphics card and motherboard PCI-E or AGP speeds differ, the maximum bandwidth that can be obtained is the lower of the two.

*On a PC, devices talk to each other through the CPU by sending signals (interrupts, I believe). The CPU in turn forwards or retransmits information to whatever the destination device is. On a PC, heavy bandwidth going from the network to the hard drive will have an impact on the CPU. On a console, this can be avoided because every byte transferred doesn’t have to take up cycles on the CPU.

PC Motherboard – RAM:
PCs today typically use DDR or DDR2 RAM at varying clock speeds. DDR400, the fastest standard DDR variant, runs at 3.2GB/s in single channel mode and 6.4GB/s in dual channel mode; common DDR2 reaches roughly 4GB/s single channel and 8.5GB/s dual channel. DDR offers very low latency access to RAM, which is important for desktop CPUs.

PC Graphics Cards:
Graphics cards are probably the single most important factor in determining the visual performance of games on the PC platform. PC games are typically the first to show the latest and greatest rendering methods and to push certain features to the max, because consumers can buy improved hardware as often as they please and developers are free to use the expanded hardware features as they are released.

PC graphics cards also come with on-board memory so the GPU doesn’t have to gather resources through the slow AGP or PCI-E bus. They typically offer very high bandwidth to video RAM, since the video card manufacturer is completely in charge of building the link between the video RAM and the actual GPU.
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote
Old 08-03-2006, 10:45 PM   #7
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

Head to Head:
Bandwidth Assessment:
If you compared a diagram of a PC motherboard to the bandwidth diagram of the Playstation 3, you might be shocked at some of the narrow bandwidths in PCs, but you’d also notice that the bandwidth provided in top end graphics cards today is already around double the currently known bandwidth of the RSX and Xenos to video memory. A top end GeForce or Radeon card has around 50GB/s of bandwidth between the GPU and its video RAM, while the RSX only has 22.4GB/s. This factors greatly into the texture detail displayed in PC games compared to console games. On a PC, you can push higher quality textures onto your polygons and use bandwidth-expensive filters liberally. Many games enable these features even where they aren’t strictly necessary for the game’s visuals, or where they could easily be compensated for with cheaper methods.

By comparison, PCs use an AGP or PCI-E bus for CPU to graphics card (memory or GPU) communication, which is relatively low at roughly 4GB/s each way (8GB/s combined) on the top end (PCI-E x16). On a PC, it is safe to say that the graphics card will not keep video memory that’s needed frame by frame in the CPU’s main memory over such a slow link. The Playstation 3 sports a 35GB/s link between its CPU and GPU alone, allowing them to work together on tasks without going through a huge bandwidth bottleneck. It effectively means the RSX is not excluded from the 256MB of XDR RAM if it needs extra video memory.

PC CPUs also have much lower bandwidth to RAM compared to the Playstation 3. Today the fastest common RAM on desktop PCs runs at around 4GB/s, and a gaming rig might set up dual channel, upping this bandwidth to around 8GB/s. On the PC end this bandwidth is low because general purpose computing generally doesn’t demand transferring or processing massive chunks of data at such a fast rate. For PC games, this does put a limitation on games that might want to process massive amounts of data on the CPU – PC games just don’t do that type of thing.

CPU performance:
On that note, CPUs in PCs are general purpose CPUs. The mainstream ones are all x86 based scalar processors – meaning they execute one operation at a time (per pipeline, per core) on one piece of data. General purpose CPUs have gotten extremely fast at executing ALU related instructions, but memory (RAM) has not kept up with this improvement. Because of this, a large part of the die space is taken up by hardware aimed at hiding memory access times. This added hardware dissipates a lot of heat and lowers the overall efficiency of the CPU in order to keep it running fast. It is needed in general purpose computing, since random accesses to memory are frequent due to application switching, and even a single application has many random variables to keep track of in memory. That need, however, is much smaller for games, where the extra hardware is a greater waste of space and power. I already covered the lack of need for general purpose computing power in the Xbox360 comparison, so I won’t repeat it from a software standpoint.

Intel and AMD are the primary manufacturers of desktop CPUs today, and both allocate huge amounts of die space to general purpose computing. However, in order not to be completely outdone by the world of SIMD processing, MMX, 3DNow!, and SSE were added to these general purpose CPUs to improve their 3D gaming and multimedia performance. These SIMD instruction sets and their associated hardware are still behind the VMX unit in the Cell’s PPE, as they expose fewer vector registers (8, or 16 in 64-bit mode, as opposed to VMX’s 32). SSE only recently gained operations that apply between elements in the same vector register, with SSE3, although 3DNow! had this functionality from the start. MMX and 3DNow! also shared registers with the x86 floating point architecture, which meant they couldn’t be executed simultaneously with x87 floating point code.
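
As an illustration of the within-register (horizontal) operations mentioned above, here is a small dot product sketch using SSE3 intrinsics; this is generic x86 code, not anything console specific:

#include <pmmintrin.h>  // SSE3 (horizontal add)

// 4-element dot product using the SSE3 horizontal add.
float dot4(const float* a, const float* b)
{
    __m128 va   = _mm_loadu_ps(a);
    __m128 vb   = _mm_loadu_ps(b);
    __m128 prod = _mm_mul_ps(va, vb);       // element-wise multiply
    __m128 sum  = _mm_hadd_ps(prod, prod);  // add adjacent pairs within the register
    sum         = _mm_hadd_ps(sum, sum);    // second pass reduces to one value
    return _mm_cvtss_f32(sum);
}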

SSE, MMX, and 3DNow! don’t even begin to scratch the power offered by a single SPE on the Cell – and the Cell has 7 of them in addition to the PPE’s VMX unit. For games processing, Intel/AMD CPUs are vastly outdone, and they will not be catching up this generation or the next. Buying newer and newer CPUs will not drastically increase PC gaming performance, and they won’t be catching up to the Cell for a long time.

Graphics performance:
Purely assessing the graphics cards against the RSX, the RSX likely doesn’t weigh in alongside the heaviest hitters of today. As I said in the bandwidth assessment, graphics cards have extremely high bandwidth between video RAM and the rendering pipelines that make up the GPU. The bandwidth and processing capability of graphics chips increases quickly as new cards are released, which happens about 3-4 times per year, with a new generation adding more filters, methods, or effects every year. Consoles are quickly outdone in the eyes of PC game developers in the graphics department. When you see the latest top-end PC game, remember that it’s running on the latest top-end graphics card, and in some cases these games are targeting cards that won’t run them well until the next generation of graphics cards is released.

The “Cell factor” in graphics processing should also be considered when comparing the Playstation 3’s visuals to PC games. Unlike a desktop CPU, the Cell is actually equipped to process many of the tasks that are normally performed on a graphics card, and there is enough bandwidth between the Cell and RSX for both to contribute to rendering. The most obvious approach to getting more out of the Cell is using it for transform and lighting (T&L) and other basic or complex vertex operations that a vertex shader might do. When submitting the geometry to the GPU, a developer would disable those rendering steps since they have already been performed, so the geometry passes through those stages of the GPU’s pipeline quicker, giving it more time to accomplish something in another stage like pixel shading, AA, or HDR. There is even feasible bandwidth for some pixel shader operations to be done on the Cell before the frame is handed back to the RSX to do nothing but move it to the frame buffer and send the output signal to the display.
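
As a minimal sketch of the kind of vertex work that could be pulled onto the CPU ahead of submission, here is a plain (non-SIMD, purely illustrative) 4x4 transform over a vertex batch; the types and function names are my own placeholders, not an actual PS3 SDK interface:

// Illustrative types only -- not an actual PS3 SDK interface.
struct Vec4   { float x, y, z, w; };
struct Mat4x4 { float m[4][4]; };  // row-major

// Transform one vertex by a 4x4 matrix.
Vec4 transform(const Mat4x4& mat, const Vec4& v)
{
    Vec4 r;
    r.x = mat.m[0][0]*v.x + mat.m[0][1]*v.y + mat.m[0][2]*v.z + mat.m[0][3]*v.w;
    r.y = mat.m[1][0]*v.x + mat.m[1][1]*v.y + mat.m[1][2]*v.z + mat.m[1][3]*v.w;
    r.z = mat.m[2][0]*v.x + mat.m[2][1]*v.y + mat.m[2][2]*v.z + mat.m[2][3]*v.w;
    r.w = mat.m[3][0]*v.x + mat.m[3][1]*v.y + mat.m[3][2]*v.z + mat.m[3][3]*v.w;
    return r;
}

// Pre-transform a batch of vertices on the CPU before handing the geometry
// to the GPU, so the GPU's own transform stage can be skipped or lightened.
void pretransform(const Mat4x4& mvp, const Vec4* in, Vec4* out, int count)
{
    for (int i = 0; i < count; ++i)
        out[i] = transform(mvp, in[i]);
}

An SPE version of the same loop would stream batches of vertices through its local store and do the math several floats at a time, which is exactly the kind of workload the Cell is built for.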

How much processing can be done on the Cell to make up for the PC graphics card advantage? I can’t answer that well, since GPU specs and statistics are usually documented as results with little insight into what hardware does what and how quickly it does it. If anyone knows more about this, it would be a good area for deeper discussion. I am pretty confident that the Playstation 3, with the Cell + RSX working together, can look on par with many PC games that will be released in 2007.

One thing Playstation 3 developers can’t easily make up for is the bandwidth limitation of the RSX. No matter what, the RSX is limited to its 22.4GB/s link to GDDR3 RAM, which limits the rate at which large textures can be rendered and can’t be made up for by the Cell’s processing power. The wildcard in this scenario is obvious if you look at the RSX/Cell diagram and remember that the RSX has full access to the Cell’s 256MB of XDR RAM. That path first goes through the Cell’s 25.6GB/s link to RAM, then over the 35GB/s Cell/RSX link – the limiting bandwidth being 25.6GB/s. If this RAM can be used simultaneously with the GDDR3 RAM, then the total peak memory bandwidth available to the RSX is 48GB/s over the two buses, which is still on par with high end graphics cards today.* Do note that this scenario drains the Cell of all bandwidth to XDR RAM while it is being used. This could be a non-issue by the nature of a game loop, since the CPU is less likely to need such high bandwidth to RAM during the rendering stage of the loop.

Even if the 48GB/s of bandwidth available to the RSX is on par with top end PC cards today such as the GeForce 7900GTX or the ATI X1900 XTX, that number is static. Next year’s graphics cards could (and likely will) be sporting bandwidth figures in excess of 70-80GB/s.** They will push larger and more detailed textures faster than 48GB/s allows, and will eventually have execution speeds that the Cell’s processing will not be able to make up for.

*In a press interview with the Heavenly Sword developer (Ninja Theory) a few weeks ago, this idea was hinted at. I believe a developer said something about the RSX having two buses to memory and not just one. This could very well be what he was referring to without getting into the details.

**After I wrote that, I looked up the bandwidth on the GeForce 7950GX2 and saw that it has 76.5GB/s of bandwidth to video RAM. At this rate, next year’s top end PC graphics cards are looking to reach 150GB/s or more.

Frame-rate:
Frame rates vary for a number of reasons, and they actually factor considerably into the visual department because smoother, more stable frame rates look better. While 30 FPS is perfectly playable, 60 FPS at the same visual quality will just make the game feel much better.
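
The per-frame time budget is just the reciprocal of the frame rate, which is why variation is so visible; a trivial sketch:

#include <cstdio>

// Per-frame time budget for a target frame rate.
double frameBudgetMs(double fps) { return 1000.0 / fps; }

int main()
{
    std::printf("60 FPS -> %.1f ms per frame\n", frameBudgetMs(60.0));  // ~16.7 ms
    std::printf("30 FPS -> %.1f ms per frame\n", frameBudgetMs(30.0));  // ~33.3 ms
    std::printf("15 FPS -> %.1f ms per frame\n", frameBudgetMs(15.0));  // ~66.7 ms
    return 0;
}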

The reason I mention this here is that PC games typically showcase very unstable frame rates. Unless your PC is far beyond the recommended requirements of a game, you will probably notice that most games have frame rates dropping to around 10-15 in certain parts and going up to 30 or more in others. I’m not completely blaming this on developers, since they have a lot of different hardware to worry about, but it is something that degrades the overall pleasure of playing a game. Playstation and Nintendo consoles have historically shown games with less frame rate variation (sorry – the Xbox360 and the original Xbox have shown some awfully ugly frame rate drops similar to those seen on PCs).

Controllers:
Mouse and keyboard vs the Playstation 3 controller: when it comes to RTS and FPS games, the Playstation 3 is owned along with every other console. Playing these types of games at the highest multiplayer tiers will always favor players on the mouse + keyboard combo. That being said, the controls can still work on the Playstation 3, and players can get relatively good.

For many other game types, a PC keyboard and mouse suffer almost like a console controller does with RTS and FPS. You’d probably want a PC gamepad or joystick to play flight sims, fighting games, racing games, sports games, and probably more. The problem with a PC is that these things aren’t standard, and not every developer will bother to support rumble or motion sensing even if certain PC gamepads on the market offer them. The number of buttons supported in a decently programmed PC game does scale to whatever the user has, though. PCs are lagging behind in the pressure sensitivity department, and I don’t even think DirectX supports detecting pressure on button presses unless they’ve updated it since DirectX8 (fyi, DirectX9 still uses the DirectInput8 API).

OMG Look at Crysis!!!:
Yeah, this game got its own section due to how much it annoys me on these forums. It is always being compared to the processing abilities of the next generation consoles as if it is some unattainable goal for consoles.

Guess what is responsible for those graphics? I’ve already said it, and you probably already know it if you’ve read and understood everything I wrote so far – top end graphics cards. Can the RSX beat it alone? I could lie to you and say “yeah, it can do that” and fail to mention that the RSX would likely be running at 5 frames per second if it did – as would any comparable PC graphics card. But I’d rather be a bit more honest than what nVidia would tell you. In order for the Playstation 3 to match or surpass those visuals, the Cell would have to handle some parts of the 3D rendering pipeline to speed up rendering through the RSX, to levels which could probably even exceed what Crysis looks like. Of course, at some point in the future when GeForce 8950GTX SLI setups come out, you could probably run Crysis at ridiculously high 16xAA, 16xAF, FP32 HDR and whatever other settings, but those are polish related visuals, not the baseline visuals that largely determine what makes games look good.

The short story is that you won’t be disappointed with the Playstation 3’s visuals. It will be quickly outdone by PC graphics cards in terms of nitty gritty technical settings like AA, HDR, AF, and whatever shader model version. Don’t let that discourage you, because artists and improved techniques on the Cell + RSX will keep Playstation 3 visuals improving even if it isn’t displaying more polygons at higher settings.

The Final Verdict?
While PC GPUs are evolving and pushing visuals beyond consoles thanks to new graphics card hardware being released yearly, the rest of the PC world is relatively static and offers little to no improvement when it comes to gaming. When multi-core CPUs hit the shelves for desktop PCs, there could be an increase in performance for games and more tasks being done on the CPU, but no more than what the Xbox360 has or will show us with its 3 cores.

All of the next generation consoles already possess more games processing power than PCs thanks to their increased and improved SIMD units. Unfortunately, developers aren’t taking the best advantage of this extra power in most cases, as writing computational code for games is more difficult than the direct logical approach. Multiplatform development will be the biggest inhibitor of the Playstation 3’s potential.

PC gaming vs PS3 gaming doesn’t really have a clear technical winner. PCs constantly evolve, so in some respects they will always be better for graphics if you always have the top end graphics card. The Playstation 3 will offer more flexible computational power that can be applied to more accurate physics, sound, and other computational tasks than a PC. The PS3 cannot catch up graphically, which seems to be the most important and obvious difference between games. But PCs will not catch up in physics processing and other computational simulations unless the physics card catches on and is integrated well.

TAKEN FROM HERE ALL CREDIT TO THIS SITE!
http://ps3forums.com/viewtopic.php?t=27123

BTW this took me over 20 mins just to copy and paste! ARGH!!!
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote
Old 08-03-2006, 11:21 PM   #8
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

I believe these are PS3 exclusives

1. Warhawk
2. Resistance: Fall of Man
3. Genji 2
4. Ridge Racer 7
5. Full Auto 2 (it will be exclusive for a couple of months)
6. Assassin's Creed (until they say it will be on another system, it's exclusive)
7. Untold Legends: Dark Kingdom
8. Bladestorm: The Hundred Years' War
9. Fatal Inertia
10. Virtua Fighter 5
11. Vision Gran Turismo
12. Naughty Dog project
13. Ratchet and Clank PS3
14. Coded Arms Assault
15. Final Fantasy 13
16. Final Fantasy 13 Versus
17. Metal Gear Solid: Guns of the Patriots
18. Lair
19. Nihilistic PS3 project
20. DC Comics online
21. Devil May Cry 4
22. Getaway PS3
23. Eight Days
24. Soulcalibur 4
25. Jak and Daxter: The Lost Frontier
26. Spark Unlimited next-gen project
27. Killzone PS3
28. Zipper Interactive PS3 project
29. Gradius PS3
30. Army of Two
31. Koei "Guan Yu" PS3 project
32. Eyedentify
33. Rockstar Old West PS3 project
34. Tekken 6
35. Fifth Phantom Saga
and many more.

Taken from IGN.
Aight now I am off to bed...or maybe sharpen up my skills at GT4 first.

*EDIT*

Colin McRae Rally
Images uploaded by me from IGN.
CLICK TO MAKE THE IMAGES BIGGER. ALSO, A NOTE: IF ANYONE POSTS IMAGES IN THIS THREAD, PLEASE PLEASE PLEASE USE THUMBNAILS IF POSSIBLE!

Also, for those who think the PS3 is very expensive, just please have a look at this for a second!

http://ps3.gamespy.com/articles/710/710710p1.html
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote
Old 08-04-2006, 12:30 AM   #9
ae86_16v
Regular User
 
Join Date: Feb 2004
Location: Oakland, CA
Posts: 3,446
Default

Is there a Cliff's Note version?
ae86_16v is offline   Reply With Quote
Old 08-04-2006, 06:50 AM   #10
evoWalo
Regular User
 
Join Date: May 2004
Location: Naples, FL
Posts: 1,407
Default

whatever tech it packs doesn't matter so long as it costs too much to buy. You could buy an Xbox 360 + a Wii for the price of a PS3. Then again, the PS3 was designed to be Sony's console for the next 10 years.
evoWalo is offline   Reply With Quote
Old 08-04-2006, 08:47 AM   #11
rave426
Regular User
 
Join Date: Nov 2003
Location: Houston, TX
Posts: 486
Default

My goodness!! You must enjoy writing.

Good Info..........

I already have 2 (60gig) PS3's on reserve............Ebay, here I come!
__________________
Each car is special in its own way, Unless its a Hyundai Haha, Im just trippin
rave426 is offline   Reply With Quote
Old 08-04-2006, 01:52 PM   #12
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

Originally Posted by ae86_16v
Is there a Cliff's Note version?
Yes.......The PS3 will kick ass.


Also, please note to everyone that I am in no way a fanboy of Sony, as I have owned an Xbox 360 before. So Funk off.

Originally Posted by rave426
My goodness!! You must enjoy writing.

Good Info..........

I already have 2 (60gig) PS3's on reserve............Ebay, here I come!
Naw, not me, just some guy on another board. Like I stated at the end of post number 7, I just copied and pasted...which still took me over 20 mins because I wasn't sure how long a post could be, so I had to split everything up.

Also, another thing – where did you preorder your PS3's from? I can't find anywhere near me in FL that accepts preorders yet! Some GameStops and EB Games do, but they are only in Tampa and, I think, Miami.
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote
Old 08-04-2006, 03:16 PM   #13
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

WOW!!! Has anyone seen this video yet? I just stumbled across it and holy sh*t! It looks WAY better than that crap they showed at E3! A major thing that caught my eye was the reflections.....just stunning.

Let me see if I can find any more videos of this.

http://files.filefront.com/Vision_Gr.../fileinfo.html


*EDIT*

Bikes and Cars all at once on a track? Sounds like FUN!
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote
Old 08-04-2006, 03:56 PM   #14
5vz-fe
Regular User
 
Join Date: Jun 2003
Location: Toronto
Posts: 6,167
Default

^They could do all the nice replays, but what most ppl care about is the gameplay and the AI. So far we haven't seen anything that demonstrates that. More importantly, how many years of delay will it have?
__________________
5vz-fe is offline   Reply With Quote
Old 08-04-2006, 04:00 PM   #15
Pimp Racer
Regular User
 
Join Date: Oct 2004
Location: 30° 4' N 85° 35' W
Posts: 1,783
Default

Originally Posted by 5vz-fe
^They could do all the nice replays, but what most ppl care about is the gameplay and the AI. So far we haven't seen anything that demonstrates that. More importantly, how many years of delay will it have?
Yeah, I know. From what I've heard, the game is coming out THIS NOVEMBER! Best Buy has it on pre order already! I can't believe it! Either the game is gonna suck ass, as in just be GT4 with a few graphical improvements, or they will delay it again.

As much as I hate delays, I want them to delay it so it can include maybe damage and better, smarter AI and whatnot, rather than just the same things we have been seeing in the previous two GT's.

*EDIT*

Here is the link for pre ordering

http://www.bestbuy.com/site/olspage....=1149208576937

Also other games for PS3 on preorder if anyone wants.

http://www.bestbuy.com/site/olspage....equestid=95269

My theory is that PD is holding out on us and wants to blow us away? You never know.
__________________
http://gthotspot.blogspot.com/
Pimp Racer is offline   Reply With Quote