Battlefield 4 Alpha Benchmarks Uh Oh

RainMotorsports

I had already decided I'm going to grab a GTX 770 before year's end. But damn, the early look is scary. Mind you, this is incomplete software and unoptimized drivers.

GPU Bench:
[image: GPU benchmark chart]

My old 570 ain't doing shit on Ultra. Based on early estimates of how heavily the game uses the higher-end capabilities of the existing engine, I already knew I was going to be on High or under.

CPU Bench:
[image: CPU benchmark chart]

Older AMD quad cores and the FAKE quad cores (FX-4xxx) will suffer a little bit in the Alpha. CPUs that were fine in BF3 (the older AMDs, the i3-530 for example) so far are not holding up well. Sandy Bridge quads and above, as well as the recent "8-core" AMD offerings or better, seem to be a good place to be.

Memory usage at 1080p is right at 2 GB; anything over 1080p at this point needs SLI and 3 GB of RAM (per card).
 
Let's hope this is true. I'd love to see a game push the GPU makers again. They've done nothing but coast for years now. It'll be good for the market, and maybe Intel will pull its head out of its collective ass and give us a CPU that really matters.
 
wow six.....you took yer dog sig away! >.< this one is up there with farstar's avatar!


oh yea....good info rain!
 
Lol and they claim the consoles will play these games at 60fps 1080p!! And pigs will fly out of Scott's asshole




btws that Aussie hurdler is a fucking fox six
 
Let's hope this is true. I'd love to see a game push the GPU makers again. They've done nothing but coast for years now. It'll be good for the market, and maybe Intel will pull its head out of its collective ass and give us a CPU that really matters.

To be fair, neither AMD nor Nvidia is really responsible for the failure to advance performance. Both were held back by the available fabrication facilities and technology. Being stuck on 40nm with a shit process meant improving efficiency but not much else. A smaller process potentially reduces die size; on 40nm the chips were huge, which makes them prone to low yields, and the shit process made it even worse.

Had the 500 series been on 32 or 28nm, they would have been able to bump the clocks higher and put more cores on the chips. Hard to say, from what I know, how much effect that would have had on the development of the 600 series, though.

Both companies used the same fab, TSMC, and were pretty much at its mercy when it came to limitations.

 
I'm not talking about the fab process; their problems have been well documented. I'm talking about a new chip design, specifically from Intel. Up until the X58 systems you had huge gains in performance across the board, but since then it's been 10-12%.
Are you ready to tell me we've reached the peak of computing and design? Nobody's come up with a new design or theory in the last five years that's workable? I think they've been playing it safe with ready-made excuses like the ones they use now: the PC market isn't selling because of the economy, or needs are already met, and that's bullshit. People don't upgrade because there's really no reason to for a 10% gain. It's the "good enough" mentality. It's good enough, so why make it better? And for consumers it works well enough, so why upgrade? Why? Because nothing better has been produced and marketed.

They've offered nothing innovative or performance-enhancing for quite a while, instead focusing on the mobile market, which has seen rapid growth because it's the "new" thing and everyone wants the shiny and new. Intel's been killing off the PC market by doing that.
 
Aren't the Maxwells supposed to be a big jump forward next year? It's the next phase after Kepler, if I remember correctly.
 
Are we talking CPUs or GPUs? Because at this point it's two different worlds.

GPU fabs are behind, which leaves GPUs behind. There is also the subject of the illegal price fixing both companies were involved in. They were taken to court, it was settled, and they never stopped price fixing after that... Video game developers are actually calling for abolishing APIs like DirectX and OpenGL in favor of low-level APIs that let them squeeze every bit of performance out of the GPUs. That, however, is basically the exact reason DirectX was invented: to prevent the need to write code for specific cards.

As far as CPUs go, they definitely have the ability to push new architectures on current and upcoming fab technology. AMD wants to focus on graphics and compute technology, and if Intel doesn't make any attempt to compete, the world seems to go nuts. If operating systems actually used GPGPU for general FPU math, some good would come of this, but...
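To picture what that kind of GPGPU offload looks like, here's a minimal CUDA C sketch (my own illustration, not from any OS or benchmark in this thread) that moves a bulk floating-point multiply-add off the CPU's FPU and onto the GPU:

```
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// One FP multiply-add per GPU thread: y[i] = a*x[i] + y[i]
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                    /* ~1M floats */
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);   /* the FP math runs on the GPU */

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Whether an OS doing this for general math would actually pay off is another question; for small workloads the copies across PCIe can easily cost more than the math itself.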

Nobody is sure where the wall is. We're close to the limits of silicon in terms of circuit size reduction, which is what allows for more transistors, lower heat and power consumption, and higher clock rates; that's why there's a search for a replacement for silicon. When architecture improvements wear thin, you raise the clock rate; when you hit a clock wall, you add more cores. Adding more cores makes for a bigger chip, which means higher failure rates. When that runs out you're back to process size and architecture reinvention. And as you know, more cores solve little if the programs can't be threaded in a highly concurrent manner; a rough sketch of the math is below.
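Here's the back-of-the-envelope version of that last point, using Amdahl's law (speedup = 1 / ((1 - p) + p/n), where p is the fraction of the program that can run in parallel and n is the core count). The numbers are illustrative, not measurements:

```
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n)
 * p = parallel fraction of the program, n = number of cores. */
static double speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    const double fractions[] = { 0.50, 0.90, 0.99 };
    const int cores[] = { 2, 4, 8, 16 };

    for (int f = 0; f < 3; ++f) {
        for (int c = 0; c < 4; ++c)
            printf("p=%.2f, %2d cores -> %5.2fx   ",
                   fractions[f], cores[c], speedup(fractions[f], cores[c]));
        printf("\n");
    }
    return 0;
}
```

With only half the program parallel, even 16 cores get you roughly 1.9x, which is why piling on cores does so little for poorly threaded code.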

But graphics chips are currently bulging at the seams. Until the fabs get their act together it's hard to add more transistors without making something so failure-prone in manufacturing that a profit can't be turned. This should force innovation in doing things a better way. Right now Nvidia is filtering the best of what they have in the commercial market down to consumers; it's not like they're holding back. Whereas in the CPU world, Intel has turned its resources to the mobile market and power consumption. They might not be sitting on the next big thing, but they sure as hell aren't putting their available resources into it either, and that, yes, is them holding back.
 
I'm not talking about the fab process; their problems have been well documented. I'm talking about a new chip design, specifically from Intel. Up until the X58 systems you had huge gains in performance across the board, but since then it's been 10-12%.
Are you ready to tell me we've reached the peak of computing and design? Nobody's come up with a new design or theory in the last five years that's workable? I think they've been playing it safe with ready-made excuses like the ones they use now: the PC market isn't selling because of the economy, or needs are already met, and that's bullshit. People don't upgrade because there's really no reason to for a 10% gain. It's the "good enough" mentality. It's good enough, so why make it better? And for consumers it works well enough, so why upgrade? Why? Because nothing better has been produced and marketed.

They've offered nothing innovative or performance-enhancing for quite a while, instead focusing on the mobile market, which has seen rapid growth because it's the "new" thing and everyone wants the shiny and new. Intel's been killing off the PC market by doing that.

Is this what you do in the library with your balls hanging out? Come up with arguments about why PC hardware sucks? :)
 
Is this what you do in the library with your balls hanging out? Come up with arguments about why PC hardware sucks? :)

Nah, but it irritates the shit out of me that there have been no significant gains in the industry for years, while at the same time the media and manufacturers cry about how the industry is dying and sales are falling. Well, they're falling for a reason, and it's not for lack of interest or even money in most cases. Like anything else, if you build a kick-ass product, people will flock to it.
 