Intel Is Spreading FUD About Supposedly Huge Ryzen 4000 Performance Drops on Battery

By Joel Hruska

On Friday, Intel gave a presentation to various journalists and analysts alleging a serious discrepancy between the on-battery performance of AMD CPUs and the performance of the same systems when plugged in.

According to Intel, while AMD’s latest CPUs offer slightly better battery life than their Intel counterparts, they achieve this by reducing CPU performance on battery by 38-48 percent. Intel’s 11th Gen CPUs, the company claims, hold their performance much more effectively, losing an average of just 8 percent. Intel concluded that AMD sacrifices performance for battery life.

Based on the argument the company presented, we do not agree with Intel’s findings on this topic.

Intel’s slideshow backing up these claims referred to benchmarks the company had run on a range of mobile Ryzen 3, 5, and 7 systems from Lenovo, with a single system sourced from HP.

According to Intel, the on-battery performance hit to the AMD systems effectively collapses the distinctions between SKUs, leaving no real difference between the various chips. Intel was not circumspect in its assertions on this point; at one point a company representative stated that he felt the information invalidated AMD’s entire product stack.

While Intel acknowledged that AMD systems offer superior battery life, it argued that performance on battery also matters, and that Intel’s Tiger Lake performance is quite a bit higher than AMD’s equivalent, based on an average of the performance of five Ryzen systems versus five Tiger Lake systems, as shown below:

The central thesis of Intel’s presentation is that laptop reviews should not benchmark systems when connected to AC power, or, if systems must be tested in such fashion, that the wall power data should be presented alongside the data for on-battery performance.

The argument presented by the company was backed up by benchmarks like WebXPRT and SYSmark, with some discussion of PCMark results as well.

Intel’s explanation for why AMD CPUs lose so much performance on battery is that the systems wait for 7-10 seconds before engaging turbo mode, while Intel systems engage turbo mode more quickly.
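If you want to sanity-check the turbo-delay claim on hardware you own, a rough sketch along the following lines will do it. This is purely illustrative: it assumes the third-party psutil package, the base-clock threshold is a placeholder you must set for your own CPU, and on some platforms psutil reports only a nominal frequency rather than a live reading (Linux exposes live values through the kernel’s cpufreq interface).

```python
import time
import threading

import psutil  # assumption: third-party package, installed separately

BASE_CLOCK_MHZ = 2500  # placeholder: substitute your own CPU's base clock

def burn(seconds=15):
    # Spin loop: generates a single-threaded burst load.
    end = time.time() + seconds
    while time.time() < end:
        pass

threading.Thread(target=burn, daemon=True).start()
start = time.time()
while time.time() - start < 15:
    freq = psutil.cpu_freq().current  # reported frequency in MHz
    if freq > BASE_CLOCK_MHZ:
        print(f"Boost engaged after {time.time() - start:.1f} s ({freq:.0f} MHz)")
        break
    time.sleep(0.1)
else:
    print("No boost observed within 15 seconds")
```

Run it once on AC and once on battery; if Intel’s description is accurate, the on-battery run should report a noticeably longer delay.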

This gap is part of Intel’s purported on-battery performance advantage. According to Intel, most consumer workloads are very short, and this places AMD at a performance disadvantage relative to its own processors. This is the point at which the story starts slipping off the rails.

Even if the graphs above fairly represent the performance of two of the AMD systems Intel tested, the settings that control how long a system waits before engaging turbo, and the overall performance delta between AC and DC power, are settings the OEM controls, not AMD. The slide below from AMD lists performance on AC versus DC power as an OEM-tunable option.

AMD allows OEMs to customize performance on AC vs DC power as one of many tunable options.

Intel did not distinguish whether this behavior was defined by Lenovo or by AMD as part of its mobile Ryzen platform standard.

It also did not explain why it chose to highlight the performance of the 4900HS on the left-hand side of the graph above, when that CPU was not part of the set of five systems that were used to average performance.

It did not provide data for each individual system showing that each system boosted in the same delayed fashion, and even if it had, four of the laptops were made by the same vendor. Intel, therefore, failed to demonstrate that this is a common behavior of AMD systems.

Intel’s five comparison systems for itself came from MSI (one), Lenovo (one), HP (two), and Intel itself (in the form of a laptop kit). Intel is thus drawing on a much wider range of manufacturers for its own systems.

I don’t know anything about the NUC laptop kit (I haven’t had the opportunity to test one), but I would have preferred the fifth system be a standard commercial model, and the AMD systems should have been drawn from a pool of hardware as diverse as the Intel ones. There are four manufacturers represented for Intel, and two for AMD.

I had no plans to run a comprehensive battery of laptop tests over the weekend, but I’ve got access to a Lenovo IdeaPad Slim 7 with a Ryzen 7 4800U, as well as last year’s Microsoft Surface with an Ice Lake Core i7-1065G7 CPU in it. While this is not an 11th Gen Intel CPU, it should tell us whether the benefits the company is claiming extend to previous-generation products.

Test Results

We ran a range of applications across the Surface and Lenovo laptops, on battery and on AC power. All on-battery tests were run in battery-saver mode.

Because Intel specifically called out burst and short-duration workloads, we included the JetStream 2 benchmark and NeatBench, both of which run short-duration workloads, in addition to our longer-duration tests. PCMark was also included because Intel identified it as a problematic workload.
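For anyone replicating this kind of comparison, the easiest mistake to make is mixing up which runs happened on AC and which on battery. Here is a minimal, purely illustrative harness sketch, again assuming psutil; run_benchmark is a hypothetical stand-in for whatever test you are actually driving.

```python
import json
import time

import psutil  # assumption: third-party package, installed separately

def timed_run(name, run_benchmark):
    # Record the power source alongside the result so AC and battery
    # runs can never be confused after the fact.
    batt = psutil.sensors_battery()
    on_ac = batt.power_plugged if batt else True  # desktops report no battery
    start = time.perf_counter()
    score = run_benchmark()  # hypothetical callable returning a score
    elapsed = time.perf_counter() - start
    record = {"test": name, "on_ac": on_ac,
              "seconds": round(elapsed, 1), "score": score}
    print(json.dumps(record))
    return record
```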

I’m afraid we’ve got to skip the graphs this time around (short on time and whatnot), but this chart will tell you what you need to know. Performance for Corona Render and Handbrake is given in minutes, so for those tests, shorter times mean higher performance.
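One note on reading results like these: because the chart mixes scores, where higher is better, with render and encode times, where lower is better, it is easy to compute the on-battery delta backwards. A tiny illustrative helper (the example numbers are made up) normalizes both into percent-of-AC performance retained:

```python
def pct_of_ac(ac, battery, lower_is_better=False):
    # For times (Handbrake, Corona Render), performance is inversely
    # proportional to the measurement; for scores, it is proportional.
    if lower_is_better:
        return ac / battery * 100
    return battery / ac * 100

# Made-up example: a 10.0-minute AC render that takes 10.4 minutes on
# battery retains about 96 percent of its AC performance.
print(f"{pct_of_ac(10.0, 10.4, lower_is_better=True):.0f}%")
```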

The Ryzen 7 4800U inside the Lenovo IdeaPad Slim 7 does not always run more slowly when the machine is on battery in battery-saver mode. The system is often slightly faster on battery than when running on AC.

It’s not much, a few percent, but it’s consistent. At a guess, turboing less often actually allows the CPU to hold a slightly more consistent overall frequency, improving performance. Below are the CPU’s performance comparisons in the Blender benchmark 2.0.4, using Blender 2.90.

Blender benchmark 2.0.4 on the Ryzen 7 4800U, AC power.

Blender benchmark 2.0.4 on the Ryzen 7 4800U, battery power.
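One way to probe the frequency-consistency guess above: log per-second frequency samples during each run (the psutil sketch earlier in this piece could be adapted to do so) and compare the spread of the AC log against the battery log. The CSV file names and column name below are hypothetical.

```python
import csv
import statistics

def freq_stats(path):
    # Expects a CSV with one "mhz" sample per row, logged once per second.
    with open(path) as f:
        samples = [float(row["mhz"]) for row in csv.DictReader(f)]
    return statistics.mean(samples), statistics.stdev(samples)

for label, path in [("AC", "blender_ac.csv"), ("battery", "blender_batt.csv")]:
    mean, sd = freq_stats(path)
    print(f"{label}: mean {mean:.0f} MHz, std dev {sd:.0f} MHz")
```

A lower standard deviation on the battery run, paired with a comparable completion time, would be consistent with the steadier-frequency explanation.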

JetStream 2, PCMark, and NeatBench are the three benchmarks that ran more slowly on the Ryzen 7 4800U on battery. PCMark and NeatBench fall into the range Intel described.

The Core i7-1065G7, however, loses much more performance than Ryzen in JetStream 2, and more than Ryzen does in NeatBench. The Core i7-1065G7 loses far more than 8 percent of its performance. In battery-saver mode, the Core i7-1065G7’s sustained performance can drop to 33-50 percent of its sustained AC performance.

Its two worst benchmarks, in terms of sustaining AC-level performance, were JetStream 2 and NeatBench. All of Intel’s performance claims concerned its 11th Gen CPUs, so nothing in the Ice Lake data refutes them, but Tiger Lake’s ability to hold performance on battery appears to be unique to that product family rather than a longstanding Intel characteristic.

When Intel gave its presentation, it made a point of calling out the fact that Cinebench R20 “oddly” doesn’t show the same behavior as the other benchmarks it had chosen to highlight.

The “oddly” is straight-up FUD. Cinebench R23 also doesn’t show the 38-48 percent pattern of decline Intel claims. Neither does Corona Render. Neither does Handbrake. Neither does JetStream 2. Neither does Blender 2.90. Neither does the Blender 1.0Beta2 benchmark (not shown, but I ran it).

A discussion about which benchmarks are more or less applicable to end-users is a great thing to have, but this isn’t a conversation. This is Intel implying that because Cinebench doesn’t show the same performance degradation as PCMark, Cinebench is somehow odd. But Cinebench isn’t an outlier.

This kind of misrepresentation encourages customers and press not to trust Intel to convey the strengths and weaknesses of its own products against the competition. The only odd thing about the slide above is the assumption that anyone would take Intel’s word that CB20’s results were in any way unusual.

Intel’s performance claims are, at the very least, inaccurate by omission. CB20 is not an outlier. Its results reflect the behavior of multiple benchmarks across various types of computing.

We verified the rough shape of the company’s results in a single test (PCMark) and found evidence indicating that Intel’s broad conclusions are more sweeping than the quality of the information provided can support.

Ultimately, the OEM decides to what degree it will target performance versus power consumption, and OEMs often don’t go out of their way to communicate why an HP version of a system might have better or worse battery life than a nearly identical Lenovo with the same CPU.

Painting this as an Intel-versus-AMD issue is a dishonest way to frame the topic, especially when AMD has been overwhelmingly represented by a single OEM in this comparison. Intel hasn’t demonstrated that every Ryzen 4000 system from every vendor has this issue, but that hasn’t stopped the company from claiming it, as you’ll see below.

Conclusion

Intel built this sweeping claim on a survey of four Lenovo laptops and a single HP model, conducted by AMD’s competitor, whose track record when it comes to reaching reasonable conclusions is, as the slide above shows, self-evidently sterling.

While the idea of benchmarking on battery is interesting, the idea of switching to it as the chief mode for evaluating laptops isn’t. The 38-48 percent performance hit Intel claims AMD takes on battery is no kind of fair performance average. If the company’s point was to emphasize that 11th Gen delivers 92 percent of its performance on battery while other products don’t, it might have spent more time presenting that as an advantage over Ice Lake, and less time opining on the status of Lenovo’s AMD laptops.

Far from emphasizing the limited, provisional nature of its conclusions, Intel explicitly pushed for the widest, most damaging interpretation possible.

The reason I keep drawing attention to Intel’s failure to back up its points is that I’m astonished that the company had the temerity to present this as a serious argument.

The claim that AMD’s performance on battery negates the value of its product stack on the basis of the presented information is an overreach that recalls Intel’s behavior from the early 2000s in the most unflattering of ways.

If I want to know whether the company building the fastest CPU core on a per-clock, per-watt basis thinks AMD’s product stack is valid on the basis of its on-battery performance, I’ll ask Apple.

That’s not the cheap shot it might sound like. Not now that we know how ugly the Apple M1 (et al) could make things for Intel in a year or three. Semiconductors are a big boy business, and companies that aren’t willing to face harsh truths get eaten. For Intel, a few of those truths look like this:

ARM is rising. Apple is the first, but almost certainly won’t be the last, vendor to build an ARM core that can compete with x86. AMD isn’t a pesky mosquito to be dismissed, and nobody is waiting for Intel to tell them what the future of computing looks like right now.

Chipzilla may yet have a defining role to play in AI and machine learning training, among a lot of other areas of computing, but Nvidia isn’t holding off on its own audition so Intel can try out for the part. Neither are Google, Amazon, Nuvia, Ampere, or yes — AMD.

This isn’t 2007. It’s not even 2017. Intel is now one player among many, its CPUs, upcoming GPUs, and accelerators competing against a steadily widening arena of products from other companies. By its own admission, it does not plan to return to foundry process leadership until the 5nm node.

It is not in a position to dictate how either enthusiasts or the industry view the products of its competitors, and the sooner the company realizes it’s playing catch-up and starts behaving like it, the faster it’s going to regain a leadership position. On the week that Apple unveiled the M1, the last thing I expected Intel to be doing was making bad arguments against AMD. 

Managing to turn the clock speed back up from Sunny Cove to Willow Cove was a noteworthy achievement, but it didn’t answer all the questions about Intel’s ability to compete with AMD outside of mobile, its ability to compete with ARM in mobile, or the long-term future of its foundry business and 7nm manufacturing.

A company in the throes of deciding whether it will continue to manufacture its own leading-edge processors after previously defining itself on its ability to manufacture leading-edge processors isn’t in a position to opine on the categorical, top-to-bottom validity of its competitor’s product stack. Not, at least, on the basis of the “evidence” provided.

We do not consider Intel to have proved its thesis: namely, that we should regard “up to 48 percent slower” as a reasonable evaluation of AMD laptop performance, or that performance testing on AC power is so unimportant that we should consider discarding it under any circumstances whatsoever.

By sourcing 80 percent of its AMD systems from Lenovo, Intel guaranteed that its Intel-versus-AMD turbo behavior comparison would effectively be an Intel-versus-Lenovo comparison, with a single HP system tossed in. This limited comparison does not support the claim that Intel’s findings negate AMD’s product stack, and it does not validate the sweeping changes Intel believes should be made to product testing.

What Intel has claimed to have demonstrated is grossly disproportionate to what it has actually demonstrated, even if its claims are viewed in the most positive light possible.

Continuing to engage in this type of messaging will not win over the technical press. It will not win over the industry. No company is exactly trusted to communicate its own performance vis-à-vis the competition, but Intel’s behavior during the aughts left a deep and abiding well of distrust in the enthusiast community where AMD is concerned.

This is exactly the sort of PR move that inflames and deepens that sentiment. It doesn’t communicate strength; it reads as a BTX-level flail, and it burns through good faith slowly accumulated in previous years.

Originally published at ExtremeTech