So Long, and Thanks for All the Chips

(Credit: David Bauer/Flickr, CC BY-SA 2.0)

In my two-decade career in journalism, I’ve been fortunate enough to work for some great sites, but I’ve spent the largest chunk of my time here at ExtremeTech, initially as a part-time contributor back in 2011. I came on board full-time in 2014 and became managing editor last fall. Over 11 years, I’ve written 5,663 stories according to the CMS—including this one, my last.

Contributing to a publication with as storied a history as ExtremeTech has been a heck of a ride, and one I feel lucky to have taken. Over the last decade, I’ve had the opportunity to cover everything from CPU and GPU technology to archaeology and the Internet of Things—this last with perhaps a bit more snark than the others. Working in the press has its ups and downs, but there are few better places to be if you want to see technology evolving over time. It’s been fascinating to watch both the evolution of CPUs and GPUs and the steady advances in software that take advantage of what hardware can actually do.

Two and a half years ago, I decided to write about AI upscaling as the technology emerged from the realm of science fiction. I didn’t pick AI upscaling by accident. I wanted to explore an emerging topic of the sort this site was founded to celebrate. We don’t publish very many 20-page guides to installing Linux these days, mostly because—thankfully—nobody needs a 20-page guide to installing Linux. I thought the practical side of artificial intelligence would be a good modern option, and based on how well these articles have been received, a lot of you agreed. I intend to go on writing about upscaling Star Trek: Deep Space Nine, Voyager, and other shows, for those of you who might be interested.

I thought a lot about what to say in a story like this. I could wax melodramatic and/or rhapsodic, but I already published a good-bye round-up of my least favorite IoT devices, a terrible bit of CPU-themed poetry, and a farewell, encyclopedia-sized upscaling article. (I give weird going-away presents. Don’t judge me.) So, for this story, I decided to cast an eye forward, to the big-picture trends in computing—and a few article topics I wanted to write about but didn’t have time to finish.

Starting about nine years ago, a lot of major companies started to make bullish promises about AI and the jobs it would be capable of performing in the very near future. Google demoed an assistant that could make reservations for you in a (supposedly) believable manner. Companies such as Google, Uber, and Tesla were promising that self-driving cars were right around the corner.

Instead of self-driving cars or curing cancer, the most visible impacts of AI on the world to date are a bunch of moderately improved chatbots and AI-based answering services. I can’t speak for anyone else, but yelling terse phrases into the phone in an (often) vain attempt to convince a glorified Dr. SBAITSO of my need to speak to a human is not one of the top improvements I hoped AI would deliver. It would be easy to dismiss AI as a smoke-and-mirrors charade with a few genuine advances and way too much hype. It would also be a mistake.

I’ve worked with video upscalers for the past several years and watched the evolution of the overall industry. Underneath the hype, there’s a lot of valuable work being done. AI has discovered antibiotics. It has opened new frontiers in video restoration and improvement, even if there are still some asterisks and rough spots. “Asterisks and rough spots” is a good way to describe AI in general right now, in my personal opinion. Teething problems aside, I genuinely think AI will transform computing over the long run.

I put the video below together for today. It’s a clip from Deep Space Nine and a rather difficult one to get right. While there’s nothing particularly exciting going on, that’s actually what makes it tougher—with little to look at but faces, you’ve got lots of time to pick out the flaws. And there are flaws, to be sure. But the technology is improving all the time.

If you look back at some of the articles I wrote in the early 2010s, there was a lot of interest in many-core architectures, probabilistic computing, and specialized accelerator units. One proposed use of so-called dark silicon was to build specialized processing blocks that would only kick in during certain scenarios but could execute code more quickly than a conventional CPU while spreading heat production across the die instead of concentrating it in one processing block. Now there are rumors that Intel will ship a VPU in Meteor Lake, and both AMD and Intel have expanded into the FPGA business as well.

One reason I think we’ll see AI adopted relatively quickly in software is that AI workloads don’t necessarily need to run on dedicated AI cores. Inference workloads can generally be run on the CPU, integrated GPU, discrete GPU, or a specialized, built-in accelerator core if one exists. Intel refers to this type of specialized accelerator core as a VPU, while Apple calls the dedicated silicon inside the M1 and M2 its Neural Engine.

Applications like Topaz Video Enhance AI use Apple’s Neural Engine on lower-end M1s and combine the Neural Engine with the GPU for faster processing in the M1 Pro, M1 Max, and M2. On the PC side, Intel’s oneAPI is intended to simplify cross-device development and make it easier for developers to target different hardware. Various companies are sliding pieces into place to make AI practically useful.
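To make that flexibility concrete, here is a minimal Python sketch of the idea using ONNX Runtime as a stand-in. The model filename and the provider preference order are my own illustrative assumptions, not any vendor’s recommended setup; the point is simply that the same model can run on whatever accelerator a machine happens to have, with the CPU as the fallback.

```python
# Minimal sketch: run the same ONNX model on whatever accelerator this machine
# exposes, falling back to the CPU. The model filename and the preference order
# below are illustrative assumptions, not any particular vendor's setup.
import numpy as np
import onnxruntime as ort

PREFERRED = [
    "CUDAExecutionProvider",      # discrete Nvidia GPU
    "DmlExecutionProvider",       # DirectML: integrated or discrete GPU on Windows
    "CoreMLExecutionProvider",    # Apple GPU / Neural Engine via Core ML
    "OpenVINOExecutionProvider",  # Intel CPU, iGPU, or VPU via OpenVINO
    "CPUExecutionProvider",       # always-available fallback
]

available = ort.get_available_providers()
providers = [p for p in PREFERRED if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("model.onnx", providers=providers)
print("Running on:", session.get_providers()[0])

# One inference pass with dummy data, assuming a single float32 input tensor.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve symbolic dims
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print("Output shapes:", [o.shape for o in outputs])
```

A dedicated NPU would just be another entry in that preference list; the application logic itself doesn’t have to change.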

With AMD adding graphics to future Zen 4 CPUs, the overwhelming majority of PC users will soon have an integrated GPU, a built-in AI processor, or a discrete GPU capable of executing an AI workload, whether as part of a game engine or as a standalone application. Nvidia and AMD have both launched noise-cancellation algorithms that leverage machine learning, and I think we can expect more applications aimed at improving video and audio in various ways. I also think there’s real potential for game AI to improve in the long term.

I don’t expect near-term breakthroughs in any particular area, but it would surprise me if computers didn’t leverage AI in some significant and practical ways by 2030. I don’t know whether the gains will come from CPU and GPU designers using machine learning to build better hardware or from an array of software utilities baked into the system that improve audio and/or video quality in real time, but I expect them to come. And hey—if nothing else, maybe your PC will be lots better at sounding like grandma.

From the moment Nvidia announced that it would add upscaling capabilities to Turing, I’ve wondered how both AMD and Nvidia would compensate for the fact that Moore’s Law ain’t what it used to be.

In the early days of 3D graphics, rapid manufacturing improvements drove 1.6x to 2x performance improvements per year. Nvidia famously demanded a new iteration of a chip every six months and a new family of GPUs every 12. The actual rate of advance wasn’t quite so blistering—the GeForce 2 was a refined GeForce 256 and the GeForce 4 a refined GeForce 3—but even counting that way, Nvidia was still launching a new GPU architecture every other year. That’s not the case any longer. Moore’s Law still delivers reasonable density improvements, but new manufacturing generations no longer deliver the power efficiency and performance gains the industry once enjoyed.

Scaling rasterization performance was already complicated, but tacking ray tracing on top makes a difficult situation even harder. The hardware changes that boost ray tracing performance are not always the same ones that boost rasterization. I don’t want to suggest this is a zero-sum game, but raising your in-game resolution and enabling ray tracing at the same time puts a heavier burden on the GPU. This is why we see evidence of memory pressure on 8GB video cards when ray tracing is enabled; test the same resolution without ray tracing and the GPU performs well.

DLSS, FSR 2.0, and XeSS are three efforts toward the same goal: reducing the power, die size, and dedicated silicon required to deliver future visual improvements. The most straightforward way to improve GPU performance—and this is as true today as it was 20 years ago—is to render at a lower resolution. In the long run, the concept of “native” gaming may itself become a bit of a dinosaur. Whether that happens will depend on whether AMD, Intel, and Nvidia can build upscaling solutions that look better than native or whether they merely compete to match it. Based on where things are today, I expect these technologies will regularly beat native-resolution image quality in the future while performing better as well.
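The arithmetic behind that bet is simple enough to sketch. The per-axis scale factors in the snippet below are the commonly cited figures for FSR 2.0 and DLSS 2.x quality modes, used here as an assumption rather than a spec, but they show how few pixels each mode actually shades before the upscaler reconstructs the rest.

```python
# Rough illustration of the tradeoff: each quality mode shades a fraction of
# the output resolution, and the upscaler reconstructs the rest. The per-axis
# scale factors are the commonly cited FSR 2.0 / DLSS 2.x ratios, treated here
# as approximations rather than vendor specifications.
SCALE_PER_AXIS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Return the internal render resolution for a given output size and mode."""
    scale = SCALE_PER_AXIS[mode]
    return round(out_w / scale), round(out_h / scale)

for mode in SCALE_PER_AXIS:
    w, h = internal_resolution(3840, 2160, mode)
    fraction = (w * h) / (3840 * 2160)
    print(f"{mode:>17}: renders {w}x{h} ({fraction:.0%} of native 4K pixels)")
```

At 4K, Performance mode shades roughly a quarter of the native pixel count, which is where most of the speedup comes from.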

Future GPUs may see better performance gains per unit of die area, or per watt, from increasing on-chip resources dedicated to AI than from beefing up rasterization or ray tracing hardware. I suspect we’re still some years away from that kind of inflection point, but what gamers care about is better image quality. If AI can deliver it, GPU manufacturers will move in this direction.

I wanted to leave some thoughts on this topic because I’d actually started working on an article that addressed it. I spent about six weeks earlier this year with a Radeon 6800 XT installed as my primary GPU, using it for gaming and for my professional workstation projects. The advantage of the 6800 XT is that it has a full 16GB of VRAM, compared with GPUs like the RTX 3080 that only have 10GB. That comes in handy when upscaling: a GPU with more VRAM can upscale to higher resolutions without slowing down dramatically, and it can run more upscaling instances (or GPU applications in general) side by side. GPU applications do not always share memory space very well, and having more VRAM lessens the chance that two or more apps pick a fight with one another.
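If you want a rough sense of what “more headroom” means in practice, here is a minimal Python sketch of the kind of check I mean before kicking off another upscaling instance. I’m assuming a PyTorch build that exposes torch.cuda.mem_get_info (the CUDA and ROCm builds both route through the torch.cuda namespace), and the per-job VRAM estimate is a placeholder, not a measured figure.

```python
# Back-of-the-envelope check before launching another upscaling instance:
# is there enough free VRAM? Assumes a PyTorch build where torch.cuda is
# available (the CUDA and ROCm builds both expose this namespace). The
# per-job estimate is a made-up placeholder, not a measured figure.
import torch

EST_JOB_VRAM_GB = 6.0  # hypothetical VRAM requirement per upscaling job

def can_start_another_job(device: int = 0) -> bool:
    if not torch.cuda.is_available():
        return False
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    free_gb = free_bytes / 1024**3
    print(f"GPU {device}: {free_gb:.1f} GB free of {total_bytes / 1024**3:.1f} GB")
    return free_gb >= EST_JOB_VRAM_GB

if can_start_another_job():
    print("Enough headroom for another instance.")
else:
    print("Another instance would risk the apps fighting over VRAM.")
```

It’s a crude heuristic, but it captures why a 16GB card is more forgiving in this scenario.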

AMD’s drivers have some nifty capabilities that Nvidia’s don’t. The in-driver overlays for performance and temperature monitoring are an easy way to watch these readings in-game, and they can be useful diagnostically if you’re trying to check for heat-related instabilities or confirm that a GPU clock change has taken effect. There are options for on-the-fly setting adjustments that work quite well, and you can program the driver to change your display color and brightness options when a certain game launches. I didn’t use this last option very often, but The Last of Us has some maps that are dark and quite difficult to play, and I really appreciated being able to boost brightness for that specific game.

I swapped to an AMD GPU not long after Horizon Zero Dawn came out. Initially, the game had some visual problems, but a driver update from AMD a few days later resolved them. Overall performance across workstation applications was similar to the RTX 3080’s as long as I stayed under that card’s 10GB of VRAM; once I nudged above it, performance favored the 6800 XT.

The one downside I saw was that the Gaia HQ model in Topaz Video Enhance AI would trigger reboots if I ran more than one model simultaneously. This only happened with Gaia HQ, and it only happened on AMD Radeon silicon; all other AI models ran without issue across multiple application instances running in parallel. The Gaia HQ model was written before AMD GPU support was implemented in the software, and the problem may be a lingering bug from that era. I got in contact with both AMD and Topaz but don’t know whether it was ever fixed. No other games or applications I tested had problems.

My experience with the 6800 XT was a lot better than five years ago, when I tried to switch to a Vega 64 and soon found myself swapping back. While I intended for this to be a full article rather than a bit in a larger story, I wanted to say that my experience was positive. I didn’t get to test as many games as I wanted, but the titles I did play—Horizon Zero Dawn, Orcs Must Die 3, They Are Billions, Deathloop, and No Man’s Sky—all ran quite well. I was impressed with how quickly AMD fixed the issues with HZD, and it suggests the company is more responsive than it used to be.

I’ve ended a few thousand stories in my time, even if “prematurely murdered” would be a better way to describe the ends some of them came to, but I find myself at a loss as to how to end this one. I think I’m going to have to let a bit of video do it for me.

I owe particular thank-yous to Jamie Lendino for his willingness to edit insanely long articles at strange times of the day, my partner for periodically sacrificing the kitchen table and living room to long bouts of hardware testing, and to anyone who had to read my upscaling diatribes, rants, informative discussions in our internal ET Slack channel.

(Maybe enabling group edits on my farewell story was a bad idea.)

Thank you for your comments, emails, and support over the years. May the wind be at your backs.
