Apple’s M1 chip is the fastest chip that Apple has ever released in a Mac based on single-core CPU benchmark scores, and it beats out many high-end Intel Macs when it comes to multi-core performance. Developer Erik Engheim recently shared a deep dive into the M1 chip, exploring the reasons why Apple’s new processor is so much faster than the Intel chips that it replaces.
First and foremost, the M1 isn't a simple CPU. As Apple has explained, it's a System-on-a-Chip that combines many processors and components in one silicon package. The M1 houses an 8-core CPU, an 8-core GPU (7-core in some MacBook Air models), unified memory, an SSD controller, an image signal processor, the Secure Enclave, and tons more.
Intel and AMD also ship multiple microprocessors in a single package, but as Engheim describes, Apple has a leg up: rather than adding more general purpose CPU cores like its competitors, Apple devotes silicon to specialized chips that handle specialized tasks.
In addition to the CPU (with high-performance and high-efficiency cores) and GPU, the M1 has a Neural Engine for machine learning tasks like voice recognition and camera processing, a built-in video decoder/encoder for power-efficient conversion of video files, the Secure Enclave to handle encryption, the Digital Signal Processor for handling mathematically intensive functions like decompressing music files, and the Image Processing Unit that speeds up tasks done by image processing apps.
Notably, there's also a new unified memory architecture that lets the CPU, GPU, and other components share a single pool of memory. Rather than copying data back and forth between separate memory areas, the CPU and GPU can access the same data simultaneously, and eliminating those copies speeds up information exchange for faster overall performance.
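The copy elimination is the key point, and it can be sketched in a few lines of plain Python (a toy model with made-up `run_discrete`/`run_unified` helpers, not any real Apple API): in a discrete-memory design, data is copied into the GPU's pool and the result copied back, while in a unified design both processors touch the same buffer.

```python
# Toy illustration of discrete vs. unified memory (hypothetical model,
# not a real Apple API). The "GPU" work here is just doubling each
# element; the interesting output is the copy count.

def run_discrete(data):
    copies = 0
    gpu_buffer = list(data)                   # copy CPU -> GPU memory
    copies += 1
    gpu_buffer = [x * 2 for x in gpu_buffer]  # "GPU" computes on its copy
    result = list(gpu_buffer)                 # copy GPU -> CPU memory
    copies += 1
    return result, copies

def run_unified(data):
    copies = 0
    for i in range(len(data)):                # "GPU" writes the shared buffer
        data[i] *= 2
    return data, copies                       # CPU reads the same buffer

print(run_discrete([1, 2, 3]))  # ([2, 4, 6], 2)
print(run_unified([1, 2, 3]))   # ([2, 4, 6], 0)
```

Both models compute the same result; the discrete one needed two copies, the unified one none, which is exactly the overhead unified memory removes.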
Each of these purpose-built components speeds up a specific class of task, which adds up to the improvements people are seeing.
This is part of the reason why a lot of people working on image and video editing with the M1 Macs are seeing such speed improvements. A lot of the tasks they do can run directly on specialized hardware. That is what allows a cheap M1 Mac mini to encode a large video file without breaking a sweat, while an expensive iMac has all its fans going full blast and still cannot keep up.
Specialized chips have been in use for years, but Apple is taking a "more radical shift towards this direction," as Engheim describes. Other Arm chip makers are taking a similar approach, but Intel and AMD rely on selling general purpose CPUs, and for licensing reasons, PC manufacturers like Dell and HP are likely not able to design a full SoC in house the way Apple can.
Apple is able to integrate hardware and software in a way that's difficult for most other companies to replicate, which has long given the iPhone and iPad an edge over other smartphones and tablets.
Sure, Intel and AMD may simply begin to sell whole finished SoCs. But what are these to contain? PC makers may each have different ideas about that, and you potentially get a conflict between Intel, AMD, Microsoft, and PC makers about what sort of specialized chips should be included, because these will need software support.
Along with the benefits of an in-house designed System-on-a-Chip, the M1 uses Firestorm CPU cores that are "genuinely fast" and able to execute more instructions in parallel through Out-of-Order execution, the RISC architecture, and specific optimizations Apple has implemented, which Engheim explains in depth.
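The parallelism win from Out-of-Order execution can be illustrated with a toy scheduler (a sketch only, nothing like real hardware): each instruction takes one cycle and may issue as soon as its inputs are ready, up to a fixed issue width per cycle. A chain of dependent instructions serializes no matter how wide the machine is, while independent instructions fill the extra issue slots.

```python
# Toy out-of-order scheduler (illustrative only). Instructions are
# (dest_register, [source_registers]); each takes one cycle, and up to
# `width` may issue per cycle once all their sources are ready.

def cycles(instructions, width):
    ready = {}                   # register -> cycle its value becomes ready
    pending = list(instructions)
    cycle = 0
    while pending:
        cycle += 1
        issued = []
        for ins in pending:
            dest, srcs = ins
            # A source missing from `ready` has not been produced yet.
            if len(issued) < width and all(
                ready.get(s, cycle) <= cycle - 1 for s in srcs
            ):
                issued.append(ins)
        for dest, _ in issued:
            ready[dest] = cycle
        pending = [i for i in pending if i not in issued]
    return cycle

# A dependent chain cannot overlap: 4 cycles even on a 4-wide machine.
chain = [("a", []), ("b", ["a"]), ("c", ["b"]), ("d", ["c"])]
# Independent instructions fill the machine: 4 instructions in 1 cycle.
independent = [("a", []), ("b", []), ("c", []), ("d", [])]
print(cycles(chain, 4))        # 4
print(cycles(independent, 4))  # 1
```

The wider the machine and the more independent instructions the hardware can find, the more of this free parallelism it harvests, which is the property Engheim credits the Firestorm cores with exploiting aggressively.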
Engheim believes that Intel and AMD are in a tough spot because of the limitations of the CISC instruction set and of business models that don't make it easy to create end-to-end chip solutions for PC manufacturers.
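One of those limitations, per Engheim's argument, is instruction decoding: x86's variable-length CISC instructions (1 to 15 bytes) mean a decoder cannot know where one instruction ends and the next begins without working through them in order, while Arm's fixed 4-byte instructions put every boundary at a known offset, so many decoders can work in parallel. A toy sketch with made-up encodings (here the first byte of each "variable" instruction encodes its own length; real Arm and x86 encodings differ):

```python
# Toy sketch of why fixed-width instructions are easy to decode in
# parallel (hypothetical byte streams, not real Arm/x86 encodings).

def boundaries_fixed(stream, width=4):
    # Every boundary is a multiple of `width`, known before decoding:
    # each decoder can jump straight to its own instruction.
    return list(range(0, len(stream), width))

def boundaries_variable(stream):
    # An instruction's length is only known after examining it, so
    # boundaries must be discovered one after another, sequentially.
    offsets, i = [], 0
    while i < len(stream):
        offsets.append(i)
        i += stream[i]  # first byte encodes this instruction's length
    return offsets

fixed = bytes(16)                     # four 4-byte instructions
variable = bytes([2, 0, 3, 0, 0, 1])  # instructions of length 2, 3, 1
print(boundaries_fixed(fixed))        # [0, 4, 8, 12]
print(boundaries_variable(variable))  # [0, 2, 5]
```

In the fixed case the boundary list is computed with no reference to the bytes at all, which is what lets a wide Arm front end feed many decoders at once; in the variable case each boundary depends on decoding everything before it.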
Engheim’s full article is well worth reading for those who are interested in how the M1 works and the technology that Apple has adopted to take a giant leap forward in computing performance.