Which of the Following Is True About RISC CPU Hardware?

The processor executes each instruction in a minimal number of clock cycles.

The Sun Microsystems UltraSPARC processor is a type of RISC microprocessor.

In computer engineering, a reduced instruction set computer (RISC) is a computer designed to simplify the individual instructions given to the computer in order to accomplish a task. Compared with a complex instruction set computer (CISC), a RISC computer might require more instructions (more code) to accomplish a task, because the individual instructions are written in simpler code. The goal is to offset the need to process more instructions by increasing the speed of each instruction; in particular, implementing an instruction pipeline may be simpler given simpler instructions.[1]

The central operational concept of the RISC computer is that each instruction performs only one function (e.g., copy a value from memory to a register). The RISC computer usually has many (16 or 32) high-speed, general-purpose registers with a load/store architecture, in which the register-register instructions (for performing arithmetic and tests) are separate from the instructions that access the main memory of the computer. The design of the CPU allows RISC computers few simple addressing modes[2] and predictable instruction times that simplify design of the system as a whole.
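As an illustration of this load/store split, the following Python sketch models a toy machine; the register count, operation names, and string addresses are assumptions made for the example, not features of any real ISA. A memory-to-memory addition, which a CISC machine might express as a single instruction, becomes an explicit load, register-register add, and store sequence:

```python
# Toy load/store machine (illustrative sketch, not a real ISA).
memory = {"a": 2, "b": 3, "c": 0}
regs = [0] * 32                      # many general-purpose registers

def load(rd, addr):                  # only loads and stores touch memory
    regs[rd] = memory[addr]

def store(rs, addr):
    memory[addr] = regs[rs]

def add(rd, rs1, rs2):               # arithmetic works only on registers
    regs[rd] = regs[rs1] + regs[rs2]

# A CISC-style "ADD c, a, b" (one memory-to-memory instruction) expands to:
load(1, "a")
load(2, "b")
add(3, 1, 2)
store(3, "c")
print(memory["c"])                   # 5
```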

The conceptual development of the RISC computer architecture began with the IBM 801 project in the late 1970s, but these ideas were not immediately put into use. Designers in California picked up the 801 concepts in two seminal projects, Stanford MIPS and Berkeley RISC. These were commercialized in the 1980s as the MIPS and SPARC systems. IBM eventually produced RISC designs based on further work on the 801 concept: the IBM POWER instruction set architecture, PowerPC, and Power ISA. As the projects matured, many similar designs produced in the late 1980s and early 1990s created the central processing units that increased the commercial utility of the Unix workstation and of embedded processors in the laser printer, the router, and similar products.

Varieties of RISC processor design include the ARC processor, the DEC Alpha, the AMD Am29000, the ARM architecture, the Atmel AVR, Blackfin, Intel i860, Intel i960, LoongArch, Motorola 88000, the MIPS architecture, PA-RISC, the Power ISA, RISC-V, SuperH, and SPARC. RISC processors are used in supercomputers such as Fugaku.[3]

History and development

A number of systems, going back to the 1960s, have been credited as the first RISC architecture, partly based on their use of the load/store approach.[4] The term RISC was coined by David Patterson of the Berkeley RISC project, although somewhat similar concepts had appeared before.[5]

The CDC 6600, designed by Seymour Cray in 1964, used a load/store architecture with only two addressing modes (register+register, and register+immediate constant) and 74 operation codes, with the basic clock cycle being ten times faster than the memory access time.[6] Partly because of the optimized load/store architecture of the CDC 6600, Jack Dongarra says that it can be considered a precursor of modern RISC systems, although a number of other technical barriers needed to be overcome for the development of a modern RISC system.[7]

IBM 801

Michael J. Flynn views the first RISC system as the IBM 801 design,[2] begun in 1975 by John Cocke and completed in 1980. The 801 developed out of an effort to build a 24-bit high-speed processor to use as the basis for a digital telephone switch. To reach their switching goal of 300 calls per second (1 million per hour), they calculated that the CPU required performance on the order of 12 MIPS,[8] compared to their fastest mainframe machine of the time, the 370/168, which performed at 3.5 MIPS.[9]

The design was based on a study of IBM's extensive collection of statistics gathered on their existing platforms. These demonstrated that code in high-performance settings made extensive use of registers, and that programs frequently ran out of them. This suggested that additional registers would further improve performance. Additionally, they noticed that compilers generally ignored the vast majority of the available instructions, especially orthogonal addressing modes. Instead, they selected the fastest version of any given instruction and then constructed small routines using it. This suggested that the majority of instructions could be removed without affecting the resulting code. These two conclusions worked in concert: removing instructions would allow the instruction opcodes to be shorter, freeing up bits in the instruction which could then be used to select among a larger set of registers.[8]

The telephone switch program was canceled in 1975, but by then the team had demonstrated that the same design would offer significant performance gains running just about any code. In simulations, they showed that a compiler tuned to use registers wherever possible would run code about three times as fast as traditional designs. Somewhat surprisingly, the same code would run about 50% faster even on existing machines due to the improved register use. In practice, their experimental PL/8 compiler, a slightly cut-down version of PL/1, consistently produced code that ran much faster on their mainframes.[8]

A 32-bit version of the 801 was eventually produced in single-chip form as the IBM ROMP in 1981, which stood for 'Research OPD [Office Products Division] Micro Processor'.[10] This CPU was designed for "mini" tasks and was also used in the IBM RT PC in 1986, which turned out to be a commercial failure.[11] But the 801 inspired several research projects, including new ones at IBM that would eventually lead to the IBM POWER instruction set architecture.[12][13]

RISC and MIPS

By the late 1970s, the 801 had become well known in the industry. This coincided with new fabrication techniques that were allowing more complex chips to come to market. The Zilog Z80 of 1976 had 8,000 transistors, whereas the 1979 Motorola 68000 (68k) had 68,000. These newer designs generally used their newfound complexity to expand the instruction set and make it more orthogonal. Most, like the 68k, used microcode to do this, reading instructions and re-implementing them as a sequence of simpler internal instructions. In the 68k, a full 1/3 of the transistors were used for this microcoding.[14]

In 1979, David Patterson was sent on a sabbatical from the University of California, Berkeley to help DEC's West Coast team improve the VAX microcode. Patterson was struck by the complexity of the coding process and concluded it was untenable.[15] He first wrote a paper on ways to improve microcoding, but later changed his mind and decided microcode itself was the problem. With funding from the DARPA VLSI Program, Patterson started the Berkeley RISC effort. The program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics. Considering a variety of programs from their BSD Unix variant, the Berkeley team found, as had IBM, that most programs made no use of the large variety of instructions in the 68k.[16]

Patterson's early work pointed out an important problem with the traditional more-is-better approach: even those instructions that were critical to overall performance were being delayed by their trip through the microcode. If the microcode were removed, the programs would run faster. And since the microcode ultimately took a complex instruction and broke it into steps, there was no reason the compiler couldn't do this instead. These studies suggested that, even with no other changes, one could make a chip with 1/3 fewer transistors that would run faster.[16] IBM's 801 team had also noticed this; when compilers were faced with a choice of possible opcodes, they would choose the one the authors knew had been optimized to run the fastest. This meant the microcode, which constructed a sequence of operations to perform the opcode, was always doing the same thing over and over. That task introduced a delay that could be eliminated if the microcode were removed and the one opcode actually being used were directly available to the compiler.[citation needed]

It was also discovered that, on microcoded implementations of certain architectures, complex operations tended to be slower than a sequence of simpler operations doing the same thing. This was in part an effect of the fact that many designs were rushed, with little time to optimize or tune every instruction; only those used most frequently were optimized, and a sequence of those instructions could be faster than a less-tuned instruction performing an equivalent operation. One infamous example was the VAX's INDEX instruction.[17]

The Berkeley work also turned up a number of additional points. Among these was the fact that programs spent a significant amount of time performing subroutine calls and returns, and it seemed there was the potential to improve overall performance by speeding up these calls. This led the Berkeley design to select a method known as register windows, which can significantly improve subroutine performance, although at the cost of some complexity. They also noticed that the majority of mathematical instructions were simple assignments; only 1/3 of them actually performed an operation such as addition or subtraction. But when those operations did occur, they tended to be slow. This led to far more emphasis on the underlying arithmetic data unit, as opposed to previous designs where the majority of the chip was dedicated to control and microcode.[16]

The resulting Berkeley RISC was based on gaining performance through the use of pipelining and aggressive use of register windowing.[17][18] In a traditional CPU, one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there are a huge number of registers, e.g., 128, but programs can only use a small number of them, e.g., eight, at any one time. A program that limits itself to eight registers per procedure can make very fast procedure calls: the call simply moves the window "down" by eight, to the set of eight registers used by that procedure, and the return moves the window back.[19] The Berkeley RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC designs of the era), RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.[18]
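As a rough illustration of the register-window idea, here is a minimal Python sketch. It assumes 128 physical registers with eight visible at a time, and it omits the overlapping windows and spill/fill handling that the real Berkeley design used to pass arguments and to cope with deep call chains:

```python
# Simplified register windowing (sketch; real Berkeley windows overlap between caller and callee).
class WindowedRegisterFile:
    def __init__(self, total=128, window=8):
        self.regs = [0] * total
        self.window = window
        self.base = 0                      # current window pointer

    def read(self, r):                     # r is 0..7 as seen by the running procedure
        return self.regs[self.base + r]

    def write(self, r, value):
        self.regs[self.base + r] = value

    def call(self):                        # a procedure call just slides the window down
        self.base += self.window

    def ret(self):                         # return slides it back; no memory traffic
        self.base -= self.window

rf = WindowedRegisterFile()
rf.write(0, 42)                            # caller's register 0
rf.call()
rf.write(0, 7)                             # callee's register 0 is a different physical register
rf.ret()
print(rf.read(0))                          # 42 -- the caller's value survived the call
```

Because a call or return only adjusts a pointer, windowed procedure calls avoid the memory traffic of saving and restoring registers on a stack.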

As the RISC project began to become known in Silicon Valley, a similar project began at Stanford University in 1981. This MIPS project grew out of a graduate course taught by John L. Hennessy, produced a functioning system in 1983, and could run simple programs by 1984.[20] The MIPS approach emphasized an aggressive clock cycle and the use of the pipeline, making sure it could be kept as "full" as possible.[20] The MIPS system was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems.[20][21] The commercial venture resulted in a new architecture, also called MIPS, and the R2000 microprocessor in 1985.[21]

The overall philosophy of the RISC concept was widely understood by the second half of the 1980s, and led the designers of the MIPS-X to put it this way in 1987:

The goal of any instruction format should be: 1. simple decode, 2. simple decode, and 3. simple decode. Any attempts at improved code density at the expense of CPU performance should be ridiculed at every opportunity.[22]

Commercial breakout

RISC-V prototype chip (2013).

In the early 1980s, significant uncertainties surrounded the RISC concept. One concern involved the use of memory; a single instruction from a traditional processor like the 68k may be written out as perhaps half a dozen of the simpler RISC instructions. In theory, this could slow the system down as it spent more time fetching instructions from memory. But by the mid-1980s, the concepts had matured enough to be seen as commercially viable.[11][20]

Commercial RISC designs began to emerge in the mid-1980s. The first MIPS R2000 appeared in January 1986, followed shortly thereafter by Hewlett-Packard's PA-RISC in some of their computers.[11] In the meantime, the Berkeley effort had become so well known that it eventually became the name for the entire concept. In 1987 Sun Microsystems began shipping systems with the SPARC processor, directly based on the Berkeley RISC II system.[11][23] The US government Committee on Innovations in Computing and Communications credits the acceptance of the viability of the RISC concept to the success of the SPARC system.[11] The success of SPARC renewed interest within IBM, which released new RISC systems by 1990, and by 1995 RISC processors were the foundation of a $15 billion server industry.[11]

By the later 1980s, the new RISC designs were easily outperforming all traditional designs by a wide margin. At that point, all of the other vendors began RISC efforts of their own. Among these were the DEC Alpha, AMD Am29000, Intel i860 and i960, Motorola 88000, IBM POWER, and, slightly later, the IBM/Apple/Motorola PowerPC. Many of these have since disappeared because they often offered no competitive advantage over others of the same era. Those that remain are often used only in niche markets or as parts of other systems; only SPARC and POWER have any significant remaining market. The outlier is ARM, which, in partnership with Apple, developed a low-power design and then specialized in that market, which at the time was a niche. With the rise of mobile computing, especially after the introduction of the iPhone, ARM is now the most widely used high-end CPU design in the market.

Competition between RISC and conventional CISC approaches was also the subject of theoretical analysis in the early 1980s, leading for example to the iron law of processor performance.

Since 2010 a new open-source instruction set architecture (ISA), RISC-V, has been under development at the University of California, Berkeley, for research purposes and as a free alternative to proprietary ISAs. As of 2014, version 2 of the user-space ISA is fixed.[24] The ISA is designed to be extensible from a barebones core sufficient for a small embedded processor up to supercomputer and cloud-computing use, with standard and chip-designer-defined extensions and coprocessors. It has been tested in a silicon design with the Rocket SoC, which is also available as an open-source processor generator written in the Chisel language.

Characteristics and design philosophy

Instruction set philosophy

A common misunderstanding of the phrase "reduced instruction set computer" is that instructions are simply eliminated, resulting in a smaller set of instructions.[25] In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs.[26][27] Some RISC processors such as the PowerPC have instruction sets as large as the CISC IBM System/370, for example; conversely, the DEC PDP-8—clearly a CISC CPU because many of its instructions involve multiple memory accesses—has only 8 basic instructions and a few extended instructions.[28] The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles to execute a single instruction.[29]

The term load/store architecture is sometimes preferred.

Another way of looking at the RISC/CISC debate is to consider what is exposed to the compiler. In a CISC processor, the hardware may internally use registers and flag bits in order to implement a single complex instruction such as STRING MOVE, but hide those details from the compiler. The internal operations of a RISC processor are "exposed to the compiler", leading to the backronym 'Relegate Interesting Stuff to the Compiler'.[30][31]

Instruction format

Most RISC architectures have fixed-length instructions (commonly 32 bits) and a simple encoding, which simplifies fetch, decode, and issue logic considerably. One drawback of 32-bit instructions is reduced code density, which is a more adverse characteristic in embedded computing than it is in the workstation and server markets RISC architectures were originally designed to serve. To address this problem, several architectures, such as ARM, Power ISA, MIPS, RISC-V, and the Adapteva Epiphany, have an optional short, feature-reduced instruction format or an instruction compression feature. The SH5 also follows this pattern, albeit having evolved in the opposite direction, adding longer media instructions to an original 16-bit encoding.
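Because every field of a fixed 32-bit instruction sits at a known bit position, decoding reduces to a few shifts and masks. The sketch below assumes the RISC-V R-type layout as a concrete example; field positions differ in other RISC instruction sets:

```python
# Decoding a 32-bit R-type word by fixed bit positions (RISC-V layout assumed).
def decode_rtype(word: int) -> dict:
    return {
        "opcode": word         & 0x7F,   # bits 6..0
        "rd":     (word >> 7)  & 0x1F,   # bits 11..7
        "funct3": (word >> 12) & 0x07,   # bits 14..12
        "rs1":    (word >> 15) & 0x1F,   # bits 19..15
        "rs2":    (word >> 20) & 0x1F,   # bits 24..20
        "funct7": (word >> 25) & 0x7F,   # bits 31..25
    }

# 0x002081B3 encodes "add x3, x1, x2": opcode 0x33, rd=3, rs1=1, rs2=2.
print(decode_rtype(0x002081B3))
```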

Hardware utilization

For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and increase internal parallelism.

Other features of RISC architectures include:

  • Processor average throughput approaching one instruction per cycle[needs update]
  • Uniform instruction format, using a single word with the opcode in the same bit positions for simpler decoding
  • All general-purpose registers can be used equally as source/destination in all instructions, simplifying compiler design (floating-point registers are often kept separate)
  • Simple addressing modes, with complex addressing performed by instruction sequences
  • Few data types in hardware (no byte string or BCD, for example)

RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that modifying the memory where code is held might not have any effect on the instructions executed by the processor (because the CPU has separate instruction and data caches), at least until a special synchronization instruction is issued. CISC processors that have separate instruction and data caches generally keep them synchronized automatically, for backwards compatibility with older processors.

Many early RISC designs also shared the characteristic of having a branch delay slot, an instruction space immediately following a jump or branch. The instruction in this space is executed whether or not the branch is taken (in other words, the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC and more recent versions of SPARC and MIPS).[citation needed]
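The following Python sketch, a toy model rather than any real ISA, shows the defining behavior: the instruction placed in the delay slot still executes when the branch is taken, because the branch only takes effect one instruction later:

```python
# Toy interpreter with a one-instruction branch delay slot (illustrative sketch).
regs = {"r1": 0, "r2": 0, "r3": 0}
program = [
    ("set", "r1", 5),        # 0
    ("beq", "r1", 5, 4),     # 1: branch to index 4 if r1 == 5 (taken here)
    ("set", "r2", 1),        # 2: delay slot -- executes even though the branch is taken
    ("set", "r3", 99),       # 3: skipped
    ("halt",),               # 4: branch target
]

pc, pending_target = 0, None
while program[pc][0] != "halt":
    op = program[pc]
    next_pc = pc + 1
    if op[0] == "set":
        regs[op[1]] = op[2]
    elif op[0] == "beq" and regs[op[1]] == op[2]:
        pending_target = op[3]           # branch resolves only after the next instruction
        pc = next_pc
        continue
    if pending_target is not None:       # the delay-slot instruction has now run
        next_pc, pending_target = pending_target, None
    pc = next_pc

print(regs)                              # {'r1': 5, 'r2': 1, 'r3': 0}
```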

Some aspects attributed to the first RISC-labeled designs around 1975 include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate manual assembly coding, and that complex addressing modes take many cycles to perform due to the required additional memory accesses. It was argued that such functions would be better performed by sequences of simpler instructions if this could yield implementations small enough to leave room for many registers, reducing the number of slow memory accesses. In these simple designs, most instructions are of uniform length and similar structure, arithmetic operations are restricted to CPU registers, and only separate load and store instructions access memory. These properties enable a better balancing of pipeline stages than before, making RISC pipelines significantly more efficient and allowing higher clock frequencies.

Yet another impetus for both RISC and other designs came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that processors often had oversized immediates. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet many CPU designs dedicated 16 or 32 bits to store them. This suggests that, to reduce the number of memory accesses, a fixed-length machine could store constants in unused bits of the instruction word itself, so that they would be immediately ready when the CPU needs them (much like immediate addressing in a conventional design). This required small opcodes in order to leave room for a reasonably sized constant in a 32-bit instruction word.
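The arithmetic behind that observation is straightforward: a signed 13-bit field spans −4096 to 4095, so the great majority of program constants can live directly in the instruction word. The helper below is a hypothetical illustration, not part of any particular ISA definition:

```python
# Does a constant fit in a signed immediate field of the given width? (illustrative helper)
def fits_signed_immediate(value: int, bits: int = 13) -> bool:
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return lo <= value <= hi

print(fits_signed_immediate(100))     # True  -> encode it in the instruction word
print(fits_signed_immediate(70000))   # False -> must be built in a register or loaded from memory
```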

Since many real-world programs spend most of their time executing simple operations, some researchers decided to focus on making those operations as fast as possible. The clock rate of a CPU is limited by the time it takes to execute the slowest sub-operation of any instruction; decreasing that cycle time often accelerates the execution of other instructions.[32] The focus on "reduced instructions" led to the resulting machine being called a "reduced instruction set computer" (RISC). The goal was to make instructions so simple that they could easily be pipelined, in order to achieve single-clock throughput at high frequencies.

Later, it was noted that one of the most significant characteristics of RISC processors was that external memory was only accessible by a load or store instruction. All other instructions were limited to internal registers. This simplified many aspects of processor design: allowing instructions to be fixed-length, simplifying pipelines, and isolating the logic for dealing with the delay in completing a memory access (cache miss, etc.) to only two instructions. This led to RISC designs being referred to as load/store architectures.[33]

Comparison to other architectures

Some CPUs have been specifically designed to have a very small set of instructions, but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC) or transport triggered architecture (TTA).

RISC architectures have traditionally had few successes in the desktop PC and commodity server markets, where the x86-based platforms remain the dominant processor architecture. However, this may change, as ARM-based processors are being developed for higher-performance systems.[34] Manufacturers including Cavium, AMD, and Qualcomm have released server processors based on the ARM architecture.[35][36] ARM further partnered with Cray in 2017 to produce an ARM-based supercomputer.[37] On the desktop, Microsoft announced that it planned to support the PC version of Windows 10 on Qualcomm Snapdragon-based devices in 2017 as part of its partnership with Qualcomm. These devices support Windows applications compiled for 32-bit x86 via an x86 processor emulator that translates 32-bit x86 code to ARM64 code.[38][39] Apple announced it will transition its Mac desktop and laptop computers from Intel processors to internally developed ARM64-based SoCs called Apple silicon; the first such computers, using the Apple M1 processor, were released in November 2020.[40] Macs with Apple silicon can run x86-64 binaries with Rosetta 2, an x86-64 to ARM64 translator.[41]

Outside of the desktop arena, however, the ARM RISC architecture is in widespread use in smartphones, tablets, and many forms of embedded devices. While early RISC designs differed significantly from contemporary CISC designs, by 2000 the highest-performing CPUs in the RISC line were almost indistinguishable from the highest-performing CPUs in the CISC line.[42][43][44]

Use of RISC architectures

RISC architectures are now used across a range of platforms, from smartphones and tablet computers to some of the world's fastest supercomputers such as Fugaku, the fastest on the TOP500 list as of November 2020, and Summit, Sierra, and Sunway TaihuLight, the next three on that list.[45]

Low-end and mobile systems

By the beginning of the 21st century, the majority of low-end and mobile systems relied on RISC architectures.[46] Examples include:

  • The ARM architecture dominates the market for low-power and low-cost embedded systems (typically 200–1800 MHz in 2014). It is used in a number of systems such as most Android-based systems, the Apple iPhone and iPad, Microsoft Windows Phone (formerly Windows Mobile), RIM devices, the Nintendo Game Boy Advance, DS, 3DS, and Switch, the Raspberry Pi, etc.
  • IBM's PowerPC was used in the GameCube, Wii, PlayStation 3, Xbox 360, and Wii U gaming consoles.
  • The MIPS line (at one point used in many SGI computers) was used in the PlayStation, PlayStation 2, Nintendo 64, and PlayStation Portable game consoles, and residential gateways like the Linksys WRT54G series.
  • Hitachi's SuperH, originally in wide use in the Sega Super 32X, Saturn, and Dreamcast, now developed and sold by Renesas as the SH4.
  • Atmel AVR, used in a variety of products ranging from Xbox handheld controllers and the Arduino open-source microcontroller platform to BMW cars.
  • RISC-V, the open-source fifth Berkeley RISC ISA, with 32- or 64-bit address spaces, a small core integer instruction set, an experimental "Compressed" ISA for code density, and designed for standard and special-purpose extensions.

Desktop and laptop computers

  • IBM's PowerPC architecture was used in Apple's Macintosh computers from 1994, when they began a switch from Motorola 68000 family processors, to 2005, when they transitioned to Intel x86 processors.[47]
  • Some Chromebooks have used ARM-based platforms since 2012.[48]
  • Apple has used self-designed processors based on the ARM architecture for its lineup of desktop and laptop computers since its transition from Intel processors;[49] the first such computers were released in November 2020.[40]
  • Microsoft uses Qualcomm[50] ARM-based processors for its Surface line.

Workstations, servers, and supercomputers

  • MIPS, by Silicon Graphics (ceased making MIPS-based systems in 2006).
  • SPARC, by Oracle (previously Sun Microsystems) and Fujitsu.
  • IBM's POWER instruction set architecture, PowerPC, and Power ISA were and are used in many of IBM's supercomputers, mid-range servers, and workstations.
  • Hewlett-Packard's PA-RISC, also known as HP-PA (discontinued at the end of 2008).
  • Alpha, used in single-board computers, workstations, servers, and supercomputers from Digital Equipment Corporation, then Compaq, and finally Hewlett-Packard (HP) (discontinued as of 2007).
  • RISC-V, the open-source fifth Berkeley RISC ISA, with 64- or 128-bit address spaces and the integer core extended with floating point, atomics, and vector processing, and designed to be extended with instructions for networking, I/O, and data processing. A 64-bit superscalar design, "Rocket", is available for download. It is implemented in the European Processor Initiative processor.

See also

  • Addressing mode
  • Classic RISC pipeline
  • Complex instruction set computer
  • Instruction set architecture
  • Microprocessor
  • Minimal instruction set computer

References

  1. ^ Berezinski, John. "RISC: Reduced Instruction Set Computer". Department of Computer Science, Northern Illinois University. Archived from the original on 28 February 2017.
  2. ^ a b Flynn, Michael J. (1995). Computer Architecture: Pipelined and Parallel Processor Design. pp. 54–56. ISBN 0867202041.
  3. ^ "Japan's Fugaku gains title as world's fastest supercomputer". RIKEN. Retrieved 24 June 2020.
  4. ^ Fisher, Joseph A.; Faraboschi, Paolo; Young, Cliff (2005). Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools. p. 55. ISBN 1558607668.
  5. ^ Reilly, Edwin D. (2003). Milestones in Computer Science and Information Technology. p. 50. ISBN 1-57356-521-0.
  6. ^ Grishman, Ralph (1974). Assembly Language Programming for the Control Data 6000 Series and the Cyber 70 Series. Algorithmics Press. p. 12. OCLC 425963232.
  7. ^ Dongarra, Jack J.; et al. (1987). Numerical Linear Algebra on High-Performance Computers. p. 6. ISBN 0-89871-428-1.
  8. ^ a b c Cocke, John; Markstein, Victoria (January 1990). "The evolution of RISC technology at IBM" (PDF). IBM Journal of Research and Development. 34 (1): 4–11. doi:10.1147/rd.341.0004.
  9. ^ IBM System/370 System Summary (Technical report). IBM. January 1987.
  10. ^ Šilc, Jurij; Robič, Borut; Ungerer, Theo (1999). Processor Architecture: From Dataflow to Superscalar and Beyond. p. 33. ISBN 3-540-64798-8.
  11. ^ a b c d e f Funding a Revolution: Government Support for Computing Research by Committee on Innovations in Computing and Communications, 1999, ISBN 0-309-06278-0, page 239.
  12. ^ Nurmi, Jari (2007). Processor Design: System-on-Chip Computing for ASICs and FPGAs. pp. 40–43. ISBN 978-1-4020-5529-4.
  13. ^ Hill, Mark Donald; Jouppi, Norman Paul; Sohi, Gurindar (1999). Readings in Computer Architecture. pp. 252–4. ISBN 1-55860-539-8.
  14. ^ Starnes, Thomas (May 1983). "Design Philosophy Behind Motorola's MC68000". Byte. p. Photo 1.
  15. ^ Patterson, David (30 May 2018). "RISCy History". ACM SIGARCH.
  16. ^ a b c "Example: Berkeley RISC II".
  17. ^ a b Patterson, D. A.; Ditzel, D. R. (1980). "The case for the reduced instruction set computer". ACM SIGARCH Computer Architecture News. 8 (6): 25–33. CiteSeerX 10.1.1.68.9623. doi:10.1145/641914.641917. S2CID 12034303.
  18. ^ a b Patterson, David A.; Sequin, Carlo H. (1981). RISC I: A Reduced Instruction Set VLSI Computer. 8th Annual Symposium on Computer Architecture. Minneapolis, MN, USA. pp. 443–457. doi:10.1145/285930.285981.
  19. ^ Sequin, Carlo; Patterson, David (July 1982). Design and Implementation of RISC I (PDF). Advanced Course on VLSI Architecture. University of Bristol. CSD-82-106.
  20. ^ a b c d Chow, Paul (1989). The MIPS-X RISC Microprocessor. pp. 19–20. ISBN 0-7923-9045-8.
  21. ^ a b Nurmi 2007, pp. 52–53.
  22. ^ Weaver, Vincent; McKee, Sally. Code Density Concerns for New Architectures (PDF). ICCD 2009.
  23. ^ Tucker, Allen B. (2004). Computer Science Handbook. pp. 100–6. ISBN 1-58488-360-X.
  24. ^ Waterman, Andrew; Lee, Yunsup; Patterson, David A.; Asanović, Krste. "The RISC-V Instruction Set Manual, Volume I: Base User-Level ISA version 2 (Technical Report EECS-2014-54)". University of California, Berkeley. Retrieved 26 December 2014.
  25. ^ Esponda, Margarita; Rojas, Raúl (September 1991). "Section 2: The confusion around the RISC concept". The RISC Concept — A Survey of Implementations. Freie Universität Berlin. B-91-12.
  26. ^ Stokes, Jon "Hannibal". "RISC vs. CISC: the Post-RISC Era". Ars Technica.
  27. ^ Borrett, Lloyd (June 1991). "RISC versus CISC". Australian Personal Computer.
  28. ^ Jones, Douglas W. "Doug Jones's DEC PDP-8 FAQs". PDP-8 Collection, The University of Iowa Department of Computer Science.
  29. ^ Dandamudi, Sivarama P. (2005). "Ch. 3: RISC Principles". Guide to RISC Processors for Programmers and Engineers. Springer. pp. 39–44. doi:10.1007/0-387-27446-4_3. ISBN 978-0-387-21017-9. "the principal goal was not to reduce the number of instructions, but the complexity"
  30. ^ Walls, Colin (18 April 2016). "CISC and RISC".
  31. ^ Fisher, Joseph A.; Faraboschi, Paolo; Young, Cliff (2005). Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools. p. 57. ISBN 9781558607668.
  32. ^ "Microprocessors From the Programmer's Perspective" by Andrew Schulman, 1990.
  33. ^ Dowd, Kevin; Loukides, Michael K. (1993). High Performance Computing. O'Reilly. ISBN 1565920325.
  34. ^ Vincent, James (9 March 2017). "Microsoft unveils new ARM server designs, threatening Intel's dominance". The Verge. Retrieved 12 May 2017.
  35. ^ Russell, John (31 May 2016). "Cavium Unveils ThunderX2 Plans, Reports ARM Traction is Growing". HPC Wire. Retrieved 8 March 2017.
  36. ^ "AMD's first ARM-based processor, the Opteron A1100, is finally here". ExtremeTech. 14 January 2016. Retrieved 14 August 2016.
  37. ^ Feldman, Michael (18 January 2017). "Cray to Deliver ARM-Powered Supercomputer to UK Consortium". Top500.org. Retrieved 12 May 2017.
  38. ^ "Microsoft is bringing Windows desktop apps to mobile ARM processors". The Verge. Vox Media. 8 December 2016. Retrieved 8 December 2016.
  39. ^ "How x86 emulation works on ARM". Microsoft Docs. 15 February 2018.
  40. ^ a b "Introducing the next generation of Mac" (Press release). Apple Inc. 10 November 2020.
  41. ^ "macOS Big Sur is here" (Press release). Apple Inc. 12 November 2020.
  42. ^ Carter, Nicholas P. (2002). Schaum's Outline of Computer Architecture. p. 96. ISBN 0-07-136207-X.
  43. ^ Jones, Douglas L. (2000). "CISC, RISC, and DSP Microprocessors" (PDF).
  44. ^ Singh, Amit. "A History of Apple's Operating Systems". Archived from the original on 3 April 2020. "the line between RISC and CISC has been growing fuzzier over the years"
  45. ^ "Top 500 The List: November 2020". TOP500. Retrieved 2 January 2021.
  46. ^ Dandamudi 2005, pp. 121–123.
  47. ^ Bennett, Amy (2005). "Apple shifting from PowerPC to Intel". Computerworld. Retrieved 24 August 2020.
  48. ^ Vaughan-Nichols, Steven J. "Review: The ARM-powered Samsung Chromebook". ZDNet. Retrieved 28 April 2021.
  49. ^ DeAngelis, Marc (22 June 2020). "Apple starts its two-year transition to ARM this week". Engadget. Retrieved 24 August 2020. "Apple has officially announced that it will be switching from Intel processors to its own ARM-based, A-series chips in its Mac computers."
  50. ^ "Microsoft to launch a new ARM-based Surface this fall". www.msn.com. Retrieved 28 April 2021.

External links

  • "RISC vs. CISC". RISC Architecture. Stanford University. 2000.
  • "What is RISC". RISC Architecture. Stanford University. 2000.
  • Savard, John J. Thou. "Not Quite RISC". Computers.
  • Mashey, John R. (5 September 2000). "Yet Another Mail service of the Old RISC Post [unchanged from last fourth dimension]". Newsgroup: comp.curvation. Usenet: 8p20b0$dhh$3@murrow.corp.sgi.com. Nth re-posting of CISC vs RISC (or what is RISC, really)
