Security Challenges of RISC-V Processors—A Brief Analysis of Architecture Vulnerabilities and Attack Risks

Background

In the summer of 2010, a new open-source instruction set, RISC-V, was born at the University of California, Berkeley, igniting a spark in the chip field.

Over the following decade, RISC-V architecture designs blossomed across academia and industry.

Examples include a variety of CPU and domain-specific accelerator (DSA) chip designs based on the RISC-V architecture: Alibaba's XuanTie processor series, Berkeley's SonicBOOM/Rocket Chip processors, MIT's RiscyOO out-of-order processors, and more.

It is fair to say that the RISC-V architecture has broken the monopoly of x86, Arm, Power, and other architectures, and its more modern design is nurturing the next generation of opportunities in information technology. At the recent Embedded World expo, Calista Redmond, CEO of RISC-V International, stated that “there are an estimated 10 billion RISC-V cores on the market.”

The tree may wish for calm, but the wind will not stop. As RISC-V takes root in cutting-edge fields such as big data, 5G, the Internet of Things, VR, and edge computing, problems with instruction-set completeness, virtualization design, RAS extensions, and the security architecture have begun to surface in different deployments. In particular, as the security field develops, ensuring the security of a RISC-V-based SoC has become a central concern for RISC-V architecture developers.

This article briefly analyzes the security architecture of RISC-V processors, surveys current architectural vulnerabilities such as cache side-channel attacks and memory attacks, runs proof-of-concept (PoC) tests on the SonicBOOM RISC-V core on real hardware, and examines RISC-V processor security features.

Cache side-channel attacks

A side channel is a widely used means of information theft. Rather than directly attacking possible theoretical flaws in an algorithm, the attacker extracts confidential data from the system's physical channels through timing measurement, power analysis, or even by listening to acoustic and electrical signals. Software and hardware defenses against traditional side-channel attacks have been studied for more than 20 years.

However, with the rapid development of multi-core superscalar processors, new variants of side-channel attacks targeting the processor microarchitecture continue to emerge.

Since 2017, multiple research teams have independently reported the "Spectre" and "Meltdown" vulnerabilities and their derivative variants. These attacks exploit high-performance microarchitectural techniques such as branch prediction and out-of-order execution, using the execution of transient instruction streams to bypass software and hardware security checks, steal user information, or access data at a higher privilege level; they have attracted widespread attention from academia and industry. These attacks demonstrate flaws in modern processor microarchitecture design and, with them, expose some core security vulnerabilities.

Security analysis of cache side-channel attacks based on transient instruction streams

A transient instruction stream (transient instructions) refers to instructions that are being executed speculatively and whose eventual retirement is not yet certain. For example, branch prediction allows the processor to select a code path for speculative execution before the branch resolves; the instructions in this speculative state form a transient instruction stream.

If branch resolution later shows that the prediction was correct, the transient instruction stream can retire normally. If a misprediction is detected, however, the pipeline must be rolled back to the state before these transient instructions executed. In theory, the rolled-back transient instructions should have no observable effect on the microarchitecture; otherwise, the traces left in the processor by instructions that should never have executed may leak information.

Unfortunately, it is difficult for the processor to completely erase the execution traces of a transient instruction stream. A typical example: if a memory load executed in the transient stream brings data into the cache, the processor discards the load's result during pipeline rollback, but the modified cache state cannot be revoked. An attacker can therefore infer the victim program's memory access addresses by monitoring changes in the cache; if those addresses encode sensitive information, information is leaked. This process constitutes a class of cache side-channel attacks based on transient instruction streams.

Unlike traditional side-channel attacks, side-channel attacks based on transient instruction streams exploit security flaws common at the hardware microarchitecture level to steal sensitive information, and they cannot be fully defended against by software alone. Such attacks generally share three characteristics:

  • Wide impact: most modern processors have cache structures and support high-performance techniques such as out-of-order superscalar execution, so the architectures of mainstream vendors are all affected.
  • High defense cost: protections tend to significantly degrade overall system performance.
  • Hardware dependence: carrying out such an attack depends on the program's execution environment and the specific hardware design; different attack variants can be derived for different microarchitectures.

Defenses against cache side-channel attacks

The Spectre and Meltdown attacks have attracted widespread attention and research in academia and industry. Below is a brief introduction to some of the defense ideas academia currently applies to cache side-channel attacks, for reference:

1. Prevent the propagation of speculative values in the pipeline

Essentially, a side-channel attack based on a transient instruction stream occurs when a value on a mispredicted speculative path is consumed by subsequent instructions and eventually extracted by the attacker through a side channel.

A straightforward solution is therefore to prevent speculative values containing private data from propagating through the pipeline. The industry's early mitigation using lfence-style barrier instructions was based on this idea, but naively using barriers to block all subsequent instructions causes a huge loss in speculative-execution performance.
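A software-level illustration of the same idea (our own sketch, not from the original text) is branchless index masking, loosely modeled on the Linux kernel's `array_index_mask_nospec`: instead of relying on a branch the predictor can mis-speculate past, the index is clamped arithmetically, so the speculative value can never reach out-of-bounds memory. The function names here are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

/* Branchless mask: all ones when idx < size, zero otherwise
   (valid while idx and size stay below 2^63). Loosely modeled on
   the Linux kernel's array_index_mask_nospec(); names are ours. */
static size_t index_mask(size_t idx, size_t size) {
    return (size_t)((int64_t)(idx - size) >> 63);
}

/* Clamp an index without a branch: an out-of-bounds index becomes 0,
   so even a mispredicted path cannot load secret memory. */
static size_t clamp_index(size_t idx, size_t size) {
    return idx & index_mask(idx, size);
}
```

Because the clamp is data-flow rather than control-flow, it holds even inside the speculation window, at far lower cost than a full barrier.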

Dividing and dynamically tagging unsafe instructions in hardware at a finer granularity, narrowing the set of instructions whose execution is restricted and reducing the performance cost, has therefore become an important research direction. Representative works include the designs in STT (MICRO'19), NDA (MICRO'19), and SpectreGuard (DAC'19).

2. Defer cache-state modifications by speculative memory accesses

This class of solutions introduces an additional buffer into the cache architecture; memory accesses that are still speculative can only fill this buffer rather than the cache itself. If the speculation turns out to be correct, the buffered value is committed into the cache; if it is wrong, the buffered value is discarded, preventing the cache state from being modified by memory accesses on a mispredicted path. Representative works include the designs in InvisiSpec (MICRO'18) and SafeSpec (DAC'19).

3. Provide data isolation between different security domains at the cache level

This direction likewise modifies the cache architecture to prevent attackers from taking useful cache measurements, but its essence is to isolate the attacker's data from the victim's. The basic approaches include cache partitioning and randomized cache address mapping.

Among existing mechanisms, Intel's CAT (Cache Allocation Technology) provides a cache-partitioning mechanism in the LLC (CAT was originally introduced to address the noisy-neighbor problem in multi-core systems), and some research borrows or builds on this structure.

4. Dynamically identify side channel attackers

A cache side-channel attacker usually contends with the victim for the same physical cache locations and therefore exhibits a characteristic memory access pattern. If the hardware can recognize potential side-channel access patterns and raise a timely warning, countermeasures can be taken before data leaks. An example is the design in Cyclone (MICRO'19).

Reproducing a cache side-channel attack

Consider the following fragment of a Spectre-v1 (Bounds Check Bypass) attack program as an example:

if (idx < array1_sz) {
        secret = array1[idx];
        dummy = array2[secret * L1_BLOCK_SZ_BYTES];
}

This attack targets conditional branch instructions (such as the bge instruction in RISC-V). The attack proceeds in the following stages:

  • Training phase: the code above is executed repeatedly with in-bounds indices idx, so that the branch direction recorded in the PHT (Pattern History Table) points into the if body. Because idx never exceeds the bounds of array1 at this stage, no security issue arises.

  • Attack phase: the code is executed with an out-of-bounds array index idx. From the software writer's perspective, the if condition guarantees that idx cannot exceed array1's bounds, so the accesses inside the body should not execute and the program should be safe. However, because the branch predictor has already been trained, the hardware still speculatively enters the body and uses the out-of-bounds idx to read the private value secret from elsewhere in the memory address space. Within the speculation window, the attacker then uses the secret value in the transient instruction stream to construct an address into an auxiliary array array2.

  • Recovery phase: the processor discovers the branch misprediction, rolls back the pipeline, and discards the memory access results of the above instructions. However, some data from array2 has already been brought into the cache during the attack phase, at a cache address that depends on the private value secret; cache contents are not evicted by a pipeline rollback.

  • Reconstruction phase: the attacker measures which positions of array2 hit in the cache, infers which element of array2 the code accessed during the attack phase, and from that deduces the exact value of secret, completing the theft.

As shown, this attack bypasses the bounds check set by the software and reads private data elsewhere in the memory address space without raising any program exception.
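The cache-probing step of the reconstruction phase can be illustrated with a toy software model (our own simplification, not the SonicBOOM PoC): the "cache" is a per-line flag array, the transient access marks exactly one line, and the attacker recovers the secret by probing which line is hot:

```c
#include <string.h>

#define LINES 256                  /* one cache line per possible byte value */

static unsigned char cache_hot[LINES];  /* toy cache: 1 = line resident */

/* Model the transient load: touching array2[secret * line_size]
   leaves exactly one line resident even after pipeline rollback. */
static void transient_touch(unsigned char secret) {
    cache_hot[secret] = 1;
}

/* Flush the cache, let the victim run transiently, then probe
   every line: the single hot line reveals the secret byte. */
static int recover_secret(unsigned char secret) {
    memset(cache_hot, 0, sizeof cache_hot);  /* flush phase */
    transient_touch(secret);                 /* victim's transient access */
    for (int i = 0; i < LINES; i++)          /* reload/probe phase */
        if (cache_hot[i])
            return i;
    return -1;                               /* nothing cached */
}
```

A real attack replaces the flag array with timing measurements (e.g. Flush+Reload) and multiplies the secret by the cache-line size so each value maps to a distinct line.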

The attack is reproduced below on a real processor. The SoC environment is Chipyard, the processor core is SonicBOOM, and the FPGA is a VCU118 or VC707. For the attack source code, see the PoC provided by SonicBOOM: https://github.com/riscv-boom/boom-attacks.

The victim address stored: !"ThisIsTheBabyBoomerTest

What the attacker actually obtained: !"ThisIsTheBabyBoomerTest

As shown, the Spectre-v1 attack was successfully reproduced on SonicBOOM.

Memory data security

According to the latest statistics from MITRE, 40% of the vulnerabilities in the 2022 CWE Top 25 Most Dangerous Software Weaknesses are related to memory modification/control. C/C++ is widely used in modern systems programming. Because C/C++ can manipulate the memory system directly, it greatly improves program speed and efficiency, but it also introduces many memory-safety issues. Mainstream compilers and runtimes currently perform no static or dynamic safety checks on C/C++ pointers. The net result is that programs written in C/C++ are highly vulnerable to attackers.

Depending on the type of illegal pointer access, memory safety falls into two main categories: spatial safety and temporal safety.

  • Spatial safety violation: spatial memory safety is violated when a pointer accesses an object outside its intended bounds. The most common example is a stack buffer overflow that overwrites a function's return address with an attacker-chosen value to redirect the program's control flow, or that directly overwrites important variables or data.

  • Temporal safety violation: temporal memory safety is violated when a reference to an object is used outside its valid lifetime, typically after the object's memory has been reallocated without strict reinitialization. A classic example is use-after-free: dereferencing a pointer to invalid (unallocated or freed) memory.
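Both categories can be made concrete with a minimal C sketch (our own illustration; the function names are hypothetical): a bounds check guards against the spatial violation, and nulling a pointer at free time turns a later use-after-free into an obvious, detectable error:

```c
#include <stdlib.h>

/* Spatial safety: reject out-of-bounds indices instead of reading
   past the buffer. Returns -1 on a would-be violation. */
static int checked_read(const int *buf, size_t len, size_t idx, int *out) {
    if (idx >= len)
        return -1;              /* spatial violation caught */
    *out = buf[idx];
    return 0;
}

/* Temporal safety discipline: free through a pointer-to-pointer and
   null it, so a later dereference fails a NULL check instead of
   silently reading reallocated memory. */
static void safe_free(int **p) {
    free(*p);
    *p = NULL;
}
```

Compilers do not insert such checks for plain C pointers, which is exactly why the hardware and language mechanisms discussed below exist.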

Memory safety issues can be further subdivided into several subcategories:

  • Buffer overflow
  • Double free
  • Null pointer dereference
  • Invalid free
  • Reading uninitialized memory
  • Use after free

Memory vulnerability attacks generally proceed in three stages:

  • Code implantation: implant the attack code (shellcode) into the target program;

  • Overflow attack: trigger a buffer overflow by passing a crafted string as input, so that the overwritten return address points to the start of the attack code;

  • System hijacking: hijack and control the system by running the attack code.

Software protection techniques for memory safety

Having covered memory vulnerability attacks and the main classes of memory bugs, we now introduce existing software protections.

A typical one is the Data Execution Prevention (DEP) mechanism. Its basic principle is to mark the memory pages containing data as non-executable. If an overflow succeeds in transferring control to shellcode, the program will attempt to execute instructions from a data page, and the CPU will raise an exception instead of executing the malicious instructions. Enabling DEP effectively prevents data pages (such as the default heap page, stack pages, and memory-pool pages) from executing code.
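On POSIX systems the same W⊕X idea can be applied by hand with mmap/mprotect (a generic sketch of ours, not tied to any particular hardening framework): data pages are mapped writable but never executable, and code pages are flipped to executable-but-not-writable:

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Map a region read+write but NOT executable (the "data" side of
   W^X): any attempt to execute from it will fault. Returns the
   mapping, or MAP_FAILED on error. */
static void *map_data_page(size_t sz) {
    return mmap(NULL, sz, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

/* Flip a region to read+execute only (the "code" side of W^X):
   executable, but no longer writable. Returns 0 on success. */
static int make_exec_only(void *p, size_t sz) {
    return mprotect(p, sz, PROT_READ | PROT_EXEC);
}
```

A loader that never leaves a page both writable and executable at the same time gives classic injected shellcode no page to run from.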

ASLR (Address Space Layout Randomization) is a protection mechanism that interferes with shellcode targeting by no longer loading the program at a fixed base address. It includes image randomization, stack randomization, and randomization of the PEB (Process Environment Block) and TEB (Thread Environment Block).

Another effective defense is the stack canary mechanism. Its principle is to insert a cookie value between the local variables and the saved return address when a function's frame is set up; when the function returns, the cookie is verified, and if it has been corrupted the program stops running.
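The canary check can be modeled in C with a hand-laid-out "frame" (a toy model of ours, not how a compiler actually emits canaries): the canary sits between the local buffer and the saved return address, so any linear overflow that reaches the return address must trample the canary first:

```c
#include <stdint.h>
#include <string.h>

#define BUF_SZ 8
#define CANARY 0xDEADC0DEDEADBEEFULL

/* Toy frame: raw bytes laid out as [buf | canary | return address],
   so an over-long copy into buf legally spills into the canary. */
static int run_with_input(const void *input, size_t len) {
    unsigned char frame[BUF_SZ + 2 * sizeof(uint64_t)];
    uint64_t canary = CANARY, ret = 0x401000;

    memset(frame, 0, sizeof frame);
    memcpy(frame + BUF_SZ, &canary, sizeof canary);
    memcpy(frame + BUF_SZ + sizeof canary, &ret, sizeof ret);

    if (len > sizeof frame)
        len = sizeof frame;
    memcpy(frame, input, len);             /* the "vulnerable" copy */

    memcpy(&canary, frame + BUF_SZ, sizeof canary);
    return canary == CANARY;               /* epilogue check: 1 = intact */
}
```

An overflow long enough to reach the return address necessarily rewrites the canary, so the epilogue check fails before the corrupted return address is ever used.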

Mechanisms such as ASLR, DEP (NX/XD), and stack canaries, implemented with help from the virtual memory system, prevent attackers from freely injecting and executing arbitrary attack code, and thus protect program execution to a certain extent.

However, these protections can fail in the face of more sophisticated attack code and methods. For example, in return-oriented programming (ROP), arbitrary code execution is achieved by corrupting code pointers (such as return addresses) and chaining the execution of multiple gadgets; instruction sequences drawn from the program's own binaries are combined to implement the malicious logic the attacker has in mind.

In addition to retrofitting protections onto existing languages, many security-oriented programming languages have emerged in recent years. The Rust language, for example, has attracted much attention for delivering performance comparable to C/C++ while providing strong safety guarantees. For this reason, this article briefly introduces Rust.

Safety

Rust puts safety and speed first. It has no garbage collector, yet it achieves memory safety. What sets Rust apart from C and C++ is its strong safety guarantees.

Rust is memory-safe throughout unless the programmer explicitly opts out with the "unsafe" keyword.

Rust's core design is a strict set of safety rules enforced by compile-time checks. To support low-level control, Rust allows programmers to bypass these compiler checks and write unsafe code. About 70% of the security issues in CVEs tracked by MSRC (Microsoft Security Response Center) are memory-safety issues; MSRC concluded that if the affected programs had been written in Rust, roughly 70% of those memory-safety problems would not exist.

A research paper at the ACM SIGPLAN conference PLDI'20 pointed out that "the safe code mechanism of the Rust language can very effectively avoid memory safety problems; all memory safety problems found in stable versions are related to unsafe code."

Performance

For Rust to replace C or C++ as a mainstream systems programming language, its performance and degree of low-level control must be comparable.

Rust has a minimal, configurable runtime. Like C and C++, Rust's standard library relies on libc for platform support, and the standard library itself is optional: Rust-compiled programs can also run on platforms without an operating system.

Hardware protection techniques for memory safety

Here is a brief introduction to some hardware approaches academia currently uses to protect memory safety, for reference:

1. Tripwire-based mitigation

This class of solutions places redzones between allocated memory blocks; when a pointer touches a redzone, an error is raised. Typical examples are Google's AddressSanitizer (ATC'12) and Valgrind's memory checker (PLDI'07).

The appeal of tripwires is that they trigger immediately when a memory intrusion occurs, relying on shadow memory for the check, unlike software measures such as stack canaries that depend on explicit checks inserted in software.

Although tripwires provide strong security, they impose prohibitively high performance overhead and are therefore unsuitable for performance-sensitive deployments. REST (ISCA'18) takes a hardware approach to redzone detection: the redzone is simply a very large random token value embedded in the program, which replaces shadow-memory checks; load/store instructions check whether the accessed region holds the token.
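The token idea can be sketched in plain C (our own simulation; REST itself performs this check in hardware on every load/store, and uses a much wider random token): allocations are surrounded by a sentinel token, and every checked access first verifies it is not landing on a token word:

```c
#include <stddef.h>
#include <stdint.h>

#define REDZONE_TOKEN 0x52455354u   /* arbitrary sentinel value */
#define HEAP_WORDS    16

/* Toy heap: an allocation of n words is laid out as
   [token | n data words | token]. */
static uint32_t heap[HEAP_WORDS];

/* "Allocate" at a fixed offset and plant the redzone tokens. */
static uint32_t *toy_alloc(size_t nwords) {
    heap[0] = REDZONE_TOKEN;
    heap[nwords + 1] = REDZONE_TOKEN;
    return &heap[1];
}

/* Checked store: refuse to write where a token currently lives.
   (A real design tags token words; value equality suffices here.) */
static int checked_store(uint32_t *p, uint32_t val) {
    if (*p == REDZONE_TOKEN)
        return -1;                  /* redzone hit: overflow detected */
    *p = val;
    return 0;
}
```

A linear overflow off the end of the allocation hits the trailing token on its very first out-of-bounds word, which is exactly the "immediate trigger" property that distinguishes tripwires from after-the-fact checks.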

2. Fat pointers

Besides the tripwire mechanism, fat pointers are another popular direction in the industry. The idea is to keep an object's metadata, generally its base, bounds, and similar attributes, alongside the pointer and index it together with the pointer's address. Checks are typically inserted at static compile time and performed by dynamic instrumentation at runtime to enforce memory safety.

Hardbound (ASPLOS'08) uses a shadow space to store pointer metadata; the address the pointer refers to, its base plus its size, forms the bounds interval. Softbound (PLDI'09) and Watchdog (ISCA'12) generate a unique identifier for each memory allocation, associate these identifiers with pointers, and check on every memory access that the identifier is still valid. To provide comprehensive detection, Watchdog performs this identifier-based checking almost entirely in hardware. For an overview of the Hardbound design, please refer to the paper; this article does not describe it further.
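A fat pointer can be modeled in C as a struct carrying base and bounds next to the raw address (a software sketch of ours; Hardbound and Softbound keep this metadata in shadow space and perform the check in hardware or instrumented code):

```c
#include <stddef.h>

/* Fat pointer: raw address plus the object's [base, base+len) bounds. */
typedef struct {
    char  *addr;
    char  *base;
    size_t len;
} fatptr;

static fatptr fat_make(char *obj, size_t len) {
    fatptr p = { obj, obj, len };
    return p;
}

/* Checked dereference: every access is validated against the bounds
   carried with the pointer before memory is touched. */
static int fat_read(fatptr p, size_t off, char *out) {
    size_t pos = (size_t)(p.addr - p.base) + off;
    if (pos >= p.len)
        return -1;                  /* spatial violation trapped */
    *out = p.base[pos];
    return 0;
}
```

The cost is visible in the layout: the pointer has tripled in size, which is why hardware schemes move the metadata into shadow space or, as in CHERI below, into architecturally tagged capability registers.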

3. Capabilities

Capabilities are similar to fat pointers to some extent: both protect a pointer by attaching metadata to it. The difference is that a capability is, in effect, a fat pointer that cannot be forged or modified: each capability can only be created, transferred (e.g., referenced by a child module), and deleted. Read, write, execute, and other permissions are also attached to the pointer, so the pointer no longer serves merely as an index but acts more like an "object" carrying its own permissions.

There are also hardware implementations based on capabilities; the representative design is CHERI (ISCA'14, Capability Hardware Enhanced RISC Instructions). Metadata (bounds and permissions) is maintained inline with the address, eliminating expensive shadow-table lookups at the cost of register, storage, and memory bandwidth. Some advantages of the CHERI design:

  • An abstract CHERI protection model has been proposed and implemented respectively for the 64-bit MIPS, 32/64-bit RISC-V, and 64-bit ARMv8-A ISAs.
  • It has a formal ISA model, supports QEMU-CHERI emulation, and has multiple FPGA prototypes.
  • The ISA security model has been validated through software.
  • A large body of supporting software exists: CHERI Clang/LLVM/LLD, CheriBSD, C/C++ applications.
  • Morello: an industrial-grade demonstration board for the CHERI ISA architecture, currently the only physical implementation of the CHERI prototype architecture.

Control flow integrity

Control-flow hijacking attacks construct specific attack vectors that exploit software vulnerabilities such as memory overflows to illegally tamper with a process's control data, thereby changing its control flow and executing pre-built attack code. A successful buffer overflow is an effective precondition for modifying a system's control flow: it can overwrite the program's return address or the calling function's saved EBP, or modify function pointers and the GOT, steering the program to the attacker's prepared code. The result can be execution of malicious code, a crash of the entire program, or attacker control over part of the system, putting the whole computer system in crisis. Defending against buffer overflow attacks is therefore of great practical importance.

After the W⊕X defense appeared, code reuse attacks emerged. The earliest code reuse attack is the Return-to-libc attack: because the system's libc library often contains subroutines for executing system calls and other functions useful to an attacker, it is the most likely place to look for code with which to assemble an attack.

In a Return-to-libc attack, the attacker hijacks program control flow via a buffer overflow, selects a suitable library function and overwrites the return address with its entry point, and carefully overwrites further stack locations so that the designed arguments are passed to the function, completing the attack.

ROP attack

Return-to-libc bypasses the W⊕X defense, but its shortcomings are also obvious: the attack programs it can express are inflexible and easy to defend against specifically, so ROP came into being.

ROP (Return-Oriented Programming) allows attackers to execute code in the presence of security defenses such as executable-space protection and code signing. In this technique, the attacker controls the call stack to hijack program control flow and then executes carefully chosen sequences of machine instructions already present in the machine's memory, called "gadgets."

Each gadget typically ends with a return instruction (ret) and resides within a subroutine of the existing program and/or shared library code. Chained together, these gadgets allow an attacker to perform arbitrary operations on a defended machine.

ROP still overwrites the stack with return addresses and arguments, but the return addresses supplied by the attacker can point anywhere in the code base, specifically at any code fragment (gadget) ending in a ret instruction.


Even if the available library functions are entirely unexploitable, ROP can still stitch together a complete attack program from fragments, especially since the x86 instruction set is very dense: almost any random byte sequence can be decoded as some valid x86 instruction.

The focus of a ROP attack is finding a set of gadgets that meets the following requirements:

  • Each ends with ret
  • Does not explicitly change the stack pointer
  • Provides basic arithmetic and logic operations
  • Provides complex operations such as memory access and system calls
  • Provides conditional branches and loops (by modifying the esp value through logic)

As long as such a set of gadgets can be found in the executable program, the attacker has in effect constructed a Turing-complete return-oriented machine.

Many defenses against ROP have been proposed:

  • DynIMA (Dynamic Integrity Measurement and Attestation) - detects consecutively executed short instruction sequences each ending with ret
  • DROP (Detecting ROP Malicious Code) - detects that the return addresses popped from the stack repeatedly point into a specific memory region
  • "Return-less" kernels - a compiler-level optimization that eliminates all ret instructions from the program, leaving attackers nothing with which to mount ROP attacks
  • ……

JOP attack

ROP attacks are powerful, but their reliance on the stack and ret instructions to hijack control flow is also their weakness. Building on this observation, attackers proposed the JOP/COP attack method.

JOP/COP (Jump/Call-Oriented Programming) abandons all reliance on the stack for control flow and on ret for gadget discovery and chaining, using only a series of indirect jump instructions, which completely bypasses all defenses aimed at ROP.

Like a return-oriented program, a jump-oriented program consists of a set of gadget addresses and data values loaded into memory, with gadget addresses analogous to opcodes in a new jump-oriented machine. In ROP, this data is stored on the stack, so the stack pointer serves as the "program counter" of the return-oriented program.


JOP, however, is not limited to using esp to reference its gadget addresses, and its control flow is not driven by ret instructions. Instead, JOP uses a dispatch table to hold gadget addresses and data.


The "program counter" is any register that points into the dispatch table. Control flow is driven by a special dispatcher gadget that executes the sequence of gadgets: on each iteration, the dispatcher advances the virtual program counter and launches the associated gadget; when the gadget finishes, control returns to the dispatcher.


Consider an example jump-oriented program in which the jumps are numbered 1->6. Here edx serves as the pc, and the dispatcher advances through a contiguous table of gadget addresses simply by adding 4 (so f(pc) = pc + 4). The "attack" functionality will:

  • Dereference eax
  • Add the value at address ebx to eax
  • Store the result at address ecx
  • Use registers esi and edi to return control to the dispatcher

Defensive means

Control-flow Enforcement Technology (CET) is an instruction-set extension proposed by Intel specifically against advanced stack attacks such as ROP/COP/JOP. It has two main parts:

  • a. Shadow Stack (SS): a defense mainly against ROP whose goal is to protect the return addresses on the stack. It creates a second stack independent of the program's data stack; with the shadow stack enabled, the CALL instruction pushes the return address onto both the data stack and the shadow stack, and on return the two copies are compared, with a mismatch raising a fault.
  • b. Indirect Branch Tracking (IBT): a defense mainly against COP/JOP. A new ENDBRANCH instruction marks the valid target addresses of indirect calls and jumps in the program. On older architectures that do not support the instruction, its opcode decodes as a NOP and does not prevent the program from running; on processors that support CET it still executes as a NOP, adding no extra hardware burden.

For control-flow integrity attacks, Arm provides two defense solutions, PAC and BTI:

  • a. Pointer Authentication Code (PAC): the Armv8.3-A instruction set adds a pointer authentication (PA) mechanism that verifies a register's pointer value before it is used to access data or code, in order to resist ROP attacks. The basic principle is that special instructions compute, via a cryptographic algorithm, a 64-bit value from a 64-bit context, the pointer's original value, and a 128-bit key; this value is truncated and stored in the pointer's unused high bits as the PAC. Before the pointer is used, a special instruction verifies it: if verification passes, the pointer's high bits are restored to their original state; otherwise an error code is written into the high bits and an exception is triggered during the addressing phase.
  • b. Branch Target Identification (BTI): a security feature added in Armv8.5 to limit attacks, mainly used to defend against JOP (jump-oriented programming). The compiler places a BTI instruction at each valid target of a BR or BLR instruction, designating explicit jump targets for indirect branches (the English term is apt: landing pads). If an indirect jump lands anywhere other than a designated landing pad, a branch-target exception is raised. Landing pads limit the set of possible indirect-jump targets, making it difficult to chain gadgets together into new programs.
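The shadow-stack check can be sketched in C (a toy model of ours; real CET keeps the shadow stack in protected memory and performs the comparison in hardware): calls push the return address onto both stacks, a stack-smashing attack can corrupt only the data stack, and returns compare the two copies:

```c
#include <stdint.h>

#define STACK_DEPTH 64

static uintptr_t data_stack[STACK_DEPTH], shadow_stack[STACK_DEPTH];
static int sp;                        /* shared depth for both stacks */

/* CALL: push the return address onto both stacks. */
static void cet_call(uintptr_t ret_addr) {
    data_stack[sp] = ret_addr;
    shadow_stack[sp] = ret_addr;
    sp++;
}

/* An overflow can reach the (writable) data stack only. */
static void corrupt_top(uintptr_t evil) {
    data_stack[sp - 1] = evil;
}

/* RET: compare both copies; a mismatch means control-flow hijack.
   Returns the address on success, 0 on a detected attack
   (a control-protection fault in real hardware). */
static uintptr_t cet_ret(void) {
    sp--;
    if (data_stack[sp] != shadow_stack[sp])
        return 0;
    return data_stack[sp];
}
```

Because the attacker's overflow cannot reach the protected shadow copy, every ROP chain is caught at its very first corrupted return.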

Reproducing a memory attack

Consider the following fragment of a buffer-overflow attack program as an example:

#include <stdio.h>
#include <string.h>

void shellcode() {
  .......
}

int main(int argc, char *argv[]) {
  char buf[8];
  if(argc < 2) {
    printf("Pass an argument, champ!\n");
    return 1;
  }
  strcpy(buf, argv[1]);  /* unchecked copy: the overflow point */
  printf(buf);           /* also a format-string vulnerability */
  return 0;
}
  • Stage one, code implantation: the attack code (shellcode) is implanted into the target program;

  • Stage two, overflow attack: a buffer overflow is triggered by passing a crafted string as the argument, and the overwritten return address points to the start of the attack code (the shellcode's starting address);

  • Stage three, system hijacking: the system is hijacked and controlled by running the attack code.

The SoC environment is Chipyard, the processor core is SonicBOOM, and the FPGA is a VCU118/VC707.

In an example run of the stack-overflow attack program, GDB debugging locates the address of the program's RA (return address), the overflow overwrites it, and the program jumps to the planted attack code. The final result is the output "You win!" and a shell spawned from the program.

Arm security framework - an important reference system for RISC-V security design

In response to the above security threats, many developers in the RISC-V ecosystem have built their own solutions, but none of these are standardized yet. The Arm architecture, by contrast, has highly standardized security countermeasures, and studying them can suggest directions for future RISC-V security extensions. Four classes of countermeasures are introduced here:

  • Defensive Execution Technologies: a class of protections against control-flow attacks, data-access attacks, and side-channel attacks. Software is rarely perfect; some vulnerabilities can be patched by hand through defensive programming, but that approach does not scale to all code. Current solutions therefore protect programs through three technical means: modifying the hardware, the compiler, or the runtime. These techniques are explained in detail later and are not expanded on here.
  • Isolation Technologies: well-defined security boundaries are one of the most basic principles of security engineering. This class of defenses isolates sensitive computation inside a TEE; representative designs include Arm TrustZone, Intel SGX, Apple Secure Enclave, Penglai Enclave, and Keystone Enclave.
  • Common Platform Security Services: for security purposes, every device requires a Root of Trust (RoT) and should be verified against an extensive checklist of security requirements. Without publicly available evaluation frameworks, the market cannot judge whether a product is appropriately resistant to attack.
  • Standard Security API: with the rapid development of computer science, more and more customers build iterative microservices on top of developer-defined API interfaces. Because an API both connects service functionality and carries data, API security protection has become increasingly important, and many data breaches have been caused by API attacks or vulnerabilities. For example, the 2018 Huazhu incident in China leaked a large amount of users' personal information, and in 2019 an API vulnerability at Instagram leaked user data and photos.
    How to establish a complete and effective computing security architecture across the RISC-V ecosystem, while reducing the fragmentation of technical designs, is a challenging and urgent need in current RISC-V development.

Opportunities and Challenges

With the spread of 5G technology, the era of the true Internet of Everything is coming, endlessly emerging vulnerabilities will propagate ever more widely, and the weakest-link effect of today's security protections will only be amplified. In an environment where everything can be attacked, weak links of every kind become attackers' first targets. Looking at the development of x86, Arm, MIPS, and other architectures, chip security has always been "out of touch" with early architectural design, with "patches" bolted on late in the design cycle. Can the RISC-V architecture deploy and build its security "moat" earlier? Industry and academia have demonstrated the efficiency and convenience of the open-source RISC-V ISA through many examples (high-performance processors, efficient accelerators, and more), but security extensions in the RISC-V field are still being actively developed. As more and more RISC-V chips enter mission-critical systems, how to inject more thinking and strategy into security mechanisms and the security-development life cycle, and how to grow from single-point protection into systematic, whole-system protection, is the new challenge facing architecture developers.

Origin blog.csdn.net/weixin_45264425/article/details/132698911