Abstract
The basic concepts, technical implementations, and application scenarios of computer simulation, virtualization, and cloud computing are intertwined, yet each has its own characteristics. This article analyzes the basic concepts, necessity, feasibility, and core difficulties of computer simulation, virtualization, and cloud computing; expounds the application prospects of the three technologies; analyzes their technical differences; and introduces the application of several key technologies in the data center. The generalization of computer simulation technology, the specialization of virtualization technology, and the systematization of cloud computing technology are expounded, which together represent the mainstream direction of technology development in the field of information and communication at this stage. Application examples of the three in the aerospace telemetry, tracking, and command field illustrate that information technology essentially serves the business logic of its respective field, which also reflects the basic characteristics of demand orientation, safety orientation, innovation, and long-term development. The analysis offers guidance for the technical orientation, technology selection, and engineering implementation of information system architecture design.
1 Introduction
Today’s information technology evolves rapidly. Information technology represented by cloud computing has greatly promoted social production and economic prosperity. How to strengthen the control and protection of on-orbit spacecraft by taking advantage of cloud computing technology is an important direction for the next stage of development of aerospace measurement and control data centers.
The concepts of computer simulation, virtualization, and cloud computing arose progressively over the course of computer development. Many experts and scholars have carried out technical research in related fields, but few have discussed the dialectical relationship among the three or their engineering application. Starting from the technical concepts of and differences between computer simulation, virtualization, and cloud computing, this article focuses on sorting out the core difficulties of these technologies and discusses their application effects. The application prospects of computer simulation, virtualization, and cloud computing are then analyzed in combination with application examples from aerospace telemetry, tracking, and command (TT&C) data centers. By focusing on the dialectical relationship and engineering application, the article offers guidance for the technical orientation, technology selection, and engineering implementation of complex information system architecture design.
The complexity of software and hardware systems in the field of information and communication gave birth to the operating system, and different combinations of software and hardware systems led to its differentiation. Currently, operating systems have been decoupled from specific software and hardware systems and have gradually become large, complex, and rapidly evolving independent systems, which in turn gave rise to computer simulation, virtualization, cloud computing, and other technologies.
The diversity of hardware systems mainly includes: (1) diversity of scale, from single-chip microcomputers and wearable devices up to supercomputers and near-supercomputer systems; (2) diversity of instruction systems: the instruction systems of different manufacturers generally follow either a reduced instruction set (RISC) or a complex instruction set (CISC) design, but differ in the number and details of the instructions supported; (3) diversity of composition: arithmetic units, controllers, memories, input devices, and output devices are each supported by different technologies, which results in a wide variety of application scenarios.
The diversity of software systems mainly includes: (1) diversity of application domains: the core purpose of software is to serve its application domain, and these domains are all-encompassing, so the modes, devices, and methods that different fields depend on reflect their own characteristics; (2) diversity of software languages: different low-level and high-level languages are used in different fields, and a given high-level language may be processed by different compilers; (3) diversity of application scenarios: monolithic software, cluster software, and distributed software are deployed on a single hardware platform, on multiple hardware platforms, or even across different regions, so systemic issues must be considered.
The phenomenon of combinatorial explosion (Joshi et al. 2011) makes the operating system the middle layer and management layer between the hardware and software fields. The operating system not only solves the problem of layered coupling but also faces its own diversity problems, mainly including: (1) ecological diversity: operating systems represented by Windows and Linux have attracted large software ecosystems, which results in excessive internal coupling within each ecosystem and difficulty in migrating outward; (2) diversity of distributions: operating systems built on the same kernel but shipped as different distribution versions differ significantly in their software and hardware support capabilities. Computer simulation technology, computer resource virtualization technology, and cloud computing technology emerged as the times require. By solving systemic problems in their respective fields, they have improved the application efficiency of computer systems, simplified the operation of information systems, and lowered the barrier to use for ordinary users.
Computer simulation technology uses binary translation (Sites et al. 1993) to let software functions migrate between different hardware systems and operating systems. Binary translation is a technology that directly translates and executes binary programs, translating binaries targeted at one processor so that they can execute on another. It enables binary programs to be ported between different hardware platforms, expands the applicable scope of software and hardware, and helps break the situation in which software and hardware constrain each other and hinder innovation. Because binary translation software is similar to the virtual machine monitor (VMM) (Swiech et al. 2014) of virtualization technology, computer simulation is also called pure software virtualization. Since every instruction is simulated in software, the performance of this technology is often poor, but it allows machines of different architecture platforms to be simulated on a single platform.
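To make the idea concrete, the following toy sketch interprets the instructions of an invented two-register guest machine one by one on the host (all names are illustrative and are not drawn from any real translator). Production translators such as QEMU's TCG instead translate whole basic blocks into cached host machine code; the sketch shows why naive software simulation pays a heavy per-instruction cost.

# Toy illustration of software emulation: each guest opcode of a
# hypothetical two-register ISA is mapped to a host-side handler.
GUEST_PROGRAM = [
    ("LOAD", "r0", 7),     # r0 <- 7
    ("LOAD", "r1", 5),     # r1 <- 5
    ("ADD",  "r0", "r1"),  # r0 <- r0 + r1
    ("HALT",),
]

def run(program):
    regs = {"r0": 0, "r1": 0}
    handlers = {
        "LOAD": lambda r, imm: regs.__setitem__(r, imm),
        "ADD":  lambda rd, rs: regs.__setitem__(rd, regs[rd] + regs[rs]),
    }
    for insn in program:
        op, *args = insn
        if op == "HALT":
            break
        handlers[op](*args)  # every guest instruction costs a host call
    return regs

print(run(GUEST_PROGRAM))    # {'r0': 12, 'r1': 5}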
Common computer simulation scenarios include: (1) the hardware level, for example, using x86 computers to simulate specific game consoles; as cloud computing technology matures, hardware simulation scenarios are iterated extensively, and typical software includes PearPC, Bochs, and QEMU (Quick EMUlator) (Martignoni et al. 2013); (2) the operating system level, for example, using the Wine (Wine Is Not an Emulator) (Kay 2009) compatibility layer to run Windows software on Linux; (3) the software level, for example, using the Java Virtual Machine (JVM) (Downing and Meyer 1997) to run applications across platforms.
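As a concrete instance of hardware-level simulation, QEMU can emulate a complete AArch64 machine on an x86 host with no hardware assistance. A minimal sketch of such an invocation, wrapped in Python for uniformity, is shown below; the kernel image path is a placeholder, while the flags shown are standard QEMU options.

import subprocess

# Pure software (TCG) emulation of an AArch64 guest on an x86 host.
subprocess.run([
    "qemu-system-aarch64",
    "-machine", "virt",      # generic ARM virtual board
    "-cpu", "cortex-a57",    # emulated CPU model
    "-m", "512M",            # guest RAM
    "-nographic",            # serial console on stdio
    "-kernel", "Image",      # placeholder guest kernel image
], check=True)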
Virtualization technology refers to running computing elements in a virtual environment; it is a solution for simplifying management and optimizing resources. Virtualization by itself cannot solve problems that cross hardware systems and operating systems. It is mainly oriented toward hardware resources, solving its main problems by utilizing resources efficiently and simplifying hardware resource management.
Virtualization technology is used in both the personal and enterprise fields. Products in the personal field mainly include VMware Workstation (Bugnion et al. 2012, Damodaran et al. 2012) and Oracle VirtualBox (Damodaran et al. 2012), aimed mainly at PCs; products in the enterprise field mainly include VMware ESX (Nagesh et al. 2017, Djordjevic et al. 2021), Huawei FusionCompute, the open-source KVM (Kernel-based Virtual Machine) (Wei et al. 2019, Djordjevic et al. 2021), Citrix Xen (Nagesh et al. 2017, Wei et al. 2019, Djordjevic et al. 2021), and Microsoft Hyper-V (Djordjevic et al. 2021), aimed mainly at servers and dedicated storage devices. The concept of virtualization was first proposed in the 1950s, and the technology was commercialized by IBM in the 1960s. Currently, server virtualization based on the x86 architecture is increasingly mature, bringing IT administrators an efficient and convenient management experience, improving the utilization of IT resources, and reducing resource consumption. Nevertheless, virtualization still has room for improvement in instruction security shielding, cross-instruction-set support, and conversion efficiency.
Cloud computing has not had a clear and unified definition since it was proposed. This article holds the view that cloud computing is a network-centric distributed technology and service management model that can provide fast services on demand anytime, anywhere. Cloud computing includes many design patterns for software systems, virtualization technologies for hardware systems, and service-oriented technologies such as automation and decoupling.
The application scenarios of cloud computing technology are all-encompassing. Most Internet applications in the civilian field are built on cloud computing technology and then provide services in fields such as catering, office work, finance, sales, and communication; the military field is represented by the US “cloud strategy,” whose data warfare, global warfare, and system-of-systems warfare capabilities all require cloud computing to provide stable services. The main cloud computing providers (Zhang et al. 2012) include Amazon, Microsoft, Google, Alibaba, Tencent, and Huawei. Each vendor's cloud offering comprises many products; cloud computing product technologies are usually collectively referred to as a cloud technology stack, and a deployed collection of cloud computing software is called a cloud platform. The cloud technology stacks of the various manufacturers generally derive from two stacks: CloudStack, which originated as a commercial product and is backed by a large number of product suppliers, and OpenStack, which is open source and has a very active development community. Cloud computing has now been developing for more than a decade; more and more applications are migrating to the “cloud,” and its application fields are constantly expanding.
In terms of basic concepts, computer simulation and virtualization have relatively independent definitions, whereas the concept of cloud computing is broader and constantly evolving. In terms of technical implementation, there is some overlap between computer simulation and virtualization, and cloud computing builds on both. In terms of application scenarios, computer simulation technology has relatively independent scenarios covering software, hardware, and operating systems; virtualization technology targets hardware and operating systems; and cloud computing covers software, hardware, and operating systems, but must be built on virtualization technology.
2 Generalized computer simulation technology
Computer simulation technology provides bottom-level interfaces to upper-layer systems and applications. More and more computer simulation technology is being integrated into hardware, operating systems, or virtualization suites, and much application software uses computer simulation to prevent reverse engineering and protect intellectual property, encapsulating the bottom layer together with the upper-layer application. Computer simulation technology is thus widely used and its functions are gradually diversifying. This article defines this trend as generalization, and many problems are gradually being exposed under it.
2.1 Core difficulties of computer simulation technology
2.1.1 Intellectual property core
An intellectual property core (IP core) is a reusable module, including soft cores, firm cores, and hard cores, that one party can provide to another through an agreement in the form of logic units or chip designs. The differences in the services an IP core provides to upper-layer applications are manifested as differences in instructions and instruction sets. As the name implies, IP cores and their application models are protected by intellectual property law; unauthorized use of certain instructions (or instruction sets) for profit may draw warnings or lawsuits from the relevant companies. For example, Wine's imitation of Windows applications on Linux was held to infringe the intellectual property rights of Microsoft's Windows NT-related trademarks, and it was ordered to rectify the infringement.
2.1.2 Efficiency and safety
Computer simulation technology simulates the fetch, decode, and execute stages of an x86 platform processor in software. None of the client's instructions executes directly on the physical platform; instead, the binary translation software or the VMM completes access to system resources and commands as the client's proxy. In software virtualization solutions, the binary translation software or VMM occupies the position in the software stack traditionally held by the operating system, and the operating system occupies the position traditionally held by the application program, which increases the complexity of the system. A more complex software stack means less efficient execution and an environment that is harder to manage, making it more difficult to maintain system reliability and security.
2.1.3 Instruction set mapping
Although computer simulation technology can simulate machines of different architecture platforms on a single platform, it depends heavily on the developers' management of instruction sets and optimization of processing modes. For example, QEMU's support for Microsoft Windows and some other host operating systems is imperfect, as is its support for less commonly used architectures. Unless a KQEMU or KVM (Sites et al. 1993) accelerator is used, its simulation speed still falls short of virtualization software such as VMware Workstation.
2.2 Analysis of application examples of computer simulation technology
When carrying out spacecraft control work of all kinds, the aerospace TT&C data center is constrained by payload manufacturing costs and the application environment. It is therefore often necessary to use payload simulation to carry out system testing before a mission is implemented, so as to cover the whole information process of “central computer system – TT&C station computer system – rocket (space-borne) computer system.”
Using computer simulation technology, simple payload simulation work is deployed in software on x86-architecture computers to simulate certain functions of the DSP- or FPGA-based on-board computer, which communicates with the central computer system through a network interface to realize information interaction at the business layer. This kind of simulation software is developed and tested by the payload development corporation and deployed on the designated operating system of a dedicated central computer; personnel from the payload development corporation and the flight control positions then complete operations such as sending and receiving instructions. Simple payload simulation tasks do not demand high instruction translation efficiency, and high-performance microcomputers are usually sufficient.
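As a deliberately simplified sketch of this business-layer interaction (the frame layout, port, and function names are invented for illustration and do not reflect the actual TT&C protocol), a software payload simulator might answer command frames from the central computer system over TCP as follows:

import socket
import struct

HOST, PORT = "0.0.0.0", 9100   # placeholder simulator address

def telemetry_for(cmd_id: int) -> bytes:
    # Emulate the on-board computer's business-layer response.
    return struct.pack("!HI", cmd_id, 0xDEADBEEF)

with socket.create_server((HOST, PORT)) as srv:
    conn, peer = srv.accept()
    with conn:
        while True:
            frame = conn.recv(6)   # sketch: assumes whole 6-byte frames
            if len(frame) < 6:
                break              # central system closed the link
            cmd_id, arg = struct.unpack("!HI", frame)
            conn.sendall(telemetry_for(cmd_id))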
Complex payload simulation tasks place high demands on simulation efficiency and are mainly used for complex image information processing and multi-instruction synchronization, which often must be simulated in combination with hardware. This kind of combined software–hardware simulation system is developed, tested, and manufactured by the payload development corporation; it communicates with the central computer system through a common network interface (sometimes requiring code conversion) and finally realizes information exchange at the business layer.
Figure 1 shows a link topology diagram of the space measurement and control data transmission network. The preparation stage of a spacecraft entering or leaving a space mission proceeds in three steps. (1) Simulation system test: the computer simulation system exchanges information directly with the flight control computer, and flight-control-related programs and processes are used to check whether the simulation system is running normally; the main inspection contents are the integrity of the simulation process and the smoothness of multicast communication (a minimal sketch of such a multicast check is given after Figure 1). (2) Joint debugging test: the computer simulation system's data are forwarded through the central server to the computer system of the measurement and control station to simulate the process “payload – measurement and control equipment – computer system of the measurement and control station.” The data are simply processed by the station's computer system and sent back to the central computer system for processing, analysis, and storage; each participating post checks whether the equipment, software, and communication status of each link meet the task requirements, and appropriate debugging is conducted according to the inspection results. (3) Whole-area test: the data sending and receiving process of the software and hardware simulation system remains the same as in the joint debugging stage, while the other corporations participating in the test exchange information with the central computer system through the network, realizing joint measurement and control of the rocket and the payload. This test follows the same process as the implementation stage; after it is completed, the relevant technical state is frozen and final preparations are made for actual implementation.
Figure 1: Space TT&C network topology of data transmission.
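For step (1) above, the multicast check can be sketched as follows; the group address, port, and timeout are placeholders, and a real test would also verify frame contents and rates.

import socket
import struct

GROUP, PORT = "239.1.1.1", 5000   # placeholder multicast group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Join GROUP on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
sock.settimeout(5.0)

try:
    data, src = sock.recvfrom(65536)
    print(f"multicast OK: {len(data)} bytes from {src}")
except socket.timeout:
    print("no simulation traffic received within 5 s")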
2.3 Application prospects of computer simulation technology
Because the application scenarios of computer simulation technology are so broad and so different in scale, it is gradually developing in the direction of generalization. The line between simulation software and virtualization software is blurring, and some simulation software is even being replaced by virtualization software.
Taking QEMU, the most widely deployed example, KVM-QEMU (Yongjie and Haitao 2013) is one of the most popular virtualization technologies at present. KVM is not part of QEMU: QEMU by itself already provides processor virtualization, memory virtualization, and device virtualization in software, and to simplify development and reuse code, the user-space side of KVM was derived from a modified QEMU. While a virtual machine is running, QEMU enters the kernel through the system calls provided by the KVM module, and the KVM module is responsible for placing the virtual machine in a special processor mode to run. From QEMU's perspective, it uses the virtualization capability of the KVM module to accelerate the virtualization of the QEMU virtual machine's hardware, thereby improving the performance of the virtual machine.
Software development benefits from public code. Therefore, based on the computer simulation technology architecture and its functions, combined with acceleration technologies such as KVM, more application scenarios can be realized, such as CPU architecture simulation, operating-system-level simulation, software ecosystem compatibility testing, and software encryption protection.
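A small sketch of such a combination: a launch script can probe for the KVM device node and fall back to QEMU's pure software emulation (TCG) when hardware acceleration is unavailable. The disk image path is a placeholder; -accel is a standard QEMU option.

import os
import subprocess

# Prefer hardware-assisted virtualization when /dev/kvm is usable.
accel = "kvm" if os.access("/dev/kvm", os.R_OK | os.W_OK) else "tcg"

subprocess.run([
    "qemu-system-x86_64",
    "-accel", accel,        # kvm: hardware-assisted; tcg: emulated
    "-m", "2048",
    "-drive", "file=guest.qcow2,format=qcow2",  # placeholder image
], check=True)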
3 Specialized virtualization technology
Whether in the personal field or the enterprise field, the main reason virtualization technology has gradually become more stable and specialized is that the core technologies are held by industry giants and the barrier to competition is relatively high. The core difficulties mainly include instruction shielding, cross-instruction-set platform support, and instruction execution efficiency.
3.1 Core difficulties of virtualization technology
3.1.1 Instruction shielding
Among the instructions defined by a processor architecture, those that modify system resources or that behave differently in different processor modes are sensitive instructions. In a virtualization scenario, the VMM needs to monitor these sensitive instructions and shield them, which reduces execution efficiency. In an architecture that supports virtualization, all sensitive instructions are privileged instructions: when a sensitive instruction is executed at an unprivileged level, the CPU throws an exception and calls the VMM's exception handler, allowing the VMM to control the virtual machine's access to sensitive resources. The x86 architecture, however, failed to meet this criterion. Not all sensitive instructions in the x86 architecture were privileged; some sensitive instructions did not throw exceptions when executed in unprivileged mode, and in those cases the VMM cannot shield the virtual machine's behavior.
Take modifying the IF (Interrupt Flag) in the FLAGS register as an example: the guest first uses the “pushf” instruction to push the contents of the FLAGS register onto the stack, then modifies the IF bit in the copy at the top of the stack, and finally uses the “popf” instruction to restore the FLAGS register from the stack. If the virtual machine kernel is not running in ring 0, the x86 CPU does not throw an exception but simply ignores the privileged portion of “popf,” so the virtual machine's attempt to clear IF silently fails.
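The consequence can be illustrated with a toy trap-and-emulate model (all names are invented for illustration): if a sensitive instruction traps, the VMM gets a chance to emulate the guest's virtual state; if, as on pre-VT x86, it is silently ignored, the VMM is never invoked.

class PrivilegeFault(Exception):
    pass

def cpu_execute(insn, ring, traps_on_sensitive):
    # Model of the hardware: "popf" is sensitive but may not be privileged.
    if insn == "popf" and ring != 0:
        if traps_on_sensitive:
            raise PrivilegeFault(insn)   # VMM can intercept and emulate
        return "silently ignored"        # classic x86: the VMM stays blind

def vmm_run(insn, traps_on_sensitive):
    try:
        result = cpu_execute(insn, ring=3, traps_on_sensitive=traps_on_sensitive)
        return f"guest saw: {result}"    # VMM never noticed the instruction
    except PrivilegeFault:
        return "VMM trapped popf and emulated the guest's virtual IF"

print(vmm_run("popf", traps_on_sensitive=True))   # virtualizable ISA
print(vmm_run("popf", traps_on_sensitive=False))  # the pre-VT x86 hole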
3.1.2 Cross-instruction-set platform support
A standard VMM does not offer cross-instruction-set platform support; only software virtualization can achieve it, and not all software virtualization does. VMware's software virtualization uses dynamic binary translation under the hypervisor's control, allowing most client instructions to run directly on the physical platform. Client instructions are scanned by the VMM before running, and instructions that would break out of the VMM's control are dynamically replaced with safe instructions that can run directly on the physical platform or with calls into the VMM. This approach improves emulation performance but gives up the ability to support cross-instruction-set platforms. Realizing cross-instruction-set platform support requires combining virtualization with computer simulation technology, which greatly reduces execution efficiency; therefore, most existing virtualization technologies do not support this function, and some users may consequently face platform migration risks in the future.
3.1.3 Execution efficiency
Facing the problem of instruction shielding, a VMM has two solutions: paravirtualization and full virtualization. A paravirtualized solution requires modification of the guest code, which violates the transparency principle of virtualization; Xen used paravirtualization in its early days, and most VMMs currently use full virtualization. Full virtualization must solve the instruction shielding problem in other ways, mainly: (1) adopting software virtualization, which introduces binary translation, either static or dynamic; (2) adopting hardware virtualization: taking the x86 architecture as an example, Intel did not simply redefine the unprivileged sensitive instructions as privileged instructions, because not all privileged instructions need to be intercepted and processed. Hardware virtualization is a complete solution whose full functionality requires support from the CPU, motherboard chipset, BIOS, software, and other components.
For platform resource management, VMMs follow two models: hypervisor mode and host mode. A hypervisor-mode machine loads and runs the hypervisor first after power-on, while the traditional operating system runs inside a virtual machine the hypervisor creates. In a sense, a hypervisor-mode VMM can be regarded as an operating system kernel optimized and tailored for virtual machines. The hypervisor of such a system generally provides a special virtual machine with certain privileges, and this special virtual machine runs the operating system environment offered to the user for daily operation and management. The well-known open-source virtualization software Xen, the commercial software VMware ESX/ESXi, and Microsoft's Hyper-V are typical hypervisor-mode virtual machines. Unlike hypervisor mode, a host-mode VMM still runs an operating system in the general sense (the host operating system) after the system is powered on; the hypervisor runs as a special application and can be regarded as an extension of operating system functionality. The biggest advantage of host-mode virtual machines is that they can make full use of the existing operating system. KVM (Baisheng and Guangjun 2020), VMware Workstation, and VirtualBox are host-mode virtual machines.
3.2 Analysis of application examples of virtualization technology
With the continuously growing scale of the aerospace measurement and control data center's computer system, the role of the central computer system has become increasingly prominent and its business scope has expanded significantly. The original central system architecture is relatively fixed, and its computing, storage, and network resources are not easy to adjust dynamically. Beyond these shortcomings, the original architecture also copes poorly with the center's frequently switching working states and its continuously rising technical requirements; from the perspective of operation and maintenance, it is likewise not conducive to precise control and system hardening. The data center therefore uses Huawei cloud to build the underlying resource pool and plans, according to actual business needs, to construct a cloud platform covering computing virtualization, data storage virtualization, network virtualization, and virtual security management, finally forming an information system with intelligently shared underlying resources and relatively independent business applications. The overall architecture of the virtualization platform is shown in Figure 2.
Figure 2: Space TT&C virtual platform basic architecture block diagram.
The virtualization platform provides virtual machine services for all functional positions of the central computer system. These are used to build platform-level services such as databases, database management, and message middleware, as well as software-level services such as orbital computing, real-time processing, front-end communication, and comprehensive display; the different business systems support one another through message communication. The elastic computing, storage, and network resources provided by the virtualization platform effectively remedy the shortcomings of the traditional computer system and, while ensuring efficient use of resources, provide technical support for all types of spacecraft control work.
3.3 Application prospects of virtualization technology
The two major problems of instruction shielding and cross-instruction-set platform support are closely related to execution efficiency. As Intel, AMD, and other chip manufacturers strongly promote hardware virtualization of the x86 platform and lead the development of cloud computing technology, the problems of instruction shielding and execution efficiency are gradually being solved, while the problem of cross-instruction-set platform support has gradually been played down by the industry. The specialization of virtualization technology is mainly manifested in the gradual stabilization of technology choices, the large scale of the industrial chain, and the difficulty new equipment manufacturers face in entering the market.
Taking Intel virtualization technology as an example: as a collection of hardware technologies, its practice has developed through Intel's internal technology iteration and differentiation, leaving other hardware manufacturers far behind. Intel's main virtualization technologies include the processor-related VT-x, the chipset-related VT-d, and virtualization technologies for input and output devices, which have essentially become industry standards recognized by system integrators.
The development of virtualization technology will further promote the prosperity of cloud computing. It will be directly linked to the production environments of all walks of life, and its infrastructure role will be woven into the construction of the information field, which will inevitably lead to fierce competition up and down the industry chain. Currently, many domestic manufacturers have accumulated considerable software-level virtualization technology, but there is still much ground to make up in hardware virtualization; of course, this also requires the simultaneous development of the semiconductor chip field as technical support.
4 Systematized cloud computing technology
“Going to the cloud” is currently a buzzword in many industries, not only because of the inherent technical advantages of cloud computing but also because cloud computing itself is designed for systems: it covers both the technical and the management content of a system while guaranteeing system efficiency and safety requirements.
4.1 Core difficulties of cloud computing technology
4.1.1 System integration
In the development of application architecture, the first stage is the monolithic architecture: a monolithic application packages all of its functions into one independent unit or tightly coupled program set. As the application's functions grow more powerful, so does its volume, and problems such as poor flexibility and difficult upgrades (a single change affects the whole) are gradually exposed. The second stage is service-orientation: the application's different functional units (services) are connected through well-defined interfaces, with interface protocols independent of the hardware platform, operating system, and programming language implementing each service. This service-oriented structural pattern handles data and message exchange between services well, but the system bus service that carries the exchanges becomes a bottleneck. The third stage is the application microservice (Jinxiao et al. 2020), which builds the application as a loosely coupled collection of services, improving the modularity of the system through fine-grained services and lightweight interface protocols and making the system easier to understand, develop, and test, and more elastic. Most importantly, a microservice-based architecture lets continuous delivery and continuous deployment work smoothly.
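To make the granularity difference tangible, the following minimal sketch implements one fine-grained service with a lightweight HTTP interface using only the Python standard library; the service name, port, and endpoint are hypothetical, and a real microservice would add packaging, service discovery, and so on.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrbitService(BaseHTTPRequestHandler):
    # One small, independently deployable service with one endpoint.
    def do_GET(self):
        if self.path == "/health":       # liveness probe endpoint
            payload = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), OrbitService).serve_forever()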
In the development of infrastructure, in the first stage the application runs on a physical server, and its reliability usually must be guaranteed by the infrastructure and the operating system. In the second stage the application runs on an x86 virtualized server: the server is loosely coupled to the upper-layer operating system, a variety of protective virtualization technologies are provided, and the high availability of the system is well guaranteed. In the third stage, with the rise of microservices and container technology, most microservices are built on containers; the infrastructure is abstracted as a platform, and the failure of a single microservice does not affect the external services provided by the whole microservice system, so application-level high availability is guaranteed. The fourth stage is based on serverless technology and the microservice ecosystem: the infrastructure is abstracted away entirely into the background, and developers can carry out agile development driven by business events, realizing integration and deployment at the function level.
The main difficulty of the cloud computing technology stack is the system integration of application architecture and infrastructure, which ultimately achieves elastic, agile, and reliable system performance. Because of the complexity and diversity of system integration, cloud computing has been subdivided since the beginning of its application. According to the definition of the National Institute of Standards and Technology, cloud computing can be divided into Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS provides all facilities, including processing, storage, network, and other basic computing resources, to users as a service built with packaging technology; users can deploy and run arbitrary software, including operating systems and applications, which is equivalent to using bare metal and disk. PaaS provides users with a running environment for applications (usually specific development languages and tools, such as Java and Python); users develop applications and then deploy them onto the supplier's cloud computing infrastructure. SaaS is the most targeted: it provides the operator's applications, running on cloud computing infrastructure, to users as a service. Cloud computing is still being further subdivided, and concepts such as Backend as a Service (BaaS), Function as a Service (FaaS), and Security as a Service (SECaaS) keep emerging. The ultimate goal is to systematize and modularize the information system and to endow it with more and better functional services from which users can choose.
4.1.2 Management efficiency
Because it provides bare metal and disk, IaaS is often compared with virtualization technology. Users, focusing on the differences and changes in the infrastructure, tend to overlook that the essence of the cloud computing technology stack is a management service bus built on top of the VMM; one of the core difficulties of cloud computing is optimizing the management components and improving management efficiency. As an important cornerstone and key technology of cloud computing, virtualization provides users with functions such as resource abstraction, service high availability, and state migration. On top of virtualization's functional interfaces, cloud computing technology externally provides on-demand self-service, ubiquitous network access, cross-host dynamic resource pools, rapid elasticity, and measurable services. Therefore, IaaS needs to offer users self-service based on the B/S architecture, representational state transfer (REST) standard APIs for multiple technology platforms, virtual storage and image services, resource scheduling, and unified service registration and identity authentication built on virtualization.
Taking the open-source cloud technology stack OpenStack (Xiaofei 2012) as an example, it comprises a group of community-maintained open-source projects that provide management support for the IaaS, PaaS, and SaaS functions. For example, Keystone provides service registration and authentication services, Horizon provides Web human–computer interaction services, Ceilometer provides metering services, Heat provides orchestration services, Trove provides database services, and Sahara provides big data processing services.
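As a hedged sketch of the REST interaction this implies (the endpoint, user name, and password are placeholders; the request body follows Keystone's documented v3 password-authentication schema), a client might obtain a token from Keystone as follows:

import requests

KEYSTONE = "http://controller:5000/v3"   # placeholder endpoint

body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",              # placeholder user
                    "domain": {"id": "default"},
                    "password": "secret",        # placeholder credential
                }
            },
        }
    }
}

resp = requests.post(f"{KEYSTONE}/auth/tokens", json=body, timeout=10)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]  # the issued token travels here
print("issued token:", token[:16], "...")
# Subsequent calls to other services pass it as the X-Auth-Token header.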
4.1.3 Privacy and security
The first risk is the potential security hazard brought by the highly consolidated nature of the cloud platform. The cloud platform manages IT resources in a highly consolidated way and effectively improves the efficiency of resource deployment and use. However, this concentration of resources means that what was once the failure of a single system or a single application can expand into the unavailability of multiple systems, multiple applications, or even the entire IT infrastructure; such security problems are the most critical aspect of cloud platform design and construction. The Cloud Security Alliance (CSA) released its analysis of the top ten cloud computing threats in 2019, which has been widely cited and recognized by the industry. The CSA cloud security architecture model, shown in Figure 3, discusses the security architecture of the cloud platform from three aspects: global security control, virtualization technology security control, and business continuity.
The second risk stems from the cloud platform's dynamic resource allocation. The construction of the cloud platform has changed the traditional IT environment: because of the high-reliability requirements of dual-machine hot backup, virtual machines often migrate dynamically, which dynamically changes the protection boundary. Traditional security devices cannot apply security policies that follow virtual machine migration, so a virtual machine is left unprotected after migrating, bringing huge risks to the business it carries.
The third risk concerns virtualized networking on the cloud platform. Virtual machines operate differently from physical servers: east–west traffic between business virtual machines is visible only inside the cloud platform and flows through virtual switching networks. Traditional protection devices such as firewalls, intrusion prevention systems, anti-virus gateways, and auditing devices cannot perceive the traffic between business virtual machines and therefore cannot provide effective protection inside the cloud platform.
The fourth risk concerns cloud platform data security. The cloud platform stores business data uniformly on cloud computing servers, and this concentration of cloud computing resources makes the security risks correspondingly more concentrated.
Figure 3: CSA cloud platform security architecture.
4.2 Analysis of application examples of cloud computing technology
The construction of the cloud platform of the aerospace measurement and control data center cannot simply copy the commercial cloud platform model, particularly because that model requires extensive software transformation and infrastructure construction; the cloud platform should instead be built in stages and with clear priorities. Data centers should explore software-defined security in the key field of cloud security and enhance their information protection capabilities through cloud computing technology.
Figure 4 shows the overall architecture design framework of the data center cloud platform's security defense system. The system is deployed in bypass mode on the core switch, and business data are steered into it using policy routing or Software Defined Network (SDN) traffic traction. The system receives the data, passes it through the virtualized security components along the relevant service function chain to clean the traffic, and injects it back into the business network, thereby realizing the security protection of business data.
Figure 4: Space TT&C cloud platform security protection architecture project.
High-performance security component virtualization realizes collaborative and dynamic orchestration of service chains through virtual switches. Security function virtualization decomposes the business functions of traditional security devices into virtual network units implemented with container technology. Through unified orchestration and management, the security components are composed into different business service chains according to application requirements; traffic passing through a chain is processed by the security components in it, realizing a variety of complex security protection logic. All virtualized security components can be dynamically orchestrated into service chains according to business requirements, adapting to the needs of different business scenarios.
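A minimal sketch of the orchestration idea follows; the component functions, business names, and packet representation are invented for illustration, and real components are containerized network functions rather than in-process calls.

from typing import Callable, Dict, List

SecurityFn = Callable[[bytes], bytes]

def firewall(pkt: bytes) -> bytes:
    return pkt                    # placeholder: filter by policy

def ips(pkt: bytes) -> bytes:
    return pkt                    # placeholder: intrusion prevention

def audit(pkt: bytes) -> bytes:
    return pkt                    # placeholder: record for auditing

# Different businesses get different chains, assembled on demand.
CHAINS: Dict[str, List[SecurityFn]] = {
    "telemetry":  [firewall, audit],
    "web_portal": [firewall, ips, audit],
}

def process(business: str, pkt: bytes) -> bytes:
    for fn in CHAINS[business]:   # each hop is one virtual security component
        pkt = fn(pkt)
    return pkt                    # re-inject into the business network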
4.3 Application prospects of cloud computing technology
Cloud computing technology represents the trend toward “software-defined everything” in the field of information and communication. (1) Cloud computing technology is highly compatible with the “software-defined” technology system, and the “software-defined” technology system is in turn widely used by cloud computing; (2) cloud computing technology provides a complete platform for the “software-defined” technology system, so that it no longer needs to separately address difficult issues such as elasticity and high availability; (3) cloud computing provides a ubiquitous user base for the “software-defined” technology system, and users can obtain “software-defined” systems through self-service applications. Currently, the relatively mature cases of the “software-defined” technology system include SDN, Software Defined Security (SDS), and Software Defined Vision (SDV).
Cloud computing technology provides an incubation platform, an integration platform, and an operation management platform for emerging technologies such as big data, artificial intelligence, and the Internet of Things, mainly because it manages the full life cycle of application services. In the incubation stage, cloud computing can provide entrepreneurial teams with a large number of platform libraries to meet the needs of distributed computing and distributed storage, and can provide test teams with rich test environments, test processes, and test components to meet the needs of the quality management system. In the integration stage, it can provide the development team with continuous integration/continuous delivery (CI/CD) services to meet the needs of agile development. In the stable operation stage, it can provide the operation and maintenance team with high availability, disaster recovery, and elastic scaling services to meet the needs of stable and reliable system operation.
5 Summary
The generalization of computer simulation technology, the specialization of virtualization technology, and the systematization of cloud computing technology represent the mainstream direction of technology development in the field of information and communication at this stage, and they are also a typical epitome of the development and evolution of countless information technologies. Information technology essentially serves the business logic of its respective field, as the application examples of the three in the aerospace TT&C field show, which well reflects the basic characteristics of demand orientation, safety orientation, innovation, and long-term development.
Following the logical sequence of concept elaboration, difficulty analysis, example application, and prospect outlook, this article briefly discusses the basic situation of the three technologies, clarifies the dialectical relationship among them, and introduces the application status of the related key technologies in detail. It lays a theoretical foundation for information system architecture designers and builders to accurately grasp technical positioning, carry out technology selection, and control the effect of engineering implementation.
Funding information: The authors state no funding involved.
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Conflict of interest: The authors state no conflict of interest.
References
Baisheng W, Guangjun X. 2020. In-depth exploration of Linux system virtualization: principle and implementation. China Machine Press.
Bugnion E, Devine S, Rosenblum M, Sugerman J, Wang EY. 2012. Bringing virtualization to the x86 architecture with the original VMware Workstation. ACM Trans Comput Syst. 30(4):1. doi:10.1145/2382553.2382554.
Damodaran DK, Mohan BR, Vasudevan MS, Naik D. 2012. Performance evaluation of VMware and VirtualBox. IACSIT Press. Vol. 29.
Djordjevic B, Timcenko VV, Kraljevic N, Macek N. 2021. File system performance comparison in full hardware virtualization with ESXi, KVM, Hyper-V and Xen hypervisors. Adv Electr Comput Eng. 21(1). doi:10.4316/AECE.2021.01002.
Downing T, Meyer J. 1997. Java virtual machine. O'Reilly Media.
Jinxiao S, Xiaohua P, Shimin L. 2020. OpenShift cloud native architecture: principles and practice. China Machine Press.
Joshi P, Gunawi HS, Sen K. 2011. PREFAIL: A programmable tool for multiple-failure injection. Proceedings of the 26th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA); 2011 Oct 22–27; Portland (OR), USA. ACM SIGPLAN Notices. 46(10):171–88. doi:10.1145/2048066.2048082.
Kay R. 2009. Wine: is it an emulator or not? Computerworld. 43(22):33.
Martignoni L, Paleari R, Reina A, Roglia GF, Bruschi D. 2013. A methodology for testing CPU emulators. ACM Trans Softw Eng Methodol (TOSEM). 22(4):1–26. doi:10.1145/2522920.2522922.
Nagesh OS, Kumar T, Venkateswararao V. 2017. A survey on security aspects of server virtualization in cloud computing. Int J Electr Comput Eng. 7(3):1326–1336. doi:10.11591/ijece.v7i3.pp1326-1336.
Sites RL, Chernoff A, Kirk MB, Marks MP, Robinson SG. 1993. Binary translation. Commun ACM. 36(2):69–81. doi:10.1145/151220.151227.
Swiech M, Hale KC, Dinda P. 2014. VMM emulation of Intel hardware transactional memory. Proceedings of the 4th International Workshop on Runtime and Operating Systems for Supercomputers (ROSS '14); 2014 Jun 10; Munich, Germany. Association for Computing Machinery. pp. 1–8. doi:10.1145/2612262.2612265.
Wei S, Zhang K, Tu B. 2019. HyperBench: A benchmark suite for virtualization capabilities. Proceedings of the ACM on Measurement and Analysis of Computing Systems. 3(2):1–22. doi:10.1145/3309697.3331478.
Xiaofei W. 2012. Building a private cloud computing platform based on OpenStack. Ph.D. thesis, South China University of Technology, Guangzhou, China.
Yongjie R, Haitao S. 2013. KVM virtualization technology: actual combat and principle analysis. China Machine Press.
Zhang S, Yan H, Chen X. 2012. Research on key technologies of cloud computing. Phys Procedia. 33:1791. doi:10.1016/j.phpro.2012.05.286.
© 2022 Gang Chen et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.