1.1 Introduction


In 1960 IBM introduced the first virtual computer, which permitted one computer
to be accessed as if it were several. This was a well-established effort by IBM to
virtualize the physical resources of mainframe systems and achieve improved
utilization of resources. (Manuela K. Ferreira, Henrique C. Freitas, August 2008)1 At
that time there was a need for several users to gain access to a computer at the same
time. Because the majority of applications spend most of their time waiting
for data to be read or written, there was enough compute power for several
simultaneous applications, but memory was insufficient to support a large number of
simultaneous users on a system. It was the time-sharing environment that really
drove the demand to virtualize memory so that, in a sense, the real memory
could be time-shared properly among computer hardware resources and user
applications. (Rogers, February 2017)2 Organizations that do not use
virtualization often underutilize their computer hardware.

Today, computer hardware
is designed and architected to host multiple operating systems and
applications. The principal solution to this problem is virtualization.

Virtualization typically
refers to the creation of a virtual machine that can virtualize all of the
hardware resources, including processors, memory, storage, and network
connectivity. With virtualization,
physical hardware resources can be shared by one or more virtual machines. (Lee, 2014)3

This can have both
advantages and disadvantages. The different types of virtualization, each with
its own benefits, are:

•           Hardware Virtualization

•           Software Virtualization

•           Memory Virtualization

•           Storage Virtualization

•           Data Virtualization

•           Network Virtualization (Bill, March 12, 2012)4


This paper will focus on
memory virtualization and its impact on computer hardware. It will study how
memory can be virtualized and analyse the benefits of this technology.




This chapter reviews previous related work on the subject of the study. The available
literature shows that much research has been carried out on various aspects of memory
virtualization and its impact on computer hardware. For the
purpose of this study, the review will give a general overview of memory
virtualization and its impact on computer hardware. Memory
virtualization is not a new area; much-improved work on it already
exists, and as a result a great deal of literature is available. The sources of the
reviews include the internet, articles, journals and magazines from various
sources.

What is memory virtualization?

According to Wikipedia, in computer science memory
virtualization decouples volatile random access memory (RAM)
resources from individual systems in the data center, and then aggregates those
resources into a virtualized memory pool available to any computer in the cluster.
The operating system, or applications running on top of the operating system, can
then access the memory pool. The distributed memory pool can then be used as a
high-speed cache, a messaging layer, or a large, shared memory resource for a
CPU or a GPU application. (Wikipedia, 2017)5


The main purpose of virtualization is to manage workload by transforming traditional
computing to make it more scalable, efficient and economical. Virtualization
can be applied to a wide range of areas such as operating system virtualization,
hardware-level virtualization and server virtualization. By saving energy and
cutting down hardware costs, virtualization technology is rapidly transforming
the fundamental way of computing. (Bill, March 12, 2012)6

In chapter 3 of the book (Distributed and Cloud Computing…), memory virtualization
was defined as follows: virtual memory virtualization is comparable to the virtual
memory support provided by modern operating systems…


In a traditional execution environment, the operating system maintains mappings of
virtual memory to machine memory using page tables; this is a one-stage
mapping from virtual memory to machine memory. Virtual memory virtualization,
in contrast, involves sharing the physical system memory in RAM and dynamically
allocating it to the physical memory of the VMs. This means that the guest OS and
the VMM maintain a two-stage mapping process: virtual memory to
physical memory, and physical memory to machine memory, respectively. (Kai Hwang, 2012)7
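The two-stage mapping can be sketched as a toy model (illustrative only, not hypervisor code; all page numbers are hypothetical):

```python
# Illustrative sketch of the two-stage mapping described above,
# modeled with dictionaries. Page numbers are hypothetical.

# Stage 1: guest OS page table (guest virtual page -> guest physical page)
guest_page_table = {0: 5, 1: 2, 2: 7}

# Stage 2: VMM table (guest physical page -> machine page)
vmm_page_table = {5: 12, 2: 30, 7: 8}

def translate(virtual_page):
    """Translate a guest virtual page to a machine page in two stages."""
    physical_page = guest_page_table[virtual_page]   # stage 1: guest OS
    machine_page = vmm_page_table[physical_page]     # stage 2: VMM
    return machine_page

print(translate(0))  # guest virtual page 0 -> physical 5 -> machine 12
```

In a real system both stages are walked by hardware or the VMM on a TLB miss; the dictionaries here only stand in for the page-table structures.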


The collection of data will be done through: online articles, online books, online
magazines and personal interviews.

4.3 Design

Based on the illustration and the analysis done in the previous chapters, we can now
look at the architecture of memory virtualization. From switches and routers to
servers, basically all physical computer hardware uses some type of memory. They
all have both physical memory and logical memory. The three abstraction levels in
memory virtualization are: machine memory, physical memory and
virtual memory. Virtual memory is the virtual machine's allocated memory, and
physical memory is the server memory made available to the virtual machines.
Machine memory is the real physical memory which is present on the server and
which the VMM can access. Almost all the physical resources shared among
virtual machines use a time-slicing technique, which can be scheduled
based on the priority of each VM. (Semnanian, 2013)8
The virtual memory support provided
by current operating systems is similar to memory virtualization. In a
traditional setting, the OS maintains a page table for mappings of virtual memory
to machine memory, which is a one-stage mapping. All current x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to improve
virtual memory performance.



Yet, in a virtualized execution environment, virtual memory
virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs. A two-stage
mapping process must be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory, and physical memory to
machine memory. The VMM is responsible for mapping the guest physical memory to
the actual machine memory on behalf of the guest OS.

Each page table of a guest OS has a corresponding separate page table in the VMM;
the VMM page table is called the shadow page table. VMware uses shadow
page tables to perform virtual-memory-to-machine-memory address translation.
Processors use the TLB hardware to map virtual memory directly to the machine
memory, avoiding the two levels of translation on every access. When the guest
OS changes a virtual-memory-to-physical-memory mapping, the VMM updates the
shadow page tables to enable a direct lookup. See
the figure below.

(Kai Hwang, 2012 )9
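The idea behind a shadow page table can be sketched as follows (an illustrative model of our own, not VMware code; the page numbers are hypothetical):

```python
# Illustrative sketch of a shadow page table: the VMM composes the
# guest's two-stage mapping into a single direct lookup table.

guest_page_table = {0: 5, 1: 2}      # guest virtual -> guest physical
vmm_page_table = {5: 12, 2: 30}      # guest physical -> machine

# The shadow page table maps guest virtual pages directly to machine
# pages, so hardware can skip the intermediate translation step.
shadow_page_table = {
    vpage: vmm_page_table[ppage]
    for vpage, ppage in guest_page_table.items()
}

# When the guest OS changes a mapping, the VMM updates the shadow table
# so the direct lookup stays consistent with the two-stage mapping.
guest_page_table[1] = 5
shadow_page_table[1] = vmm_page_table[guest_page_table[1]]

print(shadow_page_table)  # {0: 12, 1: 12}
```

The dictionary comprehension plays the role of the VMM keeping the shadow table synchronized; in a real hypervisor this happens on page-table writes trapped by the VMM.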









Memory Virtualization

Memory virtualization introduces a new way to decouple
memory from the hardware and provide it as a shared, distributed or networked
function. It improves performance by providing greater memory capacity without
adding main memory to the CPU or the server. A pool of memory is
shared to overcome physical limitations, which can be a bottleneck in software
performance. Applications use a contiguous
address space which is not tied to the physical
memory on the server. The operating system manages the mapping of
virtual page numbers to physical page numbers.

Computer Hardware Virtualization

Computer hardware virtualization
is done by a component called the Virtual Machine Monitor (VMM). The VMM is the
control system at the center of virtualization. Its function is to act as
the control and translation layer between the VMs and the hardware. Explicitly,
the VMM sits between the OS(s) and
the hardware, and gives each OS the impression that it controls the machine.
But in reality it is not the OS but the monitor that is in control of the
hardware; it load-balances, multiplexes and time-slices the running OS instructions
across the physical resources of the machine. The VMM can be regarded as an
operating system for operating systems, but at a much lower level. The
VMM is designed such that the running OS still believes that it is interacting
with the physical hardware itself. The
key task of the VMM is the effective control of physical platform resources; this
includes memory translation and I/O mapping. In complex environments, time-consuming
operations created and run in virtual machines have, until
now, shown significant performance reductions compared to dedicated physical
servers.


Application Level Integration – Applications running on connected
computers connect directly to the memory pool through an API or the file
system.
System Level Integration – The operating system first connects to the memory pool,
and then makes that pooled memory available to applications.

 (Bhupender, June 17, 2016)11
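The application-level approach can be illustrated with a toy pool API (an invented interface for illustration, not any real product's API):

```python
# Toy memory-pool API illustrating application-level integration:
# applications on different nodes draw from one shared pool.

class MemoryPool:
    def __init__(self):
        self.capacity_gb = 0
        self.used_gb = 0

    def contribute(self, gb):
        """A node donates spare RAM to the shared pool."""
        self.capacity_gb += gb

    def acquire(self, gb):
        """An application reserves pooled memory via the API."""
        if self.used_gb + gb > self.capacity_gb:
            raise MemoryError("pool exhausted")
        self.used_gb += gb

pool = MemoryPool()
pool.contribute(8)   # node A donates 8 GB
pool.contribute(4)   # node B donates 4 GB
pool.acquire(10)     # an app on node C reserves 10 GB of pooled memory
print(pool.capacity_gb - pool.used_gb)  # 2 GB remaining in the pool
```

The point of the sketch is that the application sees one pool larger than any single machine's RAM, which is what distinguishes memory virtualization from ordinary per-host virtual memory.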

As we analyse the architecture of memory virtualization, (Pant, April 22, 2016)12 noted
in his conference report that, because of memory over-commitment,
there is a need for memory management. There must be memory management policies
that include low-level memory reclamation techniques. In addition,
the present reclamation techniques can have problems and need improvement, or one
particular reclamation technique may be better than another.

What is memory reclamation?

Memory management comprises two main
active responsibilities: memory allocation and memory reclamation. Memory
allocation deals with the process of reserving memory, and memory reclamation
deals with the process of discovering idle memory that can be reclaimed. (Marina
Papatriantafilou, August 2009)13
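As an illustration, the two responsibilities can be sketched as a toy model (not a real allocator; the idle-detection rule is a deliberately simple assumption):

```python
# Toy model of the two memory-management responsibilities described
# above: allocation reserves pages, reclamation finds idle ones.

free_pages = set(range(8))        # pages not reserved by anyone
allocated = {}                    # page -> owner VM
last_touched = {}                 # page -> logical timestamp

def allocate(vm, now):
    """Reserve a free page for a VM (allocation)."""
    page = free_pages.pop()
    allocated[page] = vm
    last_touched[page] = now
    return page

def reclaim(now, idle_after=10):
    """Return pages untouched for `idle_after` ticks to the free pool."""
    idle = [p for p, t in last_touched.items() if now - t >= idle_after]
    for page in idle:
        del allocated[page], last_touched[page]
        free_pages.add(page)
    return idle

page = allocate("vm1", now=0)
idle = reclaim(now=20)                   # page untouched for 20 ticks
print(page in idle, page in free_pages)  # True True
```

Real reclamation policies track access through page-table bits or sampling rather than explicit timestamps, but the split between "reserve" and "find idle and take back" is the same.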


Tools of virtualization

Today virtualization has become
so important to organizations that many tools have been developed to achieve it
efficiently. Virtualization offers higher hardware utilization. It
is used to partition computer resources, and hence supports sharing of
resources and provides load balancing. Virtualization tools like OpenVZ, Xen,
VMware, VirtualBox, QEMU etc. are extensively used in
computing firms today. Some of these tools are open source, and they are
efficient and effective. (Anum Masood, 2014)14

Why virtualize?

As computers have become a major
asset for many organizations, and the cost of personal computers and
servers continues to vary, it is not unusual
to have many servers throughout an organization. Organizations cannot afford
to buy several servers and many operating systems. More powerful, multitasking
computers are in use today. Virtualization is an approach that
enables organizations to cut down and manage servers that are underutilized;
workload can be shared easily and energy can be conserved. (Gahagan, 2010)15 Virtualization
in general has a great impact on the computer as a whole. Virtualization is done on
most servers. One of the main benefits of server virtualization is that it
permits IT organizations to consolidate servers, since a single physical server can
support multiple virtual machines (VMs) and applications. Instead of many servers,
one physical server can host many virtual machines. The result is a
reduction in the number of servers in a data center or server room, which leads
to important savings in server hardware and software costs, server
management labor expense, plus facility costs for power, cooling and floor space. (Metzle, 2011)16 In a
white paper written by (Glasgow, July 2007)17, Intel Information
Technology carried out an extensive analysis of maximum physical memory
consumption. They used more than 3,000 servers running non-virtualized
workloads in their business computing environment. They found that roughly half
of these servers consume 1 GB of memory or less. For workloads of this size,
they believe that they can achieve high consolidation ratios of up to 15-20 to
1 using low-cost dual-socket virtualization hosts based on quad-core
processors. They expect that this approach will reduce cost because they
avoid paying for unused memory and related power and cooling. Intel IT has
standardized on 16 gigabytes (GB) of memory for dual-socket virtualization hosts.
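The 16 GB standard is consistent with the reported ratio; a back-of-the-envelope check (our arithmetic, not Intel's) shows why:

```python
# Rough consistency check for the consolidation figures quoted above.
host_memory_gb = 16        # Intel IT's standard dual-socket host
workload_memory_gb = 1     # roughly half the surveyed servers used <= 1 GB

vms_per_host = host_memory_gb // workload_memory_gb
print(vms_per_host)  # 16, within the reported 15-20 to 1 range
```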

They were able to achieve good
results. For workloads of this size,
their research to date shows that they can achieve consolidation ratios
of up to 15-20 to 1 using low-cost dual-socket virtualization hosts based on
quad-core processors. Other research,
done in 2010 by Professor Goran Martinović and his colleagues at the University of
Osijek, revealed that how well a virtual machine runs depends on the
host operating system and its resource management. A host operating system that
has more capable hardware and software
resource allocation will provide better performance for running a
virtual machine. The hardware components with the greatest impact on performance
are as follows: Memory – the memory size of a virtual
computer can take at most half the size of
system memory. The performance
measurement was done by (Goran Martinović, 2010)18.

The hardware and software
requirements are given below.

They used a virtual
computer with 1 GB of memory and Windows 7 to evaluate the performance of the PC,
assuming that this operating system performs better than Windows XP. CPU: since
Virtual PC does not emulate the CPU, a virtual machine executes instructions
directly on the system CPU. Likewise, the
performance measurement was done on a
laptop computer connected to AC power during the measurements, because mobile
processors lower their performance
to save energy when a laptop computer
is not connected
to AC power. Graphics: the graphics adapter has only
8 MB of memory; knowing that
the resolution and the number of
monitors can also
affect performance, they used
only a laptop monitor with a screen size of 17 inches. Hard disk
drive: a virtual machine uses hard disk drive resources by creating a virtual
disk partition.



After measuring performance,
they found that Windows Vista gives
similar results to
Windows 7. Yet
other results show
poorer performance for Windows Vista,
even worse than Windows XP. From the
performance evaluation they conclude that a virtual operating system has
the best performance when Windows 7 on high-spec hardware is used as the host
operating system. From the notes above we have seen that virtualization provides
a lot of benefits, but there are also challenges. These challenges are especially
common in data centers with large data transaction volumes. So
what are the common problems with data center virtualization?

Missing components

One of the major problems is that IT organizations often virtualize only part of
their data center assets. Like big data, virtualization works best when it
encompasses everything and there are no isolated silos of data storage or data
management applications. Limiting the scope of the virtual infrastructure
ultimately increases cost and complexity.

Underused servers

A standalone server is often underutilized, but
virtualization makes better use of current servers while separating resources
and workgroups.

Resource challenges

In a virtualized data center, the allocation of resources can create load-sharing
problems. For example, where several workloads are multiplexed by the VM
hypervisor, their I/O streams start contending for the available resources. This
raises the IOPS needed for virtual workloads. The usual solution is to
overprovision the hardware to improve performance. (Burgess, 2017)19

In the virtualization domain, memory is usually a critical resource.
Virtualization makes it possible to over-commit memory, which improves usage but
can lead to other problems if not properly managed.


Memory over-commitment is the feature that permits a
hypervisor to allocate more memory than the physical host actually has
available. For example, if the host server has 2 GB of physical memory available,
it can allocate 1 GB each to several of its virtual guest machines, committing
more than 2 GB in total. Typically this does not cause any harm, as most virtual
machines only use a portion of their allocated memory.
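A toy model of over-commitment (illustrative numbers, not hypervisor code) makes the idea concrete:

```python
# Toy model of memory over-commitment: the hypervisor promises more
# memory than physically exists, counting on guests not using it all.

host_physical_gb = 2
vm_allocations_gb = {"vm_a": 1, "vm_b": 1, "vm_c": 1}   # 3 GB promised

committed = sum(vm_allocations_gb.values())
overcommit_ratio = committed / host_physical_gb
print(committed, overcommit_ratio)  # 3 GB promised on a 2 GB host, ratio 1.5

# Actual use is typically lower, so the host is not (yet) exhausted.
vm_actual_gb = {"vm_a": 0.5, "vm_b": 0.6, "vm_c": 0.4}
in_use = sum(vm_actual_gb.values())
assert in_use <= host_physical_gb   # about 1.5 GB in use fits in 2 GB
```

The trouble begins when actual use approaches the committed total, which is where the reclamation techniques below come in.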

If the guest machines do use all of their allocated memory, however,
memory on the physical host might begin to run out. The hypervisor can detect
this condition and reallocate unused memory from other VMs, using a
technique called memory ballooning.


Memory ballooning comes into play when a host
is running low on available physical memory. It involves the use of a driver,
called the balloon driver, installed on the guest operating system (OS).

So, how does this happen?

1. Virtual Machine X wants memory, and the hypervisor has no more physical memory available.
2. Virtual Machine Y has some underutilized memory.
3. The balloon driver on VM Y 'inflates', and this memory is now available to the hypervisor.
4. The hypervisor makes this ballooned memory available to VM X.
5. Once more physical memory becomes available, the balloon on VM Y 'deflates'.
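The steps above can be sketched as a toy simulation (illustrative only; a real balloon driver runs inside the guest kernel and pins pages, which this model omits):

```python
# Toy simulation of memory ballooning. All numbers are hypothetical.

class VM:
    def __init__(self, name, allocated_gb, used_gb):
        self.name = name
        self.allocated_gb = allocated_gb   # what the hypervisor promised
        self.used_gb = used_gb             # what the guest actually uses
        self.balloon_gb = 0                # memory claimed by the balloon

    def inflate_balloon(self, amount_gb):
        """Balloon driver claims idle guest memory for the hypervisor."""
        idle = self.allocated_gb - self.used_gb - self.balloon_gb
        claimed = min(amount_gb, idle)
        self.balloon_gb += claimed
        return claimed

    def deflate_balloon(self):
        """Return ballooned memory to the guest when pressure eases."""
        released, self.balloon_gb = self.balloon_gb, 0
        return released

vm_y = VM("Y", allocated_gb=4, used_gb=1)
reclaimed = vm_y.inflate_balloon(2)   # hypervisor asks VM Y for 2 GB
print(reclaimed)                      # 2 (GB now available for VM X)
vm_y.deflate_balloon()                # pressure eases, balloon deflates
```

Note that `inflate_balloon` can never claim more than the guest's idle memory, which is why ballooning alone cannot help when every guest is actively using its allocation.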

To stop a virtual machine from ballooning, you
can create a 'memory reservation' for it, guaranteeing it an amount of
physical memory. Ballooning can lead to swapping, another memory
management technique. In order to avoid any issues that might arise, we need to
plan before implementing memory virtualization. (Sowande, 2015)20


6.10 Conclusion

The primary purpose of this paper was to study the impact of
memory virtualization on the computer in general. Based on these studies and
analyses, virtualization brings many benefits in terms of cost,
performance and energy savings. There is no need to buy many servers; with a single
machine you can virtualize other servers and manage the workload. Memory
virtualization does not reduce the performance of the PC or the server; rather,
if memory is not well managed, your PC may be underutilized or run into
difficulties. Not all tasks have been accomplished in
this paper; thus, it is open for further review.