
Update information about preparing VM image
huaicheng committed Apr 11, 2018
1 parent 6c5ea3f commit c8ef427
Showing 1 changed file (README.md) with 60 additions and 18 deletions.
FEMU README
===========
Project Description
-------------------

Briefly speaking, FEMU is an NVMe SSD emulator. Built on top of QEMU/KVM, FEMU
is exposed to the guest OS (Linux) as an NVMe block device (e.g. /dev/nvme0nX).
It can be used as an emulated whitebox or blackbox SSD: (1) whitebox mode
(a.k.a. Software-Defined Flash (SDF), or OpenChannel-SSD), with the FTL
residing on the host side (e.g. LightNVM); (2) blackbox mode, with the FTL
residing inside the device (as in most current commercial SSDs).

FEMU tries to combine the benefits of SSD hardware platforms (e.g. CNEX
OpenChannel SSD, OpenSSD) and SSD simulators (e.g. DiskSim+SSD, FlashSim,
SSDSim). Like hardware platforms, FEMU can run a full system stack
(applications + OS + NVMe interface) on top, enabling Software-Defined Flash
(SDF) style research with modifications at the application, OS, interface, or
SSD controller architecture level. Like SSD simulators, FEMU also supports
internal-SSD/FTL research: users are free to experiment with new FTL algorithms
or SSD performance models to explore new SSD architectures, and to benchmark
those architectural changes with real applications instead of decade-old disk
trace files.

Installation
------------


1. Make sure you have installed the necessary libraries for building QEMU. The
dependencies can be installed automatically by:

```bash
# Switch to the FEMU building directory
cp ../femu-scripts/femu-copy-scripts.sh .
./femu-copy-scripts.sh .
# only Debian/Ubuntu based distributions supported
sudo ./pkgdep.sh
```

2. Compile & Install FEMU:

```bash
./femu-compile.sh
```
The FEMU binary will appear as ``x86_64-softmmu/qemu-system-x86_64``.
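As an optional sanity check (assuming the build succeeded), you can verify that
the freshly built binary runs:

```bash
# Print the version string of the FEMU/QEMU binary
./x86_64-softmmu/qemu-system-x86_64 --version
```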

3. Prepare the VM image. (For performance reasons, we suggest using a server
version of the guest OS, e.g. Ubuntu Server 16.04 or 14.04.)

You can either build your own VM image or use the VM image provided by us.

Option 1: Build your own VM image by following existing guides (e.g.
[here](https://help.ubuntu.com/community/Installation/QemuEmulator#Installation_of_an_operating_system_from_ISO_to_the_QEMU_environment)).
After the guest OS is installed, make the following changes to redirect the VM
output to the console instead of using a separate GUI window.

- Inside your guest Ubuntu server, edit `/etc/default/grub` and make sure the
following options are set, then apply them as sketched after this block:

```
GRUB_CMDLINE_LINUX="ip=dhcp console=ttyS0,115200 console=tty console=ttyS0"
GRUB_TERMINAL=serial
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
```
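For the new settings to take effect, regenerate the GRUB configuration inside
the guest and reboot (the standard procedure on Debian/Ubuntu):

```bash
# Regenerate /boot/grub/grub.cfg with the serial-console settings above
sudo update-grub
sudo reboot
```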

Now you're ready to `Run FEMU`. If you use a Desktop version of the guest OS,
please remove the "-nographic" command-line option from the running script
before running FEMU.
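For reference, the image-creation and OS-installation part of Option 1 could
look like the sketch below; the 80G size and the ISO path are placeholders, not
values required by FEMU:

```bash
# Create an empty qcow2 disk image for the guest OS
qemu-img create -f qcow2 $HOME/images/u14s.qcow2 80G

# Boot the installer ISO once and install the guest OS onto the image
qemu-system-x86_64 -enable-kvm -m 4G -smp 4 \
    -hda $HOME/images/u14s.qcow2 \
    -cdrom $HOME/iso/ubuntu-16.04-server-amd64.iso \
    -boot d
```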


Option 2: Use our VM image: please download it from our site (??) and save it
as `$HOME/images/u14s.qcow2`.

Run FEMU
--------


- FEMU relies on DRAM to provide accurate delay emulation, so make sure you
have enough DRAM free space for the emulated SSD.

- Only **guest Linux versions >= 4.14** are supported, as FEMU requires the
shadow doorbell buffer support in the Linux NVMe driver. (Linux 4.12 and 4.13
are not supported due to their broken doorbell buffer config implementation.)

- To achieve the best performance, users need to disable doorbell write
operations in the guest Linux NVMe driver, since FEMU uses polling. Please see
[here](#ddb) for how to do this.

### 2. Run FEMU as an emulated blackbox SSD (device-managed FTL) ###

The key configuration options are explained below:

It configures an emulated SSD with 8 channels and 8 chips per channel. The
total SSD size is 1GB.

PAGE_SIZE 4096 // SSD page size in bytes
PAGE_NB 256 // # of pages in one block
SECTOR_SIZE 512 // # sector size in bytes
BLOCK_ERASE_DELAY 3000000 // Block erase latency in nanosecond
CHANNEL_NB 8 // # of channels
GC_MODE 2 // GC blocking mode, see hw/block/ssd/common.h for definition

After the FEMU configuration file is ready, boot the VM using the following
script:

```bash
./run-whitebox.sh
```

Inside the VM, you can play with LightNVM.

Currently FEMU only supports the [OpenChannel Specification
1.2](http://lightnvm.io/docs/Open-ChannelSSDInterfaceSpecification12-final.pdf).
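As an illustration, "playing with LightNVM" inside the guest could look like
the sketch below, assuming nvme-cli with its LightNVM plugin is installed in
the VM; the device name, target name, and LUN range are examples only:

```bash
# List OpenChannel-SSD devices recognized by the guest kernel
sudo nvme lnvm list

# Create a pblk target named "mydev" on top of the emulated device
sudo nvme lnvm create -d nvme0n1 -n mydev -t pblk -b 0 -e 3

# The target is then exposed as a regular block device
ls -l /dev/mydev
```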
In this ``nossd`` mode, no SSD emulation logic (neither blackbox nor whitebox
emulation) is executed. The base NVMe specification is supported, and FEMU in
this case handles I/Os as fast as possible. This mode can be used for basic
performance benchmarking.
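For this kind of basic benchmarking, one option is a simple fio job run inside
the guest against the emulated device; the device name and job parameters below
are examples only:

```bash
# 30-second 4KB random-read test against the emulated NVMe device
sudo fio --name=randread --filename=/dev/nvme0n1 --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
    --time_based --runtime=30 --group_reporting
```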

Tuning
------
To Add ...

FEMU Design
-----------
Please refer to our FAST paper and design document (to come) ...


Additional Tweaks
----------------

**Note: the doorbell buffer config feature is wrongly implemented in Linux
4.12/4.13, which is why only guest kernels 4.14 and later are supported.**

In the Linux 4.14 source code, in file ``drivers/nvme/host/pci.c`` around
``line 293``, you will find the function below, which indicates whether a
doorbell write operation should be performed.

What we need to do is add one statement (``return false;``) after ``*dbbuf_db
= value;``, as shown in the code block below.

After this, recompile your guest Linux kernel; a typical rebuild sequence is
sketched after the code block below.

```C
/* Update dbbuf and return true if an MMIO is required */
static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
        /* ... rest of the signature and the start of the function body omitted ... */

        old_value = *dbbuf_db;
        *dbbuf_db = value;

        /* Disable Doorbell Writes for FEMU: We only need to
         * add the following statement */
        return false;

        /* ... rest of the function omitted ... */
```
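One common way to rebuild the guest kernel after this change, assuming you work
from the kernel source tree inside the guest, is:

```bash
# Rebuild and install the modified guest kernel, then reboot into it
make olddefconfig
make -j"$(nproc)"
sudo make modules_install
sudo make install
sudo reboot
```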
