Update README
huaicheng committed Mar 7, 2018 · 1 parent 20d5a46 · commit 6842a32

FEMU README
===========

Project Description
-------------------

Briefly speaking, FEMU is an NVMe SSD emulator. Built on top of QEMU/KVM, FEMU
is exposed to the guest OS (Linux) as an NVMe block device (e.g.,
/dev/nvme0nX). It can be used as an emulated whitebox or blackbox SSD: (1)
whitebox mode (a.k.a. Software-Defined Flash (SDF), or OpenChannel-SSD), with
the FTL residing on the host side (e.g., LightNVM); (2) blackbox mode, with the
FTL residing inside the device (as in most current commercial SSDs).

FEMU tries to combine the benefits of SSD hardware platforms (e.g., CNEX
OpenChannel SSD, OpenSSD, etc.) and SSD simulators (e.g., DiskSim+SSD,
FlashSim, SSDSim, etc.). Like hardware platforms, FEMU can run a full system
stack (applications + OS + NVMe interface) on top, enabling Software-Defined
Flash (SDF) style research with modifications at the application, OS,
interface, or SSD controller architecture level. Like SSD simulators, FEMU
also supports internal-SSD/FTL research. Users are free to experiment with new
FTL algorithms or SSD performance models to explore new SSD architectures, and
to benchmark those changes with real applications instead of decade-old disk
trace files.

Installation
------------

```Bash
# Switch to the FEMU building directory
cd femu/build-femu
# Copy femu script
cp ../femu-scripts/femu-copy-scripts.sh .
./femu-copy-scripts.sh .
# only Debian/Ubuntu-based distributions are supported
sudo ./pkgdep.sh
```
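
After the dependencies are installed, compile FEMU. A minimal sketch, assuming
``femu-compile.sh`` is among the scripts installed by the copy step above (it
is part of the FEMU script set):

```Bash
# Compile FEMU (QEMU with the FEMU extensions) inside build-femu
./femu-compile.sh
```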

Run FEMU
--------

### 1. Before running ###

- FEMU currently uses its own malloc'ed space for data storage, instead of
using image files. However, FEMU still requires an image file on the QEMU
command line, so as to trick QEMU into probing the correct parameters of the
backend storage. Thus, if you want to emulate a 32GB SSD, you need to create
a 32GB image file on your local file system and attach it to QEMU; see the
sketch after this list. (This limitation will be removed in the near future.)

- FEMU relies on DRAM to provide accurate delay emulation, so make sure you
have enough DRAM free space for the emulated SSD.

- Only **guest Linux versions >= 4.14** are supported, as FEMU requires the
shadow doorbell buffer support in the Linux NVMe driver. (Linux 4.12 and
4.13 are not supported due to their incorrect implementation of the doorbell
buffer config support.)

- To achieve the best performance, you need to disable the doorbell write
operations in the guest Linux NVMe driver, since FEMU uses polling. Please
see [here](#ddb) for how to do this.
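
A minimal preparation sketch for the points above, assuming a 32GB target and
the ``vssd1.raw`` naming used in the next section:

```Bash
# Create a 32GB raw backing image; FEMU only probes it for size/geometry,
# the actual data lives in malloc'ed DRAM.
qemu-img create -f raw vssd1.raw 32G
# Check that enough DRAM is free to back the emulated SSD.
free -g
```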

### 2. Run FEMU as an emulated blackbox SSD (device-managed FTL) ###

Under this mode, each emulated NVMe SSD needs a configuration file named
vssd1.conf, vssd2.conf, etc. (matching your virtual NVMe image file names:
vssd1.raw, vssd2.raw, etc.) in order to run.

The key configuration options are explained below:

The example below configures an emulated SSD with 8 channels and 8 chips per
channel; the total SSD size is 1GB.

    PAGE_SIZE 4096  // SSD page size in bytes
    PAGE_NB 256     // # of pages in one block
    ...
    CHANNEL_NB 8    // # of channels
    GC_MODE 2       // GC blocking mode, see hw/block/ssd/common.h for definition
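
As a sanity check on these numbers: one block holds 4096 B × 256 pages = 1 MiB,
and 8 channels × 8 chips give 64 chips in total, so a 1GB SSD corresponds to
roughly 16 blocks per chip (the per-chip block count is a further option in the
same file).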

After the FEMU configuration file is ready, boot the VM using the following
script:

```Bash
./run-blackbox.sh
```
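
Once the guest boots, the emulated SSD should appear as a regular NVMe
namespace. A quick check inside the VM, assuming default device naming:

```Bash
# The emulated device should enumerate like a real NVMe drive.
ls /dev/nvme*
```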

### 3. Run FEMU as an emulated whitebox SSD (OpenChannel-SSD) ###

```Bash
./run-whitebox.sh
```

Inside the VM, you can play with LightNVM.
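
For instance, a minimal sketch using the LightNVM subcommands of ``nvme-cli``
(the device name, target name, and LUN range below are illustrative
assumptions; adjust them to your setup):

```Bash
# List OpenChannel-capable devices seen by the LightNVM subsystem.
sudo nvme lnvm list
# Expose LUNs 0-3 of nvme0n1 as block device /dev/mydev via the pblk target.
sudo nvme lnvm create -d nvme0n1 -n mydev -t pblk --lun-begin=0 --lun-end=3
```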

Currently FEMU only supports the [OpenChannel Specification
1.2](http://lightnvm.io/docs/Open-ChannelSSDInterfaceSpecification12-final.pdf);
support for the newer 2.0 spec is work in progress and will be added soon.

### 4. Run FEMU without SSD logic emulation ###

```Bash
./run-nossd.sh
```

In this ``nossd`` mode, no SSD emulation logic (neither blackbox nor whitebox
emulation) is executed. The base NVMe specification is supported, and FEMU in
this case handles I/Os as fast as possible. It can be used for basic
performance benchmarking.
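
For example, a simple raw-device benchmark with ``fio`` (a sketch; the device
name and job parameters are illustrative):

```Bash
# 30 seconds of 4KB random reads directly against the emulated device.
sudo fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=4k \
    --iodepth=16 --ioengine=libaio --direct=1 --runtime=30 --time_based
```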

Tuning
------
Additional Tweaks
-----------------

1. <a name="ddb"></a>Disable doorbell writes in your guest Linux NVMe driver:

**Note: Linux kernel versions below 4.14 have an incorrect implementation of
the doorbell buffer config support bit (fixed in commit
223694b9ae8bfba99f3528d49d07a740af6ff95a). FEMU has been updated to handle
this accordingly. Thus, for FEMU polling to work properly out of the box,
please use guest Linux >= 4.14.

Otherwise, if you want to stick with 4.12/4.13, please make sure
``NVME_OACS_DBBUF = 1 << 7`` in ``hw/block/nvme.h``, as this is what was
wrongly implemented in 4.12/4.13.**

In the Linux 4.14 source code, file ``drivers/nvme/host/pci.c``, around line
293, you will find the function below, which determines whether to perform
doorbell write operations.

What we need to do is add one statement (``return false;``) after
``*dbbuf_db = value;``, as shown in the code block below.
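
A sketch of the function as it appears in Linux 4.14
(``nvme_dbbuf_update_and_check_event()`` in ``drivers/nvme/host/pci.c``), with
the added statement marked; verify it against your exact kernel tree before
patching:

```C
static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
                                              volatile u32 *dbbuf_ei)
{
    if (dbbuf_db) {
        u16 old_value;

        /* Ensure the queue entry is written before updating the
         * doorbell copy in memory. */
        wmb();

        old_value = *dbbuf_db;
        *dbbuf_db = value;

        return false;  /* added for FEMU polling: skip real doorbell writes */

        if (nvme_dbbuf_need_event(*dbbuf_ei, old_value, value))
            return true;
    }

    return false;
}
```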

After this, recompile your guest Linux kernel.
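
A typical rebuild cycle looks like this (a sketch; adapt to your own kernel
build workflow):

```Bash
# Rebuild and install the patched guest kernel, then reboot into it.
make -j"$(nproc)"
sudo make modules_install && sudo make install
sudo reboot
```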

