
それマグで!

Knowledge is savored slowly from a mug, not a cup. (takuya_1st's blog)

Those who attend to their habits early likely reap the richer life.

Speeding up HDD access with bcache, using an SSD as a cache for an SSHD-like setup

I want to use an SSD as a read/write cache

I want to speed up HDD access. Writes are painfully slow.

There are several implementations out there, but bcache turned out to be far simpler and faster than lvm cache / dm-cache.

Of course, even without any of this, frequently used data gets cached from the HDD into RAM, so in real deployments the first priority is simply to free up more memory. bcache is a hobby project here.

If it can be installed from apt

sudo apt install bcache-tools

Installing bcache-tools from HEAD

Fetch from git and install:

git clone https://github.com/g2p/bcache-tools.git
cd bcache-tools
make
sudo checkinstall    \
--pkgname=bcache-head    \
--pkgversion="1:$(date +%Y%m%d%H%M)" \
--backup=no    \
--deldoc=yes \
--fstrans=no \
--default

Check that bcache is available

Confirm that the bcache module loads.

takuya@:~$ sudo modprobe bcache
takuya@:~$ sudo lsmod | grep cache
bcache                193649  0
dm_cache               41114  0
dm_persistent_data     49347  1 dm_cache
dm_bio_prison          13056  1 dm_cache
fscache                45542  1 nfs
mbcache                17171  1 ext4
dm_mod                 89405  17 dm_persistent_data,dm_cache,dm_bufio

Sorting out terminology before we start

  • bcache: this software
  • backing device: the device being cached (usually an HDD)
  • caching device: the cache side (usually an SSD)
  • make-bcache: the command that creates both
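To tie the terms together before diving in, the end-to-end flow looks roughly like the dry-run sketch below. /dev/sdX and /dev/sdY are placeholder paths, and the run wrapper only prints each command; drop it (and run as root) to execute for real:

```shell
# Dry-run sketch of the whole bcache setup flow. /dev/sdX and /dev/sdY are
# placeholder paths; run() prints each command instead of executing it.
run() { echo "+ $*"; }

run make-bcache -B /dev/sdX        # 1. format the backing device (HDD side)
run make-bcache -C /dev/sdY        # 2. format the caching device (SSD side)
run "echo <cset.uuid> > /sys/block/bcache0/bcache/attach"  # 3. attach by cset.uuid
run mkfs.ext4 /dev/bcache0         # 4. filesystem goes on the combined device
run mount /dev/bcache0 /mnt        # 5. mount and use
```

The rest of this article is just these five steps, done for real on LVM volumes.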

Specifying the backing device

This time I carve out 100GB from a VG on LVM to create a block device, and use it in place of something like /dev/sda.

I designated it as the backing device.

takuya@:~$ sudo make-bcache -B /dev/mapper/data-storage
UUID:           a057b697-795d-450c-8e68-454d6f244b93
Set UUID:       9778b366-5f4c-4947-a180-c66442f2a934
version:        1
block_size:     1
data_offset:        16

Specifying the cache device

This time I carve out about 20GB from LVM on the SSD and use it in place of something like /dev/sdb.

I designated it as the cache device.

takuya@:~$ sudo make-bcache -C /dev/mapper/acid-ssd
UUID:           9ac55038-1b95-42fb-bd0f-3b4638476e5d
Set UUID:       aad94dbd-5ded-4d23-bccb-1f3c1f475c48
version:        0
nbuckets:       40960
block_size:     1
bucket_size:        1024
nr_in_set:      1
nr_this_dev:        0
first_bucket:       1

Check the status

Check whether the device was registered, i.e. that the bcache device is now visible.

takuya@:~$ ls  -l -R /dev/bcache**
brw-rw---- 1 root disk 253, 0 2017-03-06 18:21 /dev/bcache0

/dev/bcache:
合計 0
drwxr-xr-x 2 root root 60 2017-03-06 18:21 by-uuid

/dev/bcache/by-uuid:
合計 0
lrwxrwxrwx 1 root root 13 2017-03-06 18:21 a057b697-795d-450c-8e68-454d6f244b93 -> ../../bcache0

The /dev/bcache0 created here is the bcache-cached view of the backing device (/dev/sda or the like).
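Those by-uuid entries are ordinary symlinks, so a dev.uuid can be mapped back to its bcacheN node with readlink. A self-contained demo, using a scratch directory standing in for /dev (the uuid is the one from the output above):

```shell
# Demo of how /dev/bcache/by-uuid resolves: a temp directory stands in for
# /dev, and the relative symlink points two levels up at bcache0.
tmp=$(mktemp -d)
mkdir -p "$tmp/bcache/by-uuid"
uuid=a057b697-795d-450c-8e68-454d6f244b93
ln -s ../../bcache0 "$tmp/bcache/by-uuid/$uuid"
basename "$(readlink -f "$tmp/bcache/by-uuid/$uuid")"   # prints: bcache0
rm -rf "$tmp"
```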

Check the backing device's state

takuya@:~$ sudo bcache-super-show /dev/mapper/data-storage
sb.magic        ok
sb.first_sector     8 [match]
sb.csum         D1AA8DCE13E9B35C [match]
sb.version      1 [backing device]

dev.label       (empty)
dev.uuid        a057b697-795d-450c-8e68-454d6f244b93
dev.sectors_per_block   1
dev.sectors_per_bucket  1024
dev.data.first_sector   16
dev.data.cache_mode 0 [writethrough]
dev.data.cache_state    0 [detached]

cset.uuid       9778b366-5f4c-4947-a180-c66442f2a934

Check the cache device's state

takuya@:~$ sudo bcache-super-show /dev/mapper/acid-ssd
sb.magic        ok
sb.first_sector     8 [match]
sb.csum         455541E08344F7F6 [match]
sb.version      3 [cache device]

dev.label       (empty)
dev.uuid        9ac55038-1b95-42fb-bd0f-3b4638476e5d
dev.sectors_per_block   1
dev.sectors_per_bucket  1024
dev.cache.first_sector  1024
dev.cache.cache_sectors 41942016
dev.cache.total_sectors 41943040
dev.cache.ordered   yes
dev.cache.discard   no
dev.cache.pos       0
dev.cache.replacement   0 [lru]

cset.uuid       aad94dbd-5ded-4d23-bccb-1f3c1f475c48

Attach the cache device to bcache0.

sudo bcache-super-show /dev/mapper/acid-ssd \
| grep cset.uuid | awk '{ print $2 }' \
| sudo tee /sys/block/bcache0/bcache/attach
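The grep | awk stage simply pulls the second column out of the cset.uuid line. You can sanity-check the extraction against a captured line (the sample below is the cset.uuid from the cache device output above):

```shell
# Extract cset.uuid from bcache-super-show output; fed here from a saved
# sample line so it runs without the real device.
sample='cset.uuid       aad94dbd-5ded-4d23-bccb-1f3c1f475c48'
uuid=$(printf '%s\n' "$sample" | grep cset.uuid | awk '{ print $2 }')
echo "$uuid"   # the bare UUID, ready to tee into .../attach
```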

Create a filesystem

The bcache side is now ready, so everything from here on operates on the target, /dev/bcache0.

takuya@:~$ sudo mkfs.ext4 /dev/bcache0
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 26214398 4k blocks and 6553600 inodes
Filesystem UUID: 5848c161-fd7c-4031-807a-01c56dc7c967
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Mount it

sudo mount /dev/bcache0 mnt

Benchmarking

I didn't have time to work out careful test conditions, so these are rough numbers: about 1GB of writes, looking only at bandwidth (bw). Reads were not measured this time.

Random write results

Device               Speed (bw)
SSD only             242557KB/s
HDD only             910KB/s
bcache/writethrough  1314KB/s
bcache/writeback     144015KB/s
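A quick back-of-the-envelope on the table above, done in awk (the bw figures are the ones measured here):

```shell
# Speedups relative to the bare HDD, computed from the bw column above.
awk 'BEGIN {
  hdd = 910; ssd = 242557; wt = 1314; wb = 144015          # KB/s
  printf "writethrough vs hdd : %.1fx\n", wt / hdd         # ~1.4x
  printf "writeback    vs hdd : %.1fx\n", wb / hdd         # ~158x
  printf "writeback    vs ssd : %.0f%%\n", 100 * wb / ssd  # ~59% of raw SSD
}'
```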

With about 1GB of random data the difference is... well, it does get somewhat faster?

With writeback, writes go only to the SSD without waiting for the sync to the HDD, so of course it's dramatically faster.

Seems like this could be useful for something.

Measuring speed with bcache

A quick measurement with fio. I reused the job file without thinking the conditions through at all... next time I need to design proper test conditions.

[global]
bs=4k
ioengine=libaio
iodepth=4
size=1g
direct=1
runtime=60
directory=/home/takuya/mnt
filename=ssd.test.file

[rand-write]
rw=randwrite
stonewall

fio results

takuya@:~$ sudo fio myjob.fio
rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.1.11
Starting 1 process
rand-write: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/1576KB/0KB /s] [0/394/0 iops] [eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=2016: Mon Mar  6 18:44:02 2017
  write: io=78864KB, bw=1314.3KB/s, iops=328, runt= 60008msec
    slat (usec): min=6, max=757302, avg=104.79, stdev=5956.73
    clat (usec): min=78, max=2122.7K, avg=12067.09, stdev=45932.99
     lat (usec): min=89, max=2122.7K, avg=12172.05, stdev=46560.28
    clat percentiles (usec):
     |  1.00th=[  169],  5.00th=[  207], 10.00th=[  251], 20.00th=[ 5600],
     | 30.00th=[ 6688], 40.00th=[ 7136], 50.00th=[ 7392], 60.00th=[ 7648],
     | 70.00th=[ 7968], 80.00th=[ 8768], 90.00th=[11584], 95.00th=[20864],
     | 99.00th=[130560], 99.50th=[144384], 99.90th=[602112], 99.95th=[839680],
     | 99.99th=[2113536]
    bw (KB  /s): min=   14, max= 3093, per=100.00%, avg=1438.70, stdev=699.48
    lat (usec) : 100=0.07%, 250=9.92%, 500=1.09%, 750=0.29%, 1000=0.27%
    lat (msec) : 2=0.31%, 4=1.37%, 10=74.64%, 20=6.77%, 50=2.80%
    lat (msec) : 100=0.22%, 250=1.94%, 500=0.12%, 750=0.14%, 1000=0.04%
    lat (msec) : >=2000=0.02%
  cpu          : usr=0.22%, sys=1.58%, ctx=17090, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=19716/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=78864KB, aggrb=1314KB/s, minb=1314KB/s, maxb=1314KB/s, mint=60008msec, maxt=60008msec

Disk stats (read/write):
    bcache0: ios=0/21318, merge=0/0, ticks=0/419156, in_queue=0, util=0.00%, aggrios=0/21324, aggrmerge=0/0, aggrticks=0/260408, aggrin_queue=260408, aggrutil=98.78%
    dm-4: ios=0/21434, merge=0/0, ticks=0/510812, in_queue=510812, util=98.78%, aggrios=0/19972, aggrmerge=0/1474, aggrticks=0/446100, aggrin_queue=446092, aggrutil=98.78%
  sda: ios=0/19972, merge=0/1474, ticks=0/446100, in_queue=446092, util=98.78%
    dm-5: ios=0/21215, merge=0/0, ticks=0/10004, in_queue=10004, util=5.14%, aggrios=0/26443, aggrmerge=0/2553, aggrticks=0/79524, aggrin_queue=79516, aggrutil=7.90%
  sdb: ios=0/26443, merge=0/2553, ticks=0/79524, in_queue=79516, util=7.90%

SSD only

Carving an LV out of the SSD used as the cache device and using it directly gave the following speed.

takuya@:~$ fio myjob.fio
rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/237.4MB/0KB /s] [0/60.8K/0 iops] [eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=2146: Mon Mar  6 18:47:09 2017
  write: io=1024.0MB, bw=242557KB/s, iops=60639, runt=  4323msec
    slat (usec): min=2, max=1436, avg= 7.19, stdev=13.71
    clat (usec): min=12, max=48210, avg=57.96, stdev=215.17
     lat (usec): min=26, max=48214, avg=65.23, stdev=216.07
    clat percentiles (usec):
     |  1.00th=[   33],  5.00th=[   36], 10.00th=[   37], 20.00th=[   39],
     | 30.00th=[   40], 40.00th=[   41], 50.00th=[   41], 60.00th=[   42],
     | 70.00th=[   43], 80.00th=[   46], 90.00th=[   58], 95.00th=[  231],
     | 99.00th=[  237], 99.50th=[  241], 99.90th=[  684], 99.95th=[ 1160],
     | 99.99th=[ 5856]
    bw (KB  /s): min=169536, max=258520, per=99.67%, avg=241758.00, stdev=30286.36
    lat (usec) : 20=0.01%, 50=83.98%, 100=9.08%, 250=6.55%, 500=0.24%
    lat (usec) : 750=0.05%, 1000=0.02%
    lat (msec) : 2=0.06%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=11.29%, sys=44.42%, ctx=154698, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=242557KB/s, minb=242557KB/s, maxb=242557KB/s, mint=4323msec, maxt=4323msec

Disk stats (read/write):
    dm-1: ios=644/251352, merge=0/0, ticks=144/14056, in_queue=14208, util=97.55%, aggrios=644/262180, aggrmerge=0/15, aggrticks=144/18640, aggrin_queue=18752, aggrutil=96.11%
  sdb: ios=644/262180, merge=0/15, ticks=144/18640, in_queue=18752, util=96.11%

HDD only

takuya@:~$ sudo fio myjob.fio
rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.1.11
Starting 1 process
rand-write: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/1420KB/0KB /s] [0/355/0 iops] [eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=2249: Mon Mar  6 18:50:09 2017
  write: io=54776KB, bw=931985B/s, iops=227, runt= 60184msec
    slat (usec): min=5, max=371161, avg=137.76, stdev=5134.81
    clat (usec): min=101, max=883763, avg=17428.75, stdev=51339.16
     lat (usec): min=126, max=883769, avg=17566.73, stdev=51900.52
    clat percentiles (usec):
     |  1.00th=[  171],  5.00th=[  209], 10.00th=[  223], 20.00th=[ 5600],
     | 30.00th=[ 6752], 40.00th=[ 7200], 50.00th=[ 7456], 60.00th=[ 7712],
     | 70.00th=[ 8160], 80.00th=[ 9280], 90.00th=[17792], 95.00th=[125440],
     | 99.00th=[189440], 99.50th=[261120], 99.90th=[684032], 99.95th=[880640],
     | 99.99th=[880640]
    bw (KB  /s): min=   17, max= 3032, per=100.00%, avg=991.52, stdev=724.89
    lat (usec) : 250=11.17%, 500=0.91%, 750=0.04%, 1000=0.01%
    lat (msec) : 2=0.24%, 4=1.22%, 10=71.24%, 20=7.80%, 50=1.29%
    lat (msec) : 100=0.06%, 250=5.26%, 500=0.48%, 750=0.19%, 1000=0.09%
  cpu          : usr=0.19%, sys=1.14%, ctx=11227, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=13694/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=54776KB, aggrb=910KB/s, minb=910KB/s, maxb=910KB/s, mint=60184msec, maxt=60184msec

Disk stats (read/write):
    dm-6: ios=0/14883, merge=0/0, ticks=0/462800, in_queue=473700, util=99.95%, aggrios=0/13821, aggrmerge=0/1075, aggrticks=0/443268, aggrin_queue=450992, aggrutil=99.95%
  sda: ios=0/13821, merge=0/1075, ticks=0/443268, in_queue=450992, util=99.95%
takuya@:~$

Measure again with writeback

Enable writeback:

takuya@:~$ echo writeback | sudo tee  /sys/block/bcache0/bcache/cache_mode
writeback
takuya@:~$ sudo fio myjob.fio
rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/151.5MB/0KB /s] [0/38.8K/0 iops] [eta 00m:00s]
rand-write: (groupid=0, jobs=1): err= 0: pid=2341: Mon Mar  6 18:54:19 2017
  write: io=1024.0MB, bw=144015KB/s, iops=36003, runt=  7281msec
    slat (usec): min=1, max=6871, avg=12.32, stdev=16.83
    clat (usec): min=19, max=14755, avg=98.10, stdev=368.31
     lat (usec): min=28, max=14760, avg=110.48, stdev=368.54
    clat percentiles (usec):
     |  1.00th=[   34],  5.00th=[   38], 10.00th=[   39], 20.00th=[   40],
     | 30.00th=[   41], 40.00th=[   43], 50.00th=[   43], 60.00th=[   44],
     | 70.00th=[   46], 80.00th=[   50], 90.00th=[   63], 95.00th=[  157],
     | 99.00th=[ 1864], 99.50th=[ 2480], 99.90th=[ 3152], 99.95th=[ 5024],
     | 99.99th=[13504]
    bw (KB  /s): min=136680, max=161360, per=99.65%, avg=143503.93, stdev=6617.05
    lat (usec) : 20=0.01%, 50=79.82%, 100=14.01%, 250=2.88%, 500=0.19%
    lat (usec) : 750=0.67%, 1000=0.42%
    lat (msec) : 2=1.12%, 4=0.81%, 10=0.06%, 20=0.03%
  cpu          : usr=5.00%, sys=44.67%, ctx=67203, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=262144/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=1024.0MB, aggrb=144015KB/s, minb=144015KB/s, maxb=144015KB/s, mint=7281msec, maxt=7281msec

Disk stats (read/write):
    bcache0: ios=223/257488, merge=0/0, ticks=28/26512, in_queue=0, util=0.00%, aggrios=111/132361, aggrmerge=0/0, aggrticks=14/13326, aggrin_queue=13352, aggrutil=95.97%
    dm-4: ios=0/1, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/1, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  sda: ios=0/1, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
    dm-5: ios=223/264721, merge=0/0, ticks=28/26652, in_queue=26704, util=95.97%, aggrios=223/258582, aggrmerge=0/6444, aggrticks=28/15168, aggrin_queue=15116, aggrutil=95.91%
  sdb: ios=223/258582, merge=0/6444, ticks=28/15168, in_queue=15116, util=95.91%
takuya@:~$

Pushing writethrough to its performance limit

Turn off sequential IO detection/bypass:

takuya@:~$ echo writethrough | sudo tee  /sys/block/bcache0/bcache/cache_mode
writethrough
takuya@:~$ echo 0 | sudo tee /sys/block/bcache0/bcache/sequential_cutoff
0

Turn off latency detection

takuya@:~$ echo 0 | sudo tee /sys/fs/bcache/aad94dbd-5ded-4d23-bccb-1f3c1f475c48/congested_read_threshold_us
0
takuya@:~$ echo 0 | sudo tee /sys/fs/bcache/aad94dbd-5ded-4d23-bccb-1f3c1f475c48/congested_write_threshold_us
0
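The knobs touched here all live in sysfs: cache_mode and sequential_cutoff under /sys/block/bcache0/bcache, and the congestion thresholds under /sys/fs/bcache/<cset.uuid>. A dry-run recap that only prints the writes (drop the run wrapper and use sudo tee, as above, to apply for real):

```shell
# Dry-run recap of the writethrough tuning above; prints instead of writing.
run() { echo "+ $*"; }
cset=aad94dbd-5ded-4d23-bccb-1f3c1f475c48   # cset.uuid of this cache set

run "echo writethrough > /sys/block/bcache0/bcache/cache_mode"
run "echo 0 > /sys/block/bcache0/bcache/sequential_cutoff"         # cache sequential IO too
run "echo 0 > /sys/fs/bcache/$cset/congested_read_threshold_us"    # never bypass on read latency
run "echo 0 > /sys/fs/bcache/$cset/congested_write_threshold_us"   # never bypass on write latency
```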

Under these conditions the results were as follows. Maybe 5% faster? Not much of a difference.

takuya@:~$ sudo fio myjob.fio
rand-write: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=4
fio-2.1.11
Starting 1 process
Jobs: 1 (f=0): [w(1)] [8.1% done] [0KB/1810KB/0KB /s] [0/452/0 iops] [eta 11m:35s]
rand-write: (groupid=0, jobs=1): err= 0: pid=4873: Mon Mar  6 19:12:02 2017
  write: io=84528KB, bw=1408.7KB/s, iops=352, runt= 60007msec
    slat (usec): min=5, max=16313, avg=86.45, stdev=167.83
    clat (usec): min=77, max=726096, avg=11262.20, stdev=31084.99
     lat (usec): min=85, max=730582, avg=11349.47, stdev=31109.02
    clat percentiles (usec):
     |  1.00th=[  163],  5.00th=[  209], 10.00th=[ 4576], 20.00th=[ 5792],
     | 30.00th=[ 6816], 40.00th=[ 7136], 50.00th=[ 7392], 60.00th=[ 7648],
     | 70.00th=[ 7968], 80.00th=[ 8768], 90.00th=[10944], 95.00th=[18816],
     | 99.00th=[129536], 99.50th=[140288], 99.90th=[544768], 99.95th=[634880],
     | 99.99th=[724992]
    bw (KB  /s): min=   71, max= 3040, per=100.00%, avg=1479.37, stdev=684.76
    lat (usec) : 100=0.06%, 250=6.80%, 500=0.60%, 750=0.18%, 1000=0.13%
    lat (msec) : 2=0.11%, 4=1.35%, 10=79.16%, 20=7.32%, 50=2.14%
    lat (msec) : 100=0.01%, 250=1.91%, 500=0.11%, 750=0.13%
  cpu          : usr=0.73%, sys=3.81%, ctx=17268, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=21132/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: io=84528KB, aggrb=1408KB/s, minb=1408KB/s, maxb=1408KB/s, mint=60007msec, maxt=60007msec

Disk stats (read/write):
    bcache0: ios=0/21070, merge=0/0, ticks=0/242964, in_queue=0, util=0.00%, aggrios=30/21496, aggrmerge=0/0, aggrticks=0/178762, aggrin_queue=178812, aggrutil=99.98%
    dm-4: ios=0/21224, merge=0/0, ticks=0/355520, in_queue=355620, util=99.98%, aggrios=0/21184, aggrmerge=0/51, aggrticks=0/296676, aggrin_queue=296624, aggrutil=99.97%
  sda: ios=0/21184, merge=0/51, ticks=0/296676, in_queue=296624, util=99.97%
    dm-5: ios=60/21768, merge=0/0, ticks=0/2004, in_queue=2004, util=2.88%, aggrios=60/22260, aggrmerge=0/71, aggrticks=0/1788, aggrin_queue=1752, aggrutil=2.54%
  sdb: ios=60/22260, merge=0/71, ticks=0/1788, in_queue=1752, util=2.54%

Stopping

Writing 1 to stop stops the device.

takuya@:~$ echo 1 | sudo tee /sys/block/bcache0/bcache/stop
1

If you want to use the device again later, wipe it with wipefs.

takuya@:~$ sudo wipefs -a /dev/mapper/acid-cache
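Putting stop and wipe together, a teardown recap as a dry-run (the paths are the LVs used in this article; drop the run wrapper and run as root to actually do it):

```shell
# Dry-run teardown: stop the bcache device, then clear both super blocks so
# the underlying LVs can be reused. Prints each command instead of running it.
run() { echo "+ $*"; }
run "echo 1 > /sys/block/bcache0/bcache/stop"   # stop bcache0
run wipefs -a /dev/mapper/acid-ssd              # wipe the cache super block
run wipefs -a /dev/mapper/data-storage          # wipe the backing super block
```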

Main operations

Most operations are performed via /sys/fs.

References

http://www.slideshare.net/nobuto_m/bcachessd-hdd

http://unix.stackexchange.com/questions/225017/how-to-remove-bcache0-volume

http://www.tech-g.com/2015/05/10/bcache-linux-ssd-caching-for-hard-drives-on-debian-jessie/

https://pommi.nethuis.nl/ssd-caching-using-linux-and-bcache/

https://jp.linux.com/news/linuxcom-exclusive/413722-lco20140225
