Good afternoon!
I already tested this server two years ago, and now I've decided to give it another try. Overall nothing has changed, except that disk read and write speeds are now much higher.
It feels like there's an SSD under it now =)
I'll publish only the results related to disk write speed; everything else is within the margin of error.
VM details
PT Summary
pt-summary
-----------
# Percona Toolkit System Summary Report ######################
        Date | 2020-09-04 10:40:50 UTC (local TZ: MSK +0300)
    Hostname | vds1968051.my-ihor.ru
      Uptime | 10 min, 1 user, load average: 0.04, 0.13, 0.08
      System | Red Hat; KVM; vRHEL 7.6.0 PC (i440FX + PIIX, 1996) (Other)
 Service Tag | Not Specified
    Platform | Linux
     Release | CentOS Linux release 7.8.2003 (Core)
      Kernel | 3.10.0-1127.19.1.el7.x86_64
Architecture | CPU = 64-bit, OS = 64-bit
   Threading | NPTL 2.17
     SELinux | Disabled
 Virtualized | VMWare
# Processor ##################################################
  Processors | physical = 1, cores = 1, virtual = 1, hyperthreading = no
      Speeds | 1x2799.998
      Models | 1xQEMU Virtual CPU version 2.5+
      Caches | 1x16384 KB
# Memory #####################################################
       Total | 486.9M
        Free | 69.2M
        Used | physical = 95.6M, swap allocated = 0.0, swap used = 0.0, virtual = 95.6M
      Shared | 4.4M
     Buffers | 322.0M
      Caches | 374.1M
       Dirty | 92 kB
     UsedRSS | 106.1M
  Swappiness | 30
 DirtyPolicy | 30, 10
 DirtyStatus | 0, 0
  Locator   Size     Speed             Form Factor   Type          Type Detail
  ========= ======== ================= ============= ============= ===========
  DIMM 0    512 MB   Unknown           DIMM          RAM           Other
# Mounted Filesystems ########################################
  Filesystem  Size Used Type     Opts                                             Mountpoint
  devtmpfs    233M   0% devtmpfs rw,nosuid,size=237644k,nr_inodes=59411,mode=755  /dev
  /dev/vda1   488M  23% ext4     rw,relatime,data=ordered                         /boot
  /dev/vda2   9.5G  16% xfs      rw,relatime,attr2,inode64,noquota                /
  tmpfs       244M   0% tmpfs    rw,nosuid,nodev                                  /dev/shm
  tmpfs       244M   0% tmpfs    rw,nosuid,nodev,mode=755                         /dev/shm
  tmpfs       244M   0% tmpfs    rw,nosuid,nodev,relatime,size=49864k,mode=700    /dev/shm
  tmpfs       244M   0% tmpfs    ro,nosuid,nodev,noexec,mode=755                  /dev/shm
  tmpfs       244M   0% tmpfs    rw,nosuid,nodev                                  /sys/fs/cgroup
  tmpfs       244M   0% tmpfs    rw,nosuid,nodev,mode=755                         /sys/fs/cgroup
  tmpfs       244M   0% tmpfs    rw,nosuid,nodev,relatime,size=49864k,mode=700    /sys/fs/cgroup
  tmpfs       244M   0% tmpfs    ro,nosuid,nodev,noexec,mode=755                  /sys/fs/cgroup
  tmpfs       244M   2% tmpfs    rw,nosuid,nodev                                  /run
  tmpfs       244M   2% tmpfs    rw,nosuid,nodev,mode=755                         /run
  tmpfs       244M   2% tmpfs    rw,nosuid,nodev,relatime,size=49864k,mode=700    /run
  tmpfs       244M   2% tmpfs    ro,nosuid,nodev,noexec,mode=755                  /run
  tmpfs        49M   0% tmpfs    rw,nosuid,nodev                                  /run/user/0
  tmpfs        49M   0% tmpfs    rw,nosuid,nodev,mode=755                         /run/user/0
  tmpfs        49M   0% tmpfs    rw,nosuid,nodev,relatime,size=49864k,mode=700    /run/user/0
  tmpfs        49M   0% tmpfs    ro,nosuid,nodev,noexec,mode=755                  /run/user/0
# Disk Schedulers And Queue Size #############################
         vda | [mq-deadline] 256
# Disk Partioning ############################################
Device       Type      Start        End               Size
============ ==== ========== ========== ==================
/dev/vda     Disk                              10737418240
/dev/vda1    Part       2048    1050623          536870400
/dev/vda2    Part    1050624   20971519        10199498240
# Kernel Inode State #########################################
dentry-state | 30377 19674 45 0 9185 0
     file-nr | 928 0 46628
    inode-nr | 20189 102
# LVM Volumes ################################################
Unable to collect information
# LVM Volume Groups ##########################################
Unable to collect information
# RAID Controller ############################################
  Controller | No RAID controller detected
# Network Config #############################################
  Controller | Red Hat, Inc. Virtio network device
 FIN Timeout | 60
  Port Range | 60999
# Interface Statistics #######################################
  interface  rx_bytes rx_packets  rx_errors   tx_bytes tx_packets  tx_errors
  ========= ========= ========== ========== ========== ========== ==========
  lo                0          0          0          0          0          0
  eth0       60000000      40000          0    2250000      22500          0
# Network Devices ############################################
  Device    Speed     Duplex
  ========= ========= =========
  eth0
# Top Processes ##############################################
  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
    1 root      20   0   43580   3776   2472 S  0.0  0.8   0:01.14 systemd
    2 root      20   0       0      0      0 S  0.0  0.0   0:00.00 kthreadd
    4 root       0 -20       0      0      0 S  0.0  0.0   0:00.00 kworker/0:0H
    5 root      20   0       0      0      0 S  0.0  0.0   0:00.09 kworker/u2:0
    6 root      20   0       0      0      0 S  0.0  0.0   0:00.05 ksoftirqd/0
    7 root      rt   0       0      0      0 S  0.0  0.0   0:00.00 migration/0
    8 root      20   0       0      0      0 S  0.0  0.0   0:00.00 rcu_bh
    9 root      20   0       0      0      0 R  0.0  0.0   0:00.15 rcu_sched
   10 root       0 -20       0      0      0 S  0.0  0.0   0:00.00 lru-add-dra+
# Notable Processes ##########################################
  PID    OOM    COMMAND
  810    -17    sshd
# Memory management ##########################################
Transparent huge pages are currently disabled on the system.
# The End ####################################################
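For reference, pt-summary is part of Percona Toolkit. If you want to reproduce this report on a bare CentOS 7 VM, here is a minimal sketch, assuming the standard percona-release package from the official Percona repository:

# Enable the Percona yum repository, install the toolkit, and run the report
yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
yum install -y percona-toolkit
pt-summary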
Tests
DD
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync 2>&1
rm -f test
-----------
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.1473 s, 500 MB/s
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 1.64194 s, 654 MB/s
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 1.71044 s, 628 MB/s
That is ten times faster than what this server showed before.
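Keep in mind that conv=fdatasync forces a flush only at the end of the run, so the host's write cache can still flatter the number. A useful cross-check, which I did not run as part of the original test, is to repeat the write with direct I/O, bypassing the page cache entirely:

# Same 1 GiB sequential write, but with O_DIRECT; the result is
# usually lower and closer to the raw speed of the underlying storage
dd if=/dev/zero of=test bs=64k count=16k oflag=direct
rm -f test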
Sysbench disk test
sysbench fileio --file-total-size=1G prepare
-----------
sysbench 1.0.17 (using system LuaJIT 2.0.4)

128 files, 8192Kb each, 1024Mb total
Creating files for the test...
Extra file open flags: (none)
Creating file test_file.0
...
Creating file test_file.127
1073741824 bytes written in 3.61 seconds (283.34 MiB/sec).

sysbench fileio --file-total-size=1G --file-test-mode=rndrw --time=300 --max-requests=0 run
-----------
sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 8MiB each
1GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

File operations:
    reads/s:                      1825.15
    writes/s:                     1216.77
    fsyncs/s:                     3893.78

Throughput:
    read, MiB/s:                  28.52
    written, MiB/s:               19.01

General statistics:
    total time:                          300.0213s
    total number of events:              2080799

Latency (ms):
         min:                                  0.00
         avg:                                  0.14
         max:                                136.85
         95th percentile:                      0.50
         sum:                             296767.58

Threads fairness:
    events (avg/stddev):           2080799.0000/0.00
    execution time (avg/stddev):   296.7676/0.00
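One thing the log above doesn't show: the prepare step leaves 128 test files (1 GiB total) in the current directory. sysbench's standard third stage removes them:

sysbench fileio --file-total-size=1G cleanup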
Conclusions
I got this server to run automated builds of a web application on it. Everything runs reasonably fast given such modest specs, and with the improved disk performance it is a very good option for 100 rubles a month.