jokerGermany
Member since: May 11, 2008
Posts: 1004
Before I rack my brain with my LVM half-knowledge, I wanted to ask the pros first. I updated LVM and rebooted.
The server was then dropped into emergency mode. The reason: the volume group Raid was no longer activated. I don't really feel like rebooting again right now, but can anyone tell me why a volume group simply deactivates itself?
mount -a
mount: /media/daten-raid/Hoerbuecher: special device /dev/Raid/Hoerbuecher does not exist.
mount: /media/daten-raid/Backup: special device /dev/Raid/Backup does not exist.
mount: /media/daten-raid/Emulatoren: special device /dev/Raid/Emulatoren does not exist.
mount: /media/daten-raid/Musik: special device /dev/Raid/Musik does not exist.
mount: /media/daten-raid/Images: special device /dev/Raid/Images does not exist.
mount: /media/daten-raid/Filme/Serien: special device /dev/Raid/Filme-Serien does not exist.
mount: /media/daten-raid/Filme/unfertig: special device /dev/Raid/Filme-unfertig does not exist.
mount: /media/daten-raid/Filme/Aufnahmen: special device /dev/Raid/Filme-Aufnahmen does not exist.
mount: /media/daten-raid/Filme/Spielfilme: special device /dev/Raid/Spielfilme does not exist.
mount: /media/wichtig/Bilder: special device /dev/Wichtig/Bilder does not exist.
mount: /media/daten-raid/eBook: special device /dev/Raid/eBook does not exist.
mount: /media/daten-raid/Ubuntu-Installation: special device /dev/Raid/Ubuntu-Installation does not exist.
mount: /media/daten-raid/Spiele: special device /dev/Raid/Spiele does not exist.
mount: /media/daten-raid/gorleben: special device /dev/Raid/gorleben does not exist.
mount: /media/wichtig/Wichtige-Daten: special device /dev/Wichtig/Wichtige-Daten does not exist.
mount: /media/cloud: special device /dev/Wichtig/cloud does not exist.
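(A quick way to see this, for anyone replaying the situation: all of these mounts fail the same way because the LVs are inactive, so their device nodes were never created. lvscan flags every LV as ACTIVE or inactive, which makes the connection visible directly:)
sudo lvscan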
sudo pvs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
PV VG Fmt Attr PSize PFree
/dev/md1 Raid lvm2 a-- <1,82t <1,28t
/dev/md2 Raid lvm2 a-- <1,82t 0
/dev/md3 Raid lvm2 a-- <1,82t <441,39g
/dev/md4 Raid lvm2 a-- <1,82t 774,77g
/dev/md5 non-Raid lvm2 a-- 931,19g 0
/dev/md6 non-Raid lvm2 a-- 931,19g 0
/dev/md7 non-Raid lvm2 a-- 931,19g 520,57g
/dev/non-Raid/Wichtig Wichtig lvm2 a-- <1024,00g 0
/dev/sdc4 non-Raid lvm2 a-- <888,26g <888,26g
[unknown] Wichtig lvm2 a-m <1024,00g 0
[unknown] should be /dev/Raid/Wichtig. But this is where I start to wonder.
mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Mar 6 20:09:35 2019
Raid Level : raid5
Array Size : 2929287168 (2793.59 GiB 2999.59 GB)
Used Dev Size : 976429056 (931.20 GiB 999.86 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Dec 20 19:48:46 2019
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : nas:1 (local to host nas)
UUID : bd5343e2:e95700e9:67450606:7b31d60d
Events : 7684
Number Major Minor RaidDevice State
0 8 65 0 active sync /dev/sde1
1 8 49 1 active sync /dev/sdd1
2 8 17 2 active sync /dev/sdb1
3 8 37 3 active sync /dev/sdc5
(The other arrays don't look any worse; if needed, I can of course post them here too.)
sudo vgs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
VG #PV #LV #SN Attr VSize VFree
Raid 4 15 0 wz--n- 7,27t 2,46t
Wichtig 2 3 0 wz-pn- <2,00t 0
non-Raid 4 5 0 wz--n- <3,60t <1,38t
sudo vgdisplay
--- Volume group ---
VG Name non-Raid
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 17
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 5
Open LV 4
Max PV 0
Cur PV 4
Act PV 4
VG Size <3,60 TiB
PE Size 4,00 MiB
Total PE 942549
Alloc PE / Size 581888 / <2,22 TiB
Free PE / Size 360661 / <1,38 TiB
VG UUID lYy3lc-lkm4-KDZt-ZHoN-6qL9-Bwk0-ZFZvDz
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
--- Volume group ---
VG Name Wichtig
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 15
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 2
Act PV 1
VG Size <2,00 TiB
PE Size 4,00 MiB
Total PE 524286
Alloc PE / Size 524286 / <2,00 TiB
Free PE / Size 0 / 0
VG UUID UmxlLi-OpFU-tZNI-r2AA-Xj5Z-72r3-e6QDa9
--- Volume group ---
VG Name Raid
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 91
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 15
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 7,27 TiB
PE Size 4,00 MiB
Total PE 1907084
Alloc PE / Size 1261312 / 4,81 TiB
Free PE / Size 645772 / 2,46 TiB
VG UUID 38Nnop-Ntk0-NK1q-JKN6-SHXn-S2mZ-amSlLF
sudo lvs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
Backup Raid -wi------- 2,50t
Emulatoren Raid -wi------- 5,00g
Filme-Aufnahmen Raid -wi------- 5,00g
Filme-Serien Raid -wi------- 300,00g
Filme-unfertig Raid -wi------- 300,00g
Hoerbuecher Raid -wi------- 15,00g
Images Raid -wi------- 25,00g
Musik Raid -wi------- 15,00g
Spiele Raid -wi------- 100,00g
Spielfilme Raid -wi------- 500,00g
Ubuntu-Installation Raid -wi------- 1,00g
Wichtig Raid -wi------- 1,00t
cloud Raid -wi------- 1,00g
eBook Raid -wi------- 1,00g
gorleben Raid -wi------- 75,00g
Bilder Wichtig rwi---r-p- 250,00g
Wichtige-Daten Wichtig rwi---r-p- <24,99g
cloud Wichtig rwi---r-p- <749,00g
Wichtig non-Raid -wi-a----- 1,00t
WinFreigabe non-Raid -wi-ao---- 50,00g
downloads non-Raid -wi-ao---- 1,00t
gorleben non-Raid -wi-ao---- 75,00g
streetview non-Raid -wi-ao---- 100,00g
sudo lvdisplay
--- Logical volume ---
LV Path /dev/non-Raid/WinFreigabe
LV Name WinFreigabe
VG Name non-Raid
LV UUID tngkJ2-VQYs-Mocj-Ptxm-e2LE-LKre-pKGYCB
LV Write Access read/write
LV Creation host, time nas, 2019-04-13 16:58:21 +0200
LV Status available
# open 1
LV Size 50,00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/non-Raid/downloads
LV Name downloads
VG Name non-Raid
LV UUID WafgAf-i6WC-dVeS-eg4I-LMp8-axoZ-bbrRd3
LV Write Access read/write
LV Creation host, time nas, 2019-04-13 17:05:54 +0200
LV Status available
# open 1
LV Size 1,00 TiB
Current LE 262144
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Path /dev/non-Raid/streetview
LV Name streetview
VG Name non-Raid
LV UUID MSOJJa-Q6Nk-28gj-3p9d-VDwJ-SvKv-WF5SBR
LV Write Access read/write
LV Creation host, time nas, 2019-04-13 17:11:54 +0200
LV Status available
# open 1
LV Size 100,00 GiB
Current LE 25600
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
--- Logical volume ---
LV Path /dev/non-Raid/gorleben
LV Name gorleben
VG Name non-Raid
LV UUID qQ8VRp-h6M2-6b1S-6igg-MtBP-OPcP-1PYY3F
LV Write Access read/write
LV Creation host, time nas, 2019-04-13 22:18:51 +0200
LV Status available
# open 1
LV Size 75,00 GiB
Current LE 19200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Logical volume ---
LV Path /dev/non-Raid/Wichtig
LV Name Wichtig
VG Name non-Raid
LV UUID xc0yGx-RqKG-1evf-iq4a-L8FB-NTIj-029Zvt
LV Write Access read/write
LV Creation host, time nas, 2019-04-24 18:21:09 +0200
LV Status available
# open 0
LV Size 1,00 TiB
Current LE 262144
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
--- Logical volume ---
LV Path /dev/Wichtig/Bilder
LV Name Bilder
VG Name Wichtig
LV UUID Z0GoUg-ahd6-mwEm-Y97n-MxuN-tU4u-s29Y4R
LV Write Access read/write
LV Creation host, time nas, 2019-04-24 18:50:57 +0200
LV Status NOT available
LV Size 250,00 GiB
Current LE 64000
Mirrored volumes 2
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Wichtig/Wichtige-Daten
LV Name Wichtige-Daten
VG Name Wichtig
LV UUID iof859-G3ox-Ye6V-npRa-0w85-Inyv-Q8lK5i
LV Write Access read/write
LV Creation host, time nas, 2019-04-24 18:53:16 +0200
LV Status NOT available
LV Size <24,99 GiB
Current LE 6397
Mirrored volumes 2
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Wichtig/cloud
LV Name cloud
VG Name Wichtig
LV UUID 5LX4QJ-5nXJ-Kiu1-yqsl-pS0x-2Ued-I0eqVa
LV Write Access read/write
LV Creation host, time nas, 2019-05-08 20:50:05 +0200
LV Status NOT available
LV Size <749,00 GiB
Current LE 191743
Mirrored volumes 2
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Hoerbuecher
LV Name Hoerbuecher
VG Name Raid
LV UUID 0WVKcE-W3cK-uxr7-2NkK-gxdG-SWXh-DwAuIn
LV Write Access read/write
LV Creation host, time nas, 2019-03-02 23:06:39 +0100
LV Status NOT available
LV Size 15,00 GiB
Current LE 3840
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Backup
LV Name Backup
VG Name Raid
LV UUID iYiWC5-jB1s-nIvq-MdVb-1bLo-gwhG-dR6afH
LV Write Access read/write
LV Creation host, time nas, 2019-03-04 20:46:57 +0100
LV Status NOT available
LV Size 2,50 TiB
Current LE 655360
Segments 4
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Emulatoren
LV Name Emulatoren
VG Name Raid
LV UUID XSMmPb-s2xG-KnWe-ZYwZ-jCgu-t7EL-Heekfi
LV Write Access read/write
LV Creation host, time nas, 2019-03-10 18:30:41 +0100
LV Status NOT available
LV Size 5,00 GiB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Musik
LV Name Musik
VG Name Raid
LV UUID XPc3E5-iZJa-EDAc-owNN-ySXK-vhXg-4SCviF
LV Write Access read/write
LV Creation host, time nas, 2019-03-10 18:42:53 +0100
LV Status NOT available
LV Size 15,00 GiB
Current LE 3840
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Images
LV Name Images
VG Name Raid
LV UUID Pjocuh-8atw-vZ33-nrLW-pda6-OZB7-XBfjmx
LV Write Access read/write
LV Creation host, time nas, 2019-03-10 20:04:08 +0100
LV Status NOT available
LV Size 25,00 GiB
Current LE 6400
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Filme-Serien
LV Name Filme-Serien
VG Name Raid
LV UUID 4K0HH6-CDGt-ekTy-R341-Fpuu-7CIQ-HJNBj4
LV Write Access read/write
LV Creation host, time nas, 2019-03-10 21:05:33 +0100
LV Status NOT available
LV Size 300,00 GiB
Current LE 76800
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Filme-unfertig
LV Name Filme-unfertig
VG Name Raid
LV UUID NLZT4j-Duu2-DI6N-tXal-gc1F-xcdi-5kJU4K
LV Write Access read/write
LV Creation host, time nas, 2019-03-11 19:09:20 +0100
LV Status NOT available
LV Size 300,00 GiB
Current LE 76800
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Filme-Aufnahmen
LV Name Filme-Aufnahmen
VG Name Raid
LV UUID 52iXhj-3IXQ-AC7C-uJMD-IX07-Fo01-eI07U7
LV Write Access read/write
LV Creation host, time nas, 2019-03-12 19:07:16 +0100
LV Status NOT available
LV Size 5,00 GiB
Current LE 1280
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Spielfilme
LV Name Spielfilme
VG Name Raid
LV UUID ew9oG0-bD5N-CNuU-Dj42-d553-VtW2-AzeyyL
LV Write Access read/write
LV Creation host, time nas, 2019-03-12 19:25:29 +0100
LV Status NOT available
LV Size 500,00 GiB
Current LE 128000
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/eBook
LV Name eBook
VG Name Raid
LV UUID oZ5cU7-tdm2-v9UL-1Xbv-GOuq-YRgo-pok5cG
LV Write Access read/write
LV Creation host, time nas, 2019-03-14 19:32:15 +0100
LV Status NOT available
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Ubuntu-Installation
LV Name Ubuntu-Installation
VG Name Raid
LV UUID GvVSm1-CfFt-zaY8-8Yxh-DchU-0mak-s10LAS
LV Write Access read/write
LV Creation host, time nas, 2019-03-14 19:36:24 +0100
LV Status NOT available
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Spiele
LV Name Spiele
VG Name Raid
LV UUID uhX23L-9YT4-ZjXj-qbGx-JPZU-iE2F-fiA683
LV Write Access read/write
LV Creation host, time nas, 2019-03-14 19:41:09 +0100
LV Status NOT available
LV Size 100,00 GiB
Current LE 25600
Segments 3
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/gorleben
LV Name gorleben
VG Name Raid
LV UUID 0eAW6x-40yd-tUVb-KcZ3-KQ9p-ruNi-egrexg
LV Write Access read/write
LV Creation host, time nas, 2019-03-15 23:29:09 +0100
LV Status NOT available
LV Size 75,00 GiB
Current LE 19200
Segments 2
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/cloud
LV Name cloud
VG Name Raid
LV UUID pi2HdK-JTwb-SOyE-gFIW-LjD2-w9Ch-tNLb6a
LV Write Access read/write
LV Creation host, time nas, 2019-03-17 12:21:42 +0100
LV Status NOT available
LV Size 1,00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
--- Logical volume ---
LV Path /dev/Raid/Wichtig
LV Name Wichtig
VG Name Raid
LV UUID H5uggO-mzSk-RoMO-NJnm-NqxX-VxIx-y9Mi19
LV Write Access read/write
LV Creation host, time nas, 2019-04-24 18:21:02 +0200
LV Status NOT available
LV Size 1,00 TiB
Current LE 262144
Segments 1
Allocation inherit
Read ahead sectors auto
NOT available? OK then:
vgchange -a y Raid
15 logical volume(s) in volume group "Raid" now active
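(With the VG active the device nodes are back, so the mounts can simply be retried; a quick sanity check, as a sketch:)
# Retry the fstab mounts now that /dev/Raid/* exists again:
sudo mount -a
# The fifth lv_attr character should now read 'a' (active):
sudo lvs -o lv_name,vg_name,lv_attr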
Cranvil
Member since: March 9, 2019
Posts: 990
jokerGermany wrote:
sudo pvs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
Up front: I have no idea about LVM. That said: have you tried to find out what this warning is about? Is it an old device that was never cleanly removed? Have you perhaps set a filter in /etc/lvm/lvm.conf (or wherever that file lives on your system) that no longer matches?
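(One way to chase such a warning down, as a sketch assuming default paths; the grep pattern is just the start of the PV UUID from the warning:)
# Show every PV with its UUID and VG, to see which device (if any) carries it:
sudo pvs -o pv_name,pv_uuid,vg_name
# The UUID also appears in the plain-text metadata backups, which name the VG:
sudo grep -rl "TeLvea-KTKp" /etc/lvm/backup /etc/lvm/archive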
jokerGermany
(Thread starter)
Member since: May 11, 2008
Posts: 1004
I suspect it was /dev/Raid/Wichtig, which of course was gone because the volume group had been deactivated... After activation:
sudo pvs
PV VG Fmt Attr PSize PFree
/dev/Raid/Wichtig Wichtig lvm2 a-- <1024,00g 0
/dev/md1 Raid lvm2 a-- <1,82t <1,28t
/dev/md2 Raid lvm2 a-- <1,82t 0
/dev/md3 Raid lvm2 a-- <1,82t <441,39g
/dev/md4 Raid lvm2 a-- <1,82t 774,77g
/dev/md5 non-Raid lvm2 a-- 931,19g 0
/dev/md6 non-Raid lvm2 a-- 931,19g 0
/dev/md7 non-Raid lvm2 a-- 931,19g 520,57g
/dev/non-Raid/Wichtig Wichtig lvm2 a-- <1024,00g 0
/dev/sdc4 non-Raid lvm2 a-- <888,26g <888,26g
I haven't changed anything in /etc/lvm/lvm.conf...
The nice part: I now get to activate the volume group Raid manually after every reboot...
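(For reference: with lvmetad-based setups, as in the config posted further down with use_lvmetad = 1, activation normally happens automatically via udev events. A manual re-scan that re-triggers that auto-activation, as a sketch rather than a fix:)
# Re-scan all devices, update lvmetad, and auto-activate complete VGs:
sudo pvscan --cache --activate ay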
dingsbums
Member since: November 13, 2010
Posts: 3762
I updated LVM
What exactly did you do?
I haven't changed anything in /etc/lvm/lvm.conf...
Maybe that's exactly where the problem lies. Doesn't it define what gets activated at startup and what doesn't? Have any names changed?
I now get to activate the volume group Raid after every reboot...
Or an update of the initramfs is missing. Similar case: https://unix.stackexchange.com/questions/213027/lvm-volume-is-inactive-after-reboot-of-centos
Addendum: If all else fails, write yourself a systemd unit or a crontab entry as a workaround that runs the activation command ... 😉 (a sketch follows below)
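(Concretely, those two ideas might look like this. A sketch only: the unit name is made up, and the ordering targets may need tuning so the activation runs before the fstab mounts.)
# Rebuild the initramfs for all installed kernels so it matches the updated LVM:
sudo update-initramfs -u -k all

# /etc/systemd/system/activate-vg-raid.service (hypothetical unit name)
[Unit]
Description=Activate LVM volume group Raid (workaround)
DefaultDependencies=no
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/vgchange -ay Raid

[Install]
WantedBy=local-fs-pre.target

# Enable it once:
sudo systemctl enable activate-vg-raid.service
# Crontab alternative (in root's crontab, via sudo crontab -e):
# @reboot /sbin/vgchange -ay Raid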
jokerGermany
(Thread starter)
Member since: May 11, 2008
Posts: 1004
dingsbums wrote: I updated LVM
What exactly did you do?
sudo apt-get dist-upgrade
dingsbums wrote: I haven't changed anything in /etc/lvm/lvm.conf...
Maybe that's exactly where the problem lies. Doesn't it define what gets activated at startup and what doesn't? Have any names changed?
Nothing has changed there; I last touched the LVM setup a quarter to half a year ago.
/etc/lvm/lvm.conf
# This is an example configuration file for the LVM2 system.
# It contains the default settings that would be used if there was no
# /etc/lvm/lvm.conf file.
#
# Refer to 'man lvm.conf' for further information including the file layout.
#
# Refer to 'man lvm.conf' for information about how settings configured in
# this file are combined with built-in values and command line options to
# arrive at the final values used by LVM.
#
# Refer to 'man lvmconfig' for information about displaying the built-in
# and configured values used by LVM.
#
# If a default value is set in this file (not commented out), then a
# new version of LVM using this file will continue using that value,
# even if the new version of LVM changes the built-in default value.
#
# To put this file in a different directory and override /etc/lvm set
# the environment variable LVM_SYSTEM_DIR before running the tools.
#
# N.B. Take care that each setting only appears once if uncommenting
# example settings in this file.
# Configuration section config.
# How LVM configuration settings are handled.
config {
# Configuration option config/checks.
# If enabled, any LVM configuration mismatch is reported.
# This implies checking that the configuration key is understood by
# LVM and that the value of the key is the proper type. If disabled,
# any configuration mismatch is ignored and the default value is used
# without any warning (a message about the configuration key not being
# found is issued in verbose mode only).
checks = 1
# Configuration option config/abort_on_errors.
# Abort the LVM process if a configuration mismatch is found.
abort_on_errors = 0
# Configuration option config/profile_dir.
# Directory where LVM looks for configuration profiles.
profile_dir = "/etc/lvm/profile"
}
# Configuration section devices.
# How LVM uses block devices.
devices {
# Configuration option devices/dir.
# Directory in which to create volume group device nodes.
# Commands also accept this as a prefix on volume group names.
# This configuration option is advanced.
dir = "/dev"
# Configuration option devices/scan.
# Directories containing device nodes to use with LVM.
# This configuration option is advanced.
scan = [ "/dev" ]
# Configuration option devices/obtain_device_list_from_udev.
# Obtain the list of available devices from udev.
# This avoids opening or using any inapplicable non-block devices or
# subdirectories found in the udev directory. Any device node or
# symlink not managed by udev in the udev directory is ignored. This
# setting applies only to the udev-managed device directory; other
# directories will be scanned fully. LVM needs to be compiled with
# udev support for this setting to apply.
obtain_device_list_from_udev = 1
# Configuration option devices/external_device_info_source.
# Select an external device information source.
# Some information may already be available in the system and LVM can
# use this information to determine the exact type or use of devices it
# processes. Using an existing external device information source can
# speed up device processing as LVM does not need to run its own native
# routines to acquire this information. For example, this information
# is used to drive LVM filtering like MD component detection, multipath
# component detection, partition detection and others.
#
# Accepted values:
# none
# No external device information source is used.
# udev
# Reuse existing udev database records. Applicable only if LVM is
# compiled with udev support.
#
external_device_info_source = "none"
# Configuration option devices/preferred_names.
# Select which path name to display for a block device.
# If multiple path names exist for a block device, and LVM needs to
# display a name for the device, the path names are matched against
# each item in this list of regular expressions. The first match is
# used. Try to avoid using undescriptive /dev/dm-N names, if present.
# If no preferred name matches, or if preferred_names are not defined,
# the following built-in preferences are applied in order until one
# produces a preferred name:
# Prefer names with path prefixes in the order of:
# /dev/mapper, /dev/disk, /dev/dm-*, /dev/block.
# Prefer the name with the least number of slashes.
# Prefer a name that is a symlink.
# Prefer the path with least value in lexicographical order.
#
# Example
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
#
# This configuration option does not have a default value defined.
# Configuration option devices/filter.
# Limit the block devices that are used by LVM commands.
# This is a list of regular expressions used to accept or reject block
# device path names. Each regex is delimited by a vertical bar '|'
# (or any character) and is preceded by 'a' to accept the path, or
# by 'r' to reject the path. The first regex in the list to match the
# path is used, producing the 'a' or 'r' result for the device.
# When multiple path names exist for a block device, if any path name
# matches an 'a' pattern before an 'r' pattern, then the device is
# accepted. If all the path names match an 'r' pattern first, then the
# device is rejected. Unmatching path names do not affect the accept
# or reject decision. If no path names for a device match a pattern,
# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
# as the combination might produce unexpected results (test changes.)
# Run vgscan after changing the filter to regenerate the cache.
# See the use_lvmetad comment for a special case regarding filters.
#
# Example
# Accept every block device:
# filter = [ "a|.*/|" ]
# Reject the cdrom drive:
# filter = [ "r|/dev/cdrom|" ]
# Work with just loopback devices, e.g. for testing:
# filter = [ "a|loop|", "r|.*|" ]
# Accept all loop devices and ide drives except hdc:
# filter = [ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
# Use anchors to be very specific:
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
# Configuration option devices/cache_dir.
# Directory in which to store the device cache file.
# The results of filtering are cached on disk to avoid rescanning dud
# devices (which can take a very long time). By default this cache is
# stored in a file named .cache. It is safe to delete this file; the
# tools regenerate it. If obtain_device_list_from_udev is enabled, the
# list of devices is obtained from udev and any existing .cache file
# is removed.
cache_dir = "/run/lvm"
# Configuration option devices/cache_file_prefix.
# A prefix used before the .cache file name. See devices/cache_dir.
cache_file_prefix = ""
# Configuration option devices/write_cache_state.
# Enable/disable writing the cache file. See devices/cache_dir.
write_cache_state = 1
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
# maximum number of partitions.
#
# Example
# types = [ "fd", 16 ]
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
# Configuration option devices/sysfs_scan.
# Restrict device scanning to block devices appearing in sysfs.
# This is a quick way of filtering out block devices that are not
# present on the system. sysfs must be part of the kernel and mounted.
sysfs_scan = 1
# Configuration option devices/multipath_component_detection.
# Ignore devices that are components of DM multipath devices.
multipath_component_detection = 1
# Configuration option devices/md_component_detection.
# Ignore devices that are components of software RAID (md) devices.
md_component_detection = 1
# Configuration option devices/fw_raid_component_detection.
# Ignore devices that are components of firmware RAID devices.
# LVM must use an external_device_info_source other than none for this
# detection to execute.
fw_raid_component_detection = 0
# Configuration option devices/md_chunk_alignment.
# Align PV data blocks with md device's stripe-width.
# This applies if a PV is placed directly on an md device.
md_chunk_alignment = 1
# Configuration option devices/default_data_alignment.
# Default alignment of the start of a PV data area in MB.
# If set to 0, a value of 64KiB will be used.
# Set to 1 for 1MiB, 2 for 2MiB, etc.
# This configuration option has an automatic default value.
# default_data_alignment = 1
# Configuration option devices/data_alignment_detection.
# Detect PV data alignment based on sysfs device information.
# The start of a PV data area will be a multiple of minimum_io_size or
# optimal_io_size exposed in sysfs. minimum_io_size is the smallest
# request the device can perform without incurring a read-modify-write
# penalty, e.g. MD chunk size. optimal_io_size is the device's
# preferred unit of receiving I/O, e.g. MD stripe width.
# minimum_io_size is used if optimal_io_size is undefined (0).
# If md_chunk_alignment is enabled, that detects the optimal_io_size.
# This setting takes precedence over md_chunk_alignment.
data_alignment_detection = 1
# Configuration option devices/data_alignment.
# Alignment of the start of a PV data area in KiB.
# If a PV is placed directly on an md device and md_chunk_alignment or
# data_alignment_detection are enabled, then this setting is ignored.
# Otherwise, md_chunk_alignment and data_alignment_detection are
# disabled if this is set. Set to 0 to use the default alignment or the
# page size, if larger.
data_alignment = 0
# Configuration option devices/data_alignment_offset_detection.
# Detect PV data alignment offset based on sysfs device information.
# The start of a PV aligned data area will be shifted by the
# alignment_offset exposed in sysfs. This offset is often 0, but may
# be non-zero. Certain 4KiB sector drives that compensate for windows
# partitioning will have an alignment_offset of 3584 bytes (sector 7
# is the lowest aligned logical block, the 4KiB sectors start at
# LBA -1, and consequently sector 63 is aligned on a 4KiB boundary).
# pvcreate --dataalignmentoffset will skip this detection.
data_alignment_offset_detection = 1
# Configuration option devices/ignore_suspended_devices.
# Ignore DM devices that have I/O suspended while scanning devices.
# Otherwise, LVM waits for a suspended device to become accessible.
# This should only be needed in recovery situations.
ignore_suspended_devices = 0
# Configuration option devices/ignore_lvm_mirrors.
# Do not scan 'mirror' LVs to avoid possible deadlocks.
# This avoids possible deadlocks when using the 'mirror' segment type.
# This setting determines whether LVs using the 'mirror' segment type
# are scanned for LVM labels. This affects the ability of mirrors to
# be used as physical volumes. If this setting is enabled, it is
# impossible to create VGs on top of mirror LVs, i.e. to stack VGs on
# mirror LVs. If this setting is disabled, allowing mirror LVs to be
# scanned, it may cause LVM processes and I/O to the mirror to become
# blocked. This is due to the way that the mirror segment type handles
# failures. In order for the hang to occur, an LVM command must be run
# just after a failure and before the automatic LVM repair process
# takes place, or there must be failures in multiple mirrors in the
# same VG at the same time with write failures occurring moments before
# a scan of the mirror's labels. The 'mirror' scanning problems do not
# apply to LVM RAID types like 'raid1' which handle failures in a
# different way, making them a better choice for VG stacking.
ignore_lvm_mirrors = 1
# Configuration option devices/disable_after_error_count.
# Number of I/O errors after which a device is skipped.
# During each LVM operation, errors received from each device are
# counted. If the counter of a device exceeds the limit set here,
# no further I/O is sent to that device for the remainder of the
# operation. Setting this to 0 disables the counters altogether.
disable_after_error_count = 0
# Configuration option devices/require_restorefile_with_uuid.
# Allow use of pvcreate --uuid without requiring --restorefile.
require_restorefile_with_uuid = 1
# Configuration option devices/pv_min_size.
# Minimum size in KiB of block devices which can be used as PVs.
# In a clustered environment all nodes must use the same value.
# Any value smaller than 512KiB is ignored. The previous built-in
# value was 512.
pv_min_size = 2048
# Configuration option devices/issue_discards.
# Issue discards to PVs that are no longer used by an LV.
# Discards are sent to an LV's underlying physical volumes when the LV
# is no longer using the physical volumes' space, e.g. lvremove,
# lvreduce. Discards inform the storage that a region is no longer
# used. Storage that supports discards advertise the protocol-specific
# way discards should be issued by the kernel (TRIM, UNMAP, or
# WRITE SAME with UNMAP bit set). Not all storage will support or
# benefit from discards, but SSDs and thinly provisioned LUNs
# generally do. If enabled, discards will only be issued if both the
# storage and kernel provide support.
issue_discards = 1
# Configuration option devices/allow_changes_with_duplicate_pvs.
# Allow VG modification while a PV appears on multiple devices.
# When a PV appears on multiple devices, LVM attempts to choose the
# best device to use for the PV. If the devices represent the same
# underlying storage, the choice has minimal consequence. If the
# devices represent different underlying storage, the wrong choice
# can result in data loss if the VG is modified. Disabling this
# setting is the safest option because it prevents modifying a VG
# or activating LVs in it while a PV appears on multiple devices.
# Enabling this setting allows the VG to be used as usual even with
# uncertain devices.
allow_changes_with_duplicate_pvs = 0
}
# Configuration section allocation.
# How LVM selects space and applies properties to LVs.
allocation {
# Configuration option allocation/cling_tag_list.
# Advise LVM which PVs to use when searching for new space.
# When searching for free space to extend an LV, the 'cling' allocation
# policy will choose space on the same PVs as the last segment of the
# existing LV. If there is insufficient space and a list of tags is
# defined here, it will check whether any of them are attached to the
# PVs concerned and then seek to match those PV tags between existing
# extents and new extents.
#
# Example
# Use the special tag "@*" as a wildcard to match any PV tag:
# cling_tag_list = [ "@*" ]
# LVs are mirrored between two sites within a single VG, and
# PVs are tagged with either @site1 or @site2 to indicate where
# they are situated:
# cling_tag_list = [ "@site1", "@site2" ]
#
# This configuration option does not have a default value defined.
# Configuration option allocation/maximise_cling.
# Use a previous allocation algorithm.
# Changes made in version 2.02.85 extended the reach of the 'cling'
# policies to detect more situations where data can be grouped onto
# the same disks. This setting can be used to disable the changes
# and revert to the previous algorithm.
maximise_cling = 1
# Configuration option allocation/use_blkid_wiping.
# Use blkid to detect existing signatures on new PVs and LVs.
# The blkid library can detect more signatures than the native LVM
# detection code, but may take longer. LVM needs to be compiled with
# blkid wiping support for this setting to apply. LVM native detection
# code is currently able to recognize: MD device signatures,
# swap signature, and LUKS signatures. To see the list of signatures
# recognized by blkid, check the output of the 'blkid -k' command.
use_blkid_wiping = 1
# Configuration option allocation/wipe_signatures_when_zeroing_new_lvs.
# Look for and erase any signatures while zeroing a new LV.
# The --wipesignatures option overrides this setting.
# Zeroing is controlled by the -Z/--zero option, and if not specified,
# zeroing is used by default if possible. Zeroing simply overwrites the
# first 4KiB of a new LV with zeroes and does no signature detection or
# wiping. Signature wiping goes beyond zeroing and detects exact types
# and positions of signatures within the whole LV. It provides a
# cleaner LV after creation as all known signatures are wiped. The LV
# is not claimed incorrectly by other tools because of old signatures
# from previous use. The number of signatures that LVM can detect
# depends on the detection code that is selected (see
# use_blkid_wiping.) Wiping each detected signature must be confirmed.
# When this setting is disabled, signatures on new LVs are not detected
# or erased unless the --wipesignatures option is used directly.
wipe_signatures_when_zeroing_new_lvs = 1
# Configuration option allocation/mirror_logs_require_separate_pvs.
# Mirror logs and images will always use different PVs.
# The default setting changed in version 2.02.85.
mirror_logs_require_separate_pvs = 0
# Configuration option allocation/raid_stripe_all_devices.
# Stripe across all PVs when RAID stripes are not specified.
# If enabled, all PVs in the VG or on the command line are used for
# raid0/4/5/6/10 when the command does not specify the number of
# stripes to use.
# This was the default behaviour until release 2.02.162.
# This configuration option has an automatic default value.
# raid_stripe_all_devices = 0
# Configuration option allocation/cache_pool_metadata_require_separate_pvs.
# Cache pool metadata and data will always use different PVs.
cache_pool_metadata_require_separate_pvs = 0
# Configuration option allocation/cache_metadata_format.
# Sets default metadata format for new cache.
#
# Accepted values:
# 0 Automatically detected best available format
# 1 Original format
# 2 Improved 2nd. generation format
#
# This configuration option has an automatic default value.
# cache_metadata_format = 0
# Configuration option allocation/cache_mode.
# The default cache mode used for new cache.
#
# Accepted values:
# writethrough
# Data blocks are immediately written from the cache to disk.
# writeback
# Data blocks are written from the cache back to disk after some
# delay to improve performance.
#
# This setting replaces allocation/cache_pool_cachemode.
# This configuration option has an automatic default value.
# cache_mode = "writethrough"
# Configuration option allocation/cache_policy.
# The default cache policy used for new cache volume.
# Since kernel 4.2 the default policy is smq (Stochastic multiqueue),
# otherwise the older mq (Multiqueue) policy is selected.
# This configuration option does not have a default value defined.
# Configuration section allocation/cache_settings.
# Settings for the cache policy.
# See documentation for individual cache policies for more info.
# This configuration section has an automatic default value.
# cache_settings {
# }
# Configuration option allocation/cache_pool_chunk_size.
# The minimal chunk size in KiB for cache pool volumes.
# Using a chunk_size that is too large can result in wasteful use of
# the cache, where small reads and writes can cause large sections of
# an LV to be mapped into the cache. However, choosing a chunk_size
# that is too small can result in more overhead trying to manage the
# numerous chunks that become mapped into the cache. The former is
# more of a problem than the latter in most cases, so the default is
# on the smaller end of the spectrum. Supported values range from
# 32KiB to 1GiB in multiples of 32.
# This configuration option does not have a default value defined.
# Configuration option allocation/cache_pool_max_chunks.
# The maximum number of chunks in a cache pool.
# For cache target v1.9 the recommended maximum is 1000000 chunks.
# Using cache pool with more chunks may degrade cache performance.
# This configuration option does not have a default value defined.
# Configuration option allocation/thin_pool_metadata_require_separate_pvs.
# Thin pool metadata and data will always use different PVs.
thin_pool_metadata_require_separate_pvs = 0
# Configuration option allocation/thin_pool_zero.
# Thin pool data chunks are zeroed before they are first used.
# Zeroing with a larger thin pool chunk size reduces performance.
# This configuration option has an automatic default value.
# thin_pool_zero = 1
# Configuration option allocation/thin_pool_discards.
# The discards behaviour of thin pool volumes.
#
# Accepted values:
# ignore
# nopassdown
# passdown
#
# This configuration option has an automatic default value.
# thin_pool_discards = "passdown"
# Configuration option allocation/thin_pool_chunk_size_policy.
# The chunk size calculation policy for thin pool volumes.
#
# Accepted values:
# generic
# If thin_pool_chunk_size is defined, use it. Otherwise, calculate
# the chunk size based on estimation and device hints exposed in
# sysfs - the minimum_io_size. The chunk size is always at least
# 64KiB.
# performance
# If thin_pool_chunk_size is defined, use it. Otherwise, calculate
# the chunk size for performance based on device hints exposed in
# sysfs - the optimal_io_size. The chunk size is always at least
# 512KiB.
#
# This configuration option has an automatic default value.
# thin_pool_chunk_size_policy = "generic"
# Configuration option allocation/thin_pool_chunk_size.
# The minimal chunk size in KiB for thin pool volumes.
# Larger chunk sizes may improve performance for plain thin volumes,
# however using them for snapshot volumes is less efficient, as it
# consumes more space and takes extra time for copying. When unset,
# lvm tries to estimate chunk size starting from 64KiB. Supported
# values are in the range 64KiB to 1GiB.
# This configuration option does not have a default value defined.
# Configuration option allocation/physical_extent_size.
# Default physical extent size in KiB to use for new VGs.
# This configuration option has an automatic default value.
# physical_extent_size = 4096
}
# Configuration section log.
# How LVM log information is reported.
log {
# Configuration option log/report_command_log.
# Enable or disable LVM log reporting.
# If enabled, LVM will collect a log of operations, messages,
# per-object return codes with object identification and associated
# error numbers (errnos) during LVM command processing. Then the
# log is either reported solely or in addition to any existing
# reports, depending on LVM command used. If it is a reporting command
# (e.g. pvs, vgs, lvs, lvm fullreport), then the log is reported in
# addition to any existing reports. Otherwise, there's only log report
# on output. For all applicable LVM commands, you can request that
# the output has only log report by using --logonly command line
# option. Use log/command_log_cols and log/command_log_sort settings
# to define fields to display and sort fields for the log report.
# You can also use log/command_log_selection to define selection
# criteria used each time the log is reported.
# This configuration option has an automatic default value.
# report_command_log = 0
# Configuration option log/command_log_sort.
# List of columns to sort by when reporting command log.
# See <lvm command> --logonly --configreport log -o help
# for the list of possible fields.
# This configuration option has an automatic default value.
# command_log_sort = "log_seq_num"
# Configuration option log/command_log_cols.
# List of columns to report when reporting command log.
# See <lvm command> --logonly --configreport log -o help
# for the list of possible fields.
# This configuration option has an automatic default value.
# command_log_cols = "log_seq_num,log_type,log_context,log_object_type,log_object_name,log_object_id,log_object_group,log_object_group_id,log_message,log_errno,log_ret_code"
# Configuration option log/command_log_selection.
# Selection criteria used when reporting command log.
# You can define selection criteria that are applied each
# time log is reported. This way, it is possible to control the
# amount of log that is displayed on output and you can select
# only parts of the log that are important for you. To define
# selection criteria, use fields from log report. See also
# <lvm command> --logonly --configreport log -S help for the
# list of possible fields and selection operators. You can also
# define selection criteria for log report on command line directly
# using <lvm command> --configreport log -S <selection criteria>
# which has precedence over log/command_log_selection setting.
# For more information about selection criteria in general, see
# lvm(8) man page.
# This configuration option has an automatic default value.
# command_log_selection = "!(log_type=status && message=success)"
# Configuration option log/verbose.
# Controls the messages sent to stdout or stderr.
verbose = 0
# Configuration option log/silent.
# Suppress all non-essential messages from stdout.
# This has the same effect as -qq. When enabled, the following commands
# still produce output: dumpconfig, lvdisplay, lvmdiskscan, lvs, pvck,
# pvdisplay, pvs, version, vgcfgrestore -l, vgdisplay, vgs.
# Non-essential messages are shifted from log level 4 to log level 5
# for syslog and lvm2_log_fn purposes.
# Any 'yes' or 'no' questions not overridden by other arguments are
# suppressed and default to 'no'.
silent = 0
# Configuration option log/syslog.
# Send log messages through syslog.
syslog = 1
# Configuration option log/file.
# Write error and debug log messages to a file specified here.
# This configuration option does not have a default value defined.
# Configuration option log/overwrite.
# Overwrite the log file each time the program is run.
overwrite = 0
# Configuration option log/level.
# The level of log messages that are sent to the log file or syslog.
# There are 6 syslog-like log levels currently in use: 2 to 7 inclusive.
# 7 is the most verbose (LOG_DEBUG).
level = 0
# Configuration option log/indent.
# Indent messages according to their severity.
indent = 1
# Configuration option log/command_names.
# Display the command name on each line of output.
command_names = 0
# Configuration option log/prefix.
# A prefix to use before the log message text.
# (After the command name, if selected).
# Two spaces allows you to see/grep the severity of each message.
# To make the messages look similar to the original LVM tools use:
# indent = 0, command_names = 1, prefix = " -- "
prefix = " "
# Configuration option log/activation.
# Log messages during activation.
# Don't use this in low memory situations (can deadlock).
activation = 0
# Configuration option log/debug_classes.
# Select log messages by class.
# Some debugging messages are assigned to a class and only appear in
# debug output if the class is listed here. Classes currently
# available: memory, devices, activation, allocation, lvmetad,
# metadata, cache, locking, lvmpolld. Use "all" to see everything.
debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}
# Configuration section backup.
# How LVM metadata is backed up and archived.
# In LVM, a 'backup' is a copy of the metadata for the current system,
# and an 'archive' contains old metadata configurations. They are
# stored in a human readable text format.
backup {
# Configuration option backup/backup.
# Maintain a backup of the current metadata configuration.
# Think very hard before turning this off!
backup = 1
# Configuration option backup/backup_dir.
# Location of the metadata backup files.
# Remember to back up this directory regularly!
backup_dir = "/etc/lvm/backup"
# Configuration option backup/archive.
# Maintain an archive of old metadata configurations.
# Think very hard before turning this off.
archive = 1
# Configuration option backup/archive_dir.
# Location of the metadata archive files.
# Remember to back up this directory regularly!
archive_dir = "/etc/lvm/archive"
# Configuration option backup/retain_min.
# Minimum number of archives to keep.
retain_min = 10
# Configuration option backup/retain_days.
# Minimum number of days to keep archive files.
retain_days = 30
}
# Configuration section shell.
# Settings for running LVM in shell (readline) mode.
shell {
# Configuration option shell/history_size.
# Number of lines of history to store in ~/.lvm_history.
history_size = 100
}
# Configuration section global.
# Miscellaneous global LVM settings.
global {
# Configuration option global/umask.
# The file creation mask for any files and directories created.
# Interpreted as octal if the first digit is zero.
umask = 077
# Configuration option global/test.
# No on-disk metadata changes will be made in test mode.
# Equivalent to having the -t option on every command.
test = 0
# Configuration option global/units.
# Default value for --units argument.
units = "r"
# Configuration option global/si_unit_consistency.
# Distinguish between powers of 1024 and 1000 bytes.
# The LVM commands distinguish between powers of 1024 bytes,
# e.g. KiB, MiB, GiB, and powers of 1000 bytes, e.g. KB, MB, GB.
# If scripts depend on the old behaviour, disable this setting
# temporarily until they are updated.
si_unit_consistency = 1
# Configuration option global/suffix.
# Display unit suffix for sizes.
# This setting has no effect if the units are in human-readable form
# (global/units = "h") in which case the suffix is always displayed.
suffix = 1
# Configuration option global/activation.
# Enable/disable communication with the kernel device-mapper.
# Disable to use the tools to manipulate LVM metadata without
# activating any logical volumes. If the device-mapper driver
# is not present in the kernel, disabling this should suppress
# the error messages.
activation = 1
# Configuration option global/fallback_to_lvm1.
# Try running LVM1 tools if LVM cannot communicate with DM.
# This option only applies to 2.4 kernels and is provided to help
# switch between device-mapper kernels and LVM1 kernels. The LVM1
# tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
# They will stop working once the lvm2 on-disk metadata format is used.
# This configuration option has an automatic default value.
# fallback_to_lvm1 = 0
# Configuration option global/format.
# The default metadata format that commands should use.
# The -M 1|2 option overrides this setting.
#
# Accepted values:
# lvm1
# lvm2
#
# This configuration option has an automatic default value.
# format = "lvm2"
# Configuration option global/format_libraries.
# Shared libraries that process different metadata formats.
# If support for LVM1 metadata was compiled as a shared library use
# format_libraries = "liblvm2format1.so"
# This configuration option does not have a default value defined.
# Configuration option global/segment_libraries.
# This configuration option does not have a default value defined.
# Configuration option global/proc.
# Location of proc filesystem.
# This configuration option is advanced.
proc = "/proc"
# Configuration option global/etc.
# Location of /etc system configuration directory.
etc = "/etc"
# Configuration option global/locking_type.
# Type of locking to use.
#
# Accepted values:
# 0
# Turns off locking. Warning: this risks metadata corruption if
# commands run concurrently.
# 1
# LVM uses local file-based locking, the standard mode.
# 2
# LVM uses the external shared library locking_library.
# 3
# LVM uses built-in clustered locking with clvmd.
# This is incompatible with lvmetad. If use_lvmetad is enabled,
# LVM prints a warning and disables lvmetad use.
# 4
# LVM uses read-only locking which forbids any operations that
# might change metadata.
# 5
# Offers dummy locking for tools that do not need any locks.
# You should not need to set this directly; the tools will select
# when to use it instead of the configured locking_type.
# Do not use lvmetad or the kernel device-mapper driver with this
# locking type. It is used by the --readonly option that offers
# read-only access to Volume Group metadata that cannot be locked
# safely because it belongs to an inaccessible domain and might be
# in use, for example a virtual machine image or a disk that is
# shared by a clustered machine.
#
locking_type = 1
# Configuration option global/wait_for_locks.
# When disabled, fail if a lock request would block.
wait_for_locks = 1
# Configuration option global/fallback_to_clustered_locking.
# Attempt to use built-in cluster locking if locking_type 2 fails.
# If using external locking (type 2) and initialisation fails, with
# this enabled, an attempt will be made to use the built-in clustered
# locking. Disable this if using a customised locking_library.
fallback_to_clustered_locking = 1
# Configuration option global/fallback_to_local_locking.
# Use locking_type 1 (local) if locking_type 2 or 3 fail.
# If an attempt to initialise type 2 or type 3 locking failed, perhaps
# because cluster components such as clvmd are not running, with this
# enabled, an attempt will be made to use local file-based locking
# (type 1). If this succeeds, only commands against local VGs will
# proceed. VGs marked as clustered will be ignored.
fallback_to_local_locking = 1
# Configuration option global/locking_dir.
# Directory to use for LVM command file locks.
# Local non-LV directory that holds file-based locks while commands are
# in progress. A directory like /tmp that may get wiped on reboot is OK.
locking_dir = "/run/lock/lvm"
# Configuration option global/prioritise_write_locks.
# Allow quicker VG write access during high volume read access.
# When there are competing read-only and read-write access requests for
# a volume group's metadata, instead of always granting the read-only
# requests immediately, delay them to allow the read-write requests to
# be serviced. Without this setting, write access may be stalled by a
# high volume of read-only requests. This option only affects
# locking_type 1 viz. local file-based locking.
prioritise_write_locks = 1
# Configuration option global/library_dir.
# Search this directory first for shared libraries.
# This configuration option does not have a default value defined.
# Configuration option global/locking_library.
# The external locking library to use for locking_type 2.
# This configuration option has an automatic default value.
# locking_library = "liblvm2clusterlock.so"
# Configuration option global/abort_on_internal_errors.
# Abort a command that encounters an internal error.
# Treat any internal errors as fatal errors, aborting the process that
# encountered the internal error. Please only enable for debugging.
abort_on_internal_errors = 0
# Configuration option global/detect_internal_vg_cache_corruption.
# Internal verification of VG structures.
# Check if CRC matches when a parsed VG is used multiple times. This
# is useful to catch unexpected changes to cached VG structures.
# Please only enable for debugging.
detect_internal_vg_cache_corruption = 0
# Configuration option global/metadata_read_only.
# No operations that change on-disk metadata are permitted.
# Additionally, read-only commands that encounter metadata in need of
# repair will still be allowed to proceed exactly as if the repair had
# been performed (except for the unchanged vg_seqno). Inappropriate
# use could mess up your system, so seek advice first!
metadata_read_only = 0
# Configuration option global/mirror_segtype_default.
# The segment type used by the short mirroring option -m.
# The --type mirror|raid1 option overrides this setting.
#
# Accepted values:
# mirror
# The original RAID1 implementation from LVM/DM. It is
# characterized by a flexible log solution (core, disk, mirrored),
# and by the necessity to block I/O while handling a failure.
# There is an inherent race in the dmeventd failure handling logic
# with snapshots of devices using this type of RAID1 that in the
# worst case could cause a deadlock. (Also see
# devices/ignore_lvm_mirrors.)
# raid1
# This is a newer RAID1 implementation using the MD RAID1
# personality through device-mapper. It is characterized by a
# lack of log options. (A log is always allocated for every
# device and they are placed on the same device as the image,
# so no separate devices are required.) This mirror
# implementation does not require I/O to be blocked while
# handling a failure. This mirror implementation is not
# cluster-aware and cannot be used in a shared (active/active)
# fashion in a cluster.
#
mirror_segtype_default = "raid1"
# Configuration option global/raid10_segtype_default.
# The segment type used by the -i -m combination.
# The --type raid10|mirror option overrides this setting.
# The --stripes/-i and --mirrors/-m options can both be specified
# during the creation of a logical volume to use both striping and
# mirroring for the LV. There are two different implementations.
#
# Accepted values:
# raid10
# LVM uses MD's RAID10 personality through DM. This is the
# preferred option.
# mirror
# LVM layers the 'mirror' and 'stripe' segment types. The layering
# is done by creating a mirror LV on top of striped sub-LVs,
# effectively creating a RAID 0+1 array. The layering is suboptimal
# in terms of providing redundancy and performance.
#
raid10_segtype_default = "raid10"
# Configuration option global/sparse_segtype_default.
# The segment type used by the -V -L combination.
# The --type snapshot|thin option overrides this setting.
# The combination of -V and -L options creates a sparse LV. There are
# two different implementations.
#
# Accepted values:
# snapshot
# The original snapshot implementation from LVM/DM. It uses an old
# snapshot that mixes data and metadata within a single COW
# storage volume and performs poorly when the size of stored data
# passes hundreds of MB.
# thin
# A newer implementation that uses thin provisioning. It has a
# bigger minimal chunk size (64KiB) and uses a separate volume for
# metadata. It has better performance, especially when more data
# is used. It also supports full snapshots.
#
sparse_segtype_default = "thin"
# Configuration option global/lvdisplay_shows_full_device_path.
# Enable this to reinstate the previous lvdisplay name format.
# The default format for displaying LV names in lvdisplay was changed
# in version 2.02.89 to show the LV name and path separately.
# Previously this was always shown as /dev/vgname/lvname even when that
# was never a valid path in the /dev filesystem.
# This configuration option has an automatic default value.
# lvdisplay_shows_full_device_path = 0
# Configuration option global/use_lvmetad.
# Use lvmetad to cache metadata and reduce disk scanning.
# When enabled (and running), lvmetad provides LVM commands with VG
# metadata and PV state. LVM commands then avoid reading this
# information from disks which can be slow. When disabled (or not
# running), LVM commands fall back to scanning disks to obtain VG
# metadata. lvmetad is kept updated via udev rules which must be set
# up for LVM to work correctly. (The udev rules should be installed
# by default.) Without a proper udev setup, changes in the system's
# block device configuration will be unknown to LVM, and ignored
# until a manual 'pvscan --cache' is run. If lvmetad was running
# while use_lvmetad was disabled, it must be stopped, use_lvmetad
# enabled, and then started. When using lvmetad, LV activation is
# switched to an automatic, event-based mode. In this mode, LVs are
# activated based on incoming udev events that inform lvmetad when
# PVs appear on the system. When a VG is complete (all PVs present),
# it is auto-activated. The auto_activation_volume_list setting
# controls which LVs are auto-activated (all by default.)
# When lvmetad is updated (automatically by udev events, or directly
# by pvscan --cache), devices/filter is ignored and all devices are
# scanned by default. lvmetad always keeps unfiltered information
# which is provided to LVM commands. Each LVM command then filters
# based on devices/filter. This does not apply to other, non-regexp,
# filtering settings: component filters such as multipath and MD
# are checked during pvscan --cache. To filter a device and prevent
# scanning from the LVM system entirely, including lvmetad, use
# devices/global_filter.
use_lvmetad = 1
# Configuration option global/lvmetad_update_wait_time.
# Number of seconds a command will wait for lvmetad update to finish.
# After waiting for this period, a command will not use lvmetad, and
# will revert to disk scanning.
# This configuration option has an automatic default value.
# lvmetad_update_wait_time = 10
# Configuration option global/use_lvmlockd.
# Use lvmlockd for locking among hosts using LVM on shared storage.
# Applicable only if LVM is compiled with lockd support in which
# case there is also lvmlockd(8) man page available for more
# information.
use_lvmlockd = 0
# Configuration option global/lvmlockd_lock_retries.
# Retry lvmlockd lock requests this many times.
# Applicable only if LVM is compiled with lockd support
# This configuration option has an automatic default value.
# lvmlockd_lock_retries = 3
# Configuration option global/sanlock_lv_extend.
# Size in MiB to extend the internal LV holding sanlock locks.
# The internal LV holds locks for each LV in the VG, and after enough
# LVs have been created, the internal LV needs to be extended. lvcreate
# will automatically extend the internal LV when needed by the amount
# specified here. Setting this to 0 disables the automatic extension
# and can cause lvcreate to fail. Applicable only if LVM is compiled
# with lockd support
# This configuration option has an automatic default value.
# sanlock_lv_extend = 256
# Configuration option global/thin_check_executable.
# The full path to the thin_check command.
# LVM uses this command to check that a thin metadata device is in a
# usable state. When a thin pool is activated and after it is
# deactivated, this command is run. Activation will only proceed if
# the command has an exit status of 0. Set to "" to skip this check.
# (Not recommended.) Also see thin_check_options.
# (See package device-mapper-persistent-data or thin-provisioning-tools)
# This configuration option has an automatic default value.
# thin_check_executable = "/usr/sbin/thin_check"
# Configuration option global/thin_dump_executable.
# The full path to the thin_dump command.
# LVM uses this command to dump thin pool metadata.
# (See package device-mapper-persistent-data or thin-provisioning-tools)
# This configuration option has an automatic default value.
# thin_dump_executable = "/usr/sbin/thin_dump"
# Configuration option global/thin_repair_executable.
# The full path to the thin_repair command.
# LVM uses this command to repair a thin metadata device if it is in
# an unusable state. Also see thin_repair_options.
# (See package device-mapper-persistent-data or thin-provisioning-tools)
# This configuration option has an automatic default value.
# thin_repair_executable = "/usr/sbin/thin_repair"
# Configuration option global/thin_check_options.
# List of options passed to the thin_check command.
# With thin_check version 2.1 or newer you can add the option
# --ignore-non-fatal-errors to let it pass through ignorable errors
# and fix them later. With thin_check version 3.2 or newer you should
# include the option --clear-needs-check-flag.
# This configuration option has an automatic default value.
# thin_check_options = [ "-q", "--clear-needs-check-flag" ]
# Configuration option global/thin_repair_options.
# List of options passed to the thin_repair command.
# This configuration option has an automatic default value.
# thin_repair_options = [ "" ]
# Configuration option global/thin_disabled_features.
# Features to not use in the thin driver.
# This can be helpful for testing, or to avoid using a feature that is
# causing problems. Features include: block_size, discards,
# discards_non_power_2, external_origin, metadata_resize,
# external_origin_extend, error_if_no_space.
#
# Example
# thin_disabled_features = [ "discards", "block_size" ]
#
# This configuration option does not have a default value defined.
# Configuration option global/cache_disabled_features.
# Features to not use in the cache driver.
# This can be helpful for testing, or to avoid using a feature that is
# causing problems. Features include: policy_mq, policy_smq, metadata2.
#
# Example
# cache_disabled_features = [ "policy_smq" ]
#
# This configuration option does not have a default value defined.
# Configuration option global/cache_check_executable.
# The full path to the cache_check command.
# LVM uses this command to check that a cache metadata device is in a
# usable state. When a cached LV is activated and after it is
# deactivated, this command is run. Activation will only proceed if the
# command has an exit status of 0. Set to "" to skip this check.
# (Not recommended.) Also see cache_check_options.
# (See package device-mapper-persistent-data or thin-provisioning-tools)
# This configuration option has an automatic default value.
# cache_check_executable = "/usr/sbin/cache_check"
# Configuration option global/cache_dump_executable.
# The full path to the cache_dump command.
# LVM uses this command to dump cache pool metadata.
# (See package device-mapper-persistent-data or thin-provisioning-tools)
# This configuration option has an automatic default value.
# cache_dump_executable = "/usr/sbin/cache_dump"
# Configuration option global/cache_repair_executable.
# The full path to the cache_repair command.
# LVM uses this command to repair a cache metadata device if it is in
# an unusable state. Also see cache_repair_options.
# (See package device-mapper-persistent-data or thin-provisioning-tools)
# This configuration option has an automatic default value.
# cache_repair_executable = "/usr/sbin/cache_repair"
# Configuration option global/cache_check_options.
# List of options passed to the cache_check command.
# With cache_check version 5.0 or newer you should include the option
# --clear-needs-check-flag.
# This configuration option has an automatic default value.
# cache_check_options = [ "-q", "--clear-needs-check-flag" ]
# Configuration option global/cache_repair_options.
# List of options passed to the cache_repair command.
# This configuration option has an automatic default value.
# cache_repair_options = [ "" ]
# Configuration option global/fsadm_executable.
# The full path to the fsadm command.
# LVM uses this command to help with lvresize -r operations.
# This configuration option has an automatic default value.
# fsadm_executable = "/sbin/fsadm"
# Configuration option global/system_id_source.
# The method LVM uses to set the local system ID.
# Volume Groups can also be given a system ID (by vgcreate, vgchange,
# or vgimport.) A VG on shared storage devices is accessible only to
# the host with a matching system ID. See 'man lvmsystemid' for
# information on limitations and correct usage.
#
# Accepted values:
# none
# The host has no system ID.
# lvmlocal
# Obtain the system ID from the system_id setting in the 'local'
# section of an lvm configuration file, e.g. lvmlocal.conf.
# uname
# Set the system ID from the hostname (uname) of the system.
# System IDs beginning with localhost are not permitted.
# machineid
# Use the contents of the machine-id file to set the system ID.
# Some systems create this file at installation time.
# See 'man machine-id' and global/etc.
# file
# Use the contents of another file (system_id_file) to set the
# system ID.
#
system_id_source = "none"
# Configuration option global/system_id_file.
# The full path to the file containing a system ID.
# This is used when system_id_source is set to 'file'.
# Comments starting with the character # are ignored.
# This configuration option does not have a default value defined.
# Configuration option global/use_lvmpolld.
# Use lvmpolld to supervise long running LVM commands.
# When enabled, control of long running LVM commands is transferred
# from the original LVM command to the lvmpolld daemon. This allows
# the operation to continue independent of the original LVM command.
# After lvmpolld takes over, the LVM command displays the progress
# of the ongoing operation. lvmpolld itself runs LVM commands to
# manage the progress of ongoing operations. lvmpolld can be used as
# a native systemd service, which allows it to be started on demand,
# and to use its own control group. When this option is disabled, LVM
# commands will supervise long running operations by forking themselves.
# Applicable only if LVM is compiled with lvmpolld support.
use_lvmpolld = 1
# Configuration option global/notify_dbus.
# Enable D-Bus notification from LVM commands.
# When enabled, an LVM command that changes PVs, changes VG metadata,
# or changes the activation state of an LV will send a notification.
notify_dbus = 1
}
# Configuration section activation.
activation {
# Configuration option activation/checks.
# Perform internal checks of libdevmapper operations.
# Useful for debugging problems with activation. Some of the checks may
# be expensive, so it's best to use this only when there seems to be a
# problem.
checks = 0
# Configuration option activation/udev_sync.
# Use udev notifications to synchronize udev and LVM.
# The --nodevsync option overrides this setting.
# When disabled, LVM commands will not wait for notifications from
# udev, but continue irrespective of any possible udev processing in
# the background. Only use this if udev is not running or has rules
# that ignore the devices LVM creates. If enabled when udev is not
# running, and LVM processes are waiting for udev, run the command
# 'dmsetup udevcomplete_all' to wake them up.
udev_sync = 1
# Configuration option activation/udev_rules.
# Use udev rules to manage LV device nodes and symlinks.
# When disabled, LVM will manage the device nodes and symlinks for
# active LVs itself. Manual intervention may be required if this
# setting is changed while LVs are active.
udev_rules = 1
# Configuration option activation/verify_udev_operations.
# Use extra checks in LVM to verify udev operations.
# This enables additional checks (and if necessary, repairs) on entries
# in the device directory after udev has completed processing its
# events. Useful for diagnosing problems with LVM/udev interactions.
verify_udev_operations = 0
# Configuration option activation/retry_deactivation.
# Retry failed LV deactivation.
# If LV deactivation fails, LVM will retry for a few seconds before
# failing. This may happen because a process run from a quick udev rule
# temporarily opened the device.
retry_deactivation = 1
# Configuration option activation/missing_stripe_filler.
# Method to fill missing stripes when activating an incomplete LV.
# Using 'error' will make inaccessible parts of the device return I/O
# errors on access. Using 'zero' will return success (and zero) on I/O.
# You can instead use a device path, in which case
# that device will be used in place of missing stripes. Using anything
# other than 'error' with mirrored or snapshotted volumes is likely to
# result in data corruption.
# This configuration option is advanced.
missing_stripe_filler = "error"
# Configuration option activation/use_linear_target.
# Use the linear target to optimize single stripe LVs.
# When disabled, the striped target is used. The linear target is an
# optimised version of the striped target that only handles a single
# stripe.
use_linear_target = 1
# Configuration option activation/reserved_stack.
# Stack size in KiB to reserve for use while devices are suspended.
# Insufficient reserve risks I/O deadlock during device suspension.
reserved_stack = 64
# Configuration option activation/reserved_memory.
# Memory size in KiB to reserve for use while devices are suspended.
# Insufficient reserve risks I/O deadlock during device suspension.
reserved_memory = 8192
# Configuration option activation/process_priority.
# Nice value used while devices are suspended.
# Use a high priority so that LVs are suspended
# for the shortest possible time.
process_priority = -18
# Configuration option activation/volume_list.
# Only LVs selected by this list are activated.
# If this list is defined, an LV is only activated if it matches an
# entry in this list. If this list is undefined, it imposes no limits
# on LV activation (all are allowed).
#
# Accepted values:
# vgname
# The VG name is matched exactly and selects all LVs in the VG.
# vgname/lvname
# The VG name and LV name are matched exactly and selects the LV.
# @tag
# Selects an LV if the specified tag matches a tag set on the LV
# or VG.
# @*
# Selects an LV if a tag defined on the host is also set on the LV
# or VG. See tags/hosttags. If any host tags exist but volume_list
# is not defined, a default single-entry list containing '@*'
# is assumed.
#
# Example
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
#
# This configuration option does not have a default value defined.
# Configuration option activation/auto_activation_volume_list.
# Only LVs selected by this list are auto-activated.
# This list works like volume_list, but it is used only by
# auto-activation commands. It does not apply to direct activation
# commands. If this list is defined, an LV is only auto-activated
# if it matches an entry in this list. If this list is undefined, it
# imposes no limits on LV auto-activation (all are allowed.) If this
# list is defined and empty, i.e. "[]", then no LVs are selected for
# auto-activation. An LV that is selected by this list for
# auto-activation, must also be selected by volume_list (if defined)
# before it is activated. Auto-activation is an activation command that
# includes the 'a' argument: --activate ay or -a ay. The 'a' (auto)
# argument for auto-activation is meant to be used by activation
# commands that are run automatically by the system, as opposed to LVM
# commands run directly by a user. A user may also use the 'a' flag
# directly to perform auto-activation. Also see pvscan(8) for more
# information about auto-activation.
#
# Accepted values:
# vgname
# The VG name is matched exactly and selects all LVs in the VG.
# vgname/lvname
# The VG name and LV name are matched exactly and selects the LV.
# @tag
# Selects an LV if the specified tag matches a tag set on the LV
# or VG.
# @*
# Selects an LV if a tag defined on the host is also set on the LV
# or VG. See tags/hosttags. If any host tags exist but volume_list
# is not defined, a default single-entry list containing '@*'
# is assumed.
#
# Example
# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
#
# This configuration option does not have a default value defined.
# Configuration option activation/read_only_volume_list.
# LVs in this list are activated in read-only mode.
# If this list is defined, each LV that is to be activated is checked
# against this list, and if it matches, it is activated in read-only
# mode. This overrides the permission setting stored in the metadata,
# e.g. from --permission rw.
#
# Accepted values:
# vgname
# The VG name is matched exactly and selects all LVs in the VG.
# vgname/lvname
# The VG name and LV name are matched exactly and selects the LV.
# @tag
# Selects an LV if the specified tag matches a tag set on the LV
# or VG.
# @*
# Selects an LV if a tag defined on the host is also set on the LV
# or VG. See tags/hosttags. If any host tags exist but volume_list
# is not defined, a default single-entry list containing '@*'
# is assumed.
#
# Example
# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
#
# This configuration option does not have a default value defined.
# Configuration option activation/raid_region_size.
# Size in KiB of each raid or mirror synchronization region.
# The clean/dirty state of data is tracked for each region.
# The value is rounded down to a power of two if necessary, and
# is ignored if it is not a multiple of the machine memory page size.
raid_region_size = 2048
# Configuration option activation/error_when_full.
# Return errors if a thin pool runs out of space.
# The --errorwhenfull option overrides this setting.
# When enabled, writes to thin LVs immediately return an error if the
# thin pool is out of data space. When disabled, writes to thin LVs
# are queued if the thin pool is out of space, and processed when the
# thin pool data space is extended. New thin pools are assigned the
# behavior defined here.
# This configuration option has an automatic default value.
# error_when_full = 0
# Configuration option activation/readahead.
# Setting to use when there is no readahead setting in metadata.
#
# Accepted values:
# none
# Disable readahead.
# auto
# Use default value chosen by kernel.
#
readahead = "auto"
# Configuration option activation/raid_fault_policy.
# Defines how a device failure in a RAID LV is handled.
# This includes LVs that have the following segment types:
# raid1, raid4, raid5*, and raid6*.
# If a device in the LV fails, the policy determines the steps
# performed by dmeventd automatically, and the steps performed by the
# manual command lvconvert --repair --use-policies.
# Automatic handling requires dmeventd to be monitoring the LV.
#
# Accepted values:
# warn
# Use the system log to warn the user that a device in the RAID LV
# has failed. It is left to the user to run lvconvert --repair
# manually to remove or replace the failed device. As long as the
# number of failed devices does not exceed the redundancy of the LV
# (1 device for raid4/5, 2 for raid6), the LV will remain usable.
# allocate
# Attempt to use any extra physical volumes in the VG as spares and
# replace faulty devices.
#
raid_fault_policy = "warn"
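# Editor's sketch (shell command shown as a comment; VG/LV names are
# placeholders): the manual repair step named above, applying the
# configured policy:
#   lvconvert --repair --use-policies vg0/lv_raid5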
# Configuration option activation/mirror_image_fault_policy.
# Defines how a device failure in a 'mirror' LV is handled.
# An LV with the 'mirror' segment type is composed of mirror images
# (copies) and a mirror log. A disk log ensures that a mirror LV does
# not need to be re-synced (all copies made the same) every time a
# machine reboots or crashes. If a device in the LV fails, this policy
# determines the steps performed by dmeventd automatically, and the steps
# performed by the manual command lvconvert --repair --use-policies.
# Automatic handling requires dmeventd to be monitoring the LV.
#
# Accepted values:
# remove
# Simply remove the faulty device and run without it. If the log
# device fails, the mirror would convert to using an in-memory log.
# This means the mirror will not remember its sync status across
# crashes/reboots and the entire mirror will be re-synced. If a
# mirror image fails, the mirror will convert to a non-mirrored
# device if there is only one remaining good copy.
# allocate
# Remove the faulty device and try to allocate space on a new
# device to be a replacement for the failed device. Using this
# policy for the log is fast and maintains the ability to remember
# sync state through crashes/reboots. Using this policy for a
# mirror device is slow, as it requires the mirror to resynchronize
# the devices, but it will preserve the mirror characteristic of
# the device. This policy acts like 'remove' if no suitable device
# and space can be allocated for the replacement.
# allocate_anywhere
# Not yet implemented. Useful to place the log device temporarily
# on the same physical volume as one of the mirror images. This
# policy is not recommended for mirror devices since it would break
# the redundant nature of the mirror. This policy acts like
# 'remove' if no suitable device and space can be allocated for the
# replacement.
#
mirror_image_fault_policy = "remove"
# Configuration option activation/mirror_log_fault_policy.
# Defines how a device failure in a 'mirror' log LV is handled.
# The mirror_image_fault_policy description for mirrored LVs also
# applies to mirrored log LVs.
mirror_log_fault_policy = "allocate"
# Configuration option activation/snapshot_autoextend_threshold.
# Auto-extend a snapshot when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see snapshot_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_threshold = 70
#
snapshot_autoextend_threshold = 100
# Configuration option activation/snapshot_autoextend_percent.
# Auto-extending a snapshot adds this percent extra space.
# The amount of additional space added to a snapshot is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# snapshot exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# snapshot_autoextend_percent = 20
#
snapshot_autoextend_percent = 20
# Configuration option activation/thin_pool_autoextend_threshold.
# Auto-extend a thin pool when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see thin_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# thin_pool_autoextend_threshold = 70
#
thin_pool_autoextend_threshold = 100
# Configuration option activation/thin_pool_autoextend_percent.
# Auto-extending a thin pool adds this percent extra space.
# The amount of additional space added to a thin pool is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 1G
# thin pool exceeds 700M, it is extended to 1.2G, and when it exceeds
# 840M, it is extended to 1.44G:
# thin_pool_autoextend_percent = 20
#
thin_pool_autoextend_percent = 20
# Configuration option activation/mlock_filter.
# Do not mlock these memory areas.
# While activating devices, I/O to devices being (re)configured is
# suspended. As a precaution against deadlocks, LVM pins memory it is
# using so it is not paged out, and will not require I/O to reread.
# Groups of pages that are known not to be accessed during activation
# do not need to be pinned into memory. Each string listed in this
# setting is compared against each line in /proc/self/maps, and the
# pages corresponding to lines that match are not pinned. On some
# systems, locale-archive was found to make up over 80% of the memory
# used by the process.
#
# Example
# mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
# Configuration option activation/use_mlockall.
# Use the old behavior of mlockall to pin all memory.
# Prior to version 2.02.62, LVM used mlockall() to pin the whole
# process's memory while activating devices.
use_mlockall = 0
# Configuration option activation/monitoring.
# Monitor LVs that are activated.
# The --ignoremonitoring option overrides this setting.
# When enabled, LVM will ask dmeventd to monitor activated LVs.
monitoring = 1
# Configuration option activation/polling_interval.
# Check pvmove or lvconvert progress at this interval (seconds).
# When pvmove or lvconvert must wait for the kernel to finish
# synchronising or merging data, they check and report progress at
# intervals of this number of seconds. If this is set to 0 and there
# is only one thing to wait for, there are no progress reports, but
# the process is awoken immediately once the operation is complete.
polling_interval = 15
# Configuration option activation/auto_set_activation_skip.
# Set the activation skip flag on new thin snapshot LVs.
# The --setactivationskip option overrides this setting.
# An LV can have a persistent 'activation skip' flag. The flag causes
# the LV to be skipped during normal activation. The lvchange/vgchange
# -K option is required to activate LVs that have the activation skip
# flag set. When this setting is enabled, the activation skip flag is
# set on new thin snapshot LVs.
# This configuration option has an automatic default value.
# auto_set_activation_skip = 1
# Configuration option activation/activation_mode.
# How LVs with missing devices are activated.
# The --activationmode option overrides this setting.
#
# Accepted values:
# complete
# Only allow activation of an LV if all of the Physical Volumes it
# uses are present. Other PVs in the Volume Group may be missing.
# degraded
# Like complete, but additionally RAID LVs of segment type raid1,
# raid4, raid5, raid6 and raid10 will be activated if there is no
# data loss, i.e. they have sufficient redundancy to present the
# entire addressable range of the Logical Volume.
# partial
# Allows the activation of any LV even if a missing or failed PV
# could cause data loss with a portion of the LV inaccessible.
# This setting should not normally be used, but may sometimes
# assist with data recovery.
#
activation_mode = "degraded"
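# Editor's sketch (shell command shown as a comment; VG name is a
# placeholder): the per-command override of this setting:
#   vgchange -ay --activationmode degraded vg0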
# Configuration option activation/lock_start_list.
# Locking is started only for VGs selected by this list.
# The rules are the same as those for volume_list.
# This configuration option does not have a default value defined.
# Configuration option activation/auto_lock_start_list.
# Locking is auto-started only for VGs selected by this list.
# The rules are the same as those for auto_activation_volume_list.
# This configuration option does not have a default value defined.
}
# Configuration section metadata.
# This configuration section has an automatic default value.
# metadata {
# Configuration option metadata/check_pv_device_sizes.
# Check device sizes are not smaller than corresponding PV sizes.
# If device size is less than corresponding PV size found in metadata,
# there is always a risk of data loss. If this option is set, then LVM
# issues a warning message each time it finds that the device size is
# less than corresponding PV size. You should not disable this unless
# you are absolutely sure about what you are doing!
# This configuration option is advanced.
# This configuration option has an automatic default value.
# check_pv_device_sizes = 1
# Configuration option metadata/record_lvs_history.
# When enabled, LVM keeps history records about removed LVs in
# metadata. The information that is recorded in metadata for
# historical LVs is reduced when compared to original
# information kept in metadata for live LVs. Currently, this
# feature is supported for thin and thin snapshot LVs only.
# This configuration option has an automatic default value.
# record_lvs_history = 0
# Configuration option metadata/lvs_history_retention_time.
# Retention time in seconds after which a record about individual
# historical logical volume is automatically destroyed.
# A value of 0 disables this feature.
# This configuration option has an automatic default value.
# lvs_history_retention_time = 0
# Configuration option metadata/pvmetadatacopies.
# Number of copies of metadata to store on each PV.
# The --pvmetadatacopies option overrides this setting.
#
# Accepted values:
# 2
# Two copies of the VG metadata are stored on the PV, one at the
# front of the PV, and one at the end.
# 1
# One copy of VG metadata is stored at the front of the PV.
# 0
# No copies of VG metadata are stored on the PV. This may be
# useful for VGs containing large numbers of PVs.
#
# This configuration option is advanced.
# This configuration option has an automatic default value.
# pvmetadatacopies = 1
# Configuration option metadata/vgmetadatacopies.
# Number of copies of metadata to maintain for each VG.
# The --vgmetadatacopies option overrides this setting.
# If set to a non-zero value, LVM automatically chooses which of the
# available metadata areas to use to achieve the requested number of
# copies of the VG metadata. If you set a value larger than the
# total number of metadata areas available, then metadata is stored in
# them all. The value 0 (unmanaged) disables this automatic management
# and allows you to control which metadata areas are used at the
# individual PV level using pvchange --metadataignore y|n.
# This configuration option has an automatic default value.
# vgmetadatacopies = 0
# Configuration option metadata/pvmetadatasize.
# Approximate number of sectors to use for each metadata copy.
# VGs with large numbers of PVs or LVs, or VGs containing complex LV
# structures, may need additional space for VG metadata. The metadata
# areas are treated as circular buffers, so unused space becomes filled
# with an archive of the most recent previous versions of the metadata.
# This configuration option has an automatic default value.
# pvmetadatasize = 255
# Configuration option metadata/pvmetadataignore.
# Ignore metadata areas on a new PV.
# The --metadataignore option overrides this setting.
# If metadata areas on a PV are ignored, LVM will not store metadata
# in them.
# This configuration option is advanced.
# This configuration option has an automatic default value.
# pvmetadataignore = 0
# Configuration option metadata/stripesize.
# This configuration option is advanced.
# This configuration option has an automatic default value.
# stripesize = 64
# Configuration option metadata/dirs.
# Directories holding live copies of text format metadata.
# These directories must not be on logical volumes!
# It's possible to use LVM with a couple of directories here,
# preferably on different (non-LV) filesystems, and with no other
# on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
# to on-disk metadata areas. The feature was originally added to
# simplify testing and is not supported under low memory situations -
# the machine could lock up. Never edit any files in these directories
# by hand unless you are absolutely sure you know what you are doing!
# Use the supplied toolset to make changes (e.g. vgcfgrestore).
#
# Example
# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
# }
# Configuration section report.
# LVM report command output formatting.
# This configuration section has an automatic default value.
# report {
# Configuration option report/output_format.
# Format of LVM command's report output.
# If there is more than one report per command, then the format
# is applied for all reports. You can also change output format
# directly on command line using --reportformat option which
# has precedence over log/output_format setting.
# Accepted values:
# basic
# Original format with columns and rows. If there is more than
# one report per command, each report is prefixed with report's
# name for identification.
# json
# JSON format.
# This configuration option has an automatic default value.
# output_format = "basic"
# Configuration option report/compact_output.
# Do not print empty values for all report fields.
# If enabled, all fields that don't have a value set for any of the
# rows reported are skipped and not printed. Compact output is
# applicable only if report/buffered is enabled. If you need to
# compact only specified fields, use compact_output=0 and define
# report/compact_output_cols configuration setting instead.
# This configuration option has an automatic default value.
# compact_output = 0
# Configuration option report/compact_output_cols.
# Do not print empty values for specified report fields.
# If defined, specified fields that don't have a value set for any
# of the rows reported are skipped and not printed. Compact output
# is applicable only if report/buffered is enabled. If you need to
# compact all fields, use compact_output=1 instead in which case
# the compact_output_cols setting is then ignored.
# This configuration option has an automatic default value.
# compact_output_cols = ""
# Configuration option report/aligned.
# Align columns in report output.
# This configuration option has an automatic default value.
# aligned = 1
# Configuration option report/buffered.
# Buffer report output.
# When buffered reporting is used, the report's content is appended
# incrementally to include each object being reported until the report
# is flushed to output which normally happens at the end of command
# execution. Otherwise, if buffering is not used, each object is
# reported as soon as its processing is finished.
# This configuration option has an automatic default value.
# buffered = 1
# Configuration option report/headings.
# Show headings for columns on report.
# This configuration option has an automatic default value.
# headings = 1
# Configuration option report/separator.
# A separator to use on report after each field.
# This configuration option has an automatic default value.
# separator = " "
# Configuration option report/list_item_separator.
# A separator to use for list items when reported.
# This configuration option has an automatic default value.
# list_item_separator = ","
# Configuration option report/prefixes.
# Use a field name prefix for each field reported.
# This configuration option has an automatic default value.
# prefixes = 0
# Configuration option report/quoted.
# Quote field values when using field name prefixes.
# This configuration option has an automatic default value.
# quoted = 1
# Configuration option report/columns_as_rows.
# Output each column as a row.
# If set, this also implies report/prefixes=1.
# This configuration option has an automatic default value.
# columns_as_rows = 0
# Configuration option report/binary_values_as_numeric.
# Use binary values 0 or 1 instead of descriptive literal values.
# For columns that have exactly two valid values to report
# (not counting the 'unknown' value which denotes that the
# value could not be determined).
# This configuration option has an automatic default value.
# binary_values_as_numeric = 0
# Configuration option report/time_format.
# Set time format for fields reporting time values.
# Format specification is a string which may contain special character
# sequences and ordinary character sequences. Ordinary character
# sequences are copied verbatim. Each special character sequence is
# introduced by the '%' character and such sequence is then
# substituted with a value as described below.
#
# Accepted values:
# %a
# The abbreviated name of the day of the week according to the
# current locale.
# %A
# The full name of the day of the week according to the current
# locale.
# %b
# The abbreviated month name according to the current locale.
# %B
# The full month name according to the current locale.
# %c
# The preferred date and time representation for the current
# locale (alt E)
# %C
# The century number (year/100) as a 2-digit integer. (alt E)
# %d
# The day of the month as a decimal number (range 01 to 31).
# (alt O)
# %D
# Equivalent to %m/%d/%y. (For Americans only. Americans should
# note that in other countries %d/%m/%y is rather common. This
# means that in an international context this format is ambiguous
# and should not be used.)
# %e
# Like %d, the day of the month as a decimal number, but a leading
# zero is replaced by a space. (alt O)
# %E
# Modifier: use alternative locale-dependent representation if
# available.
# %F
# Equivalent to %Y-%m-%d (the ISO 8601 date format).
# %G
# The ISO 8601 week-based year with century as a decimal number.
# The 4-digit year corresponding to the ISO week number (see %V).
# This has the same format and value as %Y, except that if the
# ISO week number belongs to the previous or next year, that year
# is used instead.
# %g
# Like %G, but without century, that is, with a 2-digit year
# (00-99).
# %h
# Equivalent to %b.
# %H
# The hour as a decimal number using a 24-hour clock
# (range 00 to 23). (alt O)
# %I
# The hour as a decimal number using a 12-hour clock
# (range 01 to 12). (alt O)
# %j
# The day of the year as a decimal number (range 001 to 366).
# %k
# The hour (24-hour clock) as a decimal number (range 0 to 23);
# single digits are preceded by a blank. (See also %H.)
# %l
# The hour (12-hour clock) as a decimal number (range 1 to 12);
# single digits are preceded by a blank. (See also %I.)
# %m
# The month as a decimal number (range 01 to 12). (alt O)
# %M
# The minute as a decimal number (range 00 to 59). (alt O)
# %O
# Modifier: use alternative numeric symbols.
# %p
# Either "AM" or "PM" according to the given time value,
# or the corresponding strings for the current locale. Noon is
# treated as "PM" and midnight as "AM".
# %P
# Like %p but in lowercase: "am" or "pm" or a corresponding
# string for the current locale.
# %r
# The time in a.m. or p.m. notation. In the POSIX locale this is
# equivalent to %I:%M:%S %p.
# %R
# The time in 24-hour notation (%H:%M). For a version including
# the seconds, see %T below.
# %s
# The number of seconds since the Epoch,
# 1970-01-01 00:00:00 +0000 (UTC)
# %S
# The second as a decimal number (range 00 to 60). (The range is
# up to 60 to allow for occasional leap seconds.) (alt O)
# %t
# A tab character.
# %T
# The time in 24-hour notation (%H:%M:%S).
# %u
# The day of the week as a decimal, range 1 to 7, Monday being 1.
# See also %w. (alt O)
# %U
# The week number of the current year as a decimal number,
# range 00 to 53, starting with the first Sunday as the first
# day of week 01. See also %V and %W. (alt O)
# %V
# The ISO 8601 week number of the current year as a decimal number,
# range 01 to 53, where week 1 is the first week that has at least
# 4 days in the new year. See also %U and %W. (alt O)
# %w
# The day of the week as a decimal, range 0 to 6, Sunday being 0.
# See also %u. (alt O)
# %W
# The week number of the current year as a decimal number,
# range 00 to 53, starting with the first Monday as the first day
# of week 01. (alt O)
# %x
# The preferred date representation for the current locale without
# the time. (alt E)
# %X
# The preferred time representation for the current locale without
# the date. (alt E)
# %y
# The year as a decimal number without a century (range 00 to 99).
# (alt E, alt O)
# %Y
# The year as a decimal number including the century. (alt E)
# %z
# The +hhmm or -hhmm numeric timezone (that is, the hour and minute
# offset from UTC).
# %Z
# The timezone name or abbreviation.
# %%
# A literal '%' character.
#
# This configuration option has an automatic default value.
# time_format = "%Y-%m-%d %T %z"
# Configuration option report/devtypes_sort.
# List of columns to sort by when reporting 'lvm devtypes' command.
# See 'lvm devtypes -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# devtypes_sort = "devtype_name"
# Configuration option report/devtypes_cols.
# List of columns to report for 'lvm devtypes' command.
# See 'lvm devtypes -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# devtypes_cols = "devtype_name,devtype_max_partitions,devtype_description"
# Configuration option report/devtypes_cols_verbose.
# List of columns to report for 'lvm devtypes' command in verbose mode.
# See 'lvm devtypes -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# devtypes_cols_verbose = "devtype_name,devtype_max_partitions,devtype_description"
# Configuration option report/lvs_sort.
# List of columns to sort by when reporting 'lvs' command.
# See 'lvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# lvs_sort = "vg_name,lv_name"
# Configuration option report/lvs_cols.
# List of columns to report for 'lvs' command.
# See 'lvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# lvs_cols = "lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent,metadata_percent,move_pv,mirror_log,copy_percent,convert_lv"
# Configuration option report/lvs_cols_verbose.
# List of columns to report for 'lvs' command in verbose mode.
# See 'lvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# lvs_cols_verbose = "lv_name,vg_name,seg_count,lv_attr,lv_size,lv_major,lv_minor,lv_kernel_major,lv_kernel_minor,pool_lv,origin,data_percent,metadata_percent,move_pv,copy_percent,mirror_log,convert_lv,lv_uuid,lv_profile"
# Configuration option report/vgs_sort.
# List of columns to sort by when reporting 'vgs' command.
# See 'vgs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# vgs_sort = "vg_name"
# Configuration option report/vgs_cols.
# List of columns to report for 'vgs' command.
# See 'vgs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# vgs_cols = "vg_name,pv_count,lv_count,snap_count,vg_attr,vg_size,vg_free"
# Configuration option report/vgs_cols_verbose.
# List of columns to report for 'vgs' command in verbose mode.
# See 'vgs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# vgs_cols_verbose = "vg_name,vg_attr,vg_extent_size,pv_count,lv_count,snap_count,vg_size,vg_free,vg_uuid,vg_profile"
# Configuration option report/pvs_sort.
# List of columns to sort by when reporting 'pvs' command.
# See 'pvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvs_sort = "pv_name"
# Configuration option report/pvs_cols.
# List of columns to report for 'pvs' command.
# See 'pvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free"
# Configuration option report/pvs_cols_verbose.
# List of columns to report for 'pvs' command in verbose mode.
# See 'pvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,dev_size,pv_uuid"
# Configuration option report/segs_sort.
# List of columns to sort by when reporting 'lvs --segments' command.
# See 'lvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# segs_sort = "vg_name,lv_name,seg_start"
# Configuration option report/segs_cols.
# List of columns to report for 'lvs --segments' command.
# See 'lvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# segs_cols = "lv_name,vg_name,lv_attr,stripes,segtype,seg_size"
# Configuration option report/segs_cols_verbose.
# List of columns to report for 'lvs --segments' command in verbose mode.
# See 'lvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# segs_cols_verbose = "lv_name,vg_name,lv_attr,seg_start,seg_size,stripes,segtype,stripesize,chunksize"
# Configuration option report/pvsegs_sort.
# List of columns to sort by when reporting 'pvs --segments' command.
# See 'pvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvsegs_sort = "pv_name,pvseg_start"
# Configuration option report/pvsegs_cols.
# List of columns to report for 'pvs --segments' command.
# See 'pvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvsegs_cols = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size"
# Configuration option report/pvsegs_cols_verbose.
# List of columns to report for 'pvs --segments' command in verbose mode.
# See 'pvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvsegs_cols_verbose = "pv_name,vg_name,pv_fmt,pv_attr,pv_size,pv_free,pvseg_start,pvseg_size,lv_name,seg_start_pe,segtype,seg_pe_ranges"
# Configuration option report/vgs_cols_full.
# List of columns to report for lvm fullreport's 'vgs' subreport.
# See 'vgs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# vgs_cols_full = "vg_all"
# Configuration option report/pvs_cols_full.
# List of columns to report for lvm fullreport's 'pvs' subreport.
# See 'pvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvs_cols_full = "pv_all"
# Configuration option report/lvs_cols_full.
# List of columns to report for lvm fullreport's 'lvs' subreport.
# See 'lvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# lvs_cols_full = "lv_all"
# Configuration option report/pvsegs_cols_full.
# List of columns to report for lvm fullreport's 'pvseg' subreport.
# See 'pvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvsegs_cols_full = "pvseg_all,pv_uuid,lv_uuid"
# Configuration option report/segs_cols_full.
# List of columns to report for lvm fullreport's 'seg' subreport.
# See 'lvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# segs_cols_full = "seg_all,lv_uuid"
# Configuration option report/vgs_sort_full.
# List of columns to sort by when reporting lvm fullreport's 'vgs' subreport.
# See 'vgs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# vgs_sort_full = "vg_name"
# Configuration option report/pvs_sort_full.
# List of columns to sort by when reporting lvm fullreport's 'pvs' subreport.
# See 'pvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvs_sort_full = "pv_name"
# Configuration option report/lvs_sort_full.
# List of columns to sort by when reporting lvm fullreport's 'lvs' subreport.
# See 'lvs -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# lvs_sort_full = "vg_name,lv_name"
# Configuration option report/pvsegs_sort_full.
# List of columns to sort by when reporting for lvm fullreport's 'pvseg' subreport.
# See 'pvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# pvsegs_sort_full = "pv_uuid,pvseg_start"
# Configuration option report/segs_sort_full.
# List of columns to sort by when reporting lvm fullreport's 'seg' subreport.
# See 'lvs --segments -o help' for the list of possible fields.
# This configuration option has an automatic default value.
# segs_sort_full = "lv_uuid,seg_start"
# Configuration option report/mark_hidden_devices.
# Use brackets [] to mark hidden devices.
# This configuration option has an automatic default value.
# mark_hidden_devices = 1
# Configuration option report/two_word_unknown_device.
# Use the two words 'unknown device' in place of '[unknown]'.
# This is displayed when the device for a PV is not known.
# This configuration option has an automatic default value.
# two_word_unknown_device = 0
# }
# Configuration section dmeventd.
# Settings for the LVM event daemon.
dmeventd {
# Configuration option dmeventd/mirror_library.
# The library dmeventd uses when monitoring a mirror device.
# libdevmapper-event-lvm2mirror.so attempts to recover from
# failures. It removes failed devices from a volume group and
# reconfigures a mirror as necessary. If no mirror library is
# provided, mirrors are not monitored through dmeventd.
mirror_library = "libdevmapper-event-lvm2mirror.so"
# Configuration option dmeventd/raid_library.
# This configuration option has an automatic default value.
# raid_library = "libdevmapper-event-lvm2raid.so"
# Configuration option dmeventd/snapshot_library.
# The library dmeventd uses when monitoring a snapshot device.
# libdevmapper-event-lvm2snapshot.so monitors the filling of snapshots
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the snapshot is filled.
snapshot_library = "libdevmapper-event-lvm2snapshot.so"
# Configuration option dmeventd/thin_library.
# The library dmeventd uses when monitoring a thin device.
# libdevmapper-event-lvm2thin.so monitors the filling of a pool
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the pool is filled.
thin_library = "libdevmapper-event-lvm2thin.so"
# Configuration option dmeventd/thin_command.
# The plugin runs the command at each 5% usage increment once the
# thin-pool data volume or metadata volume goes above 50%.
# A command starting with the 'lvm ' prefix is an internal lvm command.
# You can write your own handler to customise the behaviour in more
# detail. A user handler is specified with a full path starting with '/'.
# This configuration option has an automatic default value.
# thin_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/executable.
# The full path to the dmeventd binary.
# This configuration option has an automatic default value.
# executable = "/sbin/dmeventd"
}
# Configuration section tags.
# Host tag settings.
# This configuration section has an automatic default value.
# tags {
# Configuration option tags/hosttags.
# Create a host tag using the machine name.
# The machine name is nodename returned by uname(2).
# This configuration option has an automatic default value.
# hosttags = 0
# Configuration section tags/<tag>.
# Replace this subsection name with a custom tag name.
# Multiple subsections like this can be created. The '@' prefix for
# tags is optional. This subsection can contain host_list, which is a
# list of machine names. If the name of the local machine is found in
# host_list, then the name of this subsection is used as a tag and is
# applied to the local machine as a 'host tag'. If this subsection is
# empty (has no host_list), then the subsection name is always applied
# as a 'host tag'.
#
# Example
# The host tag foo is given to all hosts, and the host tag
# bar is given to the hosts named machine1 and machine2.
# tags { foo { } bar { host_list = [ "machine1", "machine2" ] } }
#
# This configuration section has variable name.
# This configuration section has an automatic default value.
# tag {
# Configuration option tags/<tag>/host_list.
# A list of machine names.
# These machine names are compared to the nodename returned
# by uname(2). If the local machine name matches an entry in
# this list, the name of the subsection is applied to the
# machine as a 'host tag'.
# This configuration option does not have a default value defined.
# }
# }
dingsbums wrote: I now get to activate the volume group Raid after every reboot...
Or the initramfs is missing an update. Similar case: https://unix.stackexchange.com/questions/213027/lvm-volume-is-inactive-after-reboot-of-centos
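On Ubuntu that refresh would be (a sketch, assuming the stock initramfs-tools setup):
sudo update-initramfs -u -k all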
Well, in my /etc/default/grub I don't find anything remarkable:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="debian-installer/language=de keyboard-configuration/layoutcode?=de"
GRUB_CMDLINE_LINUX=""
# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true
# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"
# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
dingsbums wrote: Addendum: if need be, write yourself a systemd unit or a crontab entry as a "workaround" that runs the activation command ... 😉
Yes, that's a good idea; I'll do that if I haven't found a solution by the next reboot...
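A minimal sketch of such a unit (the file name and the ordering details are illustrative assumptions; it only takes the VG name Raid from above):

# /etc/systemd/system/lvm-activate-raid.service (hypothetical path)
[Unit]
Description=Workaround: activate LVM volume group Raid at boot
DefaultDependencies=no
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/vgchange -ay Raid

[Install]
WantedBy=local-fs.target

Enable it once with sudo systemctl enable lvm-activate-raid.service; since the unit is ordered before local-fs.target, the /etc/fstab mounts should then find the LVs.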
|
misterunknown
Former member
Joined: 28 October 2009
Posts: 4403
Location: Sachsen
|
What do the logs say after booting? I would filter the raw devices backing the RAID in lvm.conf, so that the LVM daemon doesn't accidentally detect a single disk as a PV.
|
jokerGermany
(topic starter)
Joined: 11 May 2008
Posts: 1004
|
Which logs do you need?
I plan to reboot today. misterunknown wrote: I would filter the raw devices backing the RAID in lvm.conf, so that the LVM daemon doesn't accidentally detect a single disk as a PV.
How exactly do I do that?
|
misterunknown
Former member
Joined: 28 October 2009
Posts: 4403
Location: Sachsen
|
jokerGermany wrote: Which logs do you need?
The one with something in it 😛 Probably the regular syslog.
How exactly do I do that?
See also man lvm.conf. Fundamentally, the scan and filter directives are relevant. You have to decide which is more practical for you: either exclude the physical devices backing the RAID, or include only specific RAID devices. The former could look like this:
...
devices {
...
scan = [ "/dev/" ]
filter = [ "r|^/dev/sd.*$|", "a|^/dev/sdc4$|" ]
}
The latter like this:
...
devices {
...
scan = [ "/dev/" ]
filter = [ "r|.*|", "a|^/dev/md.*$|", "a|^/dev/sdc4$|", "a|^/dev/(non-)?Raid/Wichtig$|" ]
}
But please read the comments in /etc/lvm/lvm.conf carefully once more! You really should know exactly what you are doing.
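After changing the filter, it is worth verifying what LVM now accepts and refreshing the copy of lvm.conf embedded in the initramfs (a sketch, assuming initramfs-tools):
sudo pvs                  # should list only the intended PVs now
sudo vgscan
sudo update-initramfs -u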
|
jokerGermany
(topic starter)
Joined: 11 May 2008
Posts: 1004
|
misterunknown wrote: jokerGermany wrote: Which logs do you need?
The one with something in it 😛 Probably the regular syslog.
OK, this is presumably the problem:
Jan 23 17:43:34 nas kernel: [ 6.870862] md4: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 6.953102] md2: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 7.065292] md3: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 7.065296] md1: detected capacity change from 0 to 2999590060032
Those are all the devices that make up the group Raid oO Very reassuring; just don't go picking non-Raid... I just don't know how it comes up with that...
Here is the complete log. Don't be surprised; unfortunately I have a lot of open construction sites: my toy domain is broken from tinkering (postponed until 20.04), certbot won't cooperate, and sks needn't even bother trying to talk to other servers...
Jan 23 17:43:34 nas kernel: [ 0.000000] microcode: microcode updated early to revision 0x7, date = 2018-04-23
Jan 23 17:43:34 nas kernel: [ 0.000000] Linux version 4.15.0-74-generic (buildd@lcy01-amd64-022) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 (Ubuntu 4.15.0-74.84-generic 4.15.18)
Jan 23 17:43:34 nas kernel: [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-74-generic root=UUID=373c7c2f-8045-4e98-9c21-c2ed2b639ae7 ro debian-installer/language=de keyboard-configuration/layoutcode?=de
Jan 23 17:43:34 nas kernel: [ 0.000000] KERNEL supported cpus:
Jan 23 17:43:34 nas kernel: [ 0.000000] Intel GenuineIntel
Jan 23 17:43:34 nas kernel: [ 0.000000] AMD AuthenticAMD
Jan 23 17:43:34 nas kernel: [ 0.000000] Centaur CentaurHauls
Jan 23 17:43:34 nas kernel: [ 0.000000] x86/fpu: x87 FPU will use FXSAVE
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: BIOS-provided physical RAM map:
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009dbff] usable
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000db7dffff] usable
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x00000000db7e0000-0x00000000db7e2fff] ACPI NVS
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x00000000db7e3000-0x00000000db7effff] ACPI data
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x00000000db7f0000-0x00000000db7fffff] reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x00000000f4000000-0x00000000f7ffffff] reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000ffffffff] reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x0000000117ffffff] usable
Jan 23 17:43:34 nas kernel: [ 0.000000] NX (Execute Disable) protection: active
Jan 23 17:43:34 nas kernel: [ 0.000000] SMBIOS 2.4 present.
Jan 23 17:43:34 nas kernel: [ 0.000000] DMI: wortmann H55M-D2H/H55M-D2H, BIOS F4 02/06/2012
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: last_pfn = 0x118000 max_arch_pfn = 0x400000000
Jan 23 17:43:34 nas kernel: [ 0.000000] MTRR default type: uncachable
Jan 23 17:43:34 nas kernel: [ 0.000000] MTRR fixed ranges enabled:
Jan 23 17:43:34 nas kernel: [ 0.000000] 00000-9FFFF write-back
Jan 23 17:43:34 nas kernel: [ 0.000000] A0000-BFFFF uncachable
Jan 23 17:43:34 nas kernel: [ 0.000000] C0000-CDFFF write-protect
Jan 23 17:43:34 nas kernel: [ 0.000000] CE000-EFFFF uncachable
Jan 23 17:43:34 nas kernel: [ 0.000000] F0000-FFFFF write-through
Jan 23 17:43:34 nas kernel: [ 0.000000] MTRR variable ranges enabled:
Jan 23 17:43:34 nas kernel: [ 0.000000] 0 base 000000000 mask F00000000 write-back
Jan 23 17:43:34 nas kernel: [ 0.000000] 1 base 0E0000000 mask FE0000000 uncachable
Jan 23 17:43:34 nas kernel: [ 0.000000] 2 base 0DC000000 mask FFC000000 uncachable
Jan 23 17:43:34 nas kernel: [ 0.000000] 3 base 100000000 mask FE0000000 write-back
Jan 23 17:43:34 nas kernel: [ 0.000000] 4 base 118000000 mask FF8000000 uncachable
Jan 23 17:43:34 nas kernel: [ 0.000000] 5 disabled
Jan 23 17:43:34 nas kernel: [ 0.000000] 6 disabled
Jan 23 17:43:34 nas kernel: [ 0.000000] 7 disabled
Jan 23 17:43:34 nas kernel: [ 0.000000] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 17:43:34 nas kernel: [ 0.000000] total RAM covered: 3904M
Jan 23 17:43:34 nas kernel: [ 0.000000] Found optimal setting for mtrr clean up
Jan 23 17:43:34 nas kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 5 lose cover RAM: 0G
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: update [mem 0xdc000000-0xffffffff] usable ==> reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: last_pfn = 0xdb7e0 max_arch_pfn = 0x400000000
Jan 23 17:43:34 nas kernel: [ 0.000000] found SMP MP-table at [mem 0x000f56b0-0x000f56bf]
Jan 23 17:43:34 nas kernel: [ 0.000000] Scanning 1 areas for low memory corruption
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc978e000, 0xc978efff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc978f000, 0xc978ffff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc9790000, 0xc9790fff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc9791000, 0xc9791fff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc9792000, 0xc9792fff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc9793000, 0xc9793fff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] BRK [0xc9794000, 0xc9794fff] PGTABLE
Jan 23 17:43:34 nas kernel: [ 0.000000] RAMDISK: [mem 0x31165000-0x348a9fff]
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: Early table checksum verification disabled
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: RSDP 0x00000000000F7070 000014 (v00 GBT )
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: RSDT 0x00000000DB7E3040 000040 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: FACP 0x00000000DB7E30C0 000074 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: DSDT 0x00000000DB7E3180 005510 (v01 GBT GBTUACPI 00001000 MSFT 0100000C)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: FACS 0x00000000DB7E0000 000040
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: HPET 0x00000000DB7E8800 000038 (v01 GBT GBTUACPI 42302E31 GBTU 00000098)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: MCFG 0x00000000DB7E8880 00003C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: EUDS 0x00000000DB7E88C0 0004D0 (v01 GBT 00000000 00000000)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: TAMG 0x00000000DB7E8D90 000A4A (v01 GBT GBT B0 5455312E BG?? 53450101)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: APIC 0x00000000DB7E8700 0000BC (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: SSDT 0x00000000DB7E9800 001C7C (v01 INTEL PPM RCM 80000001 INTL 20061109)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000
Jan 23 17:43:34 nas kernel: [ 0.000000] No NUMA configuration found
Jan 23 17:43:34 nas kernel: [ 0.000000] Faking a node at [mem 0x0000000000000000-0x0000000117ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] NODE_DATA(0) allocated [mem 0x117fd3000-0x117ffdfff]
Jan 23 17:43:34 nas kernel: [ 0.000000] tsc: Fast TSC calibration using PIT
Jan 23 17:43:34 nas kernel: [ 0.000000] Zone ranges:
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] Normal [mem 0x0000000100000000-0x0000000117ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] Device empty
Jan 23 17:43:34 nas kernel: [ 0.000000] Movable zone start for each node
Jan 23 17:43:34 nas kernel: [ 0.000000] Early memory node ranges
Jan 23 17:43:34 nas kernel: [ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009cfff]
Jan 23 17:43:34 nas kernel: [ 0.000000] node 0: [mem 0x0000000000100000-0x00000000db7dffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] node 0: [mem 0x0000000100000000-0x0000000117ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] Reserved but unavailable: 100 pages
Jan 23 17:43:34 nas kernel: [ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x0000000117ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] On node 0 totalpages: 997244
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA zone: 64 pages used for memmap
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA zone: 21 pages reserved
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA zone: 3996 pages, LIFO batch:0
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA32 zone: 13984 pages used for memmap
Jan 23 17:43:34 nas kernel: [ 0.000000] DMA32 zone: 894944 pages, LIFO batch:31
Jan 23 17:43:34 nas kernel: [ 0.000000] Normal zone: 1536 pages used for memmap
Jan 23 17:43:34 nas kernel: [ 0.000000] Normal zone: 98304 pages, LIFO batch:31
Jan 23 17:43:34 nas kernel: [ 0.000000] Reserving Intel graphics memory at 0x00000000de000000-0x00000000dfffffff
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: PM-Timer IO Port: 0x408
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: Local APIC address 0xfee00000
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
Jan 23 17:43:34 nas kernel: [ 0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: IRQ0 used by override.
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: IRQ9 used by override.
Jan 23 17:43:34 nas kernel: [ 0.000000] Using ACPI (MADT) for SMP configuration information
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 17:43:34 nas kernel: [ 0.000000] smpboot: Allowing 8 CPUs, 4 hotplug CPUs
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0x0009d000-0x0009ffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xdb7e0000-0xdb7e2fff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xdb7e3000-0xdb7effff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xdb7f0000-0xdb7fffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xdb800000-0xddffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xde000000-0xdfffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xe0000000-0xf3ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xf4000000-0xf7ffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xf8000000-0xfebfffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] PM: Registered nosave memory: [mem 0xfec00000-0xffffffff]
Jan 23 17:43:34 nas kernel: [ 0.000000] e820: [mem 0xe0000000-0xf3ffffff] available for PCI devices
Jan 23 17:43:34 nas kernel: [ 0.000000] Booting paravirtualized kernel on bare hardware
Jan 23 17:43:34 nas kernel: [ 0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Jan 23 17:43:34 nas kernel: [ 0.000000] random: get_random_bytes called from start_kernel+0x99/0x4fd with crng_init=0
Jan 23 17:43:34 nas kernel: [ 0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 23 17:43:34 nas kernel: [ 0.000000] percpu: Embedded 45 pages/cpu s147456 r8192 d28672 u262144
Jan 23 17:43:34 nas kernel: [ 0.000000] pcpu-alloc: s147456 r8192 d28672 u262144 alloc=1*2097152
Jan 23 17:43:34 nas kernel: [ 0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
Jan 23 17:43:34 nas kernel: [ 0.000000] Built 1 zonelists, mobility grouping on. Total pages: 981639
Jan 23 17:43:34 nas kernel: [ 0.000000] Policy zone: Normal
Jan 23 17:43:34 nas kernel: [ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-74-generic root=UUID=373c7c2f-8045-4e98-9c21-c2ed2b639ae7 ro debian-installer/language=de keyboard-configuration/layoutcode?=de
Jan 23 17:43:34 nas kernel: [ 0.000000] Calgary: detecting Calgary via BIOS EBDA area
Jan 23 17:43:34 nas kernel: [ 0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Jan 23 17:43:34 nas kernel: [ 0.000000] Memory: 3772916K/3988976K available (12300K kernel code, 2481K rwdata, 4260K rodata, 2428K init, 2704K bss, 216060K reserved, 0K cma-reserved)
Jan 23 17:43:34 nas kernel: [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 23 17:43:34 nas kernel: [ 0.000000] Kernel/User page tables isolation: enabled
Jan 23 17:43:34 nas kernel: [ 0.000000] ftrace: allocating 39322 entries in 154 pages
Jan 23 17:43:34 nas kernel: [ 0.000000] Hierarchical RCU implementation.
Jan 23 17:43:34 nas kernel: [ 0.000000] RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 23 17:43:34 nas kernel: [ 0.000000] Tasks RCU enabled.
Jan 23 17:43:34 nas kernel: [ 0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 23 17:43:34 nas kernel: [ 0.000000] NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 23 17:43:34 nas kernel: [ 0.000000] Console: colour VGA+ 80x25
Jan 23 17:43:34 nas kernel: [ 0.000000] console [tty0] enabled
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: Core revision 20170831
Jan 23 17:43:34 nas kernel: [ 0.000000] ACPI: 2 ACPI AML tables successfully acquired and loaded
Jan 23 17:43:34 nas kernel: [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 23 17:43:34 nas kernel: [ 0.000000] hpet clockevent registered
Jan 23 17:43:34 nas kernel: [ 0.000000] APIC: Switch to symmetric I/O mode setup
Jan 23 17:43:34 nas kernel: [ 0.000000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 17:43:34 nas kernel: [ 0.020000] tsc: Fast TSC calibration using PIT
Jan 23 17:43:34 nas kernel: [ 0.024000] tsc: Detected 3200.123 MHz processor
Jan 23 17:43:34 nas kernel: [ 0.024000] Calibrating delay loop (skipped), value calculated using timer frequency.. 6400.24 BogoMIPS (lpj=12800492)
Jan 23 17:43:34 nas kernel: [ 0.024000] pid_max: default: 32768 minimum: 301
Jan 23 17:43:34 nas kernel: [ 0.024000] Security Framework initialized
Jan 23 17:43:34 nas kernel: [ 0.024000] Yama: becoming mindful.
Jan 23 17:43:34 nas kernel: [ 0.024000] AppArmor: AppArmor initialized
Jan 23 17:43:34 nas kernel: [ 0.024000] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
Jan 23 17:43:34 nas kernel: [ 0.024000] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
Jan 23 17:43:34 nas kernel: [ 0.024000] Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
Jan 23 17:43:34 nas kernel: [ 0.024000] Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
Jan 23 17:43:34 nas kernel: [ 0.024000] CPU0: Thermal monitoring enabled (TM1)
Jan 23 17:43:34 nas kernel: [ 0.024000] process: using mwait in idle threads
Jan 23 17:43:34 nas kernel: [ 0.024000] Last level iTLB entries: 4KB 512, 2MB 7, 4MB 7
Jan 23 17:43:34 nas kernel: [ 0.024000] Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32, 1GB 0
Jan 23 17:43:34 nas kernel: [ 0.024000] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 17:43:34 nas kernel: [ 0.024000] Spectre V2 : Mitigation: Full generic retpoline
Jan 23 17:43:34 nas kernel: [ 0.024000] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 23 17:43:34 nas kernel: [ 0.024000] Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 17:43:34 nas kernel: [ 0.024000] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 17:43:34 nas kernel: [ 0.024000] Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Jan 23 17:43:34 nas kernel: [ 0.024000] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jan 23 17:43:34 nas kernel: [ 0.024000] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 17:43:34 nas kernel: [ 0.024000] Freeing SMP alternatives memory: 36K
Jan 23 17:43:34 nas kernel: [ 0.139238] smpboot: CPU0: Intel(R) Core(TM) i3 CPU 550 @ 3.20GHz (family: 0x6, model: 0x25, stepping: 0x5)
Jan 23 17:43:34 nas kernel: [ 0.139392] Performance Events: PEBS fmt1+, Westmere events, 16-deep LBR, Intel PMU driver.
Jan 23 17:43:34 nas kernel: [ 0.139475] core: CPUID marked event: 'bus cycles' unavailable
Jan 23 17:43:34 nas kernel: [ 0.139522] ... version: 3
Jan 23 17:43:34 nas kernel: [ 0.139563] ... bit width: 48
Jan 23 17:43:34 nas kernel: [ 0.139604] ... generic registers: 4
Jan 23 17:43:34 nas kernel: [ 0.139646] ... value mask: 0000ffffffffffff
Jan 23 17:43:34 nas kernel: [ 0.139691] ... max period: 000000007fffffff
Jan 23 17:43:34 nas kernel: [ 0.139735] ... fixed-purpose events: 3
Jan 23 17:43:34 nas kernel: [ 0.139776] ... event mask: 000000070000000f
Jan 23 17:43:34 nas kernel: [ 0.139863] Hierarchical SRCU implementation.
Jan 23 17:43:34 nas kernel: [ 0.140000] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jan 23 17:43:34 nas kernel: [ 0.140000] smp: Bringing up secondary CPUs ...
Jan 23 17:43:34 nas kernel: [ 0.140000] x86: Booting SMP configuration:
Jan 23 17:43:34 nas kernel: [ 0.140000] .... node #0, CPUs: #1 #2
Jan 23 17:43:34 nas kernel: [ 0.142337] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 23 17:43:34 nas kernel: [ 0.144126] #3
Jan 23 17:43:34 nas kernel: [ 0.146332] smp: Brought up 1 node, 4 CPUs
Jan 23 17:43:34 nas kernel: [ 0.146332] smpboot: Max logical packages: 2
Jan 23 17:43:34 nas kernel: [ 0.146332] smpboot: Total of 4 processors activated (25600.98 BogoMIPS)
Jan 23 17:43:34 nas kernel: [ 0.148288] devtmpfs: initialized
Jan 23 17:43:34 nas kernel: [ 0.148288] x86/mm: Memory block size: 128MB
Jan 23 17:43:34 nas kernel: [ 0.148498] evm: security.selinux
Jan 23 17:43:34 nas kernel: [ 0.148538] evm: security.SMACK64
Jan 23 17:43:34 nas kernel: [ 0.148579] evm: security.SMACK64EXEC
Jan 23 17:43:34 nas kernel: [ 0.148619] evm: security.SMACK64TRANSMUTE
Jan 23 17:43:34 nas kernel: [ 0.148660] evm: security.SMACK64MMAP
Jan 23 17:43:34 nas kernel: [ 0.148699] evm: security.apparmor
Jan 23 17:43:34 nas kernel: [ 0.148740] evm: security.ima
Jan 23 17:43:34 nas kernel: [ 0.148779] evm: security.capability
Jan 23 17:43:34 nas kernel: [ 0.148835] PM: Registering ACPI NVS region [mem 0xdb7e0000-0xdb7e2fff] (12288 bytes)
Jan 23 17:43:34 nas kernel: [ 0.148835] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
Jan 23 17:43:34 nas kernel: [ 0.148835] futex hash table entries: 2048 (order: 5, 131072 bytes)
Jan 23 17:43:34 nas kernel: [ 0.148835] pinctrl core: initialized pinctrl subsystem
Jan 23 17:43:34 nas kernel: [ 0.148835] RTC time: 16:36:13, date: 01/23/20
Jan 23 17:43:34 nas kernel: [ 0.148962] NET: Registered protocol family 16
Jan 23 17:43:34 nas kernel: [ 0.149089] audit: initializing netlink subsys (disabled)
Jan 23 17:43:34 nas kernel: [ 0.149142] audit: type=2000 audit(1579797373.148:1): state=initialized audit_enabled=0 res=1
Jan 23 17:43:34 nas kernel: [ 0.149142] cpuidle: using governor ladder
Jan 23 17:43:34 nas kernel: [ 0.149142] cpuidle: using governor menu
Jan 23 17:43:34 nas kernel: [ 0.149142] ACPI: bus type PCI registered
Jan 23 17:43:34 nas kernel: [ 0.149142] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:43:34 nas kernel: [ 0.149142] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf4000000-0xf7ffffff] (base 0xf4000000)
Jan 23 17:43:34 nas kernel: [ 0.149142] PCI: MMCONFIG at [mem 0xf4000000-0xf7ffffff] reserved in E820
Jan 23 17:43:34 nas kernel: [ 0.149142] pmd_set_huge: Cannot satisfy [mem 0xf4000000-0xf4200000] with a huge-page mapping due to MTRR override.
Jan 23 17:43:34 nas kernel: [ 0.149142] PCI: Using configuration type 1 for base access
Jan 23 17:43:34 nas kernel: [ 0.153032] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(Module Device)
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(Processor Device)
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(Linux-Dell-Video)
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jan 23 17:43:34 nas kernel: [ 0.153032] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jan 23 17:43:34 nas kernel: [ 0.158353] ACPI: Interpreter enabled
Jan 23 17:43:34 nas kernel: [ 0.158410] ACPI: (supports S0 S3 S4 S5)
Jan 23 17:43:34 nas kernel: [ 0.158453] ACPI: Using IOAPIC for interrupt routing
Jan 23 17:43:34 nas kernel: [ 0.158516] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 17:43:34 nas kernel: [ 0.158770] ACPI: Enabled 12 GPEs in block 00 to 3F
Jan 23 17:43:34 nas kernel: [ 0.163643] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3f])
Jan 23 17:43:34 nas kernel: [ 0.163643] acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
Jan 23 17:43:34 nas kernel: [ 0.163643] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
Jan 23 17:43:34 nas kernel: [ 0.163643] PCI host bridge to bus 0000:00
Jan 23 17:43:34 nas kernel: [ 0.163643] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff window]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci_bus 0000:00: root bus resource [mem 0xdb800000-0xfebfffff window]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci_bus 0000:00: root bus resource [bus 00-3f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:00.0: [8086:0040] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.163643] DMAR: BIOS has allocated no shadow GTT; disabling IOMMU for graphics
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:01.0: [8086:0041] type 01 class 0x060400
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:02.0: [8086:0042] type 00 class 0x030000
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:02.0: reg 0x10: [mem 0xfb800000-0xfbbfffff 64bit]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:02.0: reg 0x20: [io 0xff00-0xff07]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:16.0: [8086:3b64] type 00 class 0x078000
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:16.0: reg 0x10: [mem 0xfbfff000-0xfbfff00f 64bit]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.0: [8086:3b3b] type 00 class 0x0c0300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.0: reg 0x20: [io 0xfe00-0xfe1f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.1: [8086:3b3e] type 00 class 0x0c0300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.1: reg 0x20: [io 0xfd00-0xfd1f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.2: [8086:3b3f] type 00 class 0x0c0300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.2: reg 0x20: [io 0xfc00-0xfc1f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.7: [8086:3b3c] type 00 class 0x0c0320
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.7: reg 0x10: [mem 0xfbffe000-0xfbffe3ff]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1a.7: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1b.0: [8086:3b56] type 00 class 0x040300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1b.0: reg 0x10: [mem 0xfbff4000-0xfbff7fff 64bit]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1c.0: [8086:3b42] type 01 class 0x060400
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1c.5: [8086:3b4c] type 01 class 0x060400
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.0: [8086:3b36] type 00 class 0x0c0300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.0: reg 0x20: [io 0xfb00-0xfb1f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.1: [8086:3b37] type 00 class 0x0c0300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.1: reg 0x20: [io 0xfa00-0xfa1f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.2: [8086:3b38] type 00 class 0x0c0300
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.2: reg 0x20: [io 0xf900-0xf91f]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.7: [8086:3b34] type 00 class 0x0c0320
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.7: reg 0x10: [mem 0xfbffd000-0xfbffd3ff]
Jan 23 17:43:34 nas kernel: [ 0.163643] pci 0000:00:1d.7: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.163666] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401
Jan 23 17:43:34 nas kernel: [ 0.163781] pci 0000:00:1f.0: [8086:3b06] type 00 class 0x060100
Jan 23 17:43:34 nas kernel: [ 0.163945] pci 0000:00:1f.2: [8086:3b22] type 00 class 0x010601
Jan 23 17:43:34 nas kernel: [ 0.163959] pci 0000:00:1f.2: reg 0x10: [io 0xf800-0xf807]
Jan 23 17:43:34 nas kernel: [ 0.163965] pci 0000:00:1f.2: reg 0x14: [io 0xf700-0xf703]
Jan 23 17:43:34 nas kernel: [ 0.163970] pci 0000:00:1f.2: reg 0x18: [io 0xf600-0xf607]
Jan 23 17:43:34 nas kernel: [ 0.163975] pci 0000:00:1f.2: reg 0x1c: [io 0xf500-0xf503]
Jan 23 17:43:34 nas kernel: [ 0.163981] pci 0000:00:1f.2: reg 0x20: [io 0xf400-0xf41f]
Jan 23 17:43:34 nas kernel: [ 0.163986] pci 0000:00:1f.2: reg 0x24: [mem 0xfbffc000-0xfbffc7ff]
Jan 23 17:43:34 nas kernel: [ 0.164023] pci 0000:00:1f.2: PME# supported from D3hot
Jan 23 17:43:34 nas kernel: [ 0.164099] pci 0000:00:1f.3: [8086:3b30] type 00 class 0x0c0500
Jan 23 17:43:34 nas kernel: [ 0.164113] pci 0000:00:1f.3: reg 0x10: [mem 0xfbffb000-0xfbffb0ff 64bit]
Jan 23 17:43:34 nas kernel: [ 0.164128] pci 0000:00:1f.3: reg 0x20: [io 0x0500-0x051f]
Jan 23 17:43:34 nas kernel: [ 0.164248] pci 0000:01:00.0: [1912:0015] type 00 class 0x0c0330
Jan 23 17:43:34 nas kernel: [ 0.164282] pci 0000:01:00.0: reg 0x10: [mem 0xfbefe000-0xfbefffff 64bit]
Jan 23 17:43:34 nas kernel: [ 0.164408] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.164484] pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 23 17:43:34 nas kernel: [ 0.164530] pci 0000:00:01.0: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 23 17:43:34 nas kernel: [ 0.164563] pci 0000:00:1c.0: PCI bridge to [bus 02]
Jan 23 17:43:34 nas kernel: [ 0.164653] pci 0000:03:00.0: [10ec:8168] type 00 class 0x020000
Jan 23 17:43:34 nas kernel: [ 0.164677] pci 0000:03:00.0: reg 0x10: [io 0xee00-0xeeff]
Jan 23 17:43:34 nas kernel: [ 0.164700] pci 0000:03:00.0: reg 0x18: [mem 0xfbcff000-0xfbcfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.164715] pci 0000:03:00.0: reg 0x20: [mem 0xfbcf8000-0xfbcfbfff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.164725] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]
Jan 23 17:43:34 nas kernel: [ 0.164797] pci 0000:03:00.0: supports D1 D2
Jan 23 17:43:34 nas kernel: [ 0.164798] pci 0000:03:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:43:34 nas kernel: [ 0.180027] pci 0000:00:1c.5: PCI bridge to [bus 03]
Jan 23 17:43:34 nas kernel: [ 0.180084] pci 0000:00:1c.5: bridge window [io 0xe000-0xefff]
Jan 23 17:43:34 nas kernel: [ 0.180088] pci 0000:00:1c.5: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 23 17:43:34 nas kernel: [ 0.180095] pci 0000:00:1c.5: bridge window [mem 0xfbc00000-0xfbcfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.180156] pci 0000:00:1e.0: PCI bridge to [bus 04] (subtractive decode)
Jan 23 17:43:34 nas kernel: [ 0.180209] pci 0000:00:1e.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Jan 23 17:43:34 nas kernel: [ 0.180210] pci 0000:00:1e.0: bridge window [io 0x0d00-0xffff window] (subtractive decode)
Jan 23 17:43:34 nas kernel: [ 0.180212] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Jan 23 17:43:34 nas kernel: [ 0.180213] pci 0000:00:1e.0: bridge window [mem 0x000c0000-0x000dffff window] (subtractive decode)
Jan 23 17:43:34 nas kernel: [ 0.180214] pci 0000:00:1e.0: bridge window [mem 0xdb800000-0xfebfffff window] (subtractive decode)
Jan 23 17:43:34 nas kernel: [ 0.180766] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 10 11 *12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.180901] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 9 *10 11 12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.181034] ACPI: PCI Interrupt Link [LNKC] (IRQs *3 4 5 6 7 9 10 11 12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.181165] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.181296] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
Jan 23 17:43:34 nas kernel: [ 0.181432] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 *4 5 6 7 9 10 11 12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.181564] ACPI: PCI Interrupt Link [LNK0] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.181696] ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 *7 9 10 11 12 14 15)
Jan 23 17:43:34 nas kernel: [ 0.184003] SCSI subsystem initialized
Jan 23 17:43:34 nas kernel: [ 0.184022] libata version 3.00 loaded.
Jan 23 17:43:34 nas kernel: [ 0.184025] pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 23 17:43:34 nas kernel: [ 0.184052] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 17:43:34 nas kernel: [ 0.184122] pci 0000:00:02.0: vgaarb: bridge control possible
Jan 23 17:43:34 nas kernel: [ 0.184168] vgaarb: loaded
Jan 23 17:43:34 nas kernel: [ 0.184223] ACPI: bus type USB registered
Jan 23 17:43:34 nas kernel: [ 0.184276] usbcore: registered new interface driver usbfs
Jan 23 17:43:34 nas kernel: [ 0.184327] usbcore: registered new interface driver hub
Jan 23 17:43:34 nas kernel: [ 0.184386] usbcore: registered new device driver usb
Jan 23 17:43:34 nas kernel: [ 0.184459] EDAC MC: Ver: 3.0.0
Jan 23 17:43:34 nas kernel: [ 0.184459] PCI: Using ACPI for IRQ routing
Jan 23 17:43:34 nas kernel: [ 0.185159] PCI: Discovered peer bus 3f
Jan 23 17:43:34 nas kernel: [ 0.185200] PCI: root bus 3f: using default resources
Jan 23 17:43:34 nas kernel: [ 0.185201] PCI: Probing PCI hardware (bus 3f)
Jan 23 17:43:34 nas kernel: [ 0.185219] PCI host bridge to bus 0000:3f
Jan 23 17:43:34 nas kernel: [ 0.185261] pci_bus 0000:3f: root bus resource [io 0x0000-0xffff]
Jan 23 17:43:34 nas kernel: [ 0.185308] pci_bus 0000:3f: root bus resource [mem 0x00000000-0xfffffffff]
Jan 23 17:43:34 nas kernel: [ 0.185357] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff]
Jan 23 17:43:34 nas kernel: [ 0.185423] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-3f])
Jan 23 17:43:34 nas kernel: [ 0.185427] pci 0000:3f:00.0: [8086:2c61] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.185461] pci 0000:3f:00.1: [8086:2d01] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.185494] pci 0000:3f:02.0: [8086:2d10] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.185523] pci 0000:3f:02.1: [8086:2d11] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.188025] pci 0000:3f:02.2: [8086:2d12] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.188055] pci 0000:3f:02.3: [8086:2d13] type 00 class 0x060000
Jan 23 17:43:34 nas kernel: [ 0.188093] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f
Jan 23 17:43:34 nas kernel: [ 0.188094] pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 00-3f])
Jan 23 17:43:34 nas kernel: [ 0.188096] PCI: pci_cache_line_size set to 64 bytes
Jan 23 17:43:34 nas kernel: [ 0.188139] e820: reserve RAM buffer [mem 0x0009dc00-0x0009ffff]
Jan 23 17:43:34 nas kernel: [ 0.188140] e820: reserve RAM buffer [mem 0xdb7e0000-0xdbffffff]
Jan 23 17:43:34 nas kernel: [ 0.188225] NetLabel: Initializing
Jan 23 17:43:34 nas kernel: [ 0.188266] NetLabel: domain hash size = 128
Jan 23 17:43:34 nas kernel: [ 0.188308] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Jan 23 17:43:34 nas kernel: [ 0.188366] NetLabel: unlabeled traffic allowed by default
Jan 23 17:43:34 nas kernel: [ 0.188717] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 23 17:43:34 nas kernel: [ 0.188771] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
Jan 23 17:43:34 nas kernel: [ 0.190041] clocksource: Switched to clocksource hpet
Jan 23 17:43:34 nas kernel: [ 0.198887] VFS: Disk quotas dquot_6.6.0
Jan 23 17:43:34 nas kernel: [ 0.198952] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:43:34 nas kernel: [ 0.199100] AppArmor: AppArmor Filesystem Enabled
Jan 23 17:43:34 nas kernel: [ 0.199176] pnp: PnP ACPI init
Jan 23 17:43:34 nas kernel: [ 0.199463] system 00:00: [io 0x04d0-0x04d1] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.199510] system 00:00: [io 0x0290-0x029f] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.199556] system 00:00: [io 0x0800-0x087f] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.199601] system 00:00: [io 0x0290-0x0294] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.199648] system 00:00: [io 0x0880-0x088f] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.199699] system 00:00: Plug and Play ACPI device, IDs PNP0c02 (active)
Jan 23 17:43:34 nas kernel: [ 0.199783] pnp 00:01: Plug and Play ACPI device, IDs PNP0b00 (active)
Jan 23 17:43:34 nas kernel: [ 0.200054] system 00:02: [io 0x0400-0x047f] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.200101] system 00:02: [io 0x0580-0x05ff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.200151] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active)
Jan 23 17:43:34 nas kernel: [ 0.200394] system 00:03: [mem 0xf4000000-0xf7ffffff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.200445] system 00:03: Plug and Play ACPI device, IDs PNP0c02 (active)
Jan 23 17:43:34 nas kernel: [ 0.200690] system 00:04: [mem 0x000d1a00-0x000d3fff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.200738] system 00:04: [mem 0x000f0000-0x000f7fff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.200786] system 00:04: [mem 0x000f8000-0x000fbfff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.200836] system 00:04: [mem 0x000fc000-0x000fffff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.200885] system 00:04: [mem 0xdb7e0000-0xdb7effff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.200933] system 00:04: [mem 0x00000000-0x0009ffff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.200981] system 00:04: [mem 0x00100000-0xdb7dffff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.201029] system 00:04: [mem 0xdb7f0000-0xdb7fffff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201078] system 00:04: [mem 0xfec00000-0xfec00fff] could not be reserved
Jan 23 17:43:34 nas kernel: [ 0.201127] system 00:04: [mem 0xfed10000-0xfed1dfff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201174] system 00:04: [mem 0xfed20000-0xfed8ffff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201221] system 00:04: [mem 0xfee00000-0xfee00fff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201269] system 00:04: [mem 0xffb00000-0xffb7ffff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201318] system 00:04: [mem 0xfff00000-0xffffffff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201365] system 00:04: [mem 0x000e0000-0x000effff] has been reserved
Jan 23 17:43:34 nas kernel: [ 0.201415] system 00:04: Plug and Play ACPI device, IDs PNP0c01 (active)
Jan 23 17:43:34 nas kernel: [ 0.201421] pnp: PnP ACPI: found 5 devices
Jan 23 17:43:34 nas kernel: [ 0.207712] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 17:43:34 nas kernel: [ 0.207794] pci 0000:00:1c.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 17:43:34 nas kernel: [ 0.207795] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 17:43:34 nas kernel: [ 0.207797] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Jan 23 17:43:34 nas kernel: [ 0.207815] pci 0000:00:1c.0: BAR 14: assigned [mem 0xdb800000-0xdb9fffff]
Jan 23 17:43:34 nas kernel: [ 0.207866] pci 0000:00:1c.0: BAR 15: assigned [mem 0xdba00000-0xdbbfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.207934] pci 0000:00:1c.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 23 17:43:34 nas kernel: [ 0.207982] pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 23 17:43:34 nas kernel: [ 0.208033] pci 0000:00:01.0: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 23 17:43:34 nas kernel: [ 0.208083] pci 0000:00:1c.0: PCI bridge to [bus 02]
Jan 23 17:43:34 nas kernel: [ 0.208128] pci 0000:00:1c.0: bridge window [io 0x1000-0x1fff]
Jan 23 17:43:34 nas kernel: [ 0.208176] pci 0000:00:1c.0: bridge window [mem 0xdb800000-0xdb9fffff]
Jan 23 17:43:34 nas kernel: [ 0.208225] pci 0000:00:1c.0: bridge window [mem 0xdba00000-0xdbbfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.208295] pci 0000:03:00.0: BAR 6: assigned [mem 0xfbd00000-0xfbd1ffff pref]
Jan 23 17:43:34 nas kernel: [ 0.208360] pci 0000:00:1c.5: PCI bridge to [bus 03]
Jan 23 17:43:34 nas kernel: [ 0.208404] pci 0000:00:1c.5: bridge window [io 0xe000-0xefff]
Jan 23 17:43:34 nas kernel: [ 0.208453] pci 0000:00:1c.5: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 23 17:43:34 nas kernel: [ 0.208502] pci 0000:00:1c.5: bridge window [mem 0xfbc00000-0xfbcfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.208570] pci 0000:00:1e.0: PCI bridge to [bus 04]
Jan 23 17:43:34 nas kernel: [ 0.208621] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 17:43:34 nas kernel: [ 0.208622] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 17:43:34 nas kernel: [ 0.208623] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 17:43:34 nas kernel: [ 0.208624] pci_bus 0000:00: resource 7 [mem 0x000c0000-0x000dffff window]
Jan 23 17:43:34 nas kernel: [ 0.208625] pci_bus 0000:00: resource 8 [mem 0xdb800000-0xfebfffff window]
Jan 23 17:43:34 nas kernel: [ 0.208626] pci_bus 0000:01: resource 1 [mem 0xfbe00000-0xfbefffff]
Jan 23 17:43:34 nas kernel: [ 0.208628] pci_bus 0000:02: resource 0 [io 0x1000-0x1fff]
Jan 23 17:43:34 nas kernel: [ 0.208629] pci_bus 0000:02: resource 1 [mem 0xdb800000-0xdb9fffff]
Jan 23 17:43:34 nas kernel: [ 0.208630] pci_bus 0000:02: resource 2 [mem 0xdba00000-0xdbbfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.208631] pci_bus 0000:03: resource 0 [io 0xe000-0xefff]
Jan 23 17:43:34 nas kernel: [ 0.208632] pci_bus 0000:03: resource 1 [mem 0xfbd00000-0xfbdfffff]
Jan 23 17:43:34 nas kernel: [ 0.208633] pci_bus 0000:03: resource 2 [mem 0xfbc00000-0xfbcfffff 64bit pref]
Jan 23 17:43:34 nas kernel: [ 0.208634] pci_bus 0000:04: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 17:43:34 nas kernel: [ 0.208635] pci_bus 0000:04: resource 5 [io 0x0d00-0xffff window]
Jan 23 17:43:34 nas kernel: [ 0.208636] pci_bus 0000:04: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 17:43:34 nas kernel: [ 0.208637] pci_bus 0000:04: resource 7 [mem 0x000c0000-0x000dffff window]
Jan 23 17:43:34 nas kernel: [ 0.208639] pci_bus 0000:04: resource 8 [mem 0xdb800000-0xfebfffff window]
Jan 23 17:43:34 nas kernel: [ 0.208691] pci_bus 0000:3f: resource 4 [io 0x0000-0xffff]
Jan 23 17:43:34 nas kernel: [ 0.208692] pci_bus 0000:3f: resource 5 [mem 0x00000000-0xfffffffff]
Jan 23 17:43:34 nas kernel: [ 0.208719] NET: Registered protocol family 2
Jan 23 17:43:34 nas kernel: [ 0.208913] TCP established hash table entries: 32768 (order: 6, 262144 bytes)
Jan 23 17:43:34 nas kernel: [ 0.209041] TCP bind hash table entries: 32768 (order: 7, 524288 bytes)
Jan 23 17:43:34 nas kernel: [ 0.209170] TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:43:34 nas kernel: [ 0.209259] UDP hash table entries: 2048 (order: 4, 65536 bytes)
Jan 23 17:43:34 nas kernel: [ 0.209319] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
Jan 23 17:43:34 nas kernel: [ 0.209411] NET: Registered protocol family 1
Jan 23 17:43:34 nas kernel: [ 0.209479] pci 0000:00:02.0: BIOS left Intel GPU interrupts enabled; disabling
Jan 23 17:43:34 nas kernel: [ 0.209553] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 17:43:34 nas kernel: [ 0.248429] PCI: CLS 64 bytes, default 64
Jan 23 17:43:34 nas kernel: [ 0.248465] Unpacking initramfs...
Jan 23 17:43:34 nas kernel: [ 0.940037] Freeing initrd memory: 56596K
Jan 23 17:43:34 nas kernel: [ 0.940147] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 17:43:34 nas kernel: [ 0.940202] software IO TLB: mapped [mem 0xd77e0000-0xdb7e0000] (64MB)
Jan 23 17:43:34 nas kernel: [ 0.940446] Scanning for low memory corruption every 60 seconds
Jan 23 17:43:34 nas kernel: [ 0.941119] Initialise system trusted keyrings
Jan 23 17:43:34 nas kernel: [ 0.941177] Key type blacklist registered
Jan 23 17:43:34 nas kernel: [ 0.941256] workingset: timestamp_bits=36 max_order=20 bucket_order=0
Jan 23 17:43:34 nas kernel: [ 0.942279] zbud: loaded
Jan 23 17:43:34 nas kernel: [ 0.942784] squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 17:43:34 nas kernel: [ 0.942951] fuse init (API version 7.26)
Jan 23 17:43:34 nas kernel: [ 0.944357] Key type asymmetric registered
Jan 23 17:43:34 nas kernel: [ 0.944407] Asymmetric key parser 'x509' registered
Jan 23 17:43:34 nas kernel: [ 0.944487] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 23 17:43:34 nas kernel: [ 0.944595] io scheduler noop registered
Jan 23 17:43:34 nas kernel: [ 0.944644] io scheduler deadline registered
Jan 23 17:43:34 nas kernel: [ 0.944715] io scheduler cfq registered (default)
Jan 23 17:43:34 nas kernel: [ 0.944954] pcieport 0000:00:1c.0: enabling device (0000 -> 0003)
Jan 23 17:43:34 nas kernel: [ 0.945355] intel_idle: MWAIT substates: 0x1120
Jan 23 17:43:34 nas kernel: [ 0.945355] intel_idle: v0.4.1 model 0x25
Jan 23 17:43:34 nas kernel: [ 0.945460] intel_idle: lapic_timer_reliable_states 0xffffffff
Jan 23 17:43:34 nas kernel: [ 0.945535] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 17:43:34 nas kernel: [ 0.945624] ACPI: Power Button [PWRB]
Jan 23 17:43:34 nas kernel: [ 0.945706] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Jan 23 17:43:34 nas kernel: [ 0.945805] ACPI: Power Button [PWRF]
Jan 23 17:43:34 nas kernel: [ 0.946688] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
Jan 23 17:43:34 nas kernel: [ 0.948376] Linux agpgart interface v0.103
Jan 23 17:43:34 nas kernel: [ 0.950201] loop: module loaded
Jan 23 17:43:34 nas kernel: [ 0.950397] libphy: Fixed MDIO Bus: probed
Jan 23 17:43:34 nas kernel: [ 0.950957] tun: Universal TUN/TAP device driver, 1.6
Jan 23 17:43:34 nas kernel: [ 0.951064] PPP generic driver version 2.4.2
Jan 23 17:43:34 nas kernel: [ 0.951174] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Jan 23 17:43:34 nas kernel: [ 0.951231] ehci-pci: EHCI PCI platform driver
Jan 23 17:43:34 nas kernel: [ 0.951416] ehci-pci 0000:00:1a.7: EHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.951473] ehci-pci 0000:00:1a.7: new USB bus registered, assigned bus number 1
Jan 23 17:43:34 nas kernel: [ 0.951558] ehci-pci 0000:00:1a.7: debug port 2
Jan 23 17:43:34 nas kernel: [ 0.955509] ehci-pci 0000:00:1a.7: cache line size of 64 is not supported
Jan 23 17:43:34 nas kernel: [ 0.955521] ehci-pci 0000:00:1a.7: irq 18, io mem 0xfbffe000
Jan 23 17:43:34 nas kernel: [ 0.968123] ehci-pci 0000:00:1a.7: USB 2.0 started, EHCI 1.00
Jan 23 17:43:34 nas kernel: [ 0.968249] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Jan 23 17:43:34 nas kernel: [ 0.968305] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.968382] usb usb1: Product: EHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.968433] usb usb1: Manufacturer: Linux 4.15.0-74-generic ehci_hcd
Jan 23 17:43:34 nas kernel: [ 0.968487] usb usb1: SerialNumber: 0000:00:1a.7
Jan 23 17:43:34 nas kernel: [ 0.968817] hub 1-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.968880] hub 1-0:1.0: 6 ports detected
Jan 23 17:43:34 nas kernel: [ 0.969152] ehci-pci 0000:00:1d.7: EHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.969207] ehci-pci 0000:00:1d.7: new USB bus registered, assigned bus number 2
Jan 23 17:43:34 nas kernel: [ 0.969291] ehci-pci 0000:00:1d.7: debug port 2
Jan 23 17:43:34 nas kernel: [ 0.973235] ehci-pci 0000:00:1d.7: cache line size of 64 is not supported
Jan 23 17:43:34 nas kernel: [ 0.973243] ehci-pci 0000:00:1d.7: irq 23, io mem 0xfbffd000
Jan 23 17:43:34 nas kernel: [ 0.988101] ehci-pci 0000:00:1d.7: USB 2.0 started, EHCI 1.00
Jan 23 17:43:34 nas kernel: [ 0.988232] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
Jan 23 17:43:34 nas kernel: [ 0.988301] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.988378] usb usb2: Product: EHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.988429] usb usb2: Manufacturer: Linux 4.15.0-74-generic ehci_hcd
Jan 23 17:43:34 nas kernel: [ 0.988483] usb usb2: SerialNumber: 0000:00:1d.7
Jan 23 17:43:34 nas kernel: [ 0.988800] hub 2-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.988861] hub 2-0:1.0: 6 ports detected
Jan 23 17:43:34 nas kernel: [ 0.989018] ehci-platform: EHCI generic platform driver
Jan 23 17:43:34 nas kernel: [ 0.989077] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Jan 23 17:43:34 nas kernel: [ 0.989133] ohci-pci: OHCI PCI platform driver
Jan 23 17:43:34 nas kernel: [ 0.989188] ohci-platform: OHCI generic platform driver
Jan 23 17:43:34 nas kernel: [ 0.989244] uhci_hcd: USB Universal Host Controller Interface driver
Jan 23 17:43:34 nas kernel: [ 0.989399] uhci_hcd 0000:00:1a.0: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.989453] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3
Jan 23 17:43:34 nas kernel: [ 0.989534] uhci_hcd 0000:00:1a.0: detected 2 ports
Jan 23 17:43:34 nas kernel: [ 0.989607] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000fe00
Jan 23 17:43:34 nas kernel: [ 0.989696] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001
Jan 23 17:43:34 nas kernel: [ 0.989751] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.989827] usb usb3: Product: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.989878] usb usb3: Manufacturer: Linux 4.15.0-74-generic uhci_hcd
Jan 23 17:43:34 nas kernel: [ 0.989932] usb usb3: SerialNumber: 0000:00:1a.0
Jan 23 17:43:34 nas kernel: [ 0.990268] hub 3-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.990329] hub 3-0:1.0: 2 ports detected
Jan 23 17:43:34 nas kernel: [ 0.990545] uhci_hcd 0000:00:1a.1: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.990598] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4
Jan 23 17:43:34 nas kernel: [ 0.990665] uhci_hcd 0000:00:1a.1: detected 2 ports
Jan 23 17:43:34 nas kernel: [ 0.990733] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000fd00
Jan 23 17:43:34 nas kernel: [ 0.990819] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001
Jan 23 17:43:34 nas kernel: [ 0.990875] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.990951] usb usb4: Product: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.991001] usb usb4: Manufacturer: Linux 4.15.0-74-generic uhci_hcd
Jan 23 17:43:34 nas kernel: [ 0.991055] usb usb4: SerialNumber: 0000:00:1a.1
Jan 23 17:43:34 nas kernel: [ 0.991398] hub 4-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.991459] hub 4-0:1.0: 2 ports detected
Jan 23 17:43:34 nas kernel: [ 0.991668] uhci_hcd 0000:00:1a.2: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.991720] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5
Jan 23 17:43:34 nas kernel: [ 0.991788] uhci_hcd 0000:00:1a.2: detected 2 ports
Jan 23 17:43:34 nas kernel: [ 0.991850] uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000fc00
Jan 23 17:43:34 nas kernel: [ 0.991940] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001
Jan 23 17:43:34 nas kernel: [ 0.991995] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.992080] usb usb5: Product: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.992130] usb usb5: Manufacturer: Linux 4.15.0-74-generic uhci_hcd
Jan 23 17:43:34 nas kernel: [ 0.992185] usb usb5: SerialNumber: 0000:00:1a.2
Jan 23 17:43:34 nas kernel: [ 0.992523] hub 5-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.992584] hub 5-0:1.0: 2 ports detected
Jan 23 17:43:34 nas kernel: [ 0.992787] uhci_hcd 0000:00:1d.0: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.992840] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6
Jan 23 17:43:34 nas kernel: [ 0.992907] uhci_hcd 0000:00:1d.0: detected 2 ports
Jan 23 17:43:34 nas kernel: [ 0.992969] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fb00
Jan 23 17:43:34 nas kernel: [ 0.993055] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001
Jan 23 17:43:34 nas kernel: [ 0.993111] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.993187] usb usb6: Product: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.993237] usb usb6: Manufacturer: Linux 4.15.0-74-generic uhci_hcd
Jan 23 17:43:34 nas kernel: [ 0.993291] usb usb6: SerialNumber: 0000:00:1d.0
Jan 23 17:43:34 nas kernel: [ 0.993626] hub 6-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.993684] hub 6-0:1.0: 2 ports detected
Jan 23 17:43:34 nas kernel: [ 0.993896] uhci_hcd 0000:00:1d.1: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.993945] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7
Jan 23 17:43:34 nas kernel: [ 0.994013] uhci_hcd 0000:00:1d.1: detected 2 ports
Jan 23 17:43:34 nas kernel: [ 0.994075] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fa00
Jan 23 17:43:34 nas kernel: [ 0.994163] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001
Jan 23 17:43:34 nas kernel: [ 0.994218] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.994295] usb usb7: Product: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.994345] usb usb7: Manufacturer: Linux 4.15.0-74-generic uhci_hcd
Jan 23 17:43:34 nas kernel: [ 0.994400] usb usb7: SerialNumber: 0000:00:1d.1
Jan 23 17:43:34 nas kernel: [ 0.994570] hub 7-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.994615] hub 7-0:1.0: 2 ports detected
Jan 23 17:43:34 nas kernel: [ 0.994810] uhci_hcd 0000:00:1d.2: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.994859] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8
Jan 23 17:43:34 nas kernel: [ 0.994927] uhci_hcd 0000:00:1d.2: detected 2 ports
Jan 23 17:43:34 nas kernel: [ 0.994986] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000f900
Jan 23 17:43:34 nas kernel: [ 0.995064] usb usb8: New USB device found, idVendor=1d6b, idProduct=0001
Jan 23 17:43:34 nas kernel: [ 0.995112] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas kernel: [ 0.995181] usb usb8: Product: UHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 0.995231] usb usb8: Manufacturer: Linux 4.15.0-74-generic uhci_hcd
Jan 23 17:43:34 nas kernel: [ 0.995286] usb usb8: SerialNumber: 0000:00:1d.2
Jan 23 17:43:34 nas kernel: [ 0.995432] hub 8-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 0.995483] hub 8-0:1.0: 2 ports detected
Jan 23 17:43:34 nas kernel: [ 0.995612] xhci_hcd 0000:01:00.0: Resetting
Jan 23 17:43:34 nas kernel: [ 1.464086] usb 6-1: new low-speed USB device number 2 using uhci_hcd
Jan 23 17:43:34 nas kernel: [ 1.660200] usb 6-1: New USB device found, idVendor=0a81, idProduct=0101
Jan 23 17:43:34 nas kernel: [ 1.660263] usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Jan 23 17:43:34 nas kernel: [ 1.660326] usb 6-1: Product: USB Keyboard
Jan 23 17:43:34 nas kernel: [ 1.660374] usb 6-1: Manufacturer: CHESEN
Jan 23 17:43:34 nas kernel: [ 1.948103] tsc: Refined TSC clocksource calibration: 3199.961 MHz
Jan 23 17:43:34 nas kernel: [ 1.948176] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2e202556717, max_idle_ns: 440795345100 ns
Jan 23 17:43:34 nas systemd[1]: Started Read required files in advance.
Jan 23 17:43:34 nas kernel: [ 2.012295] xhci_hcd 0000:01:00.0: xHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 2.012351] xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 9
Jan 23 17:43:34 nas systemd[1]: Stopped target Emergency Mode.
Jan 23 17:43:34 nas kernel: [ 2.017689] xhci_hcd 0000:01:00.0: hcc params 0x014050cf hci version 0x100 quirks 0x0000000000000090
Jan 23 17:43:34 nas kernel: [ 2.017962] usb usb9: New USB device found, idVendor=1d6b, idProduct=0002
Jan 23 17:43:34 nas systemd[1]: Stopping Emergency Shell...
Jan 23 17:43:34 nas kernel: [ 2.018018] usb usb9: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas systemd[1]: Stopped Emergency Shell.
Jan 23 17:43:34 nas kernel: [ 2.018081] usb usb9: Product: xHCI Host Controller
Jan 23 17:43:34 nas kernel: [ 2.018125] usb usb9: Manufacturer: Linux 4.15.0-74-generic xhci-hcd
Jan 23 17:43:34 nas kernel: [ 2.018179] usb usb9: SerialNumber: 0000:01:00.0
Jan 23 17:43:34 nas kernel: [ 2.018527] hub 9-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 2.018592] hub 9-0:1.0: 2 ports detected
Jan 23 17:43:34 nas systemd[1]: Started Preprocess NFS configuration.
Jan 23 17:43:34 nas kernel: [ 2.018715] xhci_hcd 0000:01:00.0: xHCI Host Controller
Jan 23 17:43:34 nas systemd[1]: Listening on Syslog Socket.
Jan 23 17:43:34 nas kernel: [ 2.018768] xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 10
Jan 23 17:43:34 nas kernel: [ 2.018847] xhci_hcd 0000:01:00.0: Host supports USB 3.0 SuperSpeed
Jan 23 17:43:34 nas kernel: [ 2.021141] usb usb10: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 23 17:43:34 nas kernel: [ 2.021234] usb usb10: New USB device found, idVendor=1d6b, idProduct=0003
Jan 23 17:43:34 nas kernel: [ 2.021290] usb usb10: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 17:43:34 nas systemd[1]: Started Tell Plymouth To Write Out Runtime Data.
Jan 23 17:43:34 nas kernel: [ 2.021367] usb usb10: Product: xHCI Host Controller
Jan 23 17:43:34 nas systemd[1]: Reached target System Initialization.
Jan 23 17:43:34 nas kernel: [ 2.021417] usb usb10: Manufacturer: Linux 4.15.0-74-generic xhci-hcd
Jan 23 17:43:34 nas kernel: [ 2.021472] usb usb10: SerialNumber: 0000:01:00.0
Jan 23 17:43:34 nas kernel: [ 2.021815] hub 10-0:1.0: USB hub found
Jan 23 17:43:34 nas kernel: [ 2.021880] hub 10-0:1.0: 2 ports detected
Jan 23 17:43:34 nas systemd[1]: Started Run certbot twice daily.
Jan 23 17:43:34 nas kernel: [ 2.022034] i8042: PNP: No PS/2 controller found.
Jan 23 17:43:34 nas kernel: [ 2.022077] i8042: Probing ports directly.
Jan 23 17:43:34 nas systemd[1]: Reached target Basic System.
Jan 23 17:43:34 nas systemd[1]: Started Service for snap application nextcloud.nextcloud-cron.
Jan 23 17:43:34 nas kernel: [ 2.055094] i8042: Failed to disable AUX port, but continuing anyway... Is this a SiS?
Jan 23 17:43:34 nas kernel: [ 2.055170] i8042: If AUX port is really absent please use the 'i8042.noaux' option
Jan 23 17:43:34 nas kernel: [ 2.332184] serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 17:43:34 nas kernel: [ 2.332589] mousedev: PS/2 mouse device common for all mice
Jan 23 17:43:34 nas kernel: [ 2.333178] rtc_cmos 00:01: RTC can wake from S4
Jan 23 17:43:34 nas kernel: [ 2.333400] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
Jan 23 17:43:34 nas kernel: [ 2.333477] rtc_cmos 00:01: alarms up to one month, 242 bytes nvram, hpet irqs
Jan 23 17:43:34 nas systemd[1]: Starting Samba NMB Daemon...
Jan 23 17:43:34 nas kernel: [ 2.333559] i2c /dev entries driver
Jan 23 17:43:34 nas kernel: [ 2.333649] device-mapper: uevent: version 1.0.3
Jan 23 17:43:34 nas kernel: [ 2.333783] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
Jan 23 17:43:34 nas kernel: [ 2.333992] ledtrig-cpu: registered to indicate activity on CPUs
Jan 23 17:43:34 nas systemd[1]: Started FUSE filesystem for LXC.
Jan 23 17:43:34 nas kernel: [ 2.334394] NET: Registered protocol family 10
Jan 23 17:43:34 nas kernel: [ 2.337794] Segment Routing with IPv6
Jan 23 17:43:34 nas kernel: [ 2.337891] NET: Registered protocol family 17
Jan 23 17:43:34 nas kernel: [ 2.337983] Key type dns_resolver registered
Jan 23 17:43:34 nas kernel: [ 2.338396] mce: Using 9 MCE banks
Jan 23 17:43:34 nas kernel: [ 2.338453] RAS: Correctable Errors collector initialized.
Jan 23 17:43:34 nas systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Jan 23 17:43:34 nas kernel: [ 2.338532] microcode: sig=0x20655, pf=0x2, revision=0x7
Jan 23 17:43:34 nas kernel: [ 2.338683] microcode: Microcode Update Driver: v2.2.
Jan 23 17:43:34 nas kernel: [ 2.338700] sched_clock: Marking stable (2338670766, 0)->(2320860081, 17810685)
Jan 23 17:43:34 nas kernel: [ 2.339049] registered taskstats version 1
Jan 23 17:43:34 nas kernel: [ 2.339100] Loading compiled-in X.509 certificates
Jan 23 17:43:34 nas kernel: [ 2.341724] Loaded X.509 cert 'Build time autogenerated kernel key: 62d24f9c726e026b112735485dad6e6e4b50c923'
Jan 23 17:43:34 nas systemd[1]: Started Service for snap application nextcloud.nextcloud-fixer.
Jan 23 17:43:34 nas kernel: [ 2.341826] zswap: loaded using pool lzo/zbud
Jan 23 17:43:34 nas kernel: [ 2.344396] Key type big_key registered
Jan 23 17:43:34 nas kernel: [ 2.344447] Key type trusted registered
Jan 23 17:43:34 nas kernel: [ 2.345694] Key type encrypted registered
Jan 23 17:43:34 nas kernel: [ 2.345745] AppArmor: AppArmor sha1 policy hashing enabled
Jan 23 17:43:34 nas kernel: [ 2.345799] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
Jan 23 17:43:34 nas kernel: [ 2.345855] ima: Allocated hash algorithm: sha1
Jan 23 17:43:34 nas kernel: [ 2.345923] evm: HMAC attrs: 0x1
Jan 23 17:43:34 nas kernel: [ 2.346226] Magic number: 4:974:641
Jan 23 17:43:34 nas systemd[1]: Started Deferred execution scheduler.
Jan 23 17:43:34 nas kernel: [ 2.346407] rtc_cmos 00:01: setting system clock to 2020-01-23 16:36:15 UTC (1579797375)
Jan 23 17:43:34 nas kernel: [ 2.347312] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
Jan 23 17:43:34 nas kernel: [ 2.347365] EDD information not available.
Jan 23 17:43:34 nas kernel: [ 2.349704] Freeing unused kernel image memory: 2428K
Jan 23 17:43:34 nas kernel: [ 2.376129] Write protecting the kernel read-only data: 20480k
Jan 23 17:43:34 nas kernel: [ 2.376864] Freeing unused kernel image memory: 2008K
Jan 23 17:43:34 nas kernel: [ 2.377358] Freeing unused kernel image memory: 1884K
Jan 23 17:43:34 nas kernel: [ 2.386012] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 23 17:43:34 nas systemd[1]: Listening on UUID daemon activation socket.
Jan 23 17:43:34 nas kernel: [ 2.386060] x86/mm: Checking user space page tables
Jan 23 17:43:34 nas kernel: [ 2.394333] x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 23 17:43:34 nas kernel: [ 2.475420] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
Jan 23 17:43:34 nas kernel: [ 2.475487] r8169 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
Jan 23 17:43:34 nas kernel: [ 2.475961] r8169 0000:03:00.0 eth0: RTL8168d/8111d at 0x(____ptrval____), xx:xx:xx:xx:xx:xx, XID 083000c0 IRQ 29
Jan 23 17:43:34 nas kernel: [ 2.476056] r8169 0000:03:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
Jan 23 17:43:34 nas kernel: [ 2.481746] hidraw: raw HID events driver (C) Jiri Kosina
Jan 23 17:43:34 nas kernel: [ 2.483722] ahci 0000:00:1f.2: version 3.0
Jan 23 17:43:34 nas kernel: [ 2.483982] ahci 0000:00:1f.2: SSS flag set, parallel bus scan disabled
Jan 23 17:43:34 nas kernel: [ 2.494164] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 3 Gbps 0x3f impl SATA mode
Jan 23 17:43:34 nas kernel: [ 2.494240] ahci 0000:00:1f.2: flags: 64bit ncq sntf stag pm led clo pmp pio slum part ems apst
Jan 23 17:43:34 nas kernel: [ 2.514855] r8169 0000:03:00.0 enp3s0: renamed from eth0
Jan 23 17:43:34 nas kernel: [ 2.518313] usbcore: registered new interface driver usbhid
Jan 23 17:43:34 nas systemd[1]: Listening on ACPID Listen Socket.
Jan 23 17:43:34 nas kernel: [ 2.518367] usbhid: USB HID core driver
Jan 23 17:43:34 nas kernel: [ 2.538435] input: CHESEN USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1:1.0/0003:0A81:0101.0001/input/input3
Jan 23 17:43:34 nas kernel: [ 2.556636] scsi host0: ahci
Jan 23 17:43:34 nas kernel: [ 2.556800] scsi host1: ahci
Jan 23 17:43:34 nas kernel: [ 2.556960] scsi host2: ahci
Jan 23 17:43:34 nas kernel: [ 2.557107] scsi host3: ahci
Jan 23 17:43:34 nas kernel: [ 2.557249] scsi host4: ahci
Jan 23 17:43:34 nas kernel: [ 2.557379] scsi host5: ahci
Jan 23 17:43:34 nas kernel: [ 2.557469] ata1: SATA max UDMA/133 abar m2048@0xfbffc000 port 0xfbffc100 irq 30
Jan 23 17:43:34 nas lxcfs[2968]: mount namespace: 5
Jan 23 17:43:34 nas kernel: [ 2.557548] ata2: SATA max UDMA/133 abar m2048@0xfbffc000 port 0xfbffc180 irq 30
Jan 23 17:43:34 nas kernel: [ 2.557626] ata3: SATA max UDMA/133 abar m2048@0xfbffc000 port 0xfbffc200 irq 30
Jan 23 17:43:34 nas kernel: [ 2.557703] ata4: SATA max UDMA/133 abar m2048@0xfbffc000 port 0xfbffc280 irq 30
Jan 23 17:43:34 nas kernel: [ 2.557780] ata5: SATA max UDMA/133 abar m2048@0xfbffc000 port 0xfbffc300 irq 30
Jan 23 17:43:34 nas kernel: [ 2.557858] ata6: SATA max UDMA/133 abar m2048@0xfbffc000 port 0xfbffc380 irq 30
Jan 23 17:43:34 nas kernel: [ 2.596176] hid-generic 0003:0A81:0101.0001: input,hidraw0: USB HID v1.10 Keyboard [CHESEN USB Keyboard] on usb-0000:00:1d.0-1/input0
Jan 23 17:43:34 nas kernel: [ 2.596390] input: CHESEN USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1:1.1/0003:0A81:0101.0002/input/input4
Jan 23 17:43:34 nas kernel: [ 2.656251] hid-generic 0003:0A81:0101.0002: input,hidraw1: USB HID v1.10 Device [CHESEN USB Keyboard] on usb-0000:00:1d.0-1/input1
Jan 23 17:43:34 nas kernel: [ 2.871613] ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 17:43:34 nas kernel: [ 2.972214] clocksource: Switched to clocksource tsc
Jan 23 17:43:34 nas lxcfs[2968]: hierarchies:
Jan 23 17:43:34 nas kernel: [ 3.065540] pci 0000:00:00.0: Intel HD Graphics Chipset
Jan 23 17:43:34 nas kernel: [ 3.065625] pci 0000:00:00.0: detected gtt size: 2097152K total, 262144K mappable
Jan 23 17:43:34 nas kernel: [ 3.066299] pci 0000:00:00.0: detected 32768K stolen memory
Jan 23 17:43:34 nas kernel: [ 3.066411] [drm] Memory usable by graphics device = 2048M
Jan 23 17:43:34 nas kernel: [ 3.066465] [drm] Replacing VGA console driver
Jan 23 17:43:34 nas kernel: [ 3.066949] Console: switching to colour dummy device 80x25
Jan 23 17:43:34 nas kernel: [ 3.073698] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
Jan 23 17:43:34 nas kernel: [ 3.073702] [drm] Driver supports precise vblank timestamp query.
Jan 23 17:43:34 nas kernel: [ 3.076251] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jan 23 17:43:34 nas kernel: [ 3.077388] ------------[ cut here ]------------
Jan 23 17:43:34 nas kernel: [ 3.077391] Could not determine valid watermarks for inherited state
Jan 23 17:43:34 nas kernel: [ 3.077457] WARNING: CPU: 0 PID: 174 at /build/linux-PIILow/linux-4.15.0/drivers/gpu/drm/i915/intel_display.c:14537 intel_modeset_init+0xfcf/0x1010 [i915]
Jan 23 17:43:34 nas lxcfs[2968]: 0: fd: 6: memory
Jan 23 17:43:34 nas lxcfs[2968]: 1: fd: 7: perf_event
Jan 23 17:43:34 nas lxcfs[2968]: 2: fd: 8: devices
Jan 23 17:43:34 nas lxcfs[2968]: 3: fd: 9: freezer
Jan 23 17:43:34 nas lxcfs[2968]: 4: fd: 10: rdma
Jan 23 17:43:34 nas lxcfs[2968]: 5: fd: 11: net_cls,net_prio
Jan 23 17:43:34 nas lxcfs[2968]: 6: fd: 12: pids
Jan 23 17:43:34 nas lxcfs[2968]: 7: fd: 13: hugetlb
Jan 23 17:43:34 nas lxcfs[2968]: 8: fd: 14: cpuset
Jan 23 17:43:34 nas lxcfs[2968]: 9: fd: 15: cpu,cpuacct
Jan 23 17:43:34 nas kernel: [ 3.077461] Modules linked in: hid_generic i915(+) video i2c_algo_bit drm_kms_helper usbhid ahci syscopyarea sysfillrect hid libahci sysimgblt fb_sys_fops drm r8169 mii
Jan 23 17:43:34 nas kernel: [ 3.077475] CPU: 0 PID: 174 Comm: systemd-udevd Not tainted 4.15.0-74-generic #84-Ubuntu
Jan 23 17:43:34 nas kernel: [ 3.077478] Hardware name: wortmann H55M-D2H/H55M-D2H, BIOS F4 02/06/2012
Jan 23 17:43:34 nas kernel: [ 3.077508] RIP: 0010:intel_modeset_init+0xfcf/0x1010 [i915]
Jan 23 17:43:34 nas kernel: [ 3.077510] RSP: 0018:ffff9c5100a439b0 EFLAGS: 00010286
Jan 23 17:43:34 nas kernel: [ 3.077513] RAX: 0000000000000000 RBX: ffff8c7247158000 RCX: ffffffff9d063a28
Jan 23 17:43:34 nas lxcfs[2968]: 10: fd: 16: blkio
Jan 23 17:43:34 nas lxcfs[2968]: 11: fd: 17: name=systemd
Jan 23 17:43:34 nas kernel: [ 3.077515] RDX: 0000000000000001 RSI: 0000000000000082 RDI: 0000000000000246
Jan 23 17:43:34 nas kernel: [ 3.077517] RBP: ffff9c5100a43a40 R08: 00000000000002b4 R09: 0720072007200720
Jan 23 17:43:34 nas lxcfs[2968]: 12: fd: 18: unified
Jan 23 17:43:34 nas kernel: [ 3.077519] R10: 0000000000000040 R11: 0774077307200764 R12: ffff8c7247a4d400
Jan 23 17:43:34 nas kernel: [ 3.077522] R13: ffff8c7247a48400 R14: 00000000ffffffea R15: ffff8c7247158358
Jan 23 17:43:34 nas kernel: [ 3.077525] FS: 00007fcce9a0d680(0000) GS:ffff8c7257c00000(0000) knlGS:0000000000000000
Jan 23 17:43:34 nas kernel: [ 3.077527] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 23 17:43:34 nas kernel: [ 3.077529] CR2: 0000561e52c8e358 CR3: 0000000107bb8004 CR4: 00000000000206f0
Jan 23 17:43:34 nas systemd[1]: Starting LXD - unix socket.
Jan 23 17:43:34 nas kernel: [ 3.077532] Call Trace:
Jan 23 17:43:34 nas kernel: [ 3.077559] i915_driver_load+0xa73/0xe60 [i915]
Jan 23 17:43:34 nas kernel: [ 3.077584] i915_pci_probe+0x42/0x70 [i915]
Jan 23 17:43:34 nas kernel: [ 3.077589] local_pci_probe+0x47/0xa0
Jan 23 17:43:34 nas kernel: [ 3.077592] pci_device_probe+0x10e/0x1c0
Jan 23 17:43:34 nas kernel: [ 3.077596] driver_probe_device+0x30c/0x490
Jan 23 17:43:34 nas systemd[1]: Started Clean PHP session files every 30 mins.
Jan 23 17:43:34 nas kernel: [ 3.077599] __driver_attach+0xcc/0xf0
Jan 23 17:43:34 nas kernel: [ 3.077601] ? driver_probe_device+0x490/0x490
Jan 23 17:43:34 nas kernel: [ 3.077604] bus_for_each_dev+0x70/0xc0
Jan 23 17:43:34 nas kernel: [ 3.077606] driver_attach+0x1e/0x20
Jan 23 17:43:34 nas kernel: [ 3.077608] bus_add_driver+0x1c7/0x270
Jan 23 17:43:34 nas kernel: [ 3.077610] ? 0xffffffffc0437000
Jan 23 17:43:34 nas kernel: [ 3.077613] driver_register+0x60/0xe0
Jan 23 17:43:34 nas systemd[1]: Starting LSB: Record successful boot for GRUB...
Jan 23 17:43:34 nas kernel: [ 3.077614] ? 0xffffffffc0437000
Jan 23 17:43:34 nas kernel: [ 3.077617] __pci_register_driver+0x5a/0x60
Jan 23 17:43:34 nas kernel: [ 3.077642] i915_init+0x5c/0x5f [i915]
Jan 23 17:43:34 nas kernel: [ 3.077647] do_one_initcall+0x52/0x19f
Jan 23 17:43:34 nas kernel: [ 3.077651] ? __vunmap+0x8e/0xc0
Jan 23 17:43:34 nas kernel: [ 3.077654] ? _cond_resched+0x19/0x40
Jan 23 17:43:34 nas kernel: [ 3.077658] ? kmem_cache_alloc_trace+0xa6/0x1b0
Jan 23 17:43:34 nas systemd[1]: Started SKS database service.
Jan 23 17:43:34 nas kernel: [ 3.077663] ? do_init_module+0x27/0x213
Jan 23 17:43:34 nas kernel: [ 3.077666] do_init_module+0x5f/0x213
Jan 23 17:43:34 nas kernel: [ 3.077668] load_module+0x16bc/0x1f10
Jan 23 17:43:34 nas kernel: [ 3.077673] ? ima_post_read_file+0x96/0xa0
Jan 23 17:43:34 nas kernel: [ 3.077676] SYSC_finit_module+0xfc/0x120
Jan 23 17:43:34 nas kernel: [ 3.077678] ? SYSC_finit_module+0xfc/0x120
Jan 23 17:43:34 nas kernel: [ 3.077681] SyS_finit_module+0xe/0x10
Jan 23 17:43:34 nas kernel: [ 3.077684] do_syscall_64+0x73/0x130
Jan 23 17:43:34 nas systemd[1]: Started SKS reconciliation service.
Jan 23 17:43:34 nas kernel: [ 3.077688] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Jan 23 17:43:34 nas kernel: [ 3.077690] RIP: 0033:0x7fcce9516839
Jan 23 17:43:34 nas kernel: [ 3.077691] RSP: 002b:00007ffcb7dc29c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
Jan 23 17:43:34 nas kernel: [ 3.077695] RAX: ffffffffffffffda RBX: 0000561e52c77250 RCX: 00007fcce9516839
Jan 23 17:43:34 nas kernel: [ 3.077697] RDX: 0000000000000000 RSI: 00007fcce91f5145 RDI: 0000000000000013
Jan 23 17:43:34 nas kernel: [ 3.077699] RBP: 00007fcce91f5145 R08: 0000000000000000 R09: 0000561e52c77250
Jan 23 17:43:34 nas kernel: [ 3.077701] R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
Jan 23 17:43:34 nas systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 17:43:34 nas kernel: [ 3.077703] R13: 0000561e52c58510 R14: 0000000000020000 R15: 0000561e52c77250
Jan 23 17:43:34 nas kernel: [ 3.077706] Code: e9 46 fc ff ff 48 c7 c6 d7 5d 3c c0 48 c7 c7 2f 51 3c c0 e8 94 68 95 db 0f 0b e9 73 fe ff ff 48 c7 c7 b0 b5 3d c0 e8 81 68 95 db <0f> 0b e9 4b fe ff ff 48 c7 c6 e4 5d 3c c0 48 c7 c7 2f 51 3c c0
Jan 23 17:43:34 nas kernel: [ 3.077728] ---[ end trace 7ffb4452df44d661 ]---
Jan 23 17:43:34 nas kernel: [ 3.083956] [drm] RC6 disabled, disabling runtime PM support
Jan 23 17:43:34 nas kernel: [ 3.084415] [drm] Initialized i915 1.6.0 20171023 for 0000:00:02.0 on minor 0
Jan 23 17:43:34 nas kernel: [ 3.268073] random: fast init done
Jan 23 17:43:34 nas systemd[1]: Starting Accounts Service...
Jan 23 17:43:34 nas kernel: [ 3.344661] fbcon: inteldrmfb (fb0) is primary device
Jan 23 17:43:34 nas kernel: [ 3.344701] random: systemd-udevd: uninitialized urandom read (16 bytes read)
Jan 23 17:43:34 nas kernel: [ 3.344752] random: systemd-udevd: uninitialized urandom read (16 bytes read)
Jan 23 17:43:34 nas kernel: [ 3.344764] random: systemd-udevd: uninitialized urandom read (16 bytes read)
Jan 23 17:43:34 nas kernel: [ 3.352105] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 23 17:43:34 nas systemd[1]: Started BIND Domain Name Server.
Jan 23 17:43:34 nas kernel: [ 3.397805] Console: switching to colour frame buffer device 180x56
Jan 23 17:43:34 nas kernel: [ 3.417372] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
Jan 23 17:43:34 nas kernel: [ 3.769092] ata2.00: ATA-8: WDC WD20EARX-00PASB0, 51.0AB51, max UDMA/133
Jan 23 17:43:34 nas kernel: [ 3.769155] ata2.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 23 17:43:34 nas kernel: [ 3.776512] ata2.00: configured for UDMA/133
Jan 23 17:43:34 nas kernel: [ 3.776876] scsi 1:0:0:0: Direct-Access ATA WDC WD20EARX-00P AB51 PQ: 0 ANSI: 5
Jan 23 17:43:34 nas systemd[1]: Started Discard unused blocks once a week.
Jan 23 17:43:34 nas kernel: [ 3.777446] sd 1:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
Jan 23 17:43:34 nas kernel: [ 3.777466] sd 1:0:0:0: Attached scsi generic sg0 type 0
Jan 23 17:43:34 nas kernel: [ 3.777495] sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 23 17:43:34 nas kernel: [ 3.777572] sd 1:0:0:0: [sda] Write Protect is off
Jan 23 17:43:34 nas kernel: [ 3.777605] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jan 23 17:43:34 nas kernel: [ 3.777625] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 17:43:34 nas systemd[1]: Starting LSB: automatic crash report generation...
Jan 23 17:43:34 nas kernel: [ 3.841221] sda: sda1 sda2
Jan 23 17:43:34 nas kernel: [ 3.841611] sd 1:0:0:0: [sda] Attached SCSI disk
Jan 23 17:43:34 nas kernel: [ 4.252089] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 23 17:43:34 nas kernel: [ 4.274076] ata3.00: ATA-8: WDC WD20EARX-00PASB0, 51.0AB51, max UDMA/133
Jan 23 17:43:34 nas kernel: [ 4.274138] ata3.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 23 17:43:34 nas kernel: [ 4.281669] ata3.00: configured for UDMA/133
Jan 23 17:43:34 nas kernel: [ 4.282162] scsi 2:0:0:0: Direct-Access ATA WDC WD20EARX-00P AB51 PQ: 0 ANSI: 5
Jan 23 17:43:34 nas kernel: [ 4.282638] sd 2:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
Jan 23 17:43:34 nas systemd[1]: Started Service for snap application nextcloud.php-fpm.
Jan 23 17:43:34 nas kernel: [ 4.282655] sd 2:0:0:0: Attached scsi generic sg1 type 0
Jan 23 17:43:34 nas kernel: [ 4.282688] sd 2:0:0:0: [sdb] 4096-byte physical blocks
Jan 23 17:43:34 nas kernel: [ 4.282765] sd 2:0:0:0: [sdb] Write Protect is off
Jan 23 17:43:34 nas kernel: [ 4.282796] sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Jan 23 17:43:34 nas kernel: [ 4.282823] sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 17:43:34 nas kernel: [ 4.338768] sdb: sdb1 sdb2
Jan 23 17:43:34 nas systemd[1]: Started Service for snap application docker.dockerd.
Jan 23 17:43:34 nas kernel: [ 4.339351] sd 2:0:0:0: [sdb] Attached SCSI disk
Jan 23 17:43:34 nas kernel: [ 4.756099] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 23 17:43:34 nas kernel: [ 4.772003] ata4.00: ATA-9: WDC WD80EZZX-11CSGA0, 83.H0A03, max UDMA/133
Jan 23 17:43:34 nas kernel: [ 4.773287] ata4.00: 15628053168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jan 23 17:43:34 nas kernel: [ 4.782483] ata4.00: configured for UDMA/133
Jan 23 17:43:34 nas kernel: [ 4.783943] scsi 3:0:0:0: Direct-Access ATA WDC WD80EZZX-11C 0A03 PQ: 0 ANSI: 5
Jan 23 17:43:34 nas systemd[1]: Starting Thermal Daemon Service...
Jan 23 17:43:34 nas kernel: [ 4.785730] sd 3:0:0:0: Attached scsi generic sg2 type 0
Jan 23 17:43:34 nas kernel: [ 4.785759] sd 3:0:0:0: [sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Jan 23 17:43:34 nas kernel: [ 4.788379] sd 3:0:0:0: [sdc] 4096-byte physical blocks
Jan 23 17:43:34 nas kernel: [ 4.789689] sd 3:0:0:0: [sdc] Write Protect is off
Jan 23 17:43:34 nas kernel: [ 4.790988] sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
Jan 23 17:43:34 nas kernel: [ 4.791002] sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 17:43:34 nas systemd[1]: Starting The Apache HTTP Server...
Jan 23 17:43:34 nas kernel: [ 4.852232] sdc: sdc1 sdc2 sdc3 sdc4 sdc5 sdc6 sdc7 sdc8 sdc9 sdc10 sdc11
Jan 23 17:43:34 nas kernel: [ 4.854225] sd 3:0:0:0: [sdc] Attached SCSI disk
Jan 23 17:43:34 nas kernel: [ 5.264091] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 23 17:43:34 nas kernel: [ 5.266231] ata5.00: ATA-10: WDC WD40EFRX-68N32N0, 82.00A82, max UDMA/133
Jan 23 17:43:34 nas kernel: [ 5.267634] ata5.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 23 17:43:34 nas kernel: [ 5.269928] ata5.00: configured for UDMA/133
Jan 23 17:43:34 nas kernel: [ 5.271624] scsi 4:0:0:0: Direct-Access ATA WDC WD40EFRX-68N 0A82 PQ: 0 ANSI: 5
Jan 23 17:43:34 nas systemd[1]: Starting System Logging Service...
Jan 23 17:43:34 nas systemd[1]: Started D-Bus System Message Bus.
Jan 23 17:43:34 nas kernel: [ 5.273532] sd 4:0:0:0: [sdd] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
Jan 23 17:43:34 nas kernel: [ 5.273572] sd 4:0:0:0: Attached scsi generic sg3 type 0
Jan 23 17:43:34 nas kernel: [ 5.274925] sd 4:0:0:0: [sdd] 4096-byte physical blocks
Jan 23 17:43:34 nas kernel: [ 5.277667] sd 4:0:0:0: [sdd] Write Protect is off
Jan 23 17:43:34 nas kernel: [ 5.278982] sd 4:0:0:0: [sdd] Mode Sense: 00 3a 00 00
Jan 23 17:43:34 nas kernel: [ 5.278995] sd 4:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 17:43:34 nas named[3073]: starting BIND 9.11.3-1ubuntu1.11-Ubuntu (Extended Support Version) <id:a375815>
Jan 23 17:43:34 nas kernel: [ 5.331423] sdd: sdd1 sdd2 sdd3 sdd4
Jan 23 17:43:34 nas kernel: [ 5.333086] sd 4:0:0:0: [sdd] Attached SCSI disk
Jan 23 17:43:34 nas kernel: [ 5.752092] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 23 17:43:34 nas kernel: [ 5.754230] ata6.00: ATA-10: WDC WD40EFRX-68N32N0, 82.00A82, max UDMA/133
Jan 23 17:43:34 nas kernel: [ 5.755648] ata6.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Jan 23 17:43:34 nas kernel: [ 5.757956] ata6.00: configured for UDMA/133
Jan 23 17:43:34 nas kernel: [ 5.759611] scsi 5:0:0:0: Direct-Access ATA WDC WD40EFRX-68N 0A82 PQ: 0 ANSI: 5
Jan 23 17:43:34 nas kernel: [ 5.761201] sd 5:0:0:0: [sde] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
Jan 23 17:43:34 nas kernel: [ 5.761222] sd 5:0:0:0: Attached scsi generic sg4 type 0
Jan 23 17:43:34 nas kernel: [ 5.763104] sd 5:0:0:0: [sde] 4096-byte physical blocks
Jan 23 17:43:34 nas kernel: [ 5.766426] sd 5:0:0:0: [sde] Write Protect is off
Jan 23 17:43:34 nas kernel: [ 5.767814] sd 5:0:0:0: [sde] Mode Sense: 00 3a 00 00
Jan 23 17:43:34 nas kernel: [ 5.767829] sd 5:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 17:43:34 nas kernel: [ 5.811099] sde: sde1 sde2 sde3 sde4
Jan 23 17:43:34 nas kernel: [ 5.812982] sd 5:0:0:0: [sde] Attached SCSI disk
Jan 23 17:43:34 nas kernel: [ 6.038818] random: crng init done
Jan 23 17:43:34 nas kernel: [ 6.040761] random: 7 urandom warning(s) missed due to ratelimiting
Jan 23 17:43:34 nas kernel: [ 6.433314] md/raid1:md7: active with 1 out of 1 mirrors
Jan 23 17:43:34 nas kernel: [ 6.439254] md7: detected capacity change from 0 to 999863353344
Jan 23 17:43:34 nas kernel: [ 6.468010] raid6: sse2x1 gen() 7221 MB/s
Jan 23 17:43:34 nas kernel: [ 6.516011] raid6: sse2x1 xor() 4775 MB/s
Jan 23 17:43:34 nas kernel: [ 6.557117] md/raid1:md5: active with 1 out of 1 mirrors
Jan 23 17:43:34 nas kernel: [ 6.562858] md5: detected capacity change from 0 to 999863353344
Jan 23 17:43:34 nas kernel: [ 6.564202] raid6: sse2x2 gen() 9165 MB/s
Jan 23 17:43:34 nas kernel: [ 6.612037] raid6: sse2x2 xor() 6820 MB/s
Jan 23 17:43:34 nas kernel: [ 6.660019] raid6: sse2x4 gen() 9601 MB/s
Jan 23 17:43:34 nas kernel: [ 6.670521] md/raid1:md6: active with 1 out of 1 mirrors
Jan 23 17:43:34 nas kernel: [ 6.702075] md6: detected capacity change from 0 to 999863353344
Jan 23 17:43:34 nas kernel: [ 6.708038] raid6: sse2x4 xor() 6834 MB/s
Jan 23 17:43:34 nas kernel: [ 6.709364] raid6: using algorithm sse2x4 gen() 9601 MB/s
Jan 23 17:43:34 nas kernel: [ 6.710690] raid6: .... xor() 6834 MB/s, rmw enabled
Jan 23 17:43:34 nas kernel: [ 6.712024] raid6: using ssse3x2 recovery algorithm
Jan 23 17:43:34 nas kernel: [ 6.715142] xor: measuring software checksum speed
Jan 23 17:43:34 nas kernel: [ 6.756034] prefetch64-sse: 13127.000 MB/sec
Jan 23 17:43:34 nas kernel: [ 6.796033] generic_sse: 11624.000 MB/sec
Jan 23 17:43:34 nas kernel: [ 6.797306] xor: using function: prefetch64-sse (13127.000 MB/sec)
Jan 23 17:43:34 nas kernel: [ 6.800362] async_tx: api initialized (async)
Jan 23 17:43:34 nas kernel: [ 6.812602] md/raid:md3: device sdc7 operational as raid disk 3
Jan 23 17:43:34 nas kernel: [ 6.813368] md/raid:md1: device sdc5 operational as raid disk 3
Jan 23 17:43:34 nas kernel: [ 6.814025] md/raid:md2: device sdc6 operational as raid disk 3
Jan 23 17:43:34 nas kernel: [ 6.814026] md/raid:md2: device sdb2 operational as raid disk 2
Jan 23 17:43:34 nas kernel: [ 6.814027] md/raid:md2: device sde2 operational as raid disk 0
Jan 23 17:43:34 nas kernel: [ 6.814028] md/raid:md2: device sdd2 operational as raid disk 1
Jan 23 17:43:34 nas kernel: [ 6.814402] md/raid:md3: device sda1 operational as raid disk 2
Jan 23 17:43:34 nas kernel: [ 6.814773] md/raid:md2: raid level 5 active with 4 out of 4 devices, algorithm 2
Jan 23 17:43:34 nas kernel: [ 6.814859] md/raid:md4: device sdc8 operational as raid disk 3
Jan 23 17:43:34 nas kernel: [ 6.814860] md/raid:md4: device sda2 operational as raid disk 2
Jan 23 17:43:34 nas kernel: [ 6.814861] md/raid:md4: device sde4 operational as raid disk 0
Jan 23 17:43:34 nas kernel: [ 6.814862] md/raid:md4: device sdd4 operational as raid disk 1
Jan 23 17:43:34 nas kernel: [ 6.815238] md/raid:md4: raid level 5 active with 4 out of 4 devices, algorithm 2
Jan 23 17:43:34 nas kernel: [ 6.815966] md/raid:md1: device sdb1 operational as raid disk 2
Jan 23 17:43:34 nas kernel: [ 6.815967] md/raid:md1: device sde1 operational as raid disk 0
Jan 23 17:43:34 nas kernel: [ 6.815968] md/raid:md1: device sdd1 operational as raid disk 1
Jan 23 17:43:34 nas kernel: [ 6.816432] md/raid:md1: raid level 5 active with 4 out of 4 devices, algorithm 2
Jan 23 17:43:34 nas kernel: [ 6.817268] md/raid:md3: device sde3 operational as raid disk 0
Jan 23 17:43:34 nas kernel: [ 6.834590] md/raid:md3: device sdd3 operational as raid disk 1
Jan 23 17:43:34 nas kernel: [ 6.836057] md/raid:md3: raid level 5 active with 4 out of 4 devices, algorithm 2
Jan 23 17:43:34 nas kernel: [ 6.870862] md4: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 6.953102] md2: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 7.065292] md3: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 7.065296] md1: detected capacity change from 0 to 2999590060032
Jan 23 17:43:34 nas kernel: [ 122.791708] Btrfs loaded, crc32c=crc32c-intel
Jan 23 17:43:34 nas kernel: [ 123.370491] EXT4-fs (sdc2): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 275.610958] ip_tables: (C) 2000-2006 Netfilter Core Team
Jan 23 17:43:34 nas kernel: [ 275.635129] systemd[1]: systemd 237 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
Jan 23 17:43:34 nas kernel: [ 275.656197] systemd[1]: Detected architecture x86-64.
Jan 23 17:43:34 nas kernel: [ 275.683378] systemd[1]: Set hostname to <nas>.
Jan 23 17:43:34 nas kernel: [ 279.427419] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 23 17:43:34 nas kernel: [ 279.430401] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:43:34 nas kernel: [ 279.433878] systemd[1]: Created slice System Slice.
Jan 23 17:43:34 nas kernel: [ 279.436730] systemd[1]: Listening on udev Control Socket.
Jan 23 17:43:34 nas kernel: [ 279.439611] systemd[1]: Listening on Journal Audit Socket.
Jan 23 17:43:34 nas kernel: [ 279.442482] systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 23 17:43:34 nas kernel: [ 279.445369] systemd[1]: Listening on LVM2 metadata daemon socket.
Jan 23 17:43:34 nas kernel: [ 279.635478] EXT4-fs (sdc2): re-mounted. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 279.721873] Loading iSCSI transport class v2.0-870.
Jan 23 17:43:34 nas kernel: [ 279.824882] RPC: Registered named UNIX socket transport module.
Jan 23 17:43:34 nas kernel: [ 279.825960] RPC: Registered udp transport module.
Jan 23 17:43:34 nas kernel: [ 279.827017] RPC: Registered tcp transport module.
Jan 23 17:43:34 nas kernel: [ 279.828095] RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 17:43:34 nas kernel: [ 279.871330] iscsi: registered transport (tcp)
Jan 23 17:43:34 nas kernel: [ 280.013556] systemd-journald[847]: Received request to flush runtime journal from PID 1
Jan 23 17:43:34 nas kernel: [ 280.135237] iscsi: registered transport (iser)
Jan 23 17:43:34 nas kernel: [ 280.150292] arp_tables: arp_tables: (C) 2002 David S. Miller
Jan 23 17:43:34 nas kernel: [ 280.180394] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 17:43:34 nas kernel: [ 280.189034] Bridge firewalling registered
Jan 23 17:43:34 nas kernel: [ 280.199590] ip6_tables: (C) 2000-2006 Netfilter Core Team
Jan 23 17:43:34 nas kernel: [ 280.301386] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Jan 23 17:43:34 nas kernel: [ 281.552439] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 23 17:43:34 nas kernel: [ 281.600622] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042F conflicts with OpRegion 0x000000000000042C-0x000000000000042D (\GP2C) (20170831/utaddress-247)
Jan 23 17:43:34 nas kernel: [ 281.600629] ACPI Warning: SystemIO range 0x0000000000000428-0x000000000000042F conflicts with OpRegion 0x000000000000042C-0x000000000000042D (\GP2C) (20170831/utaddress-247)
Jan 23 17:43:34 nas kernel: [ 281.600633] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
Jan 23 17:43:34 nas kernel: [ 281.600664] lpc_ich: Resource conflict(s) found affecting gpio_ich
Jan 23 17:43:34 nas kernel: [ 282.064785] kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround
Jan 23 17:43:34 nas kernel: [ 282.389134] snd_hda_intel 0000:00:1b.0: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
Jan 23 17:43:34 nas kernel: [ 282.772312] snd_hda_codec_realtek hdaudioC0D2: autoconfig for ALC887: line_outs=4 (0x14/0x15/0x16/0x17/0x0) type:line
Jan 23 17:43:34 nas kernel: [ 282.772315] snd_hda_codec_realtek hdaudioC0D2: speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
Jan 23 17:43:34 nas kernel: [ 282.772316] snd_hda_codec_realtek hdaudioC0D2: hp_outs=1 (0x1b/0x0/0x0/0x0/0x0)
Jan 23 17:43:34 nas kernel: [ 282.772317] snd_hda_codec_realtek hdaudioC0D2: mono: mono_out=0x0
Jan 23 17:43:34 nas kernel: [ 282.772318] snd_hda_codec_realtek hdaudioC0D2: dig-out=0x1e/0x0
Jan 23 17:43:34 nas kernel: [ 282.772319] snd_hda_codec_realtek hdaudioC0D2: inputs:
Jan 23 17:43:34 nas kernel: [ 282.772321] snd_hda_codec_realtek hdaudioC0D2: Rear Mic=0x18
Jan 23 17:43:34 nas kernel: [ 282.772322] snd_hda_codec_realtek hdaudioC0D2: Front Mic=0x19
Jan 23 17:43:34 nas kernel: [ 282.772323] snd_hda_codec_realtek hdaudioC0D2: Line=0x1a
Jan 23 17:43:34 nas kernel: [ 282.772323] snd_hda_codec_realtek hdaudioC0D2: CD=0x1c
Jan 23 17:43:34 nas kernel: [ 282.772324] snd_hda_codec_realtek hdaudioC0D2: dig-in=0x1f
Jan 23 17:43:34 nas kernel: [ 282.964332] input: HDA Intel MID Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input5
Jan 23 17:43:34 nas kernel: [ 282.964401] input: HDA Intel MID Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6
Jan 23 17:43:34 nas kernel: [ 282.964459] input: HDA Intel MID Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7
Jan 23 17:43:34 nas kernel: [ 282.964531] input: HDA Intel MID Line Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8
Jan 23 17:43:34 nas kernel: [ 282.964593] input: HDA Intel MID Line Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9
Jan 23 17:43:34 nas kernel: [ 282.964652] input: HDA Intel MID Line Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10
Jan 23 17:43:34 nas kernel: [ 282.964710] input: HDA Intel MID Line Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11
Jan 23 17:43:34 nas kernel: [ 282.964773] input: HDA Intel MID HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12
Jan 23 17:43:34 nas kernel: [ 287.557349] gpio_ich: GPIO from 436 to 511 on gpio_ich
Jan 23 17:43:34 nas kernel: [ 288.045510] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 288.419723] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 288.667958] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 289.462175] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 289.773794] Adding 4194300k swap on /dev/sdc3. Priority:-2 extents:1 across:4194300k FS
Jan 23 17:43:34 nas kernel: [ 370.318756] audit: type=1400 audit(1579797743.465:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1746 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.330353] audit: type=1400 audit(1579797743.477:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=1747 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.330357] audit: type=1400 audit(1579797743.477:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=1747 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.330360] audit: type=1400 audit(1579797743.477:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=1747 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.396718] audit: type=1400 audit(1579797743.545:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1748 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.396721] audit: type=1400 audit(1579797743.545:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1748 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.400207] audit: type=1400 audit(1579797743.549:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1744 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.400211] audit: type=1400 audit(1579797743.549:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=1744 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.400213] audit: type=1400 audit(1579797743.549:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=1744 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 370.400214] audit: type=1400 audit(1579797743.549:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=1744 comm="apparmor_parser"
Jan 23 17:43:34 nas kernel: [ 374.172956] r8169 0000:03:00.0 enp3s0: link down
Jan 23 17:43:34 nas kernel: [ 374.172971] r8169 0000:03:00.0 enp3s0: link down
Jan 23 17:43:34 nas kernel: [ 374.173048] IPv6: ADDRCONF(NETDEV_UP): enp3s0: link is not ready
Jan 23 17:43:34 nas kernel: [ 374.266175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jan 23 17:43:34 nas kernel: [ 374.271977] NFSD: starting 90-second grace period (net f0000099)
Jan 23 17:43:34 nas kernel: [ 376.473447] r8169 0000:03:00.0 enp3s0: link up
Jan 23 17:43:34 nas kernel: [ 376.473459] IPv6: ADDRCONF(NETDEV_CHANGE): enp3s0: link becomes ready
Jan 23 17:43:34 nas kernel: [ 408.927845] device-mapper: raid: Loading target version 1.13.0
Jan 23 17:43:34 nas kernel: [ 409.032190] md/raid1:mdX: not clean -- starting background reconstruction
Jan 23 17:43:34 nas kernel: [ 409.032192] md/raid1:mdX: active with 2 out of 2 mirrors
Jan 23 17:43:34 nas kernel: [ 409.115523] EXT4-fs (dm-17): warning: mounting fs with errors, running e2fsck is recommended
Jan 23 17:43:34 nas kernel: [ 409.157836] EXT4-fs (dm-13): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 409.372440] EXT4-fs (dm-6): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 409.460599] EXT4-fs (dm-8): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 409.461070] EXT4-fs (dm-17): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 409.505924] EXT4-fs (dm-15): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 409.664648] md: resync of RAID array mdX
Jan 23 17:43:34 nas kernel: [ 409.664813] md: mdX: resync done.
Jan 23 17:43:34 nas kernel: [ 410.012934] EXT4-fs (dm-5): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.013194] EXT4-fs (dm-9): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.013488] EXT4-fs (dm-16): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.102753] EXT4-fs (dm-10): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.240185] EXT4-fs (dm-14): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.399385] EXT4-fs (dm-7): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.461693] EXT4-fs (dm-12): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.514559] EXT4-fs (dm-11): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 410.611120] md/raid1:mdX: active with 2 out of 2 mirrors
Jan 23 17:43:34 nas kernel: [ 410.932052] md/raid1:mdX: not clean -- starting background reconstruction
Jan 23 17:43:34 nas kernel: [ 410.932054] md/raid1:mdX: active with 2 out of 2 mirrors
Jan 23 17:43:34 nas kernel: [ 410.949011] EXT4-fs (dm-24): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 411.109037] md: resync of RAID array mdX
Jan 23 17:43:34 nas kernel: [ 411.109220] md: mdX: resync done.
Jan 23 17:43:34 nas kernel: [ 411.369163] EXT4-fs (dm-29): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 411.807026] EXT4-fs (dm-34): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 17:43:34 nas kernel: [ 440.240725] new mount options do not match the existing superblock, will be ignored
Jan 23 17:43:34 nas named[3073]: running on Linux x86_64 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019
Jan 23 17:43:34 nas dbus-daemon[3172]: dbus[3172]: Unknown group "power" in message bus configuration file
Jan 23 17:43:34 nas named[3073]: built with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=/usr/lib/x86_64-linux-gnu' '--libexecdir=/usr/lib/x86_64-linux-gnu' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-libjson=/usr' '--without-lmdb' '--with-gnu-ld' '--with-geoip=/usr' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--enable-native-pkcs11' '--with-pkcs11=/usr/lib/softhsm/libsofthsm2.so' '--with-randomdev=/dev/urandom' '--with-eddsa=no' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/bind9-uW3Pyl/bind9-9.11.3+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
Jan 23 17:43:34 nas named[3073]: running as: named -f -u bind
Jan 23 17:43:34 nas named[3073]: ----------------------------------------------------
Jan 23 17:43:34 nas named[3073]: BIND 9 is maintained by Internet Systems Consortium,
Jan 23 17:43:34 nas named[3073]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
Jan 23 17:43:34 nas named[3073]: corporation. Support and training for BIND 9 are
Jan 23 17:43:34 nas named[3073]: available at https://www.isc.org/support
Jan 23 17:43:34 nas named[3073]: ----------------------------------------------------
Jan 23 17:43:34 nas named[3073]: adjusted limit on open files from 4096 to 1048576
Jan 23 17:43:34 nas named[3073]: found 4 CPUs, using 4 worker threads
Jan 23 17:43:34 nas named[3073]: using 3 UDP listeners per interface
Jan 23 17:43:34 nas named[3073]: using up to 4096 sockets
Jan 23 17:43:34 nas rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.32.0]
Jan 23 17:43:34 nas rsyslogd: rsyslogd's groupid changed to 106
Jan 23 17:43:34 nas rsyslogd: rsyslogd's userid changed to 102
Jan 23 17:43:34 nas rsyslogd: [origin software="rsyslogd" swVersion="8.32.0" x-pid="3166" x-info="http://www.rsyslog.com"] start
Jan 23 17:43:34 nas named[3073]: loading configuration from '/etc/bind/named.conf'
Jan 23 17:43:35 nas named[3073]: reading built-in trust anchors from file '/etc/bind/bind.keys'
Jan 23 17:43:35 nas named[3073]: initializing GeoIP Country (IPv4) (type 1) DB
Jan 23 17:43:35 nas named[3073]: GEO-106FREE 20180315 Build
Jan 23 17:43:35 nas named[3073]: initializing GeoIP Country (IPv6) (type 12) DB
Jan 23 17:43:35 nas named[3073]: GEO-106FREE 20180315 Build
Jan 23 17:43:35 nas named[3073]: GeoIP City (IPv4) (type 2) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP City (IPv4) (type 6) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP City (IPv6) (type 30) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP City (IPv6) (type 31) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP Region (type 3) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP Region (type 7) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP ISP (type 4) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP Org (type 5) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP AS (type 9) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP Domain (type 11) DB not available
Jan 23 17:43:35 nas named[3073]: GeoIP NetSpeed (type 10) DB not available
Jan 23 17:43:35 nas named[3073]: using default UDP/IPv4 port range: [32768, 60999]
Jan 23 17:43:35 nas named[3073]: using default UDP/IPv6 port range: [32768, 60999]
Jan 23 17:43:35 nas named[3073]: listening on IPv6 interfaces, port 53
Jan 23 17:43:35 nas dbus-daemon[3172]: [system] AppArmor D-Bus mediation is enabled
Jan 23 17:43:35 nas systemd[1]: Starting Samba AD Daemon...
Jan 23 17:43:35 nas systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 23 17:43:35 nas systemd[1]: Started Message of the Day.
Jan 23 17:43:35 nas named[3073]: listening on IPv4 interface lo, 127.0.0.1#53
Jan 23 17:43:35 nas named[3073]: listening on IPv4 interface enp3s0, 192.168.2.2#53
Jan 23 17:43:35 nas named[3073]: generating session key for dynamic DNS
Jan 23 17:43:35 nas thermald[3078]: NO RAPL sysfs present
Jan 23 17:43:35 nas thermald[3078]: 11 CPUID levels; family:model:stepping 0x6:25:5 (6:37:5)
Jan 23 17:43:35 nas thermald[3078]: Need Linux PowerCap sysfs
Jan 23 17:43:35 nas systemd[1]: Started Service for snap application nextcloud.mysql.
Jan 23 17:43:35 nas systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 23 17:43:35 nas systemd[1]: Started Regular background program processing daemon.
Jan 23 17:43:35 nas systemd[1]: Starting Dispatcher daemon for systemd-networkd...
Jan 23 17:43:35 nas systemd[1]: Started irqbalance daemon.
Jan 23 17:43:35 nas systemd[1]: Started Service for snap application nextcloud.redis-server.
Jan 23 17:43:35 nas systemd[1]: Started Self Monitoring and Reporting Technology (SMART) Daemon.
Jan 23 17:43:35 nas systemd[1]: Started Daily apt download activities.
Jan 23 17:43:35 nas systemd[1]: Started Daily apt upgrade and clean activities.
Jan 23 17:43:35 nas systemd[1]: Started Service for snap application canonical-livepatch.canonical-livepatchd.
Jan 23 17:43:35 nas systemd[1]: Started Service for snap application nextcloud.apache.
Jan 23 17:43:35 nas systemd[1]: Started Service for snap application nextcloud.mdns-publisher.
Jan 23 17:43:35 nas systemd[1]: Starting Permit User Sessions...
Jan 23 17:43:35 nas systemd[1]: Starting Login Service...
Jan 23 17:43:35 nas named[3073]: sizing zone task pool based on 7 zones
Jan 23 17:43:35 nas systemd[1]: Started Service for snap application nextcloud.renew-certs.
Jan 23 17:43:35 nas named[3073]: Loading 'AD DNS Zone' using driver dlopen
Jan 23 17:43:36 nas cron[3351]: (CRON) INFO (pidfile fd = 3)
Jan 23 17:43:36 nas systemd[1]: Started Set the CPU Frequency Scaling governor.
Jan 23 17:43:36 nas thermald[3078]: Unsupported cpu model, use thermal-conf.xml file or run with --ignore-cpuid-check
Jan 23 17:43:36 nas thermald[3078]: THD engine start failed
Jan 23 17:43:36 nas dbus-daemon[3172]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.3' (uid=0 pid=3068 comm="/usr/lib/accountsservice/accounts-daemon " label="unconfined")
Jan 23 17:43:36 nas systemd[1]: Starting Socket activation for snappy daemon.
Jan 23 17:43:36 nas systemd[1]: Started ACPI Events Check.
Jan 23 17:43:36 nas systemd[1]: Started System Logging Service.
Jan 23 17:43:36 nas systemd[1]: Stopped LVM2 PV scan on device 253:19.
Jan 23 17:43:36 nas systemd[1]: Stopped LVM2 PV scan on device 8:36.
Jan 23 17:43:36 nas systemd[1]: Stopped LVM2 PV scan on device 253:3.
Jan 23 17:43:36 nas systemd[1]: Listening on LXD - unix socket.
Jan 23 17:43:36 nas systemd[1]: Started Thermal Daemon Service.
Jan 23 17:43:36 nas systemd[1]: Started Permit User Sessions.
Jan 23 17:43:36 nas systemd[1]: Listening on Socket activation for snappy daemon.
Jan 23 17:43:36 nas grub-common[3052]: * Recording successful boot for GRUB
Jan 23 17:43:36 nas apport[3074]: * Starting automatic crash report generation: apport
Jan 23 17:43:36 nas systemd[1]: Starting Authorization Manager...
Jan 23 17:43:36 nas smartd[3373]: smartd 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-74-generic] (local build)
Jan 23 17:43:36 nas smartd[3373]: Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Jan 23 17:43:36 nas smartd[3373]: Opened configuration file /etc/smartd.conf
Jan 23 17:43:36 nas smartd[3373]: Drive: DEVICESCAN, implied '-a' Directive on line 21 of file /etc/smartd.conf
Jan 23 17:43:36 nas smartd[3373]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan 23 17:43:36 nas smartd[3373]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Jan 23 17:43:36 nas smartd[3373]: Device: /dev/sda [SAT], opened
Jan 23 17:43:36 nas smartd[3373]: Device: /dev/sda [SAT], WDC WD20EARX-00PASB0, S/N:WD-WMAZA6379497, WWN:5-0014ee-0586641dd, FW:51.0AB51, 2.00 TB
Jan 23 17:43:36 nas smartd[3373]: Device: /dev/sda [SAT], found in smartd database: Western Digital Green
Jan 23 17:43:36 nas systemd[1]: Starting Clean php session files...
Jan 23 17:43:36 nas systemd[1]: Starting Snappy daemon...
Jan 23 17:43:36 nas systemd[1]: Starting Terminate Plymouth Boot Screen...
Jan 23 17:43:36 nas systemd[1]: Starting Hold until boot process finishes up...
Jan 23 17:43:36 nas systemd[1]: Starting LXD - container startup/shutdown...
Jan 23 17:43:36 nas systemd[1]: Stopping LVM2 metadata daemon...
Jan 23 17:43:36 nas systemd[1]: Started Hold until boot process finishes up.
Jan 23 17:43:36 nas systemd[1]: Started Getty on tty1.
Jan 23 17:43:36 nas systemd[1]: Started Terminate Plymouth Boot Screen.
Jan 23 17:43:36 nas apport[3074]: ...done.
Jan 23 17:43:36 nas systemd[1]: Started LSB: automatic crash report generation.
Jan 23 17:43:36 nas systemd[1]: Started Login Service.
Jan 23 17:43:36 nas systemd[1]: Started Unattended Upgrades Shutdown.
Jan 23 17:43:36 nas cron[3351]: (CRON) INFO (Running @reboot jobs)
Jan 23 17:43:36 nas smartd[3373]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD20EARX_00PASB0-WD_WMAZA6379497.ata.state
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdb, type changed from 'scsi' to 'sat'
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdb [SAT], opened
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdb [SAT], WDC WD20EARX-00PASB0, S/N:WD-WMAZA8492756, WWN:5-0014ee-25c303777, FW:51.0AB51, 2.00 TB
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdb [SAT], found in smartd database: Western Digital Green
Jan 23 17:43:37 nas grub-common[3052]: ...done.
Jan 23 17:43:37 nas systemd[1]: Started LSB: Record successful boot for GRUB.
Jan 23 17:43:37 nas polkitd[3629]: started daemon version 0.105 using authority implementation `local' version `0.105'
Jan 23 17:43:37 nas dbus-daemon[3172]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 23 17:43:37 nas systemd[1]: Started Authorization Manager.
Jan 23 17:43:37 nas accounts-daemon[3068]: started daemon version 0.6.45
Jan 23 17:43:37 nas systemd[1]: Started Accounts Service.
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdb [SAT], is SMART capable. Adding to "monitor" list.
Jan 23 17:43:37 nas networkd-dispatcher[3354]: No valid path found for iwconfig
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdb [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD20EARX_00PASB0-WD_WMAZA8492756.ata.state
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdc, type changed from 'scsi' to 'sat'
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdc [SAT], opened
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdc [SAT], WDC WD80EZZX-11CSGA0, S/N:VLHV6PTY, WWN:5-000cca-260da0250, FW:83.H0A03, 8.00 TB
Jan 23 17:43:37 nas smartd[3373]: Device: /dev/sdc [SAT], not found in smartd database.
Jan 23 17:43:38 nas sks[3062]: 2020-01-23 17:43:38 sks_recon, SKS version 1.1.6
Jan 23 17:43:38 nas sks[3055]: 2020-01-23 17:43:38 sks_db, SKS version 1.1.6
Jan 23 17:43:38 nas sks[3055]: 2020-01-23 17:43:38 Using BerkelyDB version 5.3.28
Jan 23 17:43:38 nas sks[3055]: 2020-01-23 17:43:38 Copyright Yaron Minsky 2002, 2003, 2004
Jan 23 17:43:38 nas sks[3055]: 2020-01-23 17:43:38 Licensed under GPL. See LICENSE file for details
Jan 23 17:43:38 nas sks[3055]: 2020-01-23 17:43:38 http port: 11371
Jan 23 17:43:38 nas sks[3062]: 2020-01-23 17:43:38 Using BerkelyDB version 5.3.28
Jan 23 17:43:38 nas sks[3062]: 2020-01-23 17:43:38 Copyright Yaron Minsky 2002-2013
Jan 23 17:43:38 nas sks[3062]: 2020-01-23 17:43:38 Licensed under GPL. See LICENSE file for details
Jan 23 17:43:38 nas sks[3062]: 2020-01-23 17:43:38 Opening PTree database
Jan 23 17:43:38 nas systemd[1]: Started Dispatcher daemon for systemd-networkd.
Jan 23 17:43:38 nas sks[3055]: 2020-01-23 17:43:38 Opening KeyDB database
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdc [SAT], is SMART capable. Adding to "monitor" list.
Jan 23 17:43:38 nas nextcloud.nextcloud-cron[2961]: Waiting for Nextcloud config dir... done
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdc [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD80EZZX_11CSGA0-VLHV6PTY.ata.state
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdd, type changed from 'scsi' to 'sat'
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdd [SAT], opened
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdd [SAT], WDC WD40EFRX-68N32N0, S/N:WD-WCC7K2XCACHC, WWN:5-0014ee-20fed88f2, FW:82.00A82, 4.00 TB
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdd [SAT], found in smartd database: Western Digital Red
Jan 23 17:43:38 nas smartd[3373]: Device: /dev/sdd [SAT], is SMART capable. Adding to "monitor" list.
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdd [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD40EFRX_68N32N0-WD_WCC7K2XCACHC.ata.state
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde, type changed from 'scsi' to 'sat'
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], opened
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], WDC WD40EFRX-68N32N0, S/N:WD-WCC7K5YL7ULP, WWN:5-0014ee-2ba9877e6, FW:82.00A82, 4.00 TB
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], found in smartd database: Western Digital Red
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], is SMART capable. Adding to "monitor" list.
Jan 23 17:43:39 nas nextcloud.nextcloud-fixer[2978]: Waiting for Apache...
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD40EFRX_68N32N0-WD_WCC7K5YL7ULP.ata.state
Jan 23 17:43:39 nas nextcloud.apache[3379]: Making sure nextcloud is setup...
Jan 23 17:43:39 nas nextcloud.php-fpm[3075]: Waiting for MySQL...
Jan 23 17:43:39 nas smartd[3373]: Monitoring 5 ATA/SATA, 0 SCSI/SAS and 0 NVMe devices
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sda [SAT], SMART Prefailure Attribute: 3 Spin_Up_Time changed from 174 to 171
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 108 to 109
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdb [SAT], SMART Prefailure Attribute: 3 Spin_Up_Time changed from 165 to 161
Jan 23 17:43:39 nas kernel: [ 446.406784] aufs 4.15-20180219
Jan 23 17:43:39 nas kernel: [ 446.407068] aufs aufs_fill_super:912:mount[3935]: no arg
Jan 23 17:43:39 nas nextcloud.apache[3379]: Waiting for PHP...
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdc [SAT], SMART Prefailure Attribute: 3 Spin_Up_Time changed from 175 to 168
Jan 23 17:43:39 nas kernel: [ 446.580618] overlayfs: missing 'lowerdir'
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 3 Spin_Up_Time changed from 191 to 174
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 109 to 110
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], SMART Prefailure Attribute: 3 Spin_Up_Time changed from 196 to 180
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD20EARX_00PASB0-WD_WMAZA6379497.ata.state
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdb [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD20EARX_00PASB0-WD_WMAZA8492756.ata.state
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdc [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD80EZZX_11CSGA0-VLHV6PTY.ata.state
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sdd [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD40EFRX_68N32N0-WD_WCC7K2XCACHC.ata.state
Jan 23 17:43:39 nas smartd[3373]: Device: /dev/sde [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD40EFRX_68N32N0-WD_WCC7K5YL7ULP.ata.state
Jan 23 17:43:39 nas systemd[1]: Started The Apache HTTP Server.
Jan 23 17:43:40 nas nextcloud.mdns-publisher[3380]: 2020/01/23 17:43:39 Publishing nas.local -> [192.168.2.2 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx xxxx:xxxx:xxxx:xxxx:xxxx:xxxx] with 60-second TTL
Jan 23 17:43:40 nas canonical-livepatch[3378]: starting client daemon version 9.4.8
Jan 23 17:43:40 nas canonical-livepatch[3378]: starting svc "mitigation loop"
Jan 23 17:43:40 nas canonical-livepatch[3378]: service "mitigation loop" started
Jan 23 17:43:40 nas canonical-livepatch[3378]: starting svc "socket servers"
Jan 23 17:43:40 nas canonical-livepatch[3378]: service "socket servers" started
Jan 23 17:43:40 nas canonical-livepatch[3378]: starting svc "refresh loop"
Jan 23 17:43:40 nas canonical-livepatch[3378]: service "refresh loop" started
Jan 23 17:43:40 nas canonical-livepatch[3378]: client daemon started
Jan 23 17:43:40 nas canonical-livepatch[3378]: Client.Check
Jan 23 17:43:40 nas kernel: [ 447.509184] kauditd_printk_skb: 40 callbacks suppressed
Jan 23 17:43:40 nas kernel: [ 447.509186] audit: type=1400 audit(1579797820.657:52): apparmor="DENIED" operation="capable" profile="snap.nextcloud.redis-server" pid=3930 comm="redis-server" capability=24 capname="sys_resource"
Jan 23 17:43:40 nas nmbd[2964]: [2020/01/23 17:43:40.549216, 0] ../source3/nmbd/nmbd.c:923(main)
Jan 23 17:43:40 nas nmbd[2964]: server role = 'active directory domain controller' not compatible with running nmbd standalone.
Jan 23 17:43:40 nas nmbd[2964]: You should start 'samba' instead, and it will control starting the internal nbt server
Jan 23 17:43:40 nas canonical-livepatch[3378]: Checking with livepatch service.
Jan 23 17:43:40 nas samba[3322]: [2020/01/23 17:43:40.341344, 0] ../source4/smbd/server.c:448(binary_smbd_main)
Jan 23 17:43:41 nas systemd[1]: nmbd.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:43:41 nas sks[3062]: 2020-01-23 17:43:41 Setting up PTree data structure
Jan 23 17:43:41 nas sks[3062]: 2020-01-23 17:43:41 PTree setup complete
Jan 23 17:43:41 nas samba[3322]: samba version 4.7.6-Ubuntu started.
Jan 23 17:43:41 nas systemd[1]: nmbd.service: Failed with result 'exit-code'.
Jan 23 17:43:41 nas samba[3322]: Copyright Andrew Tridgell and the Samba Team 1992-2017
Jan 23 17:43:41 nas systemd[1]: Failed to start Samba NMB Daemon.
Jan 23 17:43:41 nas set-cpufreq[3532]: Setting ondemand scheduler for all CPUs
Jan 23 17:43:41 nas systemd[1]: Starting Samba Winbind Daemon...
Jan 23 17:43:42 nas postfix/postfix-script[4167]: starting the Postfix mail system
Jan 23 17:43:42 nas kernel: [ 448.997404] audit: type=1400 audit(1579797822.149:53): apparmor="DENIED" operation="open" profile="snap.nextcloud.mysql" name="/etc/mysql/my.cnf.fallback" pid=3937 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jan 23 17:43:42 nas kernel: [ 448.997897] audit: type=1400 audit(1579797822.149:54): apparmor="DENIED" operation="capable" profile="snap.nextcloud.mysql" pid=3937 comm="mysqld" capability=24 capname="sys_resource"
Jan 23 17:43:42 nas postfix/master[4171]: daemon started -- version 3.3.0, configuration /etc/postfix
Jan 23 17:43:42 nas systemd[1]: Started Postfix Mail Transport Agent (instance -).
Jan 23 17:43:42 nas systemd[1]: Starting Postfix Mail Transport Agent...
Jan 23 17:43:42 nas systemd[1]: Started Postfix Mail Transport Agent.
Jan 23 17:43:42 nas snapd[3649]: AppArmor status: apparmor is enabled and all features are available
Jan 23 17:43:42 nas systemd[1]: winbind.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:43:42 nas systemd[1]: winbind.service: Failed with result 'exit-code'.
Jan 23 17:43:42 nas systemd[1]: Failed to start Samba Winbind Daemon.
Jan 23 17:43:42 nas systemd[1]: Starting Samba SMB Daemon...
Jan 23 17:43:43 nas sks[3055]: 2020-01-23 17:43:43 Calculating DB stats
Jan 23 17:43:43 nas sks[3055]: 2020-01-23 17:43:43 Done calculating DB stats
Jan 23 17:43:43 nas sks[3055]: 2020-01-23 17:43:43 Database opened
Jan 23 17:43:43 nas sks[3055]: 2020-01-23 17:43:43 Applied filters: yminsky.dedup, yminsky.merge
Jan 23 17:43:44 nas nextcloud.mysql[3338]: 2020-01-23T16:43:42.151498Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
Jan 23 17:43:44 nas nextcloud.mysql[3338]: 2020-01-23T16:43:42.151599Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
Jan 23 17:43:44 nas nextcloud.mysql[3338]: 2020-01-23T16:43:42.839077Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
Jan 23 17:43:44 nas nextcloud.mysql[3338]: 2020-01-23T16:43:44.021412Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
Jan 23 17:43:44 nas nextcloud.mysql[3338]: 2020-01-23T16:43:44.021548Z 0 [ERROR] Aborting
Jan 23 17:43:44 nas smbd[4201]: [2020/01/23 17:43:43.984830, 0] ../source3/smbd/server.c:1815(main)
Jan 23 17:43:44 nas smbd[4201]: server role = 'active directory domain controller' not compatible with running smbd standalone.
Jan 23 17:43:44 nas smbd[4201]: You should start 'samba' instead, and it will control starting smbd if required
Jan 23 17:43:44 nas systemd[1]: smbd.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:43:44 nas systemd[1]: smbd.service: Failed with result 'exit-code'.
Jan 23 17:43:44 nas systemd[1]: Failed to start Samba SMB Daemon.
Jan 23 17:43:44 nas snapd[3649]: AppArmor status: apparmor is enabled and all features are available
Jan 23 17:43:44 nas kernel: [ 451.398370] audit: type=1400 audit(1579797824.545:55): apparmor="DENIED" operation="exec" profile="snap.nextcloud.mysql" name="/bin/systemctl" pid=4266 comm="mysql.server" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
Jan 23 17:43:44 nas nextcloud.mysql[3338]: Starting MySQL
Jan 23 17:43:44 nas kernel: [ 451.518813] audit: type=1400 audit(1579797824.665:56): apparmor="DENIED" operation="open" profile="snap.nextcloud.mysql" name="/etc/mysql/my.cnf.fallback" pid=4288 comm="my_print_defaul" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jan 23 17:43:44 nas kernel: [ 451.774830] audit: type=1400 audit(1579797824.921:57): apparmor="DENIED" operation="open" profile="snap.nextcloud.mysql" name="/etc/mysql/my.cnf.fallback" pid=4343 comm="my_print_defaul" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jan 23 17:43:45 nas kernel: [ 451.876459] audit: type=1400 audit(1579797825.025:58): apparmor="DENIED" operation="open" profile="snap.nextcloud.renew-certs" name="/proc/3968/mounts" pid=3968 comm="python2" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jan 23 17:43:45 nas systemd[1]: Started LXD - container startup/shutdown.
Jan 23 17:43:45 nas snapd[3649]: daemon.go:346: started snapd/2.42.5 (series 16; classic) ubuntu/18.04 (amd64) linux/4.15.0-74-generic.
Jan 23 17:43:45 nas snapd[3649]: daemon.go:439: adjusting startup timeout by 50s (pessimistic estimate of 30s plus 5s per snap)
Jan 23 17:43:45 nas kernel: [ 452.246740] audit: type=1400 audit(1579797825.393:59): apparmor="DENIED" operation="open" profile="snap.nextcloud.mysql" name="/etc/mysql/my.cnf.fallback" pid=4489 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
Jan 23 17:43:45 nas kernel: [ 452.247654] audit: type=1400 audit(1579797825.393:60): apparmor="DENIED" operation="capable" profile="snap.nextcloud.mysql" pid=4489 comm="mysqld" capability=24 capname="sys_resource"
Jan 23 17:43:45 nas systemd[1]: Started Snappy daemon.
Jan 23 17:43:45 nas systemd[1]: Starting Wait until snapd is fully seeded...
Jan 23 17:43:45 nas systemd[1]: Started Wait until snapd is fully seeded.
Jan 23 17:43:45 nas systemd[1]: Starting Apply the settings specified in cloud-config...
Jan 23 17:43:46 nas samba[3322]: [2020/01/23 17:43:46.128167, 0] ../source4/smbd/server.c:620(binary_smbd_main)
Jan 23 17:43:46 nas samba[3322]: samba: using 'standard' process model
Jan 23 17:43:46 nas named[3073]: samba_dlz: started for DN DC=harms,DC=lan
Jan 23 17:43:46 nas named[3073]: samba_dlz: starting configure
Jan 23 17:43:46 nas named[3073]: samba_dlz: configured writeable zone 'harms.lan'
Jan 23 17:43:46 nas named[3073]: samba_dlz: configured writeable zone '_msdcs.xxx.lan'
Jan 23 17:43:46 nas named[3073]: none:103: 'max-cache-size 90%' - setting to 3371MB (out of 3745MB)
Jan 23 17:43:46 nas named[3073]: obtaining root key for view _default from '/etc/bind/bind.keys'
Jan 23 17:43:46 nas named[3073]: set up managed keys zone for view _default, file 'managed-keys.bind'
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 10.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 16.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 17.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 18.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 19.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 20.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 21.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 22.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 23.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 24.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 25.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 26.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 27.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 28.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 29.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 30.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 31.172.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 168.192.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 64.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 65.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 66.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 67.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 68.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 69.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 70.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 71.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 72.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 73.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 74.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 75.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 76.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 77.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 78.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 79.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 80.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 81.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 82.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 83.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 84.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 85.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 86.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 87.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 88.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 89.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 90.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 91.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 92.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 93.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 94.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 95.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 96.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 97.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 98.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 99.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 100.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 101.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 102.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 103.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 104.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 105.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 106.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 107.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 108.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 109.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 110.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 111.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 112.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 113.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 114.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 115.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 116.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 117.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 118.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 119.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 120.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 121.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 122.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 123.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 124.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 125.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 126.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 127.100.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 254.169.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 2.0.192.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 100.51.198.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 113.0.203.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 255.255.255.255.IN-ADDR.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: D.F.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 8.E.F.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 9.E.F.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: A.E.F.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: B.E.F.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
Jan 23 17:43:46 nas named[3073]: automatic empty zone: EMPTY.AS112.ARPA
Jan 23 17:43:46 nas named[3073]: none:103: 'max-cache-size 90%' - setting to 3371MB (out of 3745MB)
Jan 23 17:43:46 nas named[3073]: configuring command channel from '/etc/bind/rndc.key'
Jan 23 17:43:46 nas named[3073]: command channel listening on 127.0.0.1#953
Jan 23 17:43:46 nas named[3073]: configuring command channel from '/etc/bind/rndc.key'
Jan 23 17:43:46 nas named[3073]: command channel listening on ::1#953
Jan 23 17:43:46 nas named[3073]: managed-keys-zone: journal file is out of date: removing journal file
Jan 23 17:43:46 nas named[3073]: managed-keys-zone: loaded serial 373
Jan 23 17:43:47 nas named[3073]: zone 2.168.192.in-addr.arpa/IN: loaded serial 2015052501
Jan 23 17:43:47 nas named[3073]: zone 255.in-addr.arpa/IN: loaded serial 1
Jan 23 17:43:47 nas named[3073]: zone localhost/IN: loaded serial 2
Jan 23 17:43:47 nas named[3073]: (re)loading policy zone 'rpz' changed from 0 to 8 qname, 0 to 0 nsdname, 0 to 0 IP, 0 to 0 NSIP, 0 to 0 CLIENTIP entries
Jan 23 17:43:47 nas named[3073]: zone rpz/IN: loaded serial 2019070301
Jan 23 17:43:47 nas named[3073]: zone 0.in-addr.arpa/IN: loaded serial 1
Jan 23 17:43:47 nas systemd[1]: Started Clean php session files.
Jan 23 17:43:47 nas named[3073]: zone 127.in-addr.arpa/IN: loaded serial 1
Jan 23 17:43:47 nas named[3073]: all zones loaded
Jan 23 17:43:47 nas named[3073]: running
Jan 23 17:43:47 nas named[3073]: zone 2.168.192.in-addr.arpa/IN: sending notifies (serial 2015052501)
Jan 23 17:43:47 nas winbindd[4614]: [2020/01/23 17:43:47.531363, 0] ../source3/winbindd/winbindd_cache.c:3170(initialize_winbindd_cache)
Jan 23 17:43:47 nas winbindd[4614]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
Jan 23 17:43:47 nas cloud-init[4543]: Cloud-init v. 19.3-41-gc4735dd3-0ubuntu1~18.04.1 running 'modules:config' at Thu, 23 Jan 2020 16:43:46 +0000. Up 453.29 seconds.
Jan 23 17:43:47 nas systemd[1]: Started Apply the settings specified in cloud-config.
Jan 23 17:43:47 nas named[3073]: managed-keys-zone: Key 20326 for zone . acceptance timer complete: key now trusted
Jan 23 17:43:47 nas named[3073]: resolver priming query complete
Jan 23 17:43:47 nas canonical-livepatch[3378]: Updated last-check.
Jan 23 17:43:47 nas canonical-livepatch[3378]: No updates available at this time.
Jan 23 17:43:47 nas canonical-livepatch[3378]: during refresh: cannot apply patches: Payload does not match current kernel version.
Jan 23 17:43:49 nas nextcloud.mysql[3338]: ..... *
Jan 23 17:43:49 nas kernel: [ 456.476989] audit: type=1400 audit(1579797829.625:61): apparmor="DENIED" operation="exec" profile="snap.docker.dockerd" name="/bin/kmod" pid=4750 comm="containerd" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
Jan 23 17:43:49 nas nextcloud.mysql[3338]: Checking/upgrading mysql tables if necessary...
Jan 23 17:43:49 nas winbindd[4614]: [2020/01/23 17:43:49.840350, 0] ../lib/util/become_daemon.c:124(daemon_ready)
Jan 23 17:43:49 nas systemd[1]: Started Samba AD Daemon.
Jan 23 17:43:49 nas winbindd[4614]: STATUS=daemon 'winbindd' finished starting up and ready to serve connections
Jan 23 17:43:49 nas systemd[1]: Reached target Multi-User System.
Jan 23 17:43:49 nas systemd[1]: Starting Execute cloud user/final scripts...
Jan 23 17:43:49 nas systemd[1]: Reached target Graphical Interface.
Jan 23 17:43:49 nas systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jan 23 17:43:49 nas systemd[1]: Started Update UTMP about System Runlevel Changes.
Jan 23 17:43:50 nas smbd[4611]: [2020/01/23 17:43:50.237826, 0] ../lib/util/become_daemon.c:124(daemon_ready)
Jan 23 17:43:50 nas smbd[4611]: STATUS=daemon 'smbd' finished starting up and ready to serve connections
Jan 23 17:43:50 nas kernel: [ 457.145843] audit: type=1400 audit(1579797830.293:62): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.nextcloud-cron"
Jan 23 17:43:50 nas kernel: [ 457.146358] audit: type=1400 audit(1579797830.293:63): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.nextcloud-fixer"
Jan 23 17:43:50 nas kernel: [ 457.147830] audit: type=1400 audit(1579797830.293:64): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="/usr/sbin/named"
Jan 23 17:43:50 nas kernel: [ 457.147834] audit: type=1400 audit(1579797830.293:65): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.php-fpm"
Jan 23 17:43:50 nas kernel: [ 457.148960] audit: type=1400 audit(1579797830.297:66): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.mysql"
Jan 23 17:43:50 nas kernel: [ 457.150169] audit: type=1400 audit(1579797830.297:67): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.redis-server"
Jan 23 17:43:50 nas kernel: [ 457.150779] audit: type=1400 audit(1579797830.297:68): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.canonical-livepatch.canonical-livepatchd"
Jan 23 17:43:50 nas kernel: [ 457.151207] audit: type=1400 audit(1579797830.297:69): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.apache"
Jan 23 17:43:50 nas kernel: [ 457.151669] audit: type=1400 audit(1579797830.297:70): apparmor="DENIED" operation="ptrace" profile="snap.docker.dockerd" pid=4787 comm="ps" requested_mask="trace" denied_mask="trace" peer="snap.nextcloud.mdns-publisher"
Jan 23 17:43:50 nas cloud-init[4755]: Cloud-init v. 19.3-41-gc4735dd3-0ubuntu1~18.04.1 running 'modules:final' at Thu, 23 Jan 2020 16:43:50 +0000. Up 457.25 seconds.
Jan 23 17:43:50 nas cloud-init[4755]: Cloud-init v. 19.3-41-gc4735dd3-0ubuntu1~18.04.1 finished at Thu, 23 Jan 2020 16:43:50 +0000. Datasource DataSourceNoCloud [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net]. Up 457.78 seconds
Jan 23 17:43:51 nas systemd[1]: Started Execute cloud user/final scripts.
Jan 23 17:43:51 nas nextcloud.mysql[3338]: Checking if update is needed.
Jan 23 17:43:51 nas kernel: [ 458.460150] usb 6-1: USB disconnect, device number 2
Jan 23 17:43:51 nas nextcloud.mysql[3338]: This installation of MySQL is already upgraded to 5.7.28, use --force if you still need to run mysql_upgrade
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Saving debug log to /var/snap/nextcloud/current/certs/certbot/logs/letsencrypt.log
Jan 23 17:43:52 nas nextcloud.php-fpm[3075]: done
Jan 23 17:43:52 nas nextcloud.php-fpm[3075]: Obtaining nextcloud mysql credentials... done
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.jokergermany.de.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Traceback (most recent call last):
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/renewal.py", line 65, in _reconstitute
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: renewal_candidate = storage.RenewableCert(full_path, config)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 462, in __init__
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: self._check_symlinks()
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 521, in _check_symlinks
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: "expected {0} to be a symlink".format(link))
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: CertStorageError: expected /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.de/cert.pem to be a symlink
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Renewal configuration file /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.de.conf is broken. Skipping.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.de.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Traceback (most recent call last):
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/renewal.py", line 65, in _reconstitute
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: renewal_candidate = storage.RenewableCert(full_path, config)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 462, in __init__
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: self._check_symlinks()
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 521, in _check_symlinks
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: "expected {0} to be a symlink".format(link))
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: CertStorageError: expected /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.de/cert.pem to be a symlink
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Renewal configuration file /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.de.conf is broken. Skipping.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0001.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Traceback (most recent call last):
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/renewal.py", line 65, in _reconstitute
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: renewal_candidate = storage.RenewableCert(full_path, config)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 462, in __init__
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: self._check_symlinks()
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 521, in _check_symlinks
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: "expected {0} to be a symlink".format(link))
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: CertStorageError: expected /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.myfritz.net-0001/cert.pem to be a symlink
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Renewal configuration file /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0001.conf is broken. Skipping.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0002.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Traceback (most recent call last):
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/renewal.py", line 65, in _reconstitute
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: renewal_candidate = storage.RenewableCert(full_path, config)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 462, in __init__
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: self._check_symlinks()
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 521, in _check_symlinks
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: "expected {0} to be a symlink".format(link))
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: CertStorageError: expected /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.myfritz.net-0002/cert.pem to be a symlink
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Renewal configuration file /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0002.conf is broken. Skipping.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0003.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Traceback (most recent call last):
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/renewal.py", line 65, in _reconstitute
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: renewal_candidate = storage.RenewableCert(full_path, config)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 462, in __init__
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: self._check_symlinks()
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 521, in _check_symlinks
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: "expected {0} to be a symlink".format(link))
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: CertStorageError: expected /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.myfritz.net-0003/cert.pem to be a symlink
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Renewal configuration file /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0003.conf is broken. Skipping.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxxx.myfritz.net-0004.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Cert not yet due for renewal
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Processing
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net.conf
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Traceback (most recent call last):
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/renewal.py", line 65, in _reconstitute
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: renewal_candidate = storage.RenewableCert(full_path, config)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 462, in __init__
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: self._check_symlinks()
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: File "/snap/nextcloud/18204/lib/python2.7/site-packages/certbot/storage.py", line 521, in _check_symlinks
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: "expected {0} to be a symlink".format(link))
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: CertStorageError: expected /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.myfritz.net/cert.pem to be a symlink
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Renewal configuration file /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net.conf is broken. Skipping.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: The following certs are not due for renewal yet:
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/live/nas.xxx.myfritz.net-0004/fullchain.pem expires on 2020-03-31 (skipped)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: No renewals were attempted.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: No hooks were run.
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: Additionally, the following renewal configurations were invalid:
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.de.conf (parsefail)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.conf (parsefail)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0001.conf (parsefail)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0002.conf (parsefail)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxx.myfritz.net-0003.conf (parsefail)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: /var/snap/nextcloud/current/certs/certbot/config/renewal/nas.xxxx.myfritz.net.conf (parsefail)
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jan 23 17:43:52 nas nextcloud.renew-certs[3385]: 0 renew failure(s), 6 parse failure(s)
Jan 23 17:43:53 nas nextcloud.apache[3379]: done
Jan 23 17:43:55 nas kernel: [ 461.955431] kauditd_printk_skb: 15 callbacks suppressed
Jan 23 17:43:55 nas kernel: [ 461.955432] audit: type=1400 audit(1579797835.101:86): apparmor="DENIED" operation="exec" profile="snap.docker.dockerd" name="/bin/kmod" pid=4940 comm="dockerd" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
Jan 23 17:43:55 nas kernel: [ 461.955636] audit: type=1400 audit(1579797835.101:87): apparmor="DENIED" operation="exec" profile="snap.docker.dockerd" name="/bin/kmod" pid=4941 comm="dockerd" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
Jan 23 17:43:55 nas kernel: [ 462.127807] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Jan 23 17:43:55 nas kernel: [ 462.296236] audit: type=1400 audit(1579797835.445:88): apparmor="DENIED" operation="exec" profile="snap.docker.dockerd" name="/bin/kmod" pid=4985 comm="dockerd" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
Jan 23 17:43:55 nas kernel: [ 462.308636] Initializing XFRM netlink socket
Jan 23 17:43:55 nas kernel: [ 462.309123] audit: type=1400 audit(1579797835.457:89): apparmor="DENIED" operation="exec" profile="snap.docker.dockerd" name="/bin/kmod" pid=4989 comm="dockerd" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
Jan 23 17:43:55 nas kernel: [ 462.356697] Netfilter messages via NETLINK v0.30.
Jan 23 17:43:55 nas named[3073]: listening on IPv4 interface docker0, 172.17.0.1#53
Jan 23 17:43:55 nas systemd-udevd[4992]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jan 23 17:43:55 nas systemd-timesyncd[1598]: Network configuration changed, trying to establish connection.
Jan 23 17:43:55 nas networkd-dispatcher[3354]: WARNING:Unknown index 3 seen, reloading interface list
Jan 23 17:43:55 nas systemd-timesyncd[1598]: Synchronized to time server [2001:67c:1560:8003::c8]:123 (ntp.ubuntu.com).
Jan 23 17:43:55 nas systemd-timesyncd[1598]: Network configuration changed, trying to establish connection.
Jan 23 17:43:55 nas kernel: [ 462.616432] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Jan 23 17:43:55 nas systemd-timesyncd[1598]: Synchronized to time server [2001:67c:1560:8003::c8]:123 (ntp.ubuntu.com).
Jan 23 17:43:57 nas kernel: [ 464.018253] aufs au_opts_verify:1623:dockerd[4294]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 23 17:43:57 nas nextcloud.apache[3379]: System config value redis => host set to string /tmp/sockets/redis.sock
Jan 23 17:43:58 nas nextcloud.apache[3379]: System config value redis => port set to integer 0
Jan 23 17:43:58 nas nextcloud.apache[3379]: System config value memcache.locking set to string \OC\Memcache\Redis
Jan 23 17:43:58 nas nextcloud.apache[3379]: System config value memcache.local set to string \OC\Memcache\Redis
Jan 23 17:43:59 nas nextcloud.apache[3379]: No such app enabled: updatenotification
Jan 23 17:43:59 nas nextcloud.apache[3379]: Making sure nextcloud is fully upgraded...
Jan 23 17:44:00 nas nextcloud.apache[3379]: Nextcloud is already latest version
Jan 23 17:44:00 nas nextcloud.apache[3379]: All set! Running httpd...
Jan 23 17:44:00 nas nextcloud.apache[3379]: Certificates have been activated: using HTTPS only
Jan 23 17:44:00 nas nextcloud.apache[3379]: Certificates look to be in order: enabling HSTS
Jan 23 17:44:01 nas nextcloud.nextcloud-fixer[2978]: done
Jan 23 17:44:01 nas nextcloud.nextcloud-fixer[2978]: run-parts: executing /snap/nextcloud/18204/fixes/existing-install/1_upgrade.sh
Jan 23 17:44:01 nas nextcloud.nextcloud-fixer[2978]: Nextcloud is already latest version
Jan 23 17:44:01 nas nextcloud.nextcloud-fixer[2978]: run-parts: executing /snap/nextcloud/18204/fixes/existing-install/2_update-apps.sh
Jan 23 17:44:03 nas samba[4601]: [2020/01/23 17:44:03.993629, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:44:03 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:44:04 nas samba[4601]: [2020/01/23 17:44:04.021947, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:44:04 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:44:04 nas samba[4601]: [2020/01/23 17:44:04.049297, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:44:04 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:44:04 nas samba[4601]: [2020/01/23 17:44:04.079459, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:44:04 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:44:04 nas samba[4601]: [2020/01/23 17:44:04.108349, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:44:04 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:44:04 nas samba[4601]: [2020/01/23 17:44:04.132184, 0] ../source4/dsdb/dns/dns_update.c:290(dnsupdate_nameupdate_done)
Jan 23 17:44:04 nas samba[4601]: ../source4/dsdb/dns/dns_update.c:290: Failed DNS update - with error code 5
Jan 23 17:44:08 nas nextcloud.nextcloud-fixer[2978]: contacts new version available: 3.1.8
Jan 23 17:44:11 nas nextcloud.nextcloud-fixer[2978]: contacts updated
Jan 23 17:44:11 nas nextcloud.nextcloud-fixer[2978]: rainloop new version available: 6.0.5
Jan 23 17:44:14 nas nextcloud.nextcloud-fixer[2978]: rainloop updated
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Nextcloud is already latest version
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: run-parts: executing /snap/nextcloud/18204/fixes/existing-install/3_add-missing-indices.sh
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the share table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the filecache table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the twofactor_providers table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the login_flow_v2 table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the whats_new table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the cards table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Check indices of the cards_properties table.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Done.
Jan 23 17:44:15 nas nextcloud.nextcloud-fixer[2978]: Enabling maintenance mode...
Jan 23 17:44:16 nas nextcloud.nextcloud-fixer[2978]: done
Jan 23 17:44:17 nas canonical-livepatch[3378]: Client.Check
Jan 23 17:44:17 nas canonical-livepatch[3378]: Checking with livepatch service.
Jan 23 17:44:18 nas canonical-livepatch[3378]: Updated last-check.
Jan 23 17:44:18 nas canonical-livepatch[3378]: No updates available at this time.
Jan 23 17:44:18 nas canonical-livepatch[3378]: No payload available.
Jan 23 17:44:18 nas nextcloud.nextcloud-fixer[2978]: run-parts: executing /snap/nextcloud/18204/fixes/existing-install/maintenance/1_convert-filecache-bigint.sh
Jan 23 17:44:18 nas nextcloud.nextcloud-fixer[2978]: Nextcloud is in maintenance mode - no apps have been loaded
Jan 23 17:44:18 nas nextcloud.nextcloud-fixer[2978]: All tables already up to date!
Jan 23 17:44:18 nas nextcloud.nextcloud-fixer[2978]: Disabling maintenance mode...
Jan 23 17:44:18 nas nextcloud.nextcloud-fixer[2978]: done
Jan 23 17:44:40 nas sks[3062]: 2020-01-23 17:44:40 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:45:03 nas systemd[1]: dm-event.service: State 'stop-sigterm' timed out. Killing.
Jan 23 17:45:03 nas systemd[1]: dm-event.service: Killing process 2572 (dmeventd) with signal SIGKILL.
Jan 23 17:45:03 nas systemd[1]: dm-event.service: Main process exited, code=killed, status=9/KILL
Jan 23 17:45:03 nas systemd[1]: dm-event.service: Failed with result 'timeout'.
Jan 23 17:45:03 nas systemd[1]: Stopped Device-mapper event daemon.
Jan 23 17:45:06 nas systemd[1]: lvm2-lvmetad.service: State 'stop-sigterm' timed out. Killing.
Jan 23 17:45:06 nas systemd[1]: lvm2-lvmetad.service: Killing process 880 (lvmetad) with signal SIGKILL.
Jan 23 17:45:06 nas systemd[1]: lvm2-lvmetad.service: Main process exited, code=killed, status=9/KILL
Jan 23 17:45:06 nas systemd[1]: lvm2-lvmetad.service: Failed with result 'timeout'.
Jan 23 17:45:06 nas systemd[1]: Stopped LVM2 metadata daemon.
Jan 23 17:45:42 nas sks[3062]: 2020-01-23 17:45:42 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:46:42 nas sks[3062]: 2020-01-23 17:46:42 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:47:41 nas sks[3062]: 2020-01-23 17:47:41 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:48:02 nas kernel: [ 709.400104] EXT4-fs (dm-17): error count since last fsck: 556
Jan 23 17:48:02 nas kernel: [ 709.400161] EXT4-fs (dm-17): initial error at time 1555230560: ext4_wait_block_bitmap:524
Jan 23 17:48:02 nas kernel: [ 709.400181] EXT4-fs (dm-17): last error at time 1555312583: ext4_journal_check_start:61: inode 1703938
Jan 23 17:48:44 nas sks[3062]: 2020-01-23 17:48:44 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:49:44 nas sks[3062]: 2020-01-23 17:49:44 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:50:41 nas sks[3062]: 2020-01-23 17:50:41 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:51:14 nas systemd[1]: Starting Cleanup of Temporary Directories...
Jan 23 17:51:14 nas systemd[1]: Started Cleanup of Temporary Directories.
Jan 23 17:51:40 nas sks[3062]: 2020-01-23 17:51:40 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:52:40 nas sks[3062]: 2020-01-23 17:52:40 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:53:40 nas sks[3062]: 2020-01-23 17:53:40 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.723998, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.759234, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.793151, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.827580, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.864118, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.900069, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.934636, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:56 nas samba[4601]: [2020/01/23 17:53:56.971512, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:56 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:57 nas samba[4601]: [2020/01/23 17:53:57.007317, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:57 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:57 nas samba[4601]: [2020/01/23 17:53:57.042098, 0] ../lib/util/util_runcmd.c:327(samba_runcmd_io_handler)
Jan 23 17:53:57 nas samba[4601]: /usr/sbin/samba_dnsupdate: dns_tkey_gssnegotiate: TKEY is unacceptable
Jan 23 17:53:57 nas samba[4601]: [2020/01/23 17:53:57.067358, 0] ../source4/dsdb/dns/dns_update.c:290(dnsupdate_nameupdate_done)
Jan 23 17:53:57 nas samba[4601]: ../source4/dsdb/dns/dns_update.c:290: Failed DNS update - with error code 10
Jan 23 17:54:40 nas sks[3062]: 2020-01-23 17:54:40 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:55:39 nas sks[3062]: 2020-01-23 17:55:39 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:56:36 nas sks[3062]: 2020-01-23 17:56:36 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:57:34 nas sks[3062]: 2020-01-23 17:57:34 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:58:37 nas sks[3062]: 2020-01-23 17:58:37 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 17:59:36 nas sks[3062]: 2020-01-23 17:59:36 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 18:00:33 nas sks[3062]: 2020-01-23 18:00:33 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 18:01:36 nas sks[3062]: 2020-01-23 18:01:36 <recon as client> error in callback.: Failure("No gossip partners available")
Jan 23 18:01:54 nas systemd[1]: Created slice User Slice of jokergermany.
Jan 23 18:01:54 nas systemd[1]: Starting User Manager for UID 1000...
Jan 23 18:01:54 nas systemd[1]: Started Session 1 of user jokergermany.
Jan 23 18:01:55 nas systemd[7445]: Reached target Paths.
Jan 23 18:01:55 nas systemd[7445]: Listening on GnuPG network certificate management daemon.
Jan 23 18:01:55 nas systemd[7445]: Listening on REST API socket for snapd user session agent.
Jan 23 18:01:55 nas systemd[7445]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jan 23 18:01:55 nas systemd[7445]: Listening on GnuPG cryptographic agent and passphrase cache.
Jan 23 18:01:55 nas systemd[7445]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jan 23 18:01:55 nas systemd[7445]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Jan 23 18:01:55 nas systemd[7445]: Reached target Sockets.
Jan 23 18:01:55 nas systemd[7445]: Reached target Timers.
Jan 23 18:01:55 nas systemd[7445]: Reached target Basic System.
Jan 23 18:01:55 nas systemd[7445]: Reached target Default.
Jan 23 18:01:55 nas systemd[7445]: Startup finished in 187ms.
Jan 23 18:01:55 nas systemd[1]: Started User Manager for UID 1000.
Jan 23 18:01:58 nas systemd[1]: Started Session 3 of user jokergermany.
|
How exactly do I do that?
See also man lvm.conf. In principle the scan and filter directives are what matters here. You have to decide which is more practical for you: either exclude the physical devices backing the RAID, or only include specific RAID devices. The former could look like this:
...
devices {
...
# only scan /dev for block devices
scan = [ "/dev/" ]
# patterns are checked in order and the first match wins, so the
# accept rule for /dev/sdc4 has to come before the reject rule for /dev/sd*
filter = [ "a|^/dev/sdc4$|", "r|^/dev/sd.*$|" ]
}
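A hedged follow-up to this sketch (assuming Ubuntu's default initramfs-tools setup, which bundles its own copy of lvm.conf): after editing the filter, rebuild the initramfs and then check which devices LVM now accepts:
sudo update-initramfs -u
sudo pvscan --cache
sudo pvs
pvscan --cache refreshes lvmetad's view of the devices, and pvs then shows whether the new filter accepts exactly the PVs you intended.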
Thanks, I went with option 1.
|
misterunknown
Former team member
Registration date: 28 October 2009
Posts: 4403
Location: Sachsen
|
Hm, so far I can't spot any LVM-related error. Is the service running after boot? What does the following say:
pvs
vgs
lvs
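For example (a sketch, assuming the Ubuntu 18.04 service names from the lvm2 package), the service state after boot could be checked with:
systemctl status lvm2-lvmetad.service
systemctl status lvm2-monitor.service
journalctl -b -u lvm2-lvmetad.service
which would show whether the metadata daemon came up cleanly on this boot.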
|
jokerGermany
(topic starter)
Registration date: 11 May 2008
Posts: 1004
|
misterunknown wrote: Hm, so far I can't spot any LVM-related error. Is the service running after boot? What does the following say:
pvs
vgs
lvs
I should have captured this right after booting:
jokerGermany wrote: sudo pvs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
PV VG Fmt Attr PSize PFree
/dev/md1 Raid lvm2 a-- <1,82t <1,28t
/dev/md2 Raid lvm2 a-- <1,82t 0
/dev/md3 Raid lvm2 a-- <1,82t <441,39g
/dev/md4 Raid lvm2 a-- <1,82t 774,77g
/dev/md5 non-Raid lvm2 a-- 931,19g 0
/dev/md6 non-Raid lvm2 a-- 931,19g 0
/dev/md7 non-Raid lvm2 a-- 931,19g 520,57g
/dev/non-Raid/Wichtig Wichtig lvm2 a-- <1024,00g 0
/dev/sdc4 non-Raid lvm2 a-- <888,26g <888,26g
[unknown] Wichtig lvm2 a-m <1024,00g 0 sudo vgs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
VG #PV #LV #SN Attr VSize VFree
Raid 4 15 0 wz--n- 7,27t 2,46t
Wichtig 2 3 0 wz-pn- <2,00t 0
non-Raid 4 5 0 wz--n- <3,60t <1,38t sudo lvs
WARNING: Device for PV TeLvea-KTKp-ZI9t-IVES-A2Jj-P8W8-BkH1FL not found or rejected by a filter.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
Backup Raid -wi------- 2,50t
Emulatoren Raid -wi------- 5,00g
Filme-Aufnahmen Raid -wi------- 5,00g
Filme-Serien Raid -wi------- 300,00g
Filme-unfertig Raid -wi------- 300,00g
Hoerbuecher Raid -wi------- 15,00g
Images Raid -wi------- 25,00g
Musik Raid -wi------- 15,00g
Spiele Raid -wi------- 100,00g
Spielfilme Raid -wi------- 500,00g
Ubuntu-Installation Raid -wi------- 1,00g
Wichtig Raid -wi------- 1,00t
cloud Raid -wi------- 1,00g
eBook Raid -wi------- 1,00g
gorleben Raid -wi------- 75,00g
Bilder Wichtig rwi---r-p- 250,00g
Wichtige-Daten Wichtig rwi---r-p- <24,99g
cloud Wichtig rwi---r-p- <749,00g
Wichtig non-Raid -wi-a----- 1,00t
WinFreigabe non-Raid -wi-ao---- 50,00g
downloads non-Raid -wi-ao---- 1,00t
gorleben non-Raid -wi-ao---- 75,00g
streetview non-Raid -wi-ao---- 100,00g
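Reading the Attr column: almost all LVs in VG Raid show -wi-------, i.e. the fifth flag ('a' for active) is missing, which is why no /dev/Raid/* device nodes exist; the three LVs in VG Wichtig additionally carry a 'p' (partial, a PV is missing) in the health field. A hedged sketch for activating them by hand once the filter issue is sorted out (note that --activationmode partial is only meant for salvaging data from a VG with missing PVs):
sudo vgchange -ay Raid
sudo vgchange -ay --activationmode partial Wichtig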
|