ubuntuusers.de

System runs an MD RAID resync every 1-2 weeks, no apparent reason

Status: Unsolved | Ubuntu version: Ubuntu 22.04 (Jammy Jellyfish)

MikeSouth

Registration date:
17 October 2024

Posts: 1

Hi, I'm running an up-to-date Ubuntu Jammy that has been kicking off an MD RAID check every one to two weeks for the past 2-3 months, and not only on the first Sunday of the month (today, for example).

How do I find out what is triggering these checks? dmesg doesn't show anything, and according to S.M.A.R.T. all the HDDs are OK.

Thanks in advance, Mike
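
Everything I have gathered so far is below. One thing I haven't tried yet is to find out which installed script actually writes "check" into the arrays' sync_action; roughly like this, although the paths are just my guess at where the mdadm package keeps its helpers:

$ dpkg -L mdadm | grep -E 'mdcheck|checkarray'
$ sudo grep -rl 'sync_action' /usr/share/mdadm/ /etc/cron* 2>/dev/null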

# dmesg
[1730048.007682] perf: interrupt took too long (4922 > 4915), lowering kernel.perf_event_max_sample_rate to 40500
...
[2477498.536127] ffmpeg[2813716]: segfault at 0 ip 00007f7cf1d70014 sp 00007f7ca57f8f00 error 4 in libx265.so.199[7f7cf1d19000+e98000]
[2477498.536145] Code: 80 05 00 00 45 85 f6 0f 85 11 04 00 00 48 8b 43 08 f2 0f 10 93 88 01 00 00 4c 8b 80 98 00 00 00 4c 8b 48 10 8b 80 e8 08 00 00 <49> 8b 30 49 8b 39 4c 8b 36 48 8b 17 f2 0f 10 87 c0 05 00 00 f2 0f
[2502389.772281] md: data-check of RAID array md0
[2502389.833065] md: delaying data-check of md1 until md0 has finished (they share one or more physical units)
[2502389.856405] md: delaying data-check of md2 until md1 has finished (they share one or more physical units)
[2502389.889902] md: delaying data-check of md3 until md1 has finished (they share one or more physical units)
[2502484.380381] md: md0: data-check done.
[2502484.394629] md: delaying data-check of md1 until md2 has finished (they share one or more physical units)
[2502484.394648] md: data-check of RAID array md2
[2502484.394647] md: delaying data-check of md3 until md2 has finished (they share one or more physical units)
[2502522.546175] md: md2: data-check done.
[2502522.612408] md: data-check of RAID array md3
[2502522.612410] md: delaying data-check of md1 until md3 has finished (they share one or more physical units)
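
The dmesg timestamps above are only seconds since boot; to get the wall-clock time at which each check kicked off I figured I can translate them, assuming the journal on this box is persistent:

$ sudo dmesg -T | grep 'data-check'
$ journalctl -k -g 'data-check' --no-pager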


$ cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10] 
md1 : active raid1 sdb2[3] sdc2[1] sdd2[2] sda2[0]
      1046528 blocks super 1.2 [4/4] [UUUU]
        resync=DELAYED
      
md0 : active raid1 sdc1[1] sdd1[2] sdb1[3] sda1[0]
      16759808 blocks super 1.2 [4/4] [UUUU]
      
md2 : active raid5 sdb3[4] sdc3[1] sdd3[2] sda3[0]
      20954112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
md3 : active raid5 sdd4[2] sdb4[4] sdc4[1] sda4[0]
      29224459776 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [======>..............]  check = 34.6% (3371764040/9741486592) finish=590.0min speed=179905K/sec
      bitmap: 1/73 pages [4KB], 65536KB chunk
$ sudo smartctl -a /dev/sda | grep "self-assessment"
SMART overall-health self-assessment test result: PASSED
$ sudo smartctl -a /dev/sdb | grep "self-assessment"
SMART overall-health self-assessment test result: PASSED
$ sudo smartctl -a /dev/sdc | grep "self-assessment"
SMART overall-health self-assessment test result: PASSED
$ sudo smartctl -a /dev/sdd | grep "self-assessment"
SMART overall-health self-assessment test result: PASSED
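
I could also ask the kernel directly what kind of operation is running and whether the last check found anything. These are the sysfs attributes as I understand them, so take the exact paths with a grain of salt:

$ cat /sys/block/md3/md/sync_action        # "check" while a data-check runs, "idle" otherwise
$ cat /sys/block/md3/md/last_sync_action   # what the previous operation was (check vs. resync/repair)
$ cat /sys/block/md3/md/mismatch_cnt       # sectors flagged by the last check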


$ systemctl list-timers mdcheck_start --all
NEXT LEFT LAST                         PASSED      UNIT                ACTIVATES            
n/a  n/a  Thu 2024-10-17 09:28:03 CEST 5h 6min ago mdcheck_start.timer mdcheck_start.service

1 timers listed.
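
Next I want to look at what that timer actually starts and whether there is a continuation timer that resumes long-running checks. The unit names below are what I expect from the mdadm package, not yet verified on this machine:

$ systemctl cat mdcheck_start.timer mdcheck_start.service
$ systemctl list-timers --all | grep -iE 'mdcheck|mdmonitor'
$ systemctl status mdcheck_continuation.timer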


/etc$ tree cron.*
cron.d
├── certbot
├── e2scrub_all
└── sysstat
cron.daily
├── apport
├── apt-compat
├── dpkg
├── logrotate
├── man-db
└── sysstat
cron.hourly
cron.monthly
cron.weekly
└── man-db

0 directories, 10 files
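
Since nothing in cron.* looks md-related, my next step is to ask the journal which unit was active when the earlier checks started and correlate that with the kernel's start messages, e.g. something like this (the date is only an example, and mdcheck_continuation.service is an assumption on my part):

$ journalctl -u mdcheck_start.service -u mdcheck_continuation.service --since "2024-08-01" --no-pager
$ journalctl -k -g 'data-check of RAID' --since "2024-08-01" --no-pager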

Edited by Berlin_1946:

Forum syntax corrected.
