hotfix typos

This commit is contained in: parent b6a4123576, commit 36d2339182
@@ -48,8 +48,6 @@ node_cpu_seconds_total{cpu="0",mode="softirq"} 743.4
**Grafana**
Grafana is an open-source data visualization and monitoring platform. It has hundreds of built-in features that let you query data sources such as Prometheus, InfluxDB, and MySQL.
For today's guide I will show you how to set up a few exporters (node-exporter, smartctl) and collect their metrics with Prometheus, then visualize that data via Grafana.
## NixOS
Nix makes it trivial to set up these services, as nixpkgs already ships predefined options for them. I will give you example configuration files below that you can simply copy and paste.
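To give a taste of how little configuration this takes, here is a minimal sketch. The option names come from nixpkgs; the file path is my own choice, and your actual configs will be more detailed:

```nix
# /services/monitoring/monitoring.nix -- hypothetical module, minimal sketch
{ ... }:
{
  # Prometheus scrapes and stores metrics.
  services.prometheus.enable = true;

  # node-exporter publishes system metrics for Prometheus to scrape.
  services.prometheus.exporters.node.enable = true;

  # Grafana renders the dashboards.
  services.grafana.enable = true;
}
```

Everything else in this post is refinement on top of options like these.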
@@ -98,8 +96,8 @@ First is node-exporter. It exports all kind of system metrics ranging from cpu u
#### Smartctl
-Smartctl is a tool included in the smartmontools package. This is a collection of monitoring tools for hard-drives, SSDs and filesystems.
-This exporter enables you to check up on the health of your drive(s). And it will also give you a wall notification if one of your drives has a bad sector(s), which mainly suggests it's dying off.
+Smartctl is a tool included in the smartmontools package. It is a collection of monitoring tools for hard drives, SSDs, and filesystems.
+This exporter enables you to check up on the health of your drive(s). It will also send you a wall notification if one of your drives has bad sectors, which usually means the drive is dying.
```nix
# /services/monitoring/exporters/smartctl.nix
@@ -184,7 +182,7 @@ Here is a config file for prometheus, with the scrape configs already written do
}
```
-I recommend setting the 5s delay to a bigger number as you can imagine it can generate a lot of data.
+I recommend setting the 5s delay to a bigger number if you have little storage, as you can imagine it can generate a lot of data.
~16 kB average per scrape (node-exporter). One day has 86400 seconds; divide that by 5 and that's 17280 scrapes a day.

17280 \* 16 = 276480 kB. That's about 270 megabytes a day, and with multiple servers it is X times as much.

30 days of scraping is about 8 gigabytes (1x). **But remember, by default Prometheus stores data for 30 days!**
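The arithmetic above is easy to sanity-check in a few lines. Note that the ~16 kB per scrape is the post's rough estimate, not a fixed constant, so treat the result as a ballpark:

```python
# Back-of-the-envelope Prometheus storage estimate.
# Assumptions from the text: ~16 kB per node-exporter scrape,
# a 5 s scrape interval, and the default 30-day retention.
SCRAPE_SIZE_KB = 16
SCRAPE_INTERVAL_S = 5
RETENTION_DAYS = 30

scrapes_per_day = 86_400 // SCRAPE_INTERVAL_S       # 17280 scrapes
kb_per_day = scrapes_per_day * SCRAPE_SIZE_KB       # 276480 kB
mb_per_day = kb_per_day / 1024                      # 270 MB
gb_per_retention = mb_per_day * RETENTION_DAYS / 1024

print(f"{mb_per_day:.0f} MB/day, {gb_per_retention:.1f} GB per {RETENTION_DAYS} days")
```

Doubling the scrape interval to 10 s halves all of these numbers, which is usually the first knob to turn.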
@@ -233,7 +231,7 @@ If you want to access it via the internet, change the following:
- `http_addr = "127.0.0.1"`
- remove the allowed firewall ports
-This insures data will only flow via the nginx reverse proxy
+This ensures data will only flow through the nginx reverse proxy.
Remember to set `networking.domain = "example.com"` to your domain.
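A sketch of what that nginx virtual host might look like. The `grafana.` subdomain, ACME/TLS settings, and port 3000 (Grafana's default) are my assumptions, not the post's exact config:

```nix
# Hypothetical nginx reverse proxy for Grafana, for illustration only.
{ config, ... }:
{
  services.nginx.enable = true;
  services.nginx.virtualHosts."grafana.${config.networking.domain}" = {
    enableACME = true;   # fetch a Let's Encrypt certificate
    forceSSL = true;     # redirect HTTP to HTTPS
    # Grafana listens on 127.0.0.1:3000 by default.
    locations."/".proxyPass = "http://127.0.0.1:3000";
  };
}
```

With `http_addr = "127.0.0.1"` set and no firewall port opened for Grafana itself, nginx is the only way in.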