Logging a P1 Smart Meter to elastic

In the Netherlands, power companies have started rolling out “slimme meters” (Dutch for “smart meters”): electricity meters that automatically report your consumption to the power company, so they no longer have to physically visit houses to take readings.

A cool thing about these meters is that, as a user, you can also get the usage logs. Most power companies provide (proprietary) dashboards for their customers. If you like to do a bit of hacking, though, there is also the option of the “P1” port. This port takes an RJ-11 type connector and outputs frequent logs of the current state of the meter. Whereas the power companies receive their data every 10 minutes or so, the P1 port outputs a new reading every second on my model of meter. You can find more information about the port on the Domoticz website (Dutch).

Methods for logging and visualizing this data have already been developed. Many people have connected a Raspberry Pi and use Domoticz as a dashboard. I truly respect the effort put into Domoticz, in both the software and the documentation for the smart meter. However, I am reluctant to run such a single-purpose piece of software. Instead, I already know Elastic (the company previously called Elasticsearch), and if I log the data there, I can easily create my own visualizations and more. In this article I’ll explain how I’ve set up a Raspberry Pi to log the data via Logstash, and I’ll show a sample visualization you can then make.

Reading the P1 port

By far the easiest way to read the data from the P1 port is with an RJ-11 to USB cable. Apparently these are easy to solder yourself, but I decided just to buy one. I got mine from SOS Solutions; it is not cheap and the build quality is frankly quite poor, but since this is a niche product, there just aren’t that many options.

With this USB cable, reading the data is simple. Assuming you have a Raspberry Pi running Raspbian (Lite), you can install the cu command-line tool

sudo apt install cu

and then read the data

cu -l /dev/ttyUSB0 -s 115200
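
If cu instead exits with a permission error on /dev/ttyUSB0, your user may need to be added to the dialout group (on Raspbian, the default pi user already is):

sudo usermod -aG dialout $USER

Log out and back in afterwards for the new group to take effect.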

Messages should start rolling in like the one below. Each line is an identifier followed by one or more values between parentheses; the final line starting with ! contains a checksum. Great!

/ISK5\2M550T-1011
 
1-3:0.2.8(50)
0-0:1.0.0(181106140429W)
0-0:96.1.1(4530303334303036383130353136343136)
1-0:1.8.1(003808.351*kWh)
1-0:1.8.2(002948.827*kWh)
1-0:2.8.1(001285.951*kWh)
1-0:2.8.2(002876.514*kWh)
0-0:96.14.0(0002)
1-0:1.7.0(00.000*kW)
1-0:2.7.0(00.498*kW)
0-0:96.7.21(00006)
0-0:96.7.9(00003)
1-0:99.97.0(1)(0-0:96.7.19)(180529135630S)(0000002451*s)
1-0:32.32.0(00003)
1-0:52.32.0(00002)
1-0:72.32.0(00002)
1-0:32.36.0(00001)
1-0:52.36.0(00001)
1-0:72.36.0(00001)
0-0:96.13.0()
1-0:32.7.0(236.0*V)
1-0:52.7.0(232.6*V)
1-0:72.7.0(235.1*V)
1-0:31.7.0(002*A)
1-0:51.7.0(000*A)
1-0:71.7.0(000*A)
1-0:21.7.0(00.000*kW)
1-0:41.7.0(00.033*kW)
1-0:61.7.0(00.132*kW)
1-0:22.7.0(00.676*kW)
1-0:42.7.0(00.000*kW)
1-0:62.7.0(00.000*kW)
0-1:24.1.0(003)
0-1:96.1.0(4730303339303031373030343630313137)
0-1:24.2.1(181106140010W)(01569.646*m3)
!1F28

Installing the ELK stack

We want to log the data to Elastic, so running the ELK stack is a requirement. How to set this up is beyond the scope of this article and is already documented extensively elsewhere. For Debian-based OSes, Elastic offers some easy-to-use packages. I am personally running it on a computer in my house, but a VM from Digital Ocean (tutorial) should also be fine. There is also Elastic Cloud and other managed hosting options.
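
For reference, installing from Elastic’s apt repository boils down to something like the sketch below. This assumes the 6.x package series (matching the Filebeat version used later in this article); check Elastic’s documentation for the current instructions.

# Add Elastic's signing key and the 6.x apt repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-6.x.list

# Install the three parts of the stack
sudo apt update
sudo apt install elasticsearch logstash kibana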

I don’t recommend you try running ELK on a Raspberry Pi because of memory constraints on the Pi. Elastic also doesn’t offer ARM builds, so you would have to build everything yourself.

Logging the data to Elastic / Logstash

To ship the data we can already read with cu, we will use Filebeat, Elastic’s lightweight log shipper, into which we can pipe the data.

Configuring Logstash

First, we need to configure Logstash to accept input from Filebeat. Filebeat uses the “beats” protocol to communicate with Logstash, so we configure that as an input on the default port. We also add an Elasticsearch output and set an index name for easy retrieval later.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "slimmemeter-%{+YYYY.MM.dd}"
  }
}
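
With a package install, this configuration lives in a file under /etc/logstash/conf.d/ (the file name slimmemeter.conf below is just an example). You can verify the syntax before restarting Logstash:

sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/slimmemeter.conf
sudo systemctl restart logstash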

Configuring Filebeat

Getting started with Filebeat should be easy in most cases. Unfortunately, Elastic doesn’t offer ARM builds of their executables, so we need to build it ourselves. You can build Filebeat for ARM relatively easily using any machine that runs Docker (so also non-ARM machines). I found some instructions on how to do this on the Elastic forum. Please note that the warning/error “No buildable Go source files” is expected and you can simply proceed with the commands. It tricked me at first.
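
In outline, the build is a Go cross-compile of the Beats source inside a container. The sketch below captures the idea, but the Go version and GOARM value are assumptions on my part (GOARM=6 targets the original Pi and Pi Zero; newer Pis can use 7), so follow the forum instructions for the exact steps.

# Fetch the Beats source at the matching version (GOPATH layout)
git clone -b v6.5.4 https://github.com/elastic/beats ~/go/src/github.com/elastic/beats

# Cross-compile Filebeat for ARM inside a Go container
docker run --rm -v ~/go:/go -w /go/src/github.com/elastic/beats/filebeat -e GOARCH=arm -e GOARM=6 -e CGO_ENABLED=0 golang:1.10 go build

Copy the resulting filebeat binary to the Pi.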

If you trust me, you can also download the ARM package I built for myself. It is for Filebeat version 6.5.4.

Once you have Filebeat, we need to create a configuration file that tells it to read from stdin (so we can pipe cu into it) and to output the data to Logstash. We also need to specify that the messages are multiline and give a regular expression that Filebeat can use to recognize the start of a new message. Beware that the regex used in the example ('^/ISK5\\') is specific to my model of smart meter. Other models may have different headers; however, all should have some header above each message that you can write a regex for. My filebeat.yml looks like this:

filebeat.prospectors:
  -
    input_type: stdin
    multiline:
      pattern: '^/ISK5\\'
      negate: true
      match: after

output:
  logstash:
    hosts: ["address-of-logstash:5044"]

With the ARM Filebeat executable and this configuration file, we can test whether the logs are shipped correctly. Simply pipe the cu command into Filebeat.

cu -l /dev/ttyUSB0 -s 115200 | ./filebeat-6.5.4-linux-arm/filebeat -c filebeat.yml -e -v

You should now see the events appear in Kibana. This concludes the setup on the Raspberry Pi. We can just leave the command running; Filebeat will automatically retry the connection when it fails (for example, when you restart the ELK server).
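
You can also check on the ELK server that an index matching the pattern from the Logstash output exists and is receiving documents:

curl 'localhost:9200/_cat/indices/slimmemeter-*?v'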

Interpreting the messages with Logstash

Right now, everything is logged to Elastic as a plain-text message attribute. We will need to interpret the messages to create visualizations of our electricity use. We will write what Logstash calls a filter and insert it into our existing config file.

We can use grok syntax to write statements that filter the individual values out of the message. I’ve chosen to extract three values: the current power use and the two total-use counters (day and night tariff). I looked up their identifiers (between the brackets) in the Domoticz documentation.

For each of these, we create a matching grok statement. For example, the statement for current_use becomes 1-0:1\.7\.0\(%{NUMBER:current_use:float}\*kW\). You could also extract all values with a single statement, but I thought this sacrificed a lot of readability. The input and output remain the same as before.

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "1-0:1\.7\.0\(%{NUMBER:current_use:float}\*kW\)" }
  }
  grok {
    match => { "message" => "1-0:1\.8\.2\(%{NUMBER:total_use_day:float}\*kWh\)" }
  }
  grok {
    match => { "message" => "1-0:1\.8\.1\(%{NUMBER:total_use_night:float}\*kWh\)" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "slimmemeter-%{+YYYY.MM.dd}"
  }
}
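
As a possible extension, the telegram also carries the meter’s own timestamp (the 0-0:1.0.0 line). A sketch of an additional filter that uses it as the event timestamp could look like this; it assumes the meter reports local time, and it strips the trailing W/S (winter/summer time) marker because that is not a standard timezone designator.

filter {
  grok {
    match => { "message" => "0-0:1\.0\.0\(%{DATA:meter_time}\)" }
  }
  mutate {
    # remove the trailing daylight saving marker
    gsub => [ "meter_time", "[WS]$", "" ]
  }
  date {
    match => [ "meter_time", "yyMMddHHmmss" ]
  }
}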

Visualizing with Kibana

I really enjoyed seeing the data show up correctly in Elastic.

It is now easy to create a visualization. You can see the configuration of a simple usage graph in the screenshot below. The rest is up to you; there are many articles on how to use Kibana just a search away.

Kibana visualization example

Extra

Running Filebeat on boot

Having verified that Filebeat is shipping the logs correctly, we would like to run it automatically on boot. This way, we don’t have to remember to restart the script after a power outage, for example.

You can add commands to run at boot to your crontab, which you edit by running crontab -e. At the bottom of this file, we can add a line to start cu and Filebeat automatically

@reboot cu -l /dev/ttyUSB0 -s 115200 | /home/pi/filebeat/filebeat -c /home/pi/filebeat.yml -e -v

(Note the full paths to the executable and the configuration file.)

However, when you do this, you will find that only one or two messages get logged after a reboot, and then it stops. It turns out this is caused by the interaction between cron and cu. When cron runs the command, it pipes something to stdin of cu and then closes the pipe. This by itself is fine; however, cu responds to the closed pipe by killing itself. You can replicate this behavior by running

sleep 2s | cu -l /dev/ttyUSB0 -s 115200

cu should exit after two seconds.

There doesn’t seem to be a command-line option that prevents this behavior, so instead we isolate cu from the stdin provided by cron. We can again use sleep for this, but this time we sleep forever

@reboot sleep infinity | cu -l /dev/ttyUSB0 -s 115200 | /home/pi/filebeat/filebeat -c /home/pi/filebeat.yml -e -v

sleep ignores the closed pipe from cron and lets cu run forever. After saving your crontab, logging should resume automatically after a reboot.