Hi,
I've made a few minor changes to the code (logging to syslog).
It's still a work in progress as I want to check what gets logged where in verbose mode, but the current version I'm using is on GitHub
if anyone wants to pull.
Hi Andrew,
I compiled and ran your modified code a little.
Trying the new -L option, at first I had the impression that nothing happened. It was funny: I made an extended search to find a new logfile, searching for the string "blitzortung", but on looking into the code I realized it is not "blitzortung" but "blizortung", and finally I found the new log items in the general "messages" log file.
It seems to me your concept is rooted in general Linux practice: make/store the log files in the dedicated part of the filesystem (the /var/log tree), using the standard syslog.h tools.
I agree with you that we should use the standard Linux features where possible, but in this case, in my opinion and recent practice, it is more comfortable for me to keep all the log files made by the tracker program in a "private" directory.
I use a simple script that starts the tracker program with parameters which make the log files use the date as filenames. It looks like:
nap=$(date --rfc-3339=date)
./tracker22 -v -l $nap.log SiRF 19200 /dev/ttyS0 JanosTol (password)
The resulting files are:
-rw-r--r-- 1 root root 7830 Mar 22 20:57 2011-03-22.log
-rw-r--r-- 1 root root 9545 Mar 23 20:43 2011-03-23.log
-rw-r--r-- 1 root root 7018 Mar 24 21:00 2011-03-24.log
-rw-r--r-- 1 root root 4293 Mar 25 19:42 2011-03-25.log
-rw-r--r-- 1 root root 10898 Mar 26 22:09 2011-03-26.log
-rw-r--r-- 1 root root 5770 Mar 27 20:20 2011-03-27.log
-rw-r--r-- 1 root root 7946 Mar 28 22:05 2011-03-28.log
-rw-r--r-- 1 root root 6458 Mar 29 21:03 2011-03-29.log
-rw-r--r-- 1 root root 2315 Mar 31 18:54 2011-03-31.log
-rw-r--r-- 1 root root 7091 Apr 1 21:53 2011-04-01.log
This has the advantage that in one minute you can get an overview of whether something "extraordinary" happened during the day, because the program produced a larger log file than usual... (By the way, I don't run my sensor continuously; if somebody runs it in 24/7 mode, this practice will not be useful.)
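That quick size check could even be scripted; a minimal sketch in Python 3 (the directory layout matches the listing above, but the 1.5x threshold is an arbitrary example, not anything from the tracker):

```python
import os

def unusual_days(logdir=".", factor=1.5):
    """Return the log files whose size is well above the average --
    the days when something "extraordinary" may have happened."""
    logs = [f for f in os.listdir(logdir) if f.endswith(".log")]
    if not logs:
        return []
    sizes = {f: os.path.getsize(os.path.join(logdir, f)) for f in logs}
    avg = sum(sizes.values()) / len(sizes)
    return sorted(f for f, s in sizes.items() if s > factor * avg)
```

Run against the directory of YYYY-MM-DD.log files, it would list the oversized days directly.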
So:
If you have the motivation to implement some new ideas in these exciting lightning detector system tools, I would be happy to cooperate with you, and I will be ready to test your ideas.
If you have more capacity and practice to realize some other ideas connected with the data around our system, I am ready to discuss them with you. I have been a member for nearly 2 years and tried some ideas, but sometimes I stopped because of my limitations in programming.
best:
t.janos, hg5apz
> it is not blitzortung, but "blizortung"
Oops, typo. Will fix and push that later. If I leave it blank then it uses the program name as the syslog ident, which would be fine if I compiled it to something other than 'tracker_Linux' (not terribly helpful in syslog).
I'm currently working on some quick scripts to generate gnuplot traces of the data ($BLSEQ) similar to your PHP scripts on h621316.serverkompetenz -- can you share the Perl / Bash ones?
As to log files: I'd like the data somewhere in /var/log, so I can use logrotate to cycle it properly (but I'd need to make the tracker cope with, say, being sent SIGHUP to re-open its logfile).
Yes, I'm happy to help, but again my programming skills are limited (especially in C!).
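The SIGHUP-reopen behaviour could look roughly like this -- a Python sketch of the idea, not the actual tracker code (the C version would use signal() and freopen() in much the same shape):

```python
import signal

class ReopenableLog:
    """Keep a logfile handle that is re-opened on SIGHUP, so logrotate can
    move the old file aside and signal the process to start a fresh one."""

    def __init__(self, path, install_handler=True):
        self.path = path
        self.fp = open(path, "a")
        # install_handler can be skipped in environments without a main thread
        if install_handler:
            signal.signal(signal.SIGHUP, self.reopen)

    def reopen(self, signum=None, frame=None):
        """Close the rotated-away file and re-create it under the original name."""
        self.fp.close()
        self.fp = open(self.path, "a")
```

logrotate's postrotate stanza would then just `kill -HUP` the tracker's pid.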
Andrew
Hi, dear Andrew,
You are right: in my personal "blog", following the developments of the tracker programs made by Egon and Edmund, I did not put the simple script which makes the first step, for myself, of producing local graphs from the tracker log file.
(The text, with the procedures for the second step of making graphs, is here:
Notes on Graphs Collection at Blitzortung Network.)
Now I put this small script here, with some notes, because it was used as my personal tool and in its original form has no comments. You can cut it, save it, and run it (I call it hexproc).
========================================
#!/usr/bin/perl
#
# Separates the contents of the hex buffer and puts them into separate files.
# Into the dec files it also puts sequence numbers in the first position, for gplot.
################################################################################
# This small script processes the data buffer from the Blitzortung TOA board serial output log.
# The input file contains the possible lightning patterns in hex format; the data
# are in pairs on the two channels.
# This script generates 4 output files: the first 2 are the separated two-channel data in hex,
# the other two files contain the same data in decimal format, with counter numbers in the
# first position of each row, useful for generating a simple plot with the gplot.pl "frontend".
$INFIL = shift(@ARGV);
open ($INF, "<$INFIL") or die "Need an input file of the hex buffer\n";
#
read ($INF, $HH, 4096);
#
open ($HX1, ">$INFIL.hex1");
open ($HX2, ">$INFIL.hex2");
open ($DC1, ">$INFIL.dec1");
open ($DC2, ">$INFIL.dec2");
#
## print ($HH, ":string \n");
$LL = length($HH);
$PP = 1;
$P = 1;
while ($PP < $LL) {
    $TT = substr($HH, $PP, 2);
    $HH1 = hex($TT);
    print $HX1 ($TT, "\n");
    print $DC1 ($P, " ", $HH1, "\n");
    # print ($P, " ", $HH1, "\n");  ### This line puts the contents to stdout, for debugging
    $PP += 2;
    #
    $TT = substr($HH, $PP, 2);
    $HH1 = hex($TT);
    print $HX2 ($TT, "\n");
    print $DC2 ($P, " ", $HH1, "\n");
    $PP += 2;
    $P = $P + 1;
}
====================
Maybe we can make the graphs in a simpler way with gnuplot itself, but I was lazy and used the comfortable gplot.pl "frontend". This program is useful for making simple graphs, but it needs a special input file format, which I prepare with this script.
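For comparison, the same de-interleaving could be sketched in Python 3 (the Perl version starts reading at offset 1, presumably skipping a leading character in the buffer; this sketch assumes the string starts directly with data):

```python
def split_channels(hexdump):
    """De-interleave a hex dump where pairs of hex digits alternate
    between channel 1 and channel 2 (one byte per sample)."""
    values = [int(hexdump[i:i+2], 16) for i in range(0, len(hexdump) - 1, 2)]
    return values[0::2], values[1::2]

c1, c2 = split_channels("80FF7F00")   # made-up 2-sample buffer
print(c1, c2)
```

Writing the four output files (hex1/hex2, and dec1/dec2 with counter numbers) then becomes a couple of enumerate() loops.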
best:
t.janos
Hi Andrew,
As I noted, useful local graphs can be produced with gnuplot itself, without any additional data manipulation, and you don't need to install the gplot.pl "frontend" either. But gnuplot is not a "user-friendly" animal...
I spent some time playing with its tremendous options, and finally found the minimal set with which we can produce more-or-less useful graphs from our data.
Here is my "minimalist" command file.
You can save it and run gnuplot with it. It needs two data files, with the data in the first columns... I used the previously attached script to separate and convert the buffer data.
========================= cut here ========================
# These gnuplot commands produce a graph from the data recorded by the
# Blitzortung TOA board in its log file.
# The data are in hex format and consist of the measured signal on the 2 channels.
# You need to preprocess the data and separate the 2 channels into two different files.
# In this example the data files are apr0702.dec1 and apr0702.dec2, and they
# must be stored in the same directory from where you start gnuplot with this command file.
set terminal png size 800,600
set output 'apr0702.png'
set title "Lightning data, apr 07, produced by site 62. Nr 02"
set xlabel "Nr of digitized data"
plot "apr0702.dec1" with lines, "apr0702.dec2" with lines
replot
================ cut here ===================================
The result looks like the attached picture.
Best: t.janos
Great -- I also knocked up a quick Python script to do much the same -- see blseq2plt.py on my github page () which I used to check a pile of $BLSEQ log entries (pulled out with grep).
I haven't attached a sample image, but see
Hi Andrew,
your code is really great! Small, compact, elegant, and working!
It is a good idea to identify each graph with the timestamp.
For my test I made only a very, very small change in it, in the order of the plot lines.
This version processes all the $BLSEQ records found in the sample.dat file and plots the graphs one after another.
I used a sample.dat file with 6 $BLSEQ records.
I put here the code with this small mod.
The Python matplotlib is really a good tool for making graphs. I have no time to look into it now; I hope it has an option to save the generated graphs. It would be nice to save all the graphs and be able to study them immediately (maybe on a local web page).
By the way, when the last ver22 version of the LT came out, I used it with the -e option. This new option puts the $BLSEC and $BLSEQ records to an open terminal window, in parallel with the "normal" serial line data stream coming from the /dev/ttySx device.
I made notes on my "blog" web page; I used these options:
./tracker22 -v -l $nap.log -e /dev/pts/5 SiRF 19200 /dev/ttyS0 JanosTol xxxxx
to dump the data to a virtual terminal. But it would be better to put some lines of code into tracker_Linux.c, or make a new option, which opens a file and puts only the $BLSEQ records into it. Maybe Egon will make it? That data file could be used immediately with your new Python code to generate the graphs.
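Until such an option exists in the tracker, the same effect can be had after the fact; a tiny Python 3 sketch (the filenames are just examples):

```python
def extract_blseq(src_path, dst_path):
    """Copy only the $BLSEQ records from the tracker logfile into a
    separate data file, ready for the plotting script."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            if line.startswith("$BLSEQ"):
                dst.write(line)

# e.g. extract_blseq("2011-04-01.log", "sample.dat")
```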
Another note:
You made a reference to my "blog" page as a source for the data formats, but I have a separate page about the different record structures used by different lightning systems, which I found here:
The notes on that web page were for personal use only...
best, t.janos
#!/usr/bin/python
# blseq2plt.py - Generate plots from logfile
# Andrew Elwell <Andrew.Elwell@gmail.com>
# Licensed under GPL2+
# See the TOA_Blitzortung.pdf (http://blitzortung.org) and
# https://sites.google.com/site/blitzgraphs/ for background
# into the data format.
#### This version processes all the $BLSEQ records from the sample.dat file
#### and plots the graphs sequentially
import struct
import matplotlib.pyplot as plt

logfile = open("sample.dat")
try:
    for line in logfile:
        line = line.rstrip()
        (ident, timestamp, data) = line.split(',')
        (data, checksum) = data.split('*')
        # split data string into chunks for each pair of readings
        # if using 2 channel firmware:
        chan1 = []
        chan2 = []
        readings = [data[k:k+4] for k in xrange(0, len(data), 4)]
        for v in readings:
            c1, c2 = struct.unpack('B B', v.decode('hex'))
            chan1.append(c1)
            chan2.append(c2)
        plt.plot(chan1, label='Channel 1')
        plt.plot(chan2, label='Channel 2')
        plt.title(timestamp)
        plt.legend()
        plt.show()
finally:
    logfile.close()
OK, now I have found the option in this editor for inserting code here...
Oops... this editor hides the indentation in the Python code, so I put here a screenshot with the correct form.
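One detail the script ignores is the checksum after the '*'. Assuming it is the usual NMEA-style XOR of all characters between '$' and '*' (not verified here against the firmware docs), it could be checked like this in Python 3:

```python
def checksum_ok(sentence):
    """Verify a '$...*HH' record, assuming an NMEA-style XOR checksum
    over every character between the '$' and the '*'."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return calc == int(given, 16)
```

Records that fail the check (serial noise, truncated lines) could then be dropped before plotting.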
t.janos
Thanks -- it was via your page that I realised I could use the '-e' option to write to a file -- it has to be created before starting tracker_Linux, but a simple 'touch' beforehand works OK.
ie:
$> touch blitz.mdm
$> screen ./tracker_test -L -l blitz.log -e blitz.mdm SiRF 19200 /dev/ttyUSB0 username password
My plan with the plotting script is to tail the logfile and plot more or less in real time to see the results locally (I need to 'tune' my preamp at the moment -- I'm finding it hard to get nice clean signal peaks without going into interference mode and sending garbage) until I'm happy that I've got the signal levels correct.
Also, I'm aware that I'm writing in English on this board -- I don't wish to offend anyone -- is there a better place for non-German speakers to discuss?
My next change to tracker_Linux.c will be to make it parse a configuration file (so that the username/password isn't visible in the process tree).
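A minimal way to follow a growing logfile from Python (a sketch of the 'tail -f' part only; the plotting side would reuse the blseq2plt.py parsing):

```python
import time

def follow(path, poll=1.0, from_start=False):
    """Yield lines appended to 'path', like 'tail -f'. By default start at
    the current end of file; from_start=True replays the whole file first."""
    with open(path) as fp:
        if not from_start:
            fp.seek(0, 2)          # jump to end of file
        while True:
            line = fp.readline()
            if line:
                yield line
            else:
                time.sleep(poll)   # wait for the tracker to write more

# for line in follow("blitz.mdm"):
#     if line.startswith("$BLSEQ"):
#         ...parse and redraw the plot as in blseq2plt.py...
```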
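The config-file change would of course be C in tracker_Linux.c, but the shape of it is quick to sketch with Python's stdlib configparser (ConfigParser in the Python 2 of that era); the file layout and key names here are invented, not an agreed tracker format:

```python
import configparser

# invented layout -- not an agreed tracker config format
SAMPLE = """\
[tracker]
username = JanosTol
password = secret
device = /dev/ttyS0
baud = 19200
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)   # a real tracker would read e.g. ~/.trackerrc instead
username = cfg.get("tracker", "username")
baud = cfg.getint("tracker", "baud")
```

With the credentials in a mode-0600 file, `ps` would show only the config path instead of the password.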
Andrew,
We are talking more or less about the same things.
In my imagination, the "real-time study" of the recorded lightning patterns at a local site looks like this:
- the "standard" tracker program makes a locally accessible log/data file.
- one more script (actually your Python script) processes it and makes the series of graphs immediately, in a directory accessible from the local web server.
- you can list the contents of this directory and examine the graphs as they appear.
- maybe, day by day, the created graph files need to be "rotated", or a new directory opened for the current versions.
About the usefulness of this forum for special (Linux) problems in English:
- It is true that in the Blitzortung community we Linux users are the minority, but I think it would be good to make the Linux environment more comfortable for studying this lightning phenomenon. And using some Linux "native" features makes this development easier, and more interesting.
- It is true that on this forum the majority of the active participants use their native language, German, in the discussions. But most of us live in the EU, where we have many "official languages" used with equal "rights"; yet we know there is a first among these equally important languages, and we have the right to use it if we want to communicate with each other in an easy way...
So: I think many people here understand and are able to follow the English discussions, and there is the Google Translate tool, which produces acceptable quality translations between the English and German language pair (not so for my nice mother language, but that is my problem...).
t.janos
Thanks for your notes and links (on the other thread).
In the meantime I've made a shortlist of the tasks I'm hoping to work on with the linux tracker software on
of course, patches are always welcome
Who runs the h621316 PHP stuff? It'd be *really* nice if the index.php that generates the table with the graphs also had a 3rd column with some text -- either the userid or the location string (this is so that I can search for my plots on the webpage, compared to others, using the browser's 'find' capability).
Hello there,
it's good to see that there is more activity developing regarding the use of tracker software on Linux. In order to avoid reinventing the wheel, I would like to introduce my tracker software project, which has some integrated local data analysis as well.
You can have a look at the actual analysis here. In order to improve the quality of the delivered data there are some plots which should help to figure out problems with noise signals.
The project itself is hosted on Launchpad and can be found here. It is completely written in C++ from scratch, should be easily expandable, and already includes the following features:
The last feature I added to the software is a local socket interface which can be used to display the actual tracker status on a dynamic webpage, or which allows one to develop a user interface to supervise and control the operation of the tracker software.
The software provides Debian package support, but the build process is based on automake/autoconf, which means that it can be compiled on any Unix-like system without problems.
If you like, I would appreciate further discussion regarding possible features of analysis and visualization to improve the software for blitzortung.org participants on Linux.
With kind regards,
Andreas
Thanks Andreas -- I had already been poring over the lp: code but I'm afraid I know C++ even less than C (and that's pretty much minimal). My problem was (is) that I don't know what a 'good' signal looks like, so it's hard to know how to optimise the preamp.
I notice you're using a GuruPlug -- I also have one but it's still sitting on my desk waiting to be used (the present 'headnode' for the TOA board is an old ThinkPad with a Pentium M): which generation do you have? (the fanless 1st gen?) -- do you have any temperature problems?
Also, your setup photo answered my question of whether I can get away with a cheap USB-serial converter rather than needing to use my Keyspan one.
Ditto for 'what should I use for mounting' -- hardboard looks great and is relatively cheap.
If I don't end up using the GuruPlug I'll probably run the code on my Bifferboard eventually. Has anyone measured how many watts the evaluation + preamp boards use when running (i.e., is it low enough to be able to run it off a PoE system)?
Thanks again
Andrew
Coding should not be a problem. It's more interesting for me to find out what features are helpful for operation and maintenance of the tracker, and to have some discussion about improvements which could be made.
In order to have a look at the measured waveforms, I plan to create a GUI similar to the Windows tracker which enables online viewing of waveforms and other parameters. This will be a separate program written in Python, with GTK for the UI, which communicates with the tracker software.
The picture with the GuruPlug in its original packaging is a little bit outdated and should be updated. As the system resides in a corner of the attic it gets very hot during summer days, so I had to unpack the GuruPlug and mount a bigger heat sink. At the moment it is still passively cooled and seems to run quite stably now.
As I rely on wireless LAN for the data connection to the router, I had lots of problems with the stability of the system. After installing a WLAN firmware which I found somewhere on the GuruPlug forum, and using recent kernels from here, the system is running quite stably. It requires a restart from time to time, but that is not a big deal. Using a cable network should be even more stable.
Regards,
Andreas
Hi Andreas,
I think we have some common questions and problems, but we have different approaches, maybe based on our practices.
You developed a comfortable system/framework to "control" the Blitzortung site. When I made some tests on it the first time, my basic problem was that it was closed inside the Debian package creation framework. I tried to install this framework on my NSLU2, running Debian, without any success.
I can tell you, I am a "conservative" Linux user; I have used Red Hat for more than a decade, later the "most loyal Red Hat clone", CentOS, and on this system I was not able to compile your system. Maybe it is possible with the newer version; I will try it.
Now, with Andrew, we have started to make small scripts, handy tools, to examine some problems manually. I have very limited practice in programming, and this step-by-step approach gives me a better overview of what is happening, and more flexible tools for trying some alternatives manually. It is true that the final aims can be imagined as a well-designed system...
Your presentation page is very impressive; it has all the available, important data in a nice presentation form, but it does not help me much to answer questions such as: What is the real (lightning) signal/pattern in my recorded data? How can I improve the probability at my site of detecting more "real signals" and fewer local noises? Of course these questions have some hardware-related implications (shielding, cables, antenna questions, calibration, and so on), but not only these.
best:
t.janos
Hi Andrew,
I spent time with your program, inserted 2 new commands into it, and played with it a little.
In the comments I summarize these tries.
The results are enough for me to study the produced data at my sensor site "immediately".
I tested the possible visualization in 3 forms:
1. The program generates all the graphs in separate files, using the timestamp variable as the filename. This is the same as you made and published on your flickr page:
2. The program creates files for each graph, but doesn't close/clear the used memory buffer, so the results are pictures with an increasing number of graphs in them.
(Maybe this is a funny behaviour or bug in the matplotlib plt.savefig routine; see the discussion here:
)
This produces a nice picture; you can get a quick overview of what happened at your site in the last time frame.
I put one such picture here.
osszes.png
A short interpretation of it:
My site produces mainly 3 types of signals:
- the majority start at the value 100 on the y scale, increase to 150, and then ring off...
It seems to me this type of data comes from periodic local noise.
- the second type (a few only) start at 180 on the y axis and have some sinus-like periods.
- the third is a strong signal, with a first peak at 250, carrying sinus periods.
3. Finally, I generated animations from the generated picture files, for both of the 2 different picture series.
This first try has these problems:
The program that makes the animation (mencoder) processes all the files, but doesn't understand the hex numbers in the filenames, so it doesn't insert the files in the right "time series".
The Python program that generates the graph files uses, in its recent version, automatic scaling on the y axis, and the animation needs a fixed scale for better study...
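Both problems have small workarounds: the frame files can be fed to mencoder in numeric order, and matplotlib can be given a fixed y range with plt.ylim(). A Python 3 sketch of the ordering part (assuming the filenames are the raw hex timestamp plus '.png'):

```python
def frame_order(filenames):
    """Sort frame files by the numeric value of their hex-timestamp names,
    so the animation frames come out in true time order."""
    return sorted(filenames, key=lambda f: int(f.split(".")[0], 16))

print(frame_order(["1A.png", "9.png", "FF.png"]))  # ['9.png', '1A.png', 'FF.png']
```

For the fixed scale, a plt.ylim(0, 255) before plt.savefig() should pin the y axis (0-255 matching the 8-bit samples), so all frames share the same scale.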
I attach here the picture containing all the graphs of my test data file.
The generated mpg animation files are around 1 MB in size; I am not able to attach them here.
best:
t.janos
Here is your program version with my small changes and comments on how to use it.
====================
#!/usr/bin/python
# blseq2plt.py - Generate plots from logfile
# Andrew Elwell <Andrew.Elwell@gmail.com>
# Licensed under GPL2+
# See the TOA_Blitzortung.pdf (http://blitzortung.org) and
# https://sites.google.com/site/blitzgraphs/ for background
# into the data format.
#### This version processes all the $BLSEQ records from the sample.dat file
#### and saves the graphs
import struct
import matplotlib.pyplot as plt

logfile = open("sample.dat")
try:
    for line in logfile:
        line = line.rstrip()
        (ident, timestamp, data) = line.split(',')
        (data, checksum) = data.split('*')
        # split data string into chunks for each pair of readings
        # if using 2 channel firmware:
        chan1 = []
        chan2 = []
        readings = [data[k:k+4] for k in xrange(0, len(data), 4)]
        for v in readings:
            c1, c2 = struct.unpack('B B', v.decode('hex'))
            chan1.append(c1)
            chan2.append(c2)
        plt.plot(chan1, label='Channel 1')
        plt.plot(chan2, label='Channel 2')
        plt.title(timestamp)
        ## This legend() command draws the legends. If you want more graphs in one
        ## picture, the legends hide parts of the graph. Comment it out if you don't
        ## need legends.
        ## plt.legend()
        ## This show() command displays the graph on the screen. If you enable it,
        ## you can see the graphs on the screen only.
        ## plt.show()
        ## This savefig() command saves the generated graphs in separate files. Use it
        ## with the next clf() command to put each graph in a separate file.
        plt.savefig(timestamp)
        ## If you leave this clf() command commented out, all the data will accumulate
        ## in the generated files.
        ## plt.clf()
finally:
    logfile.close()
Hi Janos,
at my site I seem to have fewer problems with noise (only some pronounced spikes in activity from time to time). At the moment my focus is on finding out why the data is not used for locating strokes which are far away (> 500 km). Although I can clearly see in the x-y graph that I receive a certain amount of signals from that direction, which somehow scales with the total amount of strokes detected by the network, the data of the station is mostly not considered in the location of the stroke.
To look into this behaviour, the residual time plot was created, and it seems that the residual time is around 0 on average, but that there is a slight deviation when the stroke locations are further away.
All the analysis scripts are written in Python/Bash, and parts (those not depending on recorded events) are usable independently of the tracker software. So maybe it is also helpful for you.
Besides that I will be happy to share ideas with interested people here.
Regards,
Andreas
Hi, Andreas,
I made a "look around" on my system and found an old version of your bzr branch; its date is 2010-Aug-5. In this tree I did not find Python scripts.
Now I have downloaded the fresh version again.
I understand that to run and see your Python scripts, a working system is needed in the background.
Consequently, the first step was taken: reading the INSTALL text. It has general instructions on how to use ./configure, and, if you don't have it, how to start with autoconf.
At this point I got an old error, maybe coming from my old environment, with some problematic parts around the m4 macros. Here is the message:
--------------
[root@centos5 blitzortung-tracker]# autoconf
configure.in:6: error: possibly undefined macro: AM_INIT_AUTOMAKE
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.in:11: error: possibly undefined macro: AM_CONFIG_HEADER
configure.in:17: error: possibly undefined macro: AC_LIBTOOL_DLOPEN
configure.in:18: error: possibly undefined macro: AC_PROG_LIBTOOL
-----------------
This run generated a ./configure script, but it has the error referenced in the previous run; starting it only generated more errors:
[root@centos5 blitzortung-tracker]# ./configure
./configure: line 1715: syntax error near unexpected token `AM_CONFIG_HEADER'
./configure: line 1715: `AM_CONFIG_HEADER(config.h)'
----------
So: better to try it on a more recent system; I will restart my tries later on the new Ubuntu...
To run your 3 attached Python programs "without the background", my experiences are here:
blitzortung-info:
It stopped first because it doesn't find the dir with the statistics; it calls gnuplot, which runs but sends a dozen errors... the first lines are here:
-----------------------
[root@centos5 scripts]# ./blitzortung-info
nice: blitzortung-statistics: No such file or directory
gnuplot> plot 1 lt -2 not, "/tmp/blitzortung-info.F23200" u 1:3 not w boxes lt 2 fs solid 0.5
-----------------------
blitzortung-status:
Running this program, the first error messages are more informative: I need to install more Python modules:
---------------
[root@centos5 scripts]# ./blitzortung-status
Traceback (most recent call last):
File "./blitzortung-status", line 5, in ?
import json
ImportError: No module named json
---------------
blitzortung-statistics:
[root@centos5 scripts]# ./blitzortung-statistics
File "./blitzortung-statistics", line 15
class Config():
^
SyntaxError: invalid syntax
-------------
Maybe this error comes from my older Python version? This is the version running on my machine:
Python 2.4.3 (#1, Sep 3 2009, 15:37:12)
The short conclusions:
My test environment, running on my "workhorse machine", represents the "worst case", with many problems, basically the different (older) versions of the dependent programs/libs.
I know that if somebody starts to run this system on a fresh and well-configured Debian Linux, surely he will not have these problems. For a "tester", that case has not too many challenges...
And I am sure your Python programs will be helpful for me/us, but first I need to install the background...
best:
t.janos
Hi Janos,
you found a lot of incompatibilities with your system. I propose to rework the interesting scripts (at the moment only blitzortung-statistics should be in focus) so that they can use the output of the original Linux tracker as well.
I have not investigated much, but maybe the script could be made compatible with Python 2.4. I will try to fix the issues seen here.
Anyway, the package will build without further problems on Debian Squeeze and current Ubuntu versions, so it's worth trying one of these.
Regards,
Andreas
Hi Andreas,
I tend to use Fedora / RHEL clones, so I will try a build here -- should I start with the existing bzr branch or are you planning a new one soon? (No promises it'll work, but I'm happy to submit bug reports.)