Hi -
smartenergygroups.com has recently announced that he (Sam) plans to start charging users who use more than 3 streams, and part of the charge is based on the data-point interval.
I currently have my ECM1240s at a fairly fine-grained (10s) interval, and I'm storing the data locally in a database; another instance of btmon picks up from the database and sends it to several targets, including smartenergygroups.com.
I should probably know this, but I don't see it offhand - is it possible to take 10s intervals from a source, and aggregate it into, say, 60s data for a target? Or does this need a bit more code in btmon.py?
Thanks,
-Eric
btmon.py- possible to reduce data interval for some targets?
- Posts: 41
- Joined: Fri Jan 10, 2014 3:33 pm
- Posts: 25
- Joined: Thu Feb 10, 2011 1:17 pm
- Location: California
Re: btmon.py - possible to reduce data interval for some targets?
sandeen wrote: I should probably know this, but I don't see it offhand - is it possible to take 10s intervals from a source, and aggregate it into, say, 60s data for a target? Or does this need a bit more code in btmon.py?

In btmon.py, the only intervals I've noticed relate to the frequency of collection, and the frequency of delivery of the entire bucket of collected stuff.
I've not seen any aggregation/min/max/p90/etc. type functionality in the codebase I was looking at, although if you're familiar with Python it wouldn't take long to add that functionality.
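For illustration, here's roughly what that aggregation could look like as a standalone sketch (this is not btmon code; `aggregate` and its tuple format are made up for the example):

```python
from statistics import mean

def aggregate(readings, bucket=60):
    """Group (unix_seconds, watts) samples into fixed-width buckets and average them.

    Hypothetical helper, not part of btmon: returns a sorted list of
    (bucket_start_seconds, average_watts) tuples.
    """
    buckets = {}
    for ts, w in readings:
        # round the timestamp down to the start of its bucket
        buckets.setdefault(ts - ts % bucket, []).append(w)
    return [(start, mean(vals)) for start, vals in sorted(buckets.items())]

# Twelve 10-second samples collapse into two 60-second averages
samples = [(t, 100 + (t % 60) / 10) for t in range(0, 120, 10)]
print(aggregate(samples))  # [(0, 102.5), (60, 102.5)]
```

Swapping `mean` for `min`, `max`, or a percentile function would give the other aggregates mentioned above.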
- Posts: 11
- Joined: Tue Dec 03, 2013 4:25 pm
Re: btmon.py - possible to reduce data interval for some targets?
Could you run multiple instances of btmon that read from the database, with each instance using a different upload frequency? E.g., for SEG, specify "seg_upload_period = 60" in the config.cfg file.
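Something like the fragment below for the SEG-facing instance, perhaps. Only `seg_upload_period` is the option named above; the other keys are guesses at btmon-style settings and should be checked against the sample config that ships with btmon.py:

```ini
; second btmon instance: read from the shared database, upload to SEG every 60 s
; (option names other than seg_upload_period are illustrative, verify them)
seg_out = true
seg_token = XXXXXXXX
seg_upload_period = 60
```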
btmon calculates power and energy from the accumulated watt-second values, taking the difference between time intervals. So presumably, if your database had values stored every 10 seconds, the SEG instance of btmon would use the difference between every 6th reading to calculate the 60-second power and energy values to send to SEG. However, I haven't checked the btmon Python code to see if this is true.
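The nice thing about differencing an accumulating counter is that skipping intermediate rows just widens the interval; the arithmetic is easy to check by hand (hypothetical counter values below, not real ECM data):

```python
# Watt-second counter sampled every 10 s under a steady 1500 W load
samples = [(t, 1500 * t) for t in range(0, 70, 10)]

def power_between(a, b):
    """Average power (W) between two (time_s, watt_seconds) samples."""
    (t1, ws1), (t2, ws2) = a, b
    return (ws2 - ws1) / (t2 - t1)

# 10 s resolution: consecutive samples
print(power_between(samples[0], samples[1]))  # 1500.0
# 60 s resolution: first and seventh sample, same answer for a steady load
print(power_between(samples[0], samples[6]))  # 1500.0
```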
- Site Admin
- Posts: 4266
- Joined: Fri Jun 04, 2010 9:39 am
Re: btmon.py - possible to reduce data interval for some targets?
Untested, but it looks like you could do the following. Add:

Code: Select all
from datetime import datetime

to the imports. Add the variables:

Code: Select all
SEG_DATE = datetime.now()
SEG_SECONDS = 60

Under SmartEnergyGroupsProcessor.process_calculated, wrap the whole thing in a conditional (there might be a better place to put the conditional):

Code: Select all
curr_date = datetime.now()
# subtracting datetimes yields a timedelta, so convert before comparing to an int
if (curr_date - SEG_DATE).total_seconds() >= SEG_SECONDS:
    SEG_DATE = curr_date
    ...
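One thing to watch with this approach: subtracting two datetimes yields a timedelta, not a number, so the gate needs timedelta arithmetic (or `.total_seconds()`). A standalone sketch of that kind of time gate, with made-up names, just to show the mechanics:

```python
from datetime import datetime, timedelta

class UploadGate:
    """Allow an action at most once per `seconds` interval (illustrative only)."""
    def __init__(self, seconds):
        self.interval = timedelta(seconds=seconds)
        self.last = datetime.min  # far in the past, so the first check passes

    def ready(self, now=None):
        """Return True (and reset the clock) if the interval has elapsed."""
        now = now or datetime.now()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

gate = UploadGate(60)
t0 = datetime(2014, 1, 1, 0, 0, 0)
print(gate.ready(t0))                          # True  (first check)
print(gate.ready(t0 + timedelta(seconds=10)))  # False (only 10 s elapsed)
print(gate.ready(t0 + timedelta(seconds=70)))  # True  (60 s since last send)
```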
Ben
Brultech Research Inc.
E: ben(at)brultech.com
- Site Admin
- Posts: 4266
- Joined: Fri Jun 04, 2010 9:39 am
Re: btmon.py - possible to reduce data interval for some targets?
It looks like it'll be a bit more complicated than the above. You'll need a way to accumulate values when it doesn't send, then send the accumulated values when it does.
Maybe something along the lines of:
Code: Select all
# requires: from datetime import datetime, timedelta
SEG_DATE = datetime.now() + timedelta(seconds=60)
Code: Select all
def process_calculated(self, packets):
    global SEG_DATE, SEG_PACKETS  # module-level state; seg is the accumulator dict
    curr_date = datetime.now()
    if (curr_date - SEG_DATE).total_seconds() >= 60:
        SEG_DATE = curr_date
        SEG_PACKETS += 1
        nodes = []
        for p in packets:
            s = []
            if self.map:
                for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_PE_LABELS)):
                    key = mklabel(p['serial'], c)  # str(idx+1)
                    if key in self.map:
                        meter = self.map[key] or c
                        s.append('(p_%s %.2f)' % (meter, (p[c+'_w'] + seg[c+'_w']) / SEG_PACKETS))
                        s.append('(e_%s %.5f)' % (meter, p[c+'_dwh'] + seg[c+'_dwh']))
                        seg[c+'_w'] = 0
                        seg[c+'_dwh'] = 0
                for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_PULSE)):
                    key = mklabel(p['serial'], c)  # str(idx+1)
                    if key in self.map:
                        meter = self.map[key] or c
                        s.append('(n_%s %d)' % (meter, p[c] + seg[c]))
                        seg[c] = 0
                for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_SENSOR)):
                    key = mklabel(p['serial'], c)  # str(idx+1)
                    if key in self.map:
                        meter = self.map[key] or c
                        s.append('(temperature_%s %.2f)' % (meter, (p[c] + seg[c]) / SEG_PACKETS))
                        seg[c] = 0
            else:
                for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_PE_LABELS)):
                    meter = c  # str(idx+1)
                    s.append('(p_%s %.2f)' % (meter, (p[c+'_w'] + seg[c+'_w']) / SEG_PACKETS))
                    s.append('(e_%s %.5f)' % (meter, p[c+'_dwh'] + seg[c+'_dwh']))
                    seg[c+'_w'] = 0
                    seg[c+'_dwh'] = 0
                for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_PULSE)):
                    meter = c  # str(idx+1)
                    s.append('(n_%s %d)' % (meter, p[c] + seg[c]))
                    seg[c] = 0
                for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_SENSOR)):
                    meter = c  # str(idx+1)
                    s.append('(temperature_%s %.2f)' % (meter, (p[c] + seg[c]) / SEG_PACKETS))
                    seg[c] = 0
            if len(s):
                ts = mkts(p['time_created'])
                node = obfuscate_serial(p['serial'])
                s.insert(0, '(node %s %s ' % (node, ts))
                s.append(')')
                nodes.append(''.join(s))
        if len(nodes):
            nodes.insert(0, 'data_post=(site %s ' % self.token)
            nodes.append(')')
            result = self._urlopen(self.url, ''.join(nodes))
            if result and result.read:
                resp = result.read()
                resp = resp.replace('\n', '')
                if not resp == '(status ok)':
                    wrnmsg('SEG: upload failed: %s' % resp)
        SEG_PACKETS = 0
    else:
        SEG_PACKETS += 1
        for p in packets:
            for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_PE_LABELS)):
                seg[c+'_w'] += p[c+'_w']
                seg[c+'_dwh'] += p[c+'_dwh']
            for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_PULSE)):
                seg[c] += p[c]
            for idx, c in enumerate(PACKET_FORMAT.channels(FILTER_SENSOR)):
                seg[c] += p[c]
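Stripped of the btmon internals, that accumulate-then-flush pattern boils down to something like this (a standalone, untested illustration; `Downsampler` and its dict packets are made up for the example, not btmon API):

```python
from collections import defaultdict

class Downsampler:
    """Buffer per-channel readings and emit their averages every `n` packets."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.sums = defaultdict(float)

    def add(self, packet):
        """Accumulate one {channel: watts} packet.

        Returns a dict of averages on every n-th call, else None.
        """
        self.count += 1
        for channel, watts in packet.items():
            self.sums[channel] += watts
        if self.count < self.n:
            return None  # still accumulating
        out = {ch: total / self.count for ch, total in self.sums.items()}
        self.count = 0      # reset for the next window
        self.sums.clear()
        return out

ds = Downsampler(6)  # six 10 s packets -> one 60 s average
results = [ds.add({'ch1_w': w}) for w in (100, 110, 120, 130, 140, 120)]
print(results[-1])  # {'ch1_w': 120.0}
```

The sketch above differs in that it sends the latest raw packet plus the buffered sums; this version just averages the whole window, which is the same idea with the bookkeeping made explicit.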
Ben
Brultech Research Inc.
E: ben(at)brultech.com