Missing Historical Values

Hi Tim,

Yes, I am happy to. Currently, I am polling 11 tags. I am polling the IO server at a rate of 100 ms and am logging the tags every second.

The requested files are attached (167 KB, moved to a staff note). Let me know what you think!

Thanks for the help!

–Connor

Hey Connor,

I think this post might be able to help you out:

Can you try that and let me know if you’re still running into issues?

Yeah… even if just ONE of your tags references a PLC variable that doesn’t exist, has changed, or is in error, then NONE of your tags will be read.

Thanks for the responses. I did that, and no tags have been disabled. All tags are reading well and are healthy. I believe that message appears in the logs because of intermittent updates to the PLC code. When I download a new program, the Flexy can’t contact the PLC and loses the tags until the download completes (this makes sense to me; I’m just clarifying why I think that error code appears in the logs).

Tags continue to get dropped from the historical value database even when this error is not entered into the logs. In fact, the missing values do not appear to correlate with any events in the logs. However, I have noticed that the Flexy has been dropping entries less frequently since I increased the NTP synchronization rate. Is it possible this is due to some sort of time sync issue?

I can try to take a closer look at the device if you PM me some login credentials for eCatcher. Based on what I saw in the logs, though, the only issues I found were the CIP error codes.

Will do, I’ll PM you here shortly.

I’ve continued to observe this issue, and with the event logs I have been able to narrow it down. During DMailbox data exports, the Flexy does not log any historical values. This is a serious issue for sensitive applications. How can I get this resolved? I have a backup with support files showing the missing values and the DMailbox events, ready to PM. Thanks for the help!

Additionally, I just observed the same behavior with the M2Web API, even with Data Mailbox uploads disabled. Can you please assist with this?

Hi Connor,

Can you send me those files?

Yup, just did!

Hey Connor,

We’re wondering if you’ll still run into this issue if you stop logging most of the tags once per second and instead log them with a deadband. It could be that the export file is very large, and the export may be slowing down other areas of the device while it runs.
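Deadbands are normally set per tag in the web UI, but you can also script the change. Here’s a rough, untested sketch using the SETSYS TAG pattern; “MyTag” is a placeholder name, and it’s worth double-checking the LogDB field name against a config.txt export from your unit:

REM Sketch: raise one tag's logging deadband from BASIC
REM "MyTag" is a placeholder; LogDB is the logging-deadband field
REM as it appears in the tag configuration (verify on your firmware)
SETSYS TAG, "load", "MyTag"
SETSYS TAG, "LogDB", "0.5"
SETSYS TAG, "save"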

It also looks like your logging is starting to get circularized. This typically happens when you’re approaching the memory limit and FIFO starts to be applied.

One thing I was wondering is whether you run into this issue right away, or only after the device has been logging for a while, and whether that period of time lines up with the limit described in the post below. Based on the number of tags, we should be able to calculate how long it takes before your memory fills.

My estimate:
314,572 points / 42 tags × 1 second per point ≈ 7,490 seconds ≈ 2.1 hours (about 0.09 days)
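If you want to redo that math with your own numbers, here’s the same arithmetic as a quick scriptlet (the capacity figure is just the one from my estimate above; swap in your own values):

REM Sketch: estimate when the historical log fills and FIFO kicks in
REM points = log capacity, tags = number of logged tags,
REM period = seconds between logged points per tag
points = 314572
tags = 42
period = 1
fill = points / tags * period
print "Seconds to fill: " + STR$(fill)
print "Hours to fill: " + STR$(fill / 3600)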

Is there a reason these tags need to be logged so quickly?

Hi Tim,

I am confident you are correct: this is a large file and we are consuming the Flexy’s memory. I am seeing just under 3 hours of logging [I reformatted the Flexy to have a larger data partition].

Unfortunately, this is somewhat unavoidable. This setup is a mockup of one part of a much larger setup, where we do not need to log tags as quickly, but we do need to log a lot of tags (~3000), about 1/5 of which will change with some frequency. With that many tags, we will quickly consume the allotted memory. We are logging these for reporting to our cloud over the M2Web API.

So even though the actual deployment will log on a deadband, the logged data will take up a similar amount of memory. How can we resolve this? Is there any way to slow down communications so the system doesn’t skip any logging?
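To put rough numbers on that, plugging our deployment figures into the same arithmetic as above (the 30-second average change period is a guess on my part):

REM Rough sketch with assumed deployment numbers
REM ntags = the ~1/5 of ~3000 tags we expect to change
REM period = 30 s average between logged points (hypothetical)
points = 314572
ntags = 600
period = 30
print "Hours to fill: " + STR$(points / ntags * period / 3600)

That comes out to only a few hours of logging before circularization, which is why I’m worried.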

Before going further, does the device behave correctly during those first 3 hours? If so, we might be able to set it up so that once it reaches a certain amount of memory usage, it just deletes the old data.

Great catch! I erased the logs and checked this extensively yesterday into last night. It does seem like the issue only happens once the data becomes circularized. What’s the best way to manage the memory deletion? Thanks so much for the help!

I think you should be able to do something like this:

REM Fire timer 1 once every 2 hours (7,200 seconds)
tset 1,7200

REM Run the function each time timer 1 fires
Ontimer 1, "@Erase_Log()"

Function Erase_Log()
  print "Historical Log Cleared"
  REM Erase the historical log
  ERASE "#ircall"
Endfn
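If it helps, you could also stamp the printout with the Flexy’s clock so you can line each erase up against any gaps you see later. A small, untested variation on the same function (TIME$ is the standard BASIC date/time string):

Function Erase_Log()
  REM TIME$ holds the Flexy's current date/time as a string
  print "Historical Log Cleared at " + TIME$
  ERASE "#ircall"
Endfn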

Thanks Tim. I’ll give this a try.

–Connor

While I’m at it, I have another question. I’ve noticed that most of the examples in the forum are BASIC scripts; is there a particular reason for that over Java? What are the advantages/disadvantages of using one versus the other from a Flexy point of view? Are they integrated differently, or do they have different capabilities within the Flexy? Thanks!

Coding in Java should be fine as well. It just takes a little longer initially, because you need to compile the code and it has to be created outside of the Flexy UI. Personally, I don’t have much experience with Java, so I’ve been sticking to BASIC, but either one should work.

Got it, thanks Tim! I’ll follow up here with results of testing the code once I deploy it (likely next week).

Hi Tim,

I have been facing this issue as well with my 1 s data, which is causing a loss of revenue, as we use it for billing. I have tried the same method of clearing the historical data, to no avail. I am logging only 11 tags.
I’ve tried using Java instead of BASIC as well, but the data is still missing. I urgently need some help!