From counting ‘clicks’ to counting ‘ticks’,
Realizing the impact a quality RDBMS has on a dashboard’s Perceived Value
How our project with AdminiTrack Issue Tracking helped bring our interactive dashboards to a whole new level.
Over the past month, RYIT worked alongside AdminiTrack on AIMFleet Asset Monitors. Together, we worked to end the tedious daily manual exports the system required to keep dashboards up to date.
This issue was tackled in two stages. In the first project, the AdminiTrack team and we worked to connect their secure data to a Google Spreadsheet, which maintained an automated export we could connect to. What we didn’t anticipate was the agonizingly slow connection that resulted from a heavily used service, already taxed by hundreds of thousands of users. Eventually we made the push to connect AdminiTrack’s local SOR (System of Record) to RYIT’s high-performance AWS Aurora database. Almost immediately, usage of the reports, and appreciation for the insights they brought, skyrocketed.
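To make that switch concrete, here is a minimal sketch of the sync step: land the system-of-record export in a database you control, and point the dashboard there instead of at the shared spreadsheet. It uses Python with sqlite3 as a local stand-in for Aurora (a real Aurora setup would use a MySQL- or PostgreSQL-compatible driver and connection string), and the table and columns are hypothetical, not AdminiTrack’s actual schema.

```python
import sqlite3

# Stand-in for the high-performance warehouse; with Aurora this would be
# a mysql-connector / psycopg2 connection instead of a local sqlite file.
warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE asset_status (asset_id TEXT, status TEXT, updated_at TEXT)"
)

def sync_export(rows):
    """Replace the warehouse snapshot with the latest SOR export."""
    with warehouse:  # one transaction: commit on success, roll back on error
        warehouse.execute("DELETE FROM asset_status")
        warehouse.executemany(
            "INSERT INTO asset_status VALUES (?, ?, ?)", rows
        )

# Hypothetical export pulled from the system of record:
sync_export([
    ("A-100", "active", "2019-06-01"),
    ("A-101", "down",   "2019-06-01"),
])

count = warehouse.execute("SELECT COUNT(*) FROM asset_status").fetchone()[0]
print(count)  # 2
```

Because the dashboard now queries a database you monitor and control, load time no longer depends on the traffic level of someone else’s shared service.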
As a developer, it is important to understand what your users have to overcome to implement Data Visualization Tools and Dashboards into their organization. From our side, it’s easy to see your reports as a “gift” that makes the users’ lives easier, to the point where they would be crazy not to use every drop of insight you have painstakingly implemented into the Data Miracle that is your visualizations.
Sadly, that way of thinking is not always shared by both parties. To them, you may be adding yet another item to their already loaded plates, for example:
- An additional monthly, weekly, daily email in their inbox
- Another URL
- Another Login
- Another potential reason for a manager to harass you
3 Long-Term Effects “Data Stalling” Has On Usability
In the dreaded world of wasted time, for you and your users, it is key to remember the most important aspect of usability: SPEED!
Just as it is important to minimize the number of mouse clicks it takes to use and navigate your company’s apps and sites, every additional second spent waiting for a dashboard to load, or for a drill-down to take effect across your aggregations, is another chance your user clicks the “X” and sends you an e-mail instead.
To your users, closing the dashboard and asking you for the status may seem like the “simpler” solution, but what long-term effects does this have on your report’s usage and integrity?
1) The Fear of Wasted Time
This is something I found true, not only for my users but for myself.
Our first setup granted us a level of automation that was useful and reassuring. No matter what, the information was up to date within 24 hours, without having to ask the team who updated it last, and when.
However, what I did not expect were the downsides that came along with using this shared, hosted service. With no monitoring or control over the speed of the system, we were completely at the mercy of whatever the service’s current traffic level was.
This created an environment like the DMV: sure, sometimes you get in and out in an acceptable amount of time, but every time you think of the next appointment, you know the potential to be stuck there waiting is always a threat. That is what we were left with on these reports: we traded time for accuracy, and it was not an acceptable trade-off. I found myself dreading pulling up the report when I needed to reference the data myself, and even more when we had to use it to present the data to a customer. As a developer, you can be assured that if something bothers you, it will bother your users exponentially more.
2) The “Just Print It Out” Request
Once you hear this request, you know you have successfully failed to convey the benefits of insights and interactivity to your user. If you find yourself paper-printing dashboards before a big meeting, or a boss requests a printout to review instead of loading the dashboard on their computer, then the pain of finding, loading, and unreliable click-to-response speed has officially overcome any and all benefits of the hierarchies, datatypes, links, and filters you painstakingly linked across all of the reports.
This habit may be difficult to overcome: the users have become accustomed to receiving emails, and they may have convinced themselves it is still “the best way” to access and read the metrics, sans any dynamic elements.
In my experience, the only way to overcome this (given you have increased usability by boosting the speed of the connections) is to reply to these emailed requests for PDF versions or specific metrics with links to the dashboards rather than attachments. Eventually, the user can be conditioned to stop asking and begin building new trust in the speed and ease of the newly configured aggregations.
3) Blaming It On The ‘Reports’
Your users do not know what a back-end is or what it consists of, nor how it differs from the front-end they see.
The difference between the two can blur until they become a single thing… your “Report.” With low performance affecting speeds, you may start to hear words thrown around like “this report is broken,” “I can’t find what I need in this,” or “go load this up before the meeting, so we don’t need to wait.”
You may luck out with the latter and have a user who understands that the issue is simply wasted time, but far too often the blame ends up being put on the data and the aggregations. When a user becomes aggravated, the overall trust in the report can diminish. The blame can shift from the tunnel between the report and the data to the data itself. Remember, the end user may not know that connection even exists, nor should they have to be burdened with that information. It is YOUR job!
However, because of that same lack of knowledge, the only explanation they may have for the report acting this way is that YOUR tool is broken or THEIR data is wrong. Believe me, very rarely will the blame first go to their own system; therefore the “report” can be perceived as not only slow, but unreliable or straight-up wrong. This is where our teamwork with AdminiTrack came in. The headache of proving the validity of the report again and again is worth the effort of switching the data path from SOR (System of Record: the app the user uses to store their business’ data/metrics) -> BI to SOR -> RDBMS/data warehouse -> BI.
Making the introduction of new metric monitoring smooth and painless
Say what you may about static reports; the one (and only) thing they have going for them over modern analytics is an almost complete lack of load time. Putting aside that they typically raise more questions than they answer, they leave little room for the Aggregation Aggravation you see in live, interactive data.
However, if we can maintain the prompt load of the static reports with the insights and answers of the live and dynamic reports, we have indeed constructed a highly usable dashboard.
But what damage can a few extra SECONDS inflict on whether a Data Visualization tool is considered a Success vs. a Failure? From my experience… ALL of it!
If you are operating reports from a less-than-optimal setup and your users don’t bat an eye, good for you! You’re one of the lucky few; in the world of IT, no news is good news. BUT, if you are noticing an increase in specific requests, a decrease in traffic, or fewer and fewer adjustments to your visualizations, it may be because a low-performance connection is driving your users to ignore the metrics. It may be time to move away from a direct Excel connection and (in my experience) online spreadsheet services.
These services are shared between you and hundreds of thousands of other users and APIs. Instead, try using their API to push the data into your own cloud or public-facing database, and point your dashboard there rather than at a direct ODBC connection to the spreadsheet. Let the database do all the heavy lifting for you, instead of relying on the shared bandwidth of someone else’s service. As a bonus, you can run all the custom queries you want, which can be key if your Data Visualization tool of choice does not offer a high level of in-app data management… we’re looking at you, SSRS!
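As a sketch of what that heavy lifting buys you: instead of the BI tool pulling raw rows and aggregating them in-app, a custom query makes the database return one small, pre-aggregated result set. This is Python with sqlite3 standing in for the warehouse; the issue-tracking schema and project names are hypothetical, purely for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE issues (project TEXT, status TEXT, hours REAL)")
db.executemany(
    "INSERT INTO issues VALUES (?, ?, ?)",
    [
        ("AIMFleet", "open",   4.0),
        ("AIMFleet", "closed", 2.5),
        ("AIMFleet", "open",   1.5),
        ("Portal",   "closed", 3.0),
    ],
)

# The database does the grouping and summing, so the dashboard only
# renders a handful of rows -- it never scans the raw issue table.
rows = db.execute(
    """
    SELECT project, status, COUNT(*) AS issues, SUM(hours) AS hours
    FROM issues
    GROUP BY project, status
    ORDER BY project, status
    """
).fetchall()
print(rows)
# → [('AIMFleet', 'closed', 1, 2.5), ('AIMFleet', 'open', 2, 5.5),
#    ('Portal', 'closed', 1, 3.0)]
```

Every drill-down then becomes one more cheap query against an indexed warehouse, rather than a re-aggregation inside the visualization layer.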