The Past, Present, and Future of Transportation Data – Part One
THEO AGELOPOULOS & JOHN MACADAM & ERIC RENSEL, FLTA & BRENDAN WESDOCK, MCP, GISP
A Conversation with Ohio Transportation Engineering Conference Panel Members
Recently, Gannett Fleming’s VP and National Transportation Planning Leader Eric Rensel gathered the members of a panel scheduled to present at the Ohio Transportation Engineering Conference (OTEC) on October 26 in Columbus, OH. In advance of their OTEC roundtable, they met for an insightful discussion of data’s past, present, and future uses in transportation.
Eric was joined by scheduled panel guests:
- Theo Agelopoulos, Senior Director, Architecture & Engineering Design Strategy, Autodesk.
- John MacAdam, Administrator and Web Developer, Ohio DOT.
- Brendan Wesdock, President, GeoDecisions.
The group was also joined by Tracy Riedel of Autodesk and Craig Hoffman of Gannett Fleming. The conversation has been edited and condensed. Below is part one of a two-part series leading up to OTEC.
We come from various parts of the industry: the transportation perspective, but also data more broadly. We have input from the Autodesk space and design-oriented software, and we're represented in transportation services by forward-thinking projects in Ohio. I thought we could talk about data in transportation in a past, present, and future conversation.
We talk a lot about the fourth industrial revolution and the impact it’s going to have on the way we work. I know in the field of data, it’s going to have tremendous impacts that will carry into everything we do.
We’ve spent a lot of time, 20-30 years ago, creating data and getting our data organized. That data turned into information. We had silos of information all over the place, but the future is really about prediction. It’s about taking what we’ve done and going through the past, present, and future, from organizing data to using that data to predict what’s going to happen next.
We’re already starting to see real-time data integrated with past data to reach logical conclusions about what’s going to happen, like the work we’re doing with the Virginia Department of Transportation on traffic calming. There’s going to be so much data that you really can’t look at it all. You must use artificial intelligence (AI) and machine learning to make sense of it.
I think when we look at a particular asset, the reality is that more than 80% of its total operating cost is incurred in the operations and maintenance phase. But it’s also where the least historical innovation and digitization has happened. We’ve never really closed the loop on optimizing the total operating cost.
At Autodesk, we build technology and platforms that enable our customers to solve some of these problems. We’re seeing different types of data that sit in different environments. And historically, we spent many years trying to migrate data from one environment into another. By the time we migrate that data, it’s out of date. The reality is, with the technology and the platforms we now have, we can create these federated data environments that open access.
But to get the benefits of things like AI and machine learning, you need very strong data. You need lots of data, but you also need a data platform, almost like a data river that allows you to go get access to run these different types of algorithms to get insights. I think that’s the tipping point we’re at right now. The cloud and these emerging technologies like AI and machine learning allow us to transform the way we plan, design, build, and ultimately operate these assets.
I think how we collected data historically was siloed, like speed data, for example. We used to have loops in the roadway, and they would break all the time, and they weren’t accurate. And you only had data where you had those loops. The Ohio Department of Transportation was so excited when we could put up sensors in the median to shoot Doppler radar and get speed in both directions. That was a huge step for us. Then we could do it on all our interstates, which we were so excited about, and it was half the cost. Now, we’re taking data from a feed, and it has more coverage for less cost. You can see where that trend is going. We can tell the public if traffic is slow or fast with a simple toggle on Google Maps or Bing Maps. So that real-time speed data has almost become a commodity.
Now we’re using that real-time speed data to make predictions about where bottlenecks are going to happen, where the most dangerous locations are, and where a secondary crash is most likely to occur. And we’re just starting to plug all those predictions into our decision systems. But the question is how do we notify the public with traveler information? How do we tell them what’s going on? We’re trying hard to get directly into the cabs of trucks and into vehicle systems to tell them there’s a queue ahead, there’s a crash ahead. So that traveler information piece is a huge one we can focus on.
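The comparison described above, current conditions against a historical baseline, can be sketched in a few lines. This is a minimal illustration, not ODOT's actual system: the segment names, speed values, and 60% threshold are all hypothetical, and a real deployment would pull these from a probe-vehicle speed feed.

```python
# Minimal sketch: flag likely bottlenecks by comparing real-time segment
# speeds against a historical baseline for the same time of day.
# All segment IDs, speeds, and the 0.6 threshold are hypothetical.

def flag_bottlenecks(realtime_mph, historical_mph, threshold=0.6):
    """Return segment IDs whose current speed has dropped below
    `threshold` times the historical average speed."""
    flagged = []
    for segment, current in realtime_mph.items():
        baseline = historical_mph.get(segment)
        if baseline and current < threshold * baseline:
            flagged.append(segment)
    return flagged

realtime = {"I-70 EB mm 95": 22.0, "I-71 NB mm 110": 61.0}
historical = {"I-70 EB mm 95": 58.0, "I-71 NB mm 110": 63.0}

print(flag_bottlenecks(realtime, historical))  # ['I-70 EB mm 95']
```

A production system would replace the fixed threshold with a learned model, which is where the AI and machine learning mentioned above come in.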
So, let me ask you another question. Thinking about 10-15 years ago versus where we are today, what are some decisions that seemed good at the time but really didn’t pan out? We always say we want to learn from the past, right?
Fifteen years ago we were trying to do things like predictive routing, and that never panned out. There were companies ahead of the curve, like TomTom, who at the time had data that could do things, but it just never materialized. There was this technology leapfrog in how you did predictive routing: you took a lot of historical information and said, okay, I’m going to do this route across Ohio, and it’s going to take me five and a half hours to get across the state.
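The historical approach to predictive routing described above amounts to summing per-segment travel times from average speeds observed at the departure hour. The sketch below illustrates that idea only; the segment names, lengths, and speeds are invented for the example.

```python
# Sketch of early predictive routing: estimate a route's travel time by
# summing segment length / historical average speed for the departure hour.
# Segments, mileages, and speeds below are hypothetical.

HISTORICAL_SPEEDS = {  # segment -> {departure hour: average mph}
    "OH-1": {8: 45.0, 14: 62.0},
    "OH-2": {8: 50.0, 14: 65.0},
}
SEGMENT_MILES = {"OH-1": 120.0, "OH-2": 130.0}

def predicted_travel_hours(route, depart_hour):
    """Sum per-segment travel times using the historical average speed
    for the departure hour (ignores how conditions shift en route)."""
    return sum(
        SEGMENT_MILES[s] / HISTORICAL_SPEEDS[s][depart_hour] for s in route
    )

hours = predicted_travel_hours(["OH-1", "OH-2"], depart_hour=8)
print(round(hours, 2))  # 120/45 + 130/50 = 5.27 hours
```

One reason this approach fell short is visible in the comment: it assumes conditions at departure hold for the whole trip, with no real-time correction along the way.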
Some things in architecture, too. People started with the ideas of data warehouses and data lakes; those just turned into data junkyards and data swamps. It’s just a bunch of data out there, but it’s not usable. We have great ideas of how to take data, organize it, and make it efficient and usable. But between policies, procedures, and individual ways of doing things, the data is there, yet it never quite turns into the information you wanted to get out of it.
If I look back on the last 15 years, the different stakeholders, whether the DOT, the city, or the engineering consultants doing work for them, made the best decisions they could with the data they had. Because we have a lot more data now, you almost have the opposite problem: how do you extract the right level of insight to turn the data into usable information?
Now, we’re collecting real-time analytics and data on the actual performance of that asset. We can go back and compare the intent or the metrics we thought it was, designing around all the targets versus how it’s really performing. The big difference now is that we have much more knowledge that we can use as part of the planning and ultimately to automate the design and delivery process into operations.
We used certified traffic and travel demand modeling, and it was like this black box producing these numbers. That was the best data we had at the time, but we put our blinders on to the possibility that this data wasn’t accurate and that these demand models weren’t perfect. We weren’t considering real-time operations data soon enough.
I think when we look now, we should be making decisions with data that in the future put us into this idea of whole infrastructure management instead of the traditional way of looking at things where we plan it, we design it, we build it, we operate, maintain it, and then later we replace it.
Theo, you started with a statistic about how much of the lifecycle of a roadway is spent in operations and maintenance. Well, that’s true. But we shouldn’t spend that whole 80% essentially running the infrastructure into the ground and only start thinking about it seven years before it’s time to replace it at the end of its life. We should be moving toward an integrated systems management or whole infrastructure management approach, where we take the intelligence we’re creating on the operations and maintenance side of the house and feed it more effectively into the plan.
You’re going to have to convince the people who are used to the way of doing things that we have a predicted impact, or some data based on AI. How are they going to trust that more than the conservative, traditional way? I’m not saying we shouldn’t have that direction, but there will be resistance.
That model is already being used in other industries. In the facilities management and water treatment plants, for example, their operations are based on predictive maintenance and predictive replacement. That saves quite a bit of money. But you’re right, John, it’s the same thing. It’s your guess versus my guess and how the traditional way of doing it bumps against the new way. It really comes down to the policy and the people.
And I think we need to overcome the traditional way of doing business. We have alluded to these discrete vertical silos, and people have put boxes around the planning process and the design process. The reason is that we’re a risk-averse industry, right? Nobody wants to be the fall guy. There’s a connected digital delivery process that feeds the starting point of operations, and there are good examples of where it’s working. We must figure out how to overcome the traditional way of doing business and implement the best practices that have been validated. The U.K. has a good progressive project delivery model, and you see it in their water sector with these AMP cycles. They’re basically PPPs or alliances that bring together the owner, the operator, the designer, and the builder, and they all carry the inherent risk.