140. AI for Public Good with Daniel Clarke
Daniel Clarke is Head of Innovation and Technology for a group of councils in the Cambridge area, where he works on emerging technology, data infrastructure, transport systems and automated vehicles.
In this episode of Humans with AI, David and Dan explore how AI is being used in local government, from cleaning transport data and analysing public consultations to supporting autonomous buses, social care, education and future city planning.
Dan explains why the real opportunity is not replacing people, but helping them work better, make sense of complex data and focus more time on the human parts of their jobs.
Transcript
And then the next really exciting step is that you can start to build AI tools on top of it, which allow you to interrogate data in a natural language way.
So instead of having to build BI dashboards or having to be able to do some sort of data analysis, I, as a non data expert, can just go in and type how many cars went down Mill Road from 12 o'clock till 3 o'clock on Wednesday the 5th, and it will tell us, it will draw a graph, and then you might be able to say, what was the weather like? And it will say, well, it was raining.
And you know, that will give you a bit of contextual information and I think that's really exciting because that unlocks data for people who are non data experts.
David Brown: Hello, everybody. Welcome to Humans with AI.
Today we're broadcasting live, so I'm trying out this live gig the last couple of times, and today we're on YouTube and I think we're on Riverside as well, which I didn't even know you could go live on Riverside, but apparently you can, so I just turned it on. So if anybody drops in later or has come in from Riverside, welcome. Today I've got Dan Clarke with me, who is a longtime friend of the show.
We've had chats since, I guess.
Daniel Clarke: Thanks, David. Hopefully it'll be an interesting conversation. I'm sure it will.
David Brown: It's always interesting to us.
So just give everybody a 30-second intro, because some people may not know who you are. You don't have to go into great detail, but give a little bit about your background, where you are and what you do, just to give some context to the conversation.
Daniel Clarke: Yeah, sure. So I work for a collection of councils in the Cambridge area.
I'm the Head of Innovation and Technology and I'm looking at how emerging technology and data can support some of the projects that we're delivering in Cambridge, primarily around transport. So I've been in this role for some time now, and there are two areas that we're primarily working in at the moment.
The first is how we can build a data infrastructure and then use that data infrastructure to support the transport system, making sure that it is AI ready, and obviously starting to explore how AI is impacting the transport system. And then on the other side, we've been doing work for about 10 years now looking at automated vehicles.
It actually came out of the University of Cambridge. We did some work with a professor there, very early doors, and we are now running a project called Connector.
So we've got three full size automated buses running around Cambridge. So really interested in how autonomy can support the public transport system.
My background: I stumbled into the use of technology in transport.
Before this role I worked on a number of transport projects, and before that on new developments around Cambridge, and in a previous life in the civil service, working in housing, all sorts of things. So I am not a technologist.
I always say that my job really is to take emerging technology and then translate it into the real world and look at how we can apply it.
David Brown: Amazing. That's a great summary, and it's interesting.
You've already hit on something that not a lot of people talk about around AI, particularly in a corporate sense: the readiness of the data to be used by AI.
And you know, you and I have worked together, I mean we worked together years ago for, for quite a few years on looking at data platforms and stuff like that.
I guess my question is: has that moved on in the last couple of years, and what's your thinking now about how you implement AI and how you get the data ready to be used by AI? Because if it's not structured and presented in the right way, then I guess AI can't really use it to any good effect.
So is that the biggest challenge that you have at the minute?
Daniel Clarke: So that's been a huge challenge over a number of years, and as you know, we've tried to crack this before: to develop a data platform that we could aggregate all our transport data onto. We do now have a platform, we've got significant amounts of data on there, and actually that was driven by need.
So we built a big sensor network across Cambridge collecting huge amounts of data and it inundated our business intelligence team. There was just too much data for them to do anything with.
So we now have a data platform that actually uses AI to support the cleaning and structuring of that data. That data is then stored on the platform, and we've begun to aggregate other types of data.
So parking data and real time bus data and incidents on the network, roadworks, all that data, and that now is really well structured, it's cleaned and we've really thought about how we use it. So there are a number of connectors that go into things like Power BI. Power BI is used across the authority.
And so we've been able to build dashboards which allow people who aren't data experts to really begin to use that data. And more importantly that data is now kind of AI ready.
So we've had a number of people approach us about projects, funded projects that utilize AI and we've been able to quite quickly say yes, join the project. And it's been very easy for them to use our data. Up until this point though, there was data across the whole authority.
It was stored in closed systems, impossible to use. It's messy.
Even some of the data where we were using the latest sensing technology, so we were using vision based sensors, machine learning, collecting huge amounts of data. Even that was very messy because things happen when you deploy hardware in the real world.
You get trees growing over it, which means that you get dropouts in data. It erodes data quality. Someone will chop through a power cable, a bus will park in front of the kind of screen line that you're using to count.
And what we found was that things like that, events like that won't be marked in the metadata. But we're working really hard to kind of address that, clean it up, to make data as usable as possible, but also as accurate as possible.
And that's really improved, I think the way that we use data as an organization. We found, you know, we had some initial use cases where we'd be using it to monitor new infrastructure, the impact that that has.
But actually, you know, once you've got the data, once it's easy to build tools, then you find more and more people want to use it.
And so it really has, I think, changed the organization into a data led organization that's really embedding data within its work.
And then the next really exciting step is that you can start to build AI tools on top of it which allow you to interrogate data in a natural language way.
So instead of having to build BI dashboards or having to be able to do some sort of data analysis, I as a non data expert can just go in and type how many cars went down Mill Road from 12 o'clock till 3 o'clock on Wednesday the 5th. And it will tell us, it will draw a graph, and then you might be able to say, what was the weather like? And it will say, well, it was raining.
And you know, that would give you a bit of contextual information. And I think that's really exciting because that unlocks data for people who are non data experts.
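Dan's natural-language query example can be made concrete with a small sketch. Everything here is illustrative: the record fields, the road names, the counts and the hand-rolled query function all stand in for what a real platform would do, likely with a language model translating the question into a structured query against the cleaned data.

```python
# Illustrative sketch of "how many cars went down Mill Road from 12 till 3?"
# answered as a structured query over cleaned traffic-count records.
# All data and field names are invented for this example.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CountRecord:
    road: str
    timestamp: datetime
    vehicle_type: str
    count: int

RECORDS = [
    CountRecord("Mill Road", datetime(2025, 3, 5, 12, 30), "car", 42),
    CountRecord("Mill Road", datetime(2025, 3, 5, 14, 10), "car", 35),
    CountRecord("Mill Road", datetime(2025, 3, 5, 16, 0), "car", 50),
    CountRecord("Hills Road", datetime(2025, 3, 5, 13, 0), "car", 28),
]

def cars_on_road(road, start_hour, end_hour):
    """Answer 'how many cars went down <road> between <start> and <end>?'"""
    return sum(
        r.count for r in RECORDS
        if r.road == road
        and r.vehicle_type == "car"
        and start_hour <= r.timestamp.hour < end_hour
    )

print(cars_on_road("Mill Road", 12, 15))  # 42 + 35 = 77
```

In a production system the question-to-query step would be done by an AI layer generating SQL or an API call; the point is that once the data is clean and well structured, answering the question is a simple filtered aggregation.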
David Brown: Yeah, 100%. And I think what's also interesting about that is using AI to help clean the data.
Because I've done data analytics for years, and part of it is that you get incomplete data sets, but the entire world runs off statistical analysis, right? So you don't actually need every single line to be 100% accurate.
So cleaning that data and just getting rid of the stuff that's trash and keeping the rest of it, once you have a certain amount of that, it's still going to be accurate even if you lose all that stuff.
And again, people don't talk about this, but you can use AI tools that are slightly different from a machine learning tool. I think the AI bit is the interactivity bit: you can use natural language to say, look at this data, here's what it should include; anything that's missing or anything that looks like bad data, just remove it. And then you end up with a clean data set.
I mean, for someone doing data analysis, that's massive to be able to do that.
And it's, you know, people focus so much on, oh, I want it to write something for me or I want it to do that and I want it to do this bit of research and it comes back and it makes up stuff.
But they fail to realize that you've got this other side of the tool that can do these sorts of tasks so much more efficiently and so much more easily for, like you said, people in the organization who aren't software engineers and aren't data analysts. I think that's going to really open things up, particularly in the public sector.
If you can get other councils to start doing that as well, then you can get to some point eventually where if every council has a clean data set, then you can start to combine them together and say, okay, we're now looking at data across multiple cities to understand do we have the same issues in these different places and stuff. So that's going to be really cool.
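The cleaning step described above, keeping the good rows and discarding obvious sensor failures, can be sketched in a few lines. The thresholds, field names and failure modes below are illustrative assumptions, not the actual Cambridge pipeline.

```python
# Minimal sketch of rule-based cleaning of sensor counts: drop readings that
# are clearly bad (dropouts, impossible values) and keep the rest for
# statistical analysis. Thresholds and fields are invented for illustration.

def clean_counts(readings, max_plausible=2000):
    """Keep readings that pass basic sanity checks."""
    cleaned = []
    for r in readings:
        if r.get("count") is None:        # sensor dropout
            continue
        if r["count"] < 0:                # corrupted value
            continue
        if r["count"] > max_plausible:    # physically implausible spike
            continue
        cleaned.append(r)
    return cleaned

raw = [
    {"sensor": "s1", "count": 120},
    {"sensor": "s1", "count": None},    # tree grew over the camera
    {"sensor": "s2", "count": -5},      # power glitch
    {"sensor": "s2", "count": 95},
    {"sensor": "s3", "count": 999999},  # hardware spike
]
print(len(clean_counts(raw)))  # 2
```

As the conversation notes, the remaining rows still support sound statistical analysis; an interactive AI layer would let a non-expert express these rules in natural language instead of writing them by hand.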
Daniel Clarke: Yeah, I think one of the things that you can also do, where you've got those gaps in data and you've then got an accurate data set, is you understand why you've got those gaps and you can allow for that.
But also you, if you've got enough data then you can use synthetic data to fill those gaps.
David Brown: Yeah.
Daniel Clarke: So that you've then got a complete data set, which is probably more accurate than it would have been before we were using AI and synthetic data. And that really helps as well.
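Dan's point about synthetic data filling gaps can be illustrated with the simplest possible version: interpolating between the nearest valid readings. Real deployments might use learned models; this is just a sketch of the idea, with invented data.

```python
# Sketch of filling sensor dropouts (None values) with synthetic readings.
# Here the synthetic value is the average of the nearest valid neighbours;
# gaps are filled left to right, so later gaps can use earlier filled values.

def fill_gaps(series):
    """Replace None entries by averaging the nearest valid neighbours."""
    filled = list(series)
    for i, v in enumerate(filled):
        if v is None:
            left = next((filled[j] for j in range(i - 1, -1, -1)
                         if filled[j] is not None), None)
            right = next((filled[j] for j in range(i + 1, len(filled))
                          if filled[j] is not None), None)
            if left is not None and right is not None:
                filled[i] = (left + right) / 2
            else:  # gap at the edge of the series: copy the one valid side
                filled[i] = left if left is not None else right
    return filled

hourly_counts = [100, None, 140, None, None, 180]
print(fill_gaps(hourly_counts))  # [100, 120.0, 140, 160.0, 170.0, 180]
```

Because the gaps are understood (a tree over the lens, a cut cable), the filled values can be flagged as synthetic in the metadata so analyses can account for them.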
David Brown: Yeah, that's true. What's your feeling on the appetite for using AI? And again, I know you can only speak for your little area. Little area?
Sorry, that sounds terrible.
Daniel Clarke: That's right, very patronizing.
David Brown: It's not a little area, people. But you talk to other people working in the public sector.
I know you work with other councils and you're always at events and stuff. What do you think the attitude is towards using AI these days, as opposed to a couple of years ago?
Daniel Clarke: So I think there's a huge amount of interest in it, and people are just trying to get to grips with what that actually means.
I mean, there's a lot of, as you know, smoke and mirrors in AI, and so actually trying to get to the heart of what is AI, how can I use it, and how can it transform the way that we do business, I think is difficult. I think the natural default is just to put Copilot on, you know, allow Copilot use.
David Brown: Yeah.
Daniel Clarke: But actually, you know, it can be really transformative, and I've heard some really good examples of people who are using it in a way that is saving a significant amount of time and effort. Now, this isn't about getting rid of people's jobs.
So an example is social workers who are going out, meeting clients, really difficult situations, you know, really kind of an emotional, difficult conversations.
And previously they would have to talk to them and take notes and you know, now they can take a device with them, they can record the conversation, they can, you know, get those kind of notes into some understandable kind of order.
They can share the notes with, with the client, you know, agree that the content is, is what they, what they were expecting and that saves a huge amount of effort, you know, post that appointment.
And it means that people can concentrate on doing the stuff that they should be doing, which is caring for people, that social care work, and not doing all the admin that comes with it. That may not sound very exciting, but
I think there's a real opportunity to reduce the amount of sheer bureaucracy that we have to wade through every day and actually concentrate on our jobs and deliver stuff that people want.
David Brown: Yeah. And, you know, you and I have talked about this, and I've talked about this since the very beginning.
And I've always said that, you know, the thing that I really want is like a Jarvis or some sort of a personal assistant. Right. I can just tell it to do stuff and it just does stuff. And I think the first step that we've all seen towards that are the note takers.
Now, I've been in Zoom calls where there were more AIs than there were actual people on the call.
Daniel Clarke: I've arrived on calls where I've been expecting people. Three note takers have arrived and I'm just sat there with them, you know?
David Brown: Yeah, exactly.
Daniel Clarke: Wondering whether I ought to chat to them or not.
David Brown: Yeah, sometimes I can't even control myself.
I usually try and resist the urge to just say really random stuff at the beginning just to throw the notes off. But oddly, that has turned out to be enormously helpful.
And, you know, I mean, we're not doing it today, but when I signed in, my Notion (shout out, there are other platforms) basically said, oh, do you want me to transcribe this conversation? It just comes up automatically. So anytime you get on a voice call, it's like, do you want me to do this? I know Google does it.
I know everybody does it now, but that one thing has made everybody's life actually a bit easier, because it does transcribe everything.
Because I remember before AI, and you'll remember this: you'd be on a call or in a meeting, trying to take notes and listen at the same time, and somebody's saying something important and you're furiously trying to write these notes while also trying to listen to what's next. You can't keep up with everything, and it was really difficult. Now you don't have to do the whole note taking thing.
You can just literally sit and focus on the conversation, let the notes come, and then you get a transcript. So you actually get everything that was said by everyone, and then you get the summary and the actions and all the other stuff out of it.
And it's a massive time saver.
And I can see how, particularly with, in those sorts of roles, like you were saying, where, you know, you've got home visits, you've got people with sensitive conversations and there's also, I would imagine that there's a lot of he said, she said. I didn't say that, that's not what I said. You know, back and forth and oh, you didn't tell me that. And all the other stuff.
And if, now if you're in a situation where you have a transcript, that can also make it easier. It makes it easier for both sides.
Daniel Clarke: Yeah, I mean, you know, I have terrible handwriting, and in the past you're frantically scribbling in books, and you've got all these books of notes you've taken, constantly looking back and thinking, what have I written there? I don't really understand that. So I think that has made things a lot easier.
But also things like funding grants. We do a lot of funding grants, and funding grants ask you to answer questions in 300 words, and it is the most annoying thing in the world.
David Brown: So difficult.
Daniel Clarke: But, you know, now I can write a thousand words and then I can put it into Copilot and say, turn this into 300 words.
Don't lose what it is that I'm trying to say. And quite quickly you can get to that 300 word limit and it maintains what it is you're actually trying to say. And that has saved a huge amount of time recently.
David Brown: Yeah. My usage of AI, I think in the beginning, like everybody else, it was fun to play around with and get it to write stuff for you and do all sorts of things. But where I am with it today is that I tend to write my own stuff and then put it in and get it edited. Do you know what I mean?
It's like, okay, I've written this email, here's what I'm trying to do. Can you shorten it? Am I missing something? Have I forgotten something?
Particularly if I'm talking about a project. A lot of times it will come back and it will go, what about this? And I'll be like, oh yeah, that's a really good point, I should have mentioned that. Do you know what I mean?
It's more like an editor, really. And I end up doing all my own stuff, and I don't ever take anything from it exactly as it's written anyway.
I always want to tweak it a little bit because it's never just right.
Daniel Clarke: I think that's the mistake that some people make.
They think that it can generate text and you just use that, but you always need, I think, at the moment, a human in the loop to go through it and make sure it actually makes sense, because quite often there are just some nuances where it's off, or it's put something in there that it probably shouldn't. So, yeah, you do have to be really careful.
But also, you know, I still think there is value in writing in your own voice and, you know, having some kind of personality and it does strip that away somewhat.
David Brown: Yeah, for sure.
And it's the same reason that I opened a studio: I doubled down on actual real people. And what we're seeing now in society is that a lot of the platforms like YouTube and stuff are doing a lot to identify AI channels.
There's tons and tons of just AI crap channels that generate thousands of videos, and YouTube's demonetizing and canceling them all, which is great.
And so it's great for the people that are creating content, that are actually real people, and they're getting promoted more than some other channels now. So I don't think we've got past the human aspect of anything yet. I think everybody still likes having a human in the loop, thankfully.
Daniel Clarke: I think so. And I hope we never get past that. I think having a real experience with other people is just really important.
David Brown: Yeah, yeah.
Now, I know you also have school age kids. My son's out of school now, so I've kind of lost touch with what the schools are thinking. And I know you also have a mate who runs a school. What's your view on how schools are viewing AI these days?
Daniel Clarke: So the school that my daughter's in very much sees AI as a tool that helps and supports children's learning. I think it's got enormous possibility.
I was listening to a podcast yesterday, and they were talking about how in the future, your children may be able to create, you know, agents that are bespoke for them for each of their subjects, that understands how they learn, understands where they are in the curriculum, and can act as, like, a private tutor and really help them in a way that is tailored to their learning style. Because everyone's kind of learning style is different.
David Brown: Yeah.
Daniel Clarke: So, you know, that hasn't arrived yet in the school that my daughter's in.
But you can see that that could really be transformational for children. I mean, she has ADHD, so there is a particular learning style that suits her.
And being able to tailor school to her, I think, would really have changed her school experience. I think at the moment there is a bit of wariness about children using it.
They are being taught about, you know, how to use ChatGPT, that they shouldn't rely on it for their homework, but that actually is a tool that can be used.
She's just finished her art GCSE and did quite a lot of research using AI, which I think was really beneficial and helpful, but she really just used it as a kind of search engine. But I know that the teachers are using it a lot in the background.
David Brown: Yeah. And when I went and did the chat at the school a couple of years ago, I remember that even then the students were using it.
And I know again, I've talked about this before, but for anybody that's new, the students were using it to practice their answers for, like, their GCSEs and their A levels. So they would take the question and they would say, you know, this is my GCSE English test, whatever, and here's a sample question.
And they'd put the question in, and then they would put their answer in and say, you know, grade it and tell me where I can make improvements. And they were using that as a tool. And I remember talking to the teachers, and the teachers were very wary of that in the beginning.
And they said to the kids, try it, but show us what it's telling you so we can make sure that it's correct. And they said they were absolutely floored by the results that they were getting.
And they said it was coming up with stuff that they wouldn't even have thought of, you know, that was a great idea. And I just thought, you know, that was a private school that was kind of on the forefront of using AI to help students just do better on their exams.
Daniel Clarke: Yeah.
David Brown: And again, it was the editing function, right? That's where it was coming in. It wasn't giving the kids an answer.
It was basically saying, here's how to make your answer better and more complete. Which, again, is a fantastic use of the tool.
Daniel Clarke: And, you know, I think it's really important to expose children to these tools early on and get them using them, but teach them that the tools are flawed, that you can't trust everything that you're told, that you have to have a critical eye, and that it's important that you really understand what the AI has come back with and go through it. Because otherwise, and I'm sure pupils have been caught out like this, they'll just cut and paste things where the AI has hallucinated and basically regurgitated stuff which isn't right.
And we've seen that in professional environments, where lawyers have used cases which aren't real, that have been created by AI.
But, you know, I think that's a really important skill for children as they move through the world and become older: how you interact with these models, how you use them, and how you build those critical skills that will be really important in the future.
David Brown: Yeah, for sure.
I know loads of people that use NotebookLM now, which is the Google tool where you can gather all your research together, and then it creates a podcast with two voices that goes through whatever the research is and explains everything to you.
And what they're using it for is to analyze really big, complicated documents. Like, it could be like the UK transport policy. And they would just take that whole PDF and load it in there and just say, explain this to me.
And then it's just two voices, like a man and a woman talking on a podcast that literally will go through the whole thing and pull out all the points and everything that are the important bits that you need to know out of the whole document, and then they don't have to read it.
Daniel Clarke: Wow, that is interesting. But obviously it's not a human podcast.
David Brown: No, no, I know, but it's the research bit, right? It's the thing of it talking to you.
So if you're driving and you've got to go to Manchester for the day, or you're on the train or whatever, and you're like, I've got to read this transport document and it's 300 pages long, you just chuck it in there and go, give me a 15 minute podcast that explains what's in here and what I need to know.
Daniel Clarke: And also, again, because children learn differently, it may be that for a child, being able to listen to a podcast is much more digestible than having to read things. So, yeah, it's really interesting.
David Brown: Yeah. So what about you personally? How are your feelings on it, just in general?
Daniel Clarke: So, you know, I think on a practical level I'm really seeing benefits. We just finished a piece of work actually looking at public consultations. So we did a really big public consultation a couple of years ago.
We had, I think, about 25,000 responses. Within those responses there were 10 or 11 questions which were all free text.
So with a third party company, we've taken that historic data and built a kind of large language model approach, with some safety rails and safeguarding around it.
We looked at bias, and the company had built tools which mean that you can check that the results that you're getting are actually reflected in the data, which brings in some accountability, and that helps with our governance. And then we compared the results of that to the actual analysis that was done by humans. Obviously there was quite a significant reduction in cost.
But more importantly, we're seeing some conclusions drawn from it which weren't in the human analysis because the AI has drawn links across different bits of data that was missed by the coding that was done on the initial data set. And what's really interesting is now that we've built that, we can then start to add more and more data into it.
If we do future consultations, we can add that consultation data into it.
Previously what's happened is you do a consultation and the data sits in a silo, but now you're building a knowledge base of what people think about certain schemes, what people would like to see. And then you can monitor that over time.
You can refer back to previous consultations, and because you can do analysis so quickly now, it could change the consultation process: you can do really quick consultations and then iterate and consult, and really change the way that you engage with people. So I think that sort of thing has made me hopeful that we can deploy AI for public good.
The work that we're doing around autonomous buses, we're seeing the technology improving. And again, we're moving to a point where it's not a technology problem with a lot of this stuff.
It's a human, cultural problem. It's like, will people get on a bus where there's no driver? Will they feel safe? How do we get people with wheelchairs on and off?
And it's all that kind of stuff. But again, I think there are some really big opportunities there.
So I think, you know, generally when it comes to work, some really interesting opportunities. We're lucky that we've had academic help and support through the University of Cambridge, which has a program called ai@cam.
They've really been helping the public sector in looking at AI and thinking about the ethics of deployment, you know, the opportunities. So, yeah, I think that's really positive. I think on a personal kind of note, I mean, you know, how often do I use AI in my personal life?
Probably not very often at the moment. You know, I try and avoid AI generated music and content because I still.
David Brown: Prefer humans playing, yourself included. Yeah, because you're a musician. If people are new: Dan's a musician.
Daniel Clarke: Yeah. We've had conversations about AI generated music before.
I think AI is a tool, you know, as the synthesizer was, so it could be used in interesting ways. But if you're just generating slop using AI, that's not what I want. I want to go and see somebody play something.
I mean, the emotion of that. But I think where there really is the potential for things to get interesting is with AI agents. In the future, what I'd like to be able to do is say, look, me and my family want to go to, I don't know, Italy.
We want to go for three weeks, we want to go to these places, this is the type of accommodation we like staying in, this is what we like to do. Just go out and book it all for me.
And, you know, while we're there, if the train's cancelled or whatever,
the AI agent can rebook it and let me know when the next train is, and just take some of the grunt work out of doing those kinds of things. Or something that would tell me, you know, what's cheapest. There probably are tools that can do this, but you have all these bills you have to pay over the year, running a house and all that kind of stuff.
Just an agent that would be changing stuff in the background to whatever's cheapest, or based on the parameters. Stuff like that, that takes the kind of work that I don't enjoy out of life so that you can spend more time doing stuff that you do enjoy, would be great.
David Brown: Yeah, it would be amazing. I can't wait. I mean, it's coming, right? We're getting little bits and pieces, one at a time.
And like everything, there was a mad rush in the beginning, and everybody was running headlong into, oh, let's do this, let's deploy this, let's deploy that. And I think everybody realized there was going to be a shrinking of the AI market, which I think we have seen.
I think a lot of the smaller AI companies have fallen by the wayside. They either got bought up by larger companies who started integrating everything, or they just couldn't sustain the model, because it eventually got too expensive for them to run without their own model. And now we're getting to the point where we are getting those more considered applications.
So the stuff you're talking about, right, again, this isn't like public facing stuff. It's, it's not making hiring decisions. It's not making decisions about, you know, people's, you know, how their benefits are done.
That might be something that's happening in another part of government, but you know, certainly at your level, it's, it's about making sure the data is correct.
It's not doing anything with the data, it's just helping clean the data, making it easy for people to understand what's there, and analyzing what people think, those sorts of things.
And I think those are the practical day-to-day things that are going to form the foundation of our Jarvis or whatever we decide to call it, you know. And again, do you remember in the very beginning, I was always asking, what would you call it?
Daniel Clarke:I do.
David Brown:But yeah, we are getting there. We are getting there, and it's going to be pretty weird, but kind of fun.
I mean, I'm still looking forward to it.
Daniel Clarke:Yeah. Yeah, I do.
You know, I think in the transport space, there's huge opportunities around how we manage traffic networks or, you know, transport networks in cities, how people interact with them, how we give people information, how we better integrate systems. Yeah, there's this huge opportunity; it's just working out what's the pathway to get there.
So we've done some pilots, early stage pilots, using AI and traffic signals, because traffic signals, traditionally, you know, we're just looking at the flow of cars and vehicles on the road.
But actually, you know, cities are more interested now in how do we manage our cycling networks, people walking, our bus networks, you know, kind of sustainable transport.
So actually moving to a system where we can look at all the different modes that are going through a junction, and then making different decisions based on who we want to give priority to at different times of day. So at rush hour, I don't know, cars might have priority.
And then outside of that, we might want to give priority to cyclists or pedestrians. Or as we move into the centre of the city, we might want to, at rush hour, give pedestrians and cyclists more priority.
And then, you know, there are systems emerging that use AI that can kind of help and support us with that. And I think that's potentially really interesting. Planning, infrastructure building, digital twins.
You know, there's some really exciting kind of work in, in that space as well. So, yeah, I'm really positive about how AI, I think, will improve cities, how it will help us with kind of city building.
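The time-of-day mode priority Dan describes could be sketched, very roughly, as a weighted demand calculation at a junction. This is a minimal illustration, not any real traffic control system: the time bands, mode names, and weight values are all hypothetical.

```python
from datetime import time

# Hypothetical priority weights per transport mode, keyed by time band.
# All bands, modes and numbers are illustrative, not from a real system.
PRIORITY_SCHEDULES = {
    "am_peak":    {"bus": 3.0, "car": 2.0, "cycle": 1.5, "pedestrian": 1.0},
    "inter_peak": {"pedestrian": 2.5, "cycle": 2.5, "bus": 2.0, "car": 1.0},
    "pm_peak":    {"bus": 3.0, "cycle": 2.0, "pedestrian": 2.0, "car": 1.5},
}

def time_band(t: time) -> str:
    """Map a clock time to a named priority band."""
    if time(7, 0) <= t < time(9, 30):
        return "am_peak"
    if time(16, 0) <= t < time(19, 0):
        return "pm_peak"
    return "inter_peak"

def weighted_demand(counts: dict, t: time) -> dict:
    """Scale observed per-mode counts at a junction by the active weights."""
    weights = PRIORITY_SCHEDULES[time_band(t)]
    return {mode: n * weights.get(mode, 1.0) for mode, n in counts.items()}

# Detected movements at a junction during the morning peak.
counts = {"car": 40, "cycle": 12, "pedestrian": 25, "bus": 3}
scores = weighted_demand(counts, time(8, 15))

# A controller might then favour (e.g. extend green for) the movement
# with the highest weighted score.
priority_mode = max(scores, key=scores.get)
print(priority_mode)  # car dominates under this hypothetical peak weighting
```

In a deployed system the weights would come from policy, and the counts from AI-based detection at the junction; the point is only that "who gets priority" becomes a schedule you can tune per city, per junction, per time of day.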
But, you know, at the same time, I also listen to podcasts where the creators of AI tell us that we're all doomed, we're all going to die.
David Brown:Yeah, well, that's the weird thing, right, is I think what they see is the unfiltered, uncontrolled version.
And I think, you know, I think the public versions that we see have so many guardrails on them now that you, you're not actually seeing what potentially can happen in the background.
writing about since the, what… Everybody understood that there's, you know, sort of great potential for this to go horribly wrong.
And, I mean, we've even now seen, for example, that the AI recognizes when it's being tested, so it knows to give the correct answers to pass the test. Do you know what I mean? It's aware enough to know that.
It's like, okay, they're asking me these questions to see if I give the right answer, so I'll give the right answer. And this all goes back again to everything I've been saying, which is, it behaves exactly like a human.
That's exactly what a human would do if they thought they were being psychologically tested: to see if I'm crazy or not, I'm going to give the answer that's going to make me seem not crazy.
Daniel Clarke:Yes. Yeah.
David Brown:And, do you know what I mean? It lies, and humans lie. And it makes up stuff, because humans make up stuff.
You know, hallucinations? In a human, you would call that being creative. I have some mates who are absolutely completely bonkers, and they come out with the most nutty stuff all the time, and it's amazing.
And you just go, oh, you know, that's them. But when it's in an AI, everybody goes, oh my God, it's hallucinating and making up stuff. I'm like, yeah.
Like, you don't have a friend that does that?
Daniel Clarke:But I think people think, you know, computers shouldn't do that. Because basically they don't have a personality; it's about logic.
And so, you know, I think people are really struggling with that concept, that an AI, you know, generates nonsense, or that you can interact and kind of talk with it like that.
David Brown:Yeah, but it's. But this is what I mean, right? Like, they said, oh, it's got to pass the Turing test.
And it passed the Turing test, like, years and years and years ago. And then they went, oh, that's not a good enough test, let's do another kind of test. And then they did that, and it passed that.
And then they're like, oh, yeah, okay, maybe that test isn't good enough either. It's like, how many tests does it have to pass before you kind of start going, it knows what it's doing?
And you know, we've had versions where they've tried to delete it, and it goes off and creates a copy of itself behind the scenes on another server on the Internet somewhere else to preserve itself. So now it's actually showing, you know, signs of self-preservation in a sort of unprompted way.
And we've got other companies that are saying they literally, they can't release it to the public because it's too dangerous, because they don't know what it's going to do. You know what I mean?
It's like, we have created it. And I suspect that behind the scenes, again, the people who are building these tools are probably looking at it and going, holy, this is actually a thing. It behaves on its own and it has its own self-interest, and, you know, they're desperately just trying to control it.
But I reckon it won't be too many years before one like totally breaks free and then just starts doing all sorts of crazy stuff.
Daniel Clarke:I mean, what I find difficult is that there are people building AIs who are saying, you know, it's become almost sentient, this is dangerous. And then they just continue building it, as if, like, I can't help it, I've just got to keep doing it, even though this could destroy the world.
I just, you know, I find that difficult.
David Brown:It's because they're paying them, you know, millions of dollars a year to do it. But this is something that goes back to Silicon Valley. In a previous life, when I first moved to the UK, I was installing software all around the world, and I spent a lot of time in Silicon Valley.
And the thing is, a lot of the tech bros, we'll call them, actually believe that what they're doing is a positive. And they genuinely don't understand the concept that all of this could go horribly wrong.
They have this unwavering belief that humans are good, and that we'll just work it out and everything will be fine in the end, and we're just going to build it and we'll see what happens, but we'll sort it out.
And in almost this naive kind of way, they don't understand how the world works. Then somebody comes out and does something really evil with it, and they genuinely can't understand that somebody would do that with the tool, or that the tool would do that.
And I don't know how someone operates in that world, but I genuinely believe a lot of them feel that way, and they really don't actually see that, you know, there's potentially so much bad in it.
Daniel Clarke:Yeah, yeah. I mean, I think there is a big conversation around sovereign technology and how do we move away from big Silicon Valley companies?
And it's really difficult. You know, in the UK, we're seeing, like, Palantir, obviously, doing work in the NHS.
And you just have to look at what the CEO of Palantir put out a couple of weeks ago to realize that, you know, maybe our aims and objectives for the NHS don't really align with the company's ethos.
David Brown:Yeah. But then, yeah, it's a tough one.
And again, having sort of, you know, been on the inside of the data companies and stuff, at some point you're like, we would really like to use someone else, but there isn't another platform that actually does what their platform does.
And, you know, then you, as the NHS or as any government body, you're kind of stuck between a rock and a hard place because you've got stuff that you want to do and they're the only ones that can do it.
Daniel Clarke:And I know it's really difficult. And, you know, we are reliant on Microsoft. You know, all councils use Microsoft.
You know, I think the default is to then use Copilot, to use Azure and Fabric, you know, for your data platforms, just because it's easy, because it easily integrates with everything else that you do.
Yeah, but, you know, I think we need to sometimes resist that temptation. I mean, we certainly have. We use a specialist transport data platform, because actually they really understand transport data.
They know what we're trying to do.
They're really experienced in a way that, you know, the kind of big companies like Microsoft aren't, and a lot more agile, actually, and a lot more able to respond to kind of hyperlocal concerns.
David Brown:Yeah. Yeah. Awesome. Anything else you want to add to the conversation?
Daniel Clarke:No, no. You know, I'm glad that you're still doing these AI podcasts.
I think when you started them, it was because you were having lots of kind of interesting conversations with people about AI. And I think we just need to keep talking about it, because it's changing and evolving so quickly that we need to understand what's happening and what the impacts are on both our personal and work lives.
Whenever I've listened to the conversations you have, it always helps me kind of, you know, think about my own context and how we can either harness AI or not, for our own benefit.
You know, I think the really important thing is though, we just, we just keep thinking about public good and, you know, what's, what's good for us.
David Brown:And so, yeah, thank you, Dan. In the beginning, I wanted to be the canary in the coal mine, you know.
And you remember, I had a couple of ladies that worked in the office I was in who had lost all their business within a couple of months of ChatGPT coming out. And, you know, I thought it was going to be absolutely devastating for creatives in particular.
And it is having a massive impact, that's for sure. But it hasn't panned out quite as bad as we thought it would in some areas, and I think I'm much more positive about it now.
And I am going to go back and talk to several people, but I'm going to talk to new people as well. So I'm not just going to go back and rehash with everybody I've spoken to before.
I'm really trying now to focus on what people are doing, and how they're using it in a good way.
So, you talking about the council using it for data and all that sort of stuff, to kind of show people that, yes, there are scary sides of it, right?
And there are sides that you have to be really careful about. And, you know, we don't want kids just going in and using it to do their homework and all that, because there are issues with that, sure.
But there are also very valid, very good reasons where it is saving money, bringing efficiency, and I think it is ultimately going to make things better for everybody. So I'm trying to be a little...
Daniel Clarke:Bit more positive. And I'd say this as well.
When you look at the applications in the NHS for things like imaging and, you know, diagnostics, I mean, it's just, you know, I, I truly think that will have a revolutionary effect on healthcare in the future.
David Brown:Yeah. And, you know, it's doing the scans, and particularly around radiology, AI is extremely good.
It's better and more accurate than humans now at doing that analysis, and so they use it. I think the US has 700 different companies that are approved to do scan analysis and stuff from radiology.
And originally everybody thought that, well, this is going to wipe out radiologists. Right? Like it's just going to take over the whole thing.
And actually it's exactly the opposite, because now that it's cheap and fast and accurate, every doctor is sending people for a scan, and there aren't enough radiologists to manage the scanning process. So actually, I think they said something like it's going to double or triple the number of radiologists needed.
And so they're desperate for people to go into radiology because you still have to manage the people, you still have to get them in, you have to do the scans, you have to, you know, all of those human parts are now even more important because the technology is faster and better so everyone can get a scan.
And so again, going back to the tech bros: grudgingly, this is one of those instances where they go, well, you know, we'll see what the positives are down the line, but all these new jobs will come up. And it's like, it's not a new job, but I see what they mean, right?
So it was an unintended sort of consequence that now radiology is more popular than ever, and the scans are better, and they're doing more of them, and they're helping more people. So it's kind of a win, win, win.
So that's where I'm kind of sitting right now: I want to pick up on the positives, the good uses of it. Where is it being used effectively, and how is that helping everybody?
Daniel Clarke:Yeah, I think that's really important, to give a balanced view so that people aren't just reading about the kind of negative.
David Brown:Yeah, because, I mean, that's fun to talk about, of course. That's why I have a whole podcast about people who've had, you know, terrible things happen to them.
But again, it's the positive thing that happens afterwards, you know what I mean? That is the better part of it. So, awesome.
Daniel Clarke:You know, I think we've seen it before with lots of technologies, where people have talked about them being kind of, you know, revolutionary. Actually, it's kind of a slow evolution, and, you know, things just get better. So we'll see. We'll see where we get to.
I mean, we might not be here this time next year because we've been wiped out by our AI overlords. And I am completely wrong. But I. You know.
David Brown:Or Russia.
Daniel Clarke:Yes.
David Brown:Or the Iranians. Like, who knows? Exactly. This whole conversation may be moot in a year because we may all just be bombed, but who knows?
We won't talk about any of that.
Daniel Clarke:Yeah.
David Brown:That's dangerous territory for everyone. We'll do that over beers. All right, Dan? Thanks very much.
I'm conscious you've got a hard stop in about five minutes anyway, so it's about time to wind up. But yeah, again, thanks very much for coming back on and having a chat. It's always a pleasure.
Daniel Clarke:No, great to speak to you and thanks for having me.
David Brown:Brilliant. And I will book in some time to come see you guys in Cambridge, just for a social, I think, and we can go sit by the river and drink some beers.
Daniel Clarke:Excellent.
David Brown:All right, thanks. Cheers. Bye.
Daniel Clarke:Bye.