On my podcast this week we have three stories that may shake your faith in our progress towards a technology utopia. You may believe that Google is always accurate, you may laugh at the idea your TV could be hacked - and you may think that intelligent assistants will work amicably together to make your life simpler. Prepare for a shock…
Is the CIA listening through your TV?
The answer to that question is probably not - unless you're an intelligence target and an agent has got into your home to plug a USB stick into your smart TV.
But that huge dump of documents from Wikileaks this week, which allegedly revealed all of the secrets of the CIA's hacking operations, certainly raised all sorts of concerns - for technology firms and consumers as well as the intelligence agencies.
Among alleged hacks of all manner of devices, it's claimed the CIA worked with Britain's MI5 to find a way to listen in via Samsung smart TVs.
Civil liberties groups say the agency stockpiled security flaws in devices to use them for its work, but left the population at risk by doing so. How worried should we be about what this says about our vulnerability to hacking, not just by spies but anyone else? We talk to Dave Palmer, director of technology at the security firm Darktrace, who worked for British intelligence agencies before joining the company.
Google gets gamed
Earlier this week, I wrote about what happened when I asked my Google Home connected speaker: "Is Obama planning a coup?"
It replied with some questionable information suggesting the former US President was in bed with Communist China.
This highlighted an issue with Google Snippets, a relatively new feature that gives you one simple answer to a search query. That's great when you're looking for a carrot cake recipe, but not so good when you want accurate, unbiased information about current events.
We talk to the ultimate authority on how Google works, Danny Sullivan of Search Engine Land. He explains how the search giant can sometimes get things very wrong.
When bots get cross
We're told to expect an increasingly automated future in which intelligent machines will work alongside us, educate us, diagnose us, and in some cases replace us. But will all these self-taught machines be able to get along with one another?
That's a question Dr Taha Yasseri and his colleagues at the Oxford Internet Institute have been exploring by studying the behaviour of the bots that maintain pages on Wikipedia. It turns out that they sometimes disagree over edits - the history of Aston Villa football club being one example.
Why would this matter? Dr Yasseri says he has discovered bots behave differently in different environments. He reckons, for instance, that an AI that makes a driverless car work on a German autobahn could struggle on Italian roads where the cars are driven by Italian bots with rather different cultural norms.