The US entrepreneur and Tesla founder finds traffic “soul-destroying” – so he’s come up with this.
US entrepreneur Elon Musk has spoken at the Ted (Technology, Entertainment and Design) conference about his vision for a tunnel network under Los Angeles and shown off how it might work.
Mr Musk also talked about plans to have fully autonomous journeys across the US by the end of the year.
He spoke about how he wanted solar-powered roof tiles to be standard on “every home” within 50 years.
And he explained why he is committed to sending a rocket to Mars.
In a wide-ranging conversation with Ted curator Chris Anderson, the founder of Tesla and SpaceX said that he was inspired to consider a tunnel system to alleviate congestion because he found being stuck in traffic “soul-destroying”.
He showed off a concept video of how the multi-layered tunnel system might work.
Cars would stop on a trolley-like device and the ground would open up to carry them below. Cars would then drive off the platform and another would get on to be returned above ground.
He said that his vision was to have “no limits” to the number of tunnels, but to find ways to cut the cost of boring and to speed up how quickly such tunnels could be created.
“We have a pet snail called Gary, and Gary is capable of moving 14 times faster than a tunnel boring machine – so the ambition is to beat Gary,” he said.
The firm he set up to oversee the project – The Boring Company – took up less than 3% of his time, he said, and it was run by interns and part-timers.
“It is pottering along.”
Susan Beardslee, a senior analyst at ABI Research, said the project sounded like a “moonshot”.
“He has shown his ability to be a visionary, and I believe he can take tunnelling and apply the financial capital and technical expertise, but this is not a go-it-alone project.
“He is addressing the need to look at congestion – but it will have to be a public/private partnership,” she said.
“Musk is good at coming up with a very different way of looking at things, and this might work better somewhere where it can be purpose-built rather than retro-fitted.”
Mr Musk is rarely out of the headlines these days. He recently notched up another landmark when his SpaceX business launched a recycled rocket, and he has started a new firm, Neuralink, which aims to augment the human brain with computer technology.
His semi-autonomous Tesla car fleet has been under scrutiny since a fatal crash in May 2016, but Mr Musk showed no signs of slowing down his ambitions for the firm.
He promised a “fully autonomous” journey across the US “by the end of the year”.
“From a parking lot in California, cross-country to New York or from Seattle to Florida, these cars should be able to go anywhere on the highway system,” he said.
He also revealed that he had test-driven the semi-autonomous electric truck Tesla plans to unveil in September, saying it was “so nimble”.
“You will drive it around like a sports car,” he said.
“In a tug-of-war between a Tesla semi and a diesel semi, the Tesla would pull the diesel uphill.”
Ted curator Chris Anderson asked Mr Musk why he had so many diverse interests – on Earth and in space.
“The value of Tesla is to accelerate the inevitable use of sustainable energy and if it accelerates that by a decade, then that would be a fundamental aspiration,” said Mr Musk.
But, he added, the advancement of space technology was not inevitable and would only happen if someone worked hard to make it a reality.
“It is important to have an inspiring future and if it doesn’t include being out there among the stars, that is incredibly depressing.
“I am not trying to be anyone’s saviour.
“I just want to think about the future and not feel sad.”
Cyber-attacks, information warfare, fake news – exactly 10 years ago Estonia was one of the first countries to come under attack from this modern form of hybrid warfare.
It is an event that still shapes the country today.
Head bowed, one fist clenched and wearing a World War Two Red Army uniform, the Bronze Soldier stands solemnly in a quiet corner of a cemetery on the edge of the Estonian capital Tallinn.
Flowers have been laid recently at his feet. It is a peaceful and dignified scene. But in April 2007 a row over this statue sparked the first known cyber-attack on an entire country.
The attack showed how easily a hostile state can exploit potential tensions within another society. But it has also helped make Estonia a leader in cyber security today.
From outrage to outage
Unveiled by the Soviet authorities in 1947, the Bronze Soldier was originally called “Monument to the Liberators of Tallinn”. For Russian speakers in Estonia he represents the USSR’s victory over Nazism.
But for ethnic Estonians, Red Army soldiers were not liberators. They are seen as occupiers, and the Bronze Soldier is a painful symbol of half a century of Soviet oppression.
In 2007 the Estonian government decided to move the Bronze Soldier from the centre of Tallinn to a military cemetery on the outskirts of the city.
The decision sparked outrage in Russian-language media and Russian speakers took to the streets. Protests were exacerbated by false Russian news reports claiming that the statue, and nearby Soviet war graves, were being destroyed.
On 26 April 2007 Tallinn erupted into two nights of riots and looting, in which 156 people were injured, one person died and 1,000 people were detained.
From 27 April, Estonia was also hit by major cyber-attacks which in some cases lasted weeks.
Online services of Estonian banks, media outlets and government bodies were taken down by unprecedented levels of internet traffic.
Massive waves of spam were sent by botnets and huge amounts of automated online requests swamped servers.
The result for Estonian citizens was that cash machines and online banking services were sporadically out of action; government employees were unable to communicate with each other by email; and newspapers and broadcasters suddenly found they couldn’t deliver the news.
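The “unprecedented levels of internet traffic” described above are the essence of a denial-of-service attack: so many automated requests that servers can no longer respond to real visitors. As a minimal, hypothetical sketch – not how Estonia’s defenders actually worked, and with invented IP addresses, request counts and thresholds – a flood source stands out starkly in a request log:

```python
from collections import Counter

# Hypothetical request log: (timestamp_seconds, source_ip).
# Sixty ordinary visitors, plus one flooding address hitting the
# server 3,000 times in the same minute. All figures are invented.
request_log = [(t, "198.51.100.7") for t in range(60)] * 50
request_log += [(t, "203.0.113.%d" % i) for i, t in enumerate(range(60))]

def flag_flooders(log, window_seconds=60, threshold=100):
    """Count requests per source IP in the window; flag sources above threshold."""
    counts = Counter(ip for t, ip in log if t < window_seconds)
    return {ip: n for ip, n in counts.items() if n > threshold}

flooders = flag_flooders(request_log)  # only the flooding address is flagged
```

In 2007 the traffic came from botnets – thousands of compromised machines at once – which is precisely what makes such attacks far harder to filter than this single-source example suggests.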
Liisa Past was running the op-ed desk of one of Estonia’s national newspapers at the time, and remembers how journalists were suddenly unable to upload articles to be printed in time. Today she is a cyber-defence expert at Estonia’s state Information System Authority.
“Cyber aggression is very different to kinetic warfare,” she explained. “It allows you to create confusion, while staying well below the level of an armed attack. Such attacks are not specific to tensions between the West and Russia. All modern societies are vulnerable.”
That means that a hostile country can create disturbance and instability in a Nato country like Estonia, without fear of military retaliation from Nato allies.
The alliance’s Article Five guarantees that Nato members defend each other, including against attacks in cyberspace. But Article Five would only be triggered if a cyber-attack resulted in major loss of life equivalent to traditional military action.
The difficulty of identifying who is responsible also complicates retaliation. The 2007 attacks came from Russian IP addresses, online instructions were in Russian, and Estonian appeals to Moscow for help were ignored.
But there is no concrete evidence that these attacks were actually carried out by the Russian government.
On condition of anonymity, an Estonian government official told the BBC that evidence suggested the attack “was orchestrated by the Kremlin, and malicious gangs then seized the opportunity to join in and do their own bit to attack Estonia”.
Hostile states often count on copycat hackers, criminal groups and freelance political actors jumping on the bandwagon.
2007 was a wake-up call, helping Estonians become experts in cyber defence today. “It was a great security test. We just don’t know who to send the bill to,” says Tanel Sepp, a cyber security official at Estonia’s Ministry of Defence.
The Bronze Soldier attacks may have been the first suspected state-backed cyber-attacks against another nation.
But since then cyber warfare has been used all over the world, including in Russia’s war with Georgia in 2008, and in Ukraine. “Cyber has become a really serious tool in disrupting society for military purposes,” says Tanel Sepp.
That’s why Estonia’s government has now set up a voluntary Cyber Defence Unit. Since Russia’s 2014 annexation of Crimea, the Estonian Defence League has been much reported on by the international press: at weekends 25,000 volunteers don fatigues and head to the forests to learn how to shoot.
Less well known is the shadowy Cyber Defence Unit.
The country’s leading IT experts are also trained by the Ministry of Defence. But in addition they are security vetted and remain anonymous.
They donate their free time to defending their country online by practising what to do if a major utility or vital service provider is brought down by a cyber-attack.
It’s the sort of private sector talent the state could never usually afford to employ.
But the memory of 2007 is a good recruiting sergeant. The attacks have stuck in the national consciousness by proving to Estonians the importance of cyber security.
Ten years after the attacks, the Bronze Soldier is still a reminder of how much Estonia’s complicated past can disrupt the present.
The oldest operating ferry in Finland is being relaunched as the country’s first all-electric vessel.
The Fori first entered service in 1904 as a steam-powered boat. It was fitted with diesel engines in 1955.
When it returns to the Aura River in Turku on Saturday, it will be fitted with two electric motors and an electric drivetrain system.
Despite the upgrade, the ferry will still make the crossing at an average speed of 2km/h (1.24mph).
The work was carried out by local boatyard Mobimar, using an electric drivetrain system designed by Finnish company Visedo.
Each of the two engines consists of a DC/DC converter to increase the voltage from the batteries, and a permanent magnet motor drive to transform the electrical signal into mechanical energy.
The new system is eight tonnes lighter than the diesel engines and hydraulic motor it has replaced.
Visedo said it should use about 3kWh of energy per hour of operation – an average power draw of 3kW – during the summer months, rising to 4kWh in the winter.
The ferry only needs one engine to operate, but the design allows for both to be used when extra power is required – such as during the winter when river ice begins to form.
It also means the ferry can stay in service when one of the engines needs maintenance.
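Because Visedo’s figure is an average hourly draw, daily consumption depends only on how long the ferry runs. A back-of-envelope calculation – the 15-hour operating day here is an assumption for illustration, not a figure from the article:

```python
# Rough daily energy use for the Fori, from the stated hourly figures.
# The operating-day length is a hypothetical assumption.
SUMMER_KWH_PER_HOUR = 3
WINTER_KWH_PER_HOUR = 4
OPERATING_HOURS = 15  # assumed daily service window

summer_daily_kwh = SUMMER_KWH_PER_HOUR * OPERATING_HOURS  # 45 kWh/day
winter_daily_kwh = WINTER_KWH_PER_HOUR * OPERATING_HOURS  # 60 kWh/day
```

On those assumptions the ferry’s whole day of crossings uses less energy than a few fast-charges of an electric car.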
The Fori is one of Turku’s less obvious tourist attractions, operating non-stop during the day, transporting up to 75 passengers at a time from one side of the Aura River to the other.
The city authorities announced the plan to convert the light vehicle ferry from diesel to electric in 2015.
It is the favourite money-raising tool for crazy dreams and passion projects, as well as more worthy causes – and now would-be MPs are getting on board.
Crowdfunding – asking lots of people to each donate a small sum of money online – has been around since the 1990s, when fans of cult rock bands got together to fund new albums and tours for their idols.
The sites are now used by a vast array of different fundraisers.
Currently, £25 might buy you a lamb for an Indian village, de-worming tablets for 500 children, or a ukulele.
On former Deputy Prime Minister Nick Clegg’s website, though, it’ll buy 1,000 A4 leaflets for his election campaign. He is not alone.
‘I could almost cry’
Sites like GoFundMe, Crowdfunder and Crowdpac are brimming with politicians.
Crowdfunder says that in the week since the UK election was called, more than £200,000 was raised for political projects on its site.
It is expecting a 50% increase in the number of candidates using crowdfunding compared to 2015.
Conservatives in Wirral South, UKIP candidate Phil Eckersley and sitting Labour MPs like Peter Kyle, Maria Eagle and Rachel Reeves are among those turning to the technique.
There is nothing new about politicians raising money online – but the snap election has left them very little time to raise money through traditional methods, forcing them to get creative.
Lib Dem Stephen Lloyd, who is trying to regain the Eastbourne seat he lost to Conservative Caroline Ansell in 2015, said: “Where you have more time, I have fundraising dinners, I’ve gone to more quizzes and raffles and tombolas than you could shake a stick at.
“I instantly realised I didn’t have time to do ten fundraisers over the next month.”
The internet offered an answer. He set up a page on his website asking for donations, and shared it on Facebook.
Within a week, 551 donors had raised over £16,000. He says the response touched him.
“The truth of it is I could almost cry. When I’ve gone out and asked people, they’ve stepped up. It makes me feel like I’m part of something.”
Businesswoman Gina Miller has crowdfunded over £300,000 to organise tactical voting and support up to 100 candidates opposed to a “hard Brexit”. That’s alongside numerous pages for SNP and Green candidates.
There’s even someone calling themselves “Mr Fish Finger” raising money to stand against Lib Dem leader Tim Farron.
Anyone who wants to take part in the election has to stump up a £500 deposit.
Green MEP Molly Scott Cato is hoping to become an MP in Bristol West. Both she and her party have crowdfunding pages.
“Greens have been using this model for a number of years. To be honest, it wasn’t that we preferred it – it was our only choice. We’re not a very well-funded party and so candidates needed to get hold of enough funds to put up for the deposit.”
But isn’t it a bit odd to ask the public to fork out even more for the election? After all, June’s poll will cost the taxpayer tens of millions of pounds to administer.
Molly Scott Cato disagrees. “It’s a democratic approach. It allows everybody to support the party with their money, and later on support it with their vote as well, hopefully.
“It’s a way of people making obvious their investment in the campaign and their commitment to a Green candidate.”
Who can donate?
Up and down the country, politicians are turning to you, the public, for help with their campaigning costs. But they need to be careful.
The Electoral Commission regulates election spending and political donations. It says candidates must collect enough information from donors to be able to check they are allowed to accept their cash.
In a statement, the elections watchdog said: “When crowdfunding, campaigners must only accept donations over a certain value from a permissible source.
“For candidates that means donations exceeding £50, for political parties and non-party campaigners it is £500.
“Candidates, parties and non-party campaigners can only accept donations from permissible, mainly UK sources.
“They must therefore collect information from every donor to ensure that they can properly check that each donation is from a permissible source. If a donation is not from a permissible source, it must be returned within 30 days.”
So while it’s an effective way of raising cash quickly, if you’re crowdfunding, make sure you’re keeping tabs on who’s donating.
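The thresholds the watchdog quotes lend themselves to a simple screening rule. A hypothetical sketch – the function and recipient labels are invented, though the £50 and £500 figures come from the statement above:

```python
# Screening rule based on the Electoral Commission thresholds quoted above:
# donations over £50 to a candidate (£500 to a party or non-party campaigner)
# require a permissibility check on the donor. Code structure is illustrative.
def needs_permissibility_check(amount_gbp, recipient="candidate"):
    """Return True if this donation requires donor permissibility checks."""
    threshold = 50 if recipient == "candidate" else 500
    return amount_gbp > threshold

needs_check = needs_permissibility_check(60)  # a £60 gift to a candidate
```

Anything flagged would then need donor details collected, with impermissible donations returned within 30 days, as the Commission’s statement sets out.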
A near-miss involving a passenger jet and more than one drone has been reported in the UK for the first time.
The incident happened over east London as an Airbus A320 was approaching Heathrow Airport last November.
A report by the UK Airprox Board (UKAB) found the incident had “compromised the safety of the aircraft”.
One pilot also said there would have been a “significant risk of collision” if the jet had been on a different approach path.
The plane was flying at 5,500ft on 20 November when its crew spotted two white, orb-shaped drones nearby.
The pilots “remained in constant visual contact” with the gadgets, which are estimated to have got as close as 500m to the aircraft, according to the report.
Less than 30 minutes later, a Boeing 777 approaching Heathrow flew within 50m of what is believed to have been one of the drones, described as white, about 2m wide and with four prongs.
A report was made to the Metropolitan Police, but the people flying the drones were not found.
The latest report from UKAB said there had been five near-misses between aircraft and drones in one month – bringing the total over the past year to 62.
But the Civil Aviation Authority (CAA) said the 20 November case was the first time there had been an incident involving more than one drone.
According to CAA rules, drones must not be flown above 400ft or near airports or airfields.
Earlier this month, police forces in the UK said they were being “flooded” with reports involving drones.
Last year, 3,456 incidents involving drones were recorded, compared with 1,237 in 2015, according to the PA news agency.
Incidents included invasions of privacy, disputes between neighbours and prison smuggling.
A marmoset monkey has been rescued by RSPCA officers after it was offered up for sale on social media.
Officers found Lola living in “hugely inappropriate conditions” in a house in Blaenymaes, Swansea.
She was running loose in the living room with a cage and a UV lamp in the corner, along with a dog she would often try to attack.
She now lives at a wildlife centre with other monkeys. The RSPCA wants a ban on keeping primates as pets in Wales.
RSPCA inspector Neill Manley said: “Sadly, some people like the idea of keeping a monkey as a pet, but this is another example of how unsuitable they are.
“Marmosets have very complex and specialist needs, which it would be practically impossible to meet in a domestic house such as this one.
“A Staffie-type dog also lived at this Blaenymaes home, and we heard how the monkey sometimes would try and bite the dog, which further highlights how these just weren’t the right conditions.”
He said the owners had sought advice from a vet on keeping the animal, but a house was not the right environment for such a highly intelligent and social primate.
Charlie Skinner, RSPCA campaigns assistant, said estimates showed there were more than 120 privately-owned monkeys living in homes in Wales.
He added 15 European countries had already introduced bans on keeping them as pets and it was time “for action in Wales”.
Amazon’s new smart speaker, the Echo Look, received a mixed reaction following its unveiling this week.
Some say it could inspire confidence while others voiced privacy concerns.
The $200 (£154) gadget, not yet on sale, features a camera to capture full-length selfies and video which can be stored to create a personal “look book”.
It uses smart assistant Alexa to give a verdict on outfit choices and recommend clothes to buy.
It is listed as available “by invitation only” on the Amazon website.
“With this data, Amazon won’t be able to just sell you clothes or judge you. It could analyze (sic) if you’re depressed or pregnant and much else,” tweeted Zeynep Tufekci, assistant professor at the University of North Carolina.
“Not just a privacy disaster; people don’t understand what algorithms can infer from pictures. You are disclosing a lot of health info, too.”
Amazon has been contacted by the BBC for comment.
But Fiona Blake, who runs a closed Facebook page where hundreds of women share photos of their outfits and offer each other supportive fashion advice, said she thought the Echo Look sounded like a good idea.
“People struggle with looking in the mirror and taking photographs of themselves,” she said.
“This is brilliant daily inspiration. You could flick through your own personal Pinterest board [of outfit choices] – that is key for getting up, getting dressed and getting out there.
“I’m happy for someone to recommend something. I can’t get to every high street shop. I don’t mind being sold to but I know a lot of people don’t like that approach.”
Professional stylist Donna McCulloch, from Sulky Doll stylists, said people should not rely on an app to tell them what to wear.
“If you are unsure about an outfit, then trust your own gut instinct and try a different look instead,” she said.
Ben Wood, analyst at CCS Insight, said the Echo Look may not appeal to all ages.
“For younger people that happily share regular moments of their life via Snapchat and Instagram, the general response has been positive with the main limitation being the price,” he told the BBC.
“However, for a slightly older audience it either seems completely unnecessary (I already have a full length mirror) or is regarded as a considerable privacy concern – particularly in the context of a device that it makes sense to have in a bedroom.
“It underlines Amazon’s ambitions for its growing range of Alexa-powered Echo products. The Echo Look helps extend its reach into other parts of people’s homes and also in the dramatically different product categories orientated around fashion.”
The website of the largest telecoms provider in Indonesia has been defaced with an offensive post.
Visitors to Telkomsel’s site on Friday morning were greeted by a profanity-laden message criticising the company.
The perpetrator also replaced the text that appears under Telkomsel’s name in search-engine results with an explicit message.
Telkomsel said it was repairing the site and investigating the incident.
BBC Indonesia reported that by 08:35 local time (02:35 GMT) the message had been replaced by a notice from Telkomsel explaining that the site was “under maintenance”.
The post criticised the state-owned company for its high prices and complicated mobile plans, including its video and music streaming bundles.
The message was repeated in the company’s meta description, the text that appears beneath a site’s name and web address in search engines.
The name displayed in search results was also changed, with the company’s name replaced by an offensive phrase.
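The meta description is an ordinary tag in a page’s HTML head, which is why defacing the site let the attacker change what search engines display. A minimal sketch – using an invented page, not Telkomsel’s actual markup – of how that text is read out of a page:

```python
from html.parser import HTMLParser

# Hypothetical page: the snippet search engines show lives in a
# <meta name="description"> tag in the document head.
page = """<html><head>
<title>Example Operator</title>
<meta name="description" content="Example defaced description text.">
</head><body></body></html>"""

class MetaDescriptionParser(HTMLParser):
    """Collect the content attribute of any <meta name="description"> tag."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content")

parser = MetaDescriptionParser()
parser.feed(page)  # parser.description now holds the snippet text
```

Because search engines cache this tag, a defaced description can linger in results even after the page itself has been restored.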
Telkomsel posted an apology on its other websites “for the inconvenience of not being able to access [the] official website” during the incident and the ensuing repairs.
“We are currently performing the necessary tracking and improvements,” Telkomsel Corporate Communication vice-president Adita Irawati said in a statement.
The company’s mobile network was not affected.
Google and Facebook have confirmed that they fell victim to an alleged $100m (£77m) scam.
The two firms were reportedly tricked into wiring more than $100m to the scammer’s bank accounts.
On 27 April, Fortune reported that the two victims were Facebook and Google.
In a statement, Google said that it was one of the victims.
“We detected this fraud against our vendor management team and promptly alerted the authorities,” a spokeswoman said.
“We recouped the funds and we’re pleased this matter is resolved.”
However, the firm did not reveal how much money it had transferred and recouped.
Nor did Facebook – but a spokeswoman said: “Facebook recovered the bulk of the funds shortly after the incident and has been cooperating with law enforcement in its investigation.”
Big firms targeted
“Sometimes staff [at large firms] think that they are defended, that security isn’t part of their job,” said James Maude at cyber-security firm Avecto, commenting on the phishing threat facing big companies.
“But people are part of the best security you can have – that’s why you have to train them.”
He also told the BBC that Avecto’s clients have recounted phishing attempts that used senior staff’s hacked email accounts to convince employees that a request to wire out money was genuine.
The sophistication of phishing scams has increased lately, according to a recent Europol report.
In order to avoid succumbing to such fraud, firms are advised to carefully verify new payment requests before authorising them.
Facebook has admitted that it observed attempts to spread propaganda on its site, apparently orchestrated by governments or organised parties.
The firm has seen “false news, disinformation, or networks of fake accounts aimed at manipulating public opinion”, it revealed in a new report.
“Several” such cases during the US presidential election last year required action, it added.
Some of the activity has been of a “wide-scale coordinated” nature.
Fake accounts were created to spread information stolen from email accounts during the 2016 US presidential election, the firm noted, though it said the volume of such activity was “statistically very small”.
But the company added that efforts to tackle “information operations” had led it to remove more than 30,000 fake accounts in France – where a presidential election is currently under way.
In general, Facebook said it faced a new challenge in tackling “subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people”.
Facebook described much of the activity as “false amplification” – which included the mass creation of fake accounts; the coordinated sharing of content and engagement with that content (such as likes); and the distribution of “inflammatory and sometimes racist memes”.
It added, however, that there was little evidence that automated bots had been set up to do this; instead, humans appeared to be directly involved.
“We have observed many actions by fake account operators that could only be performed by people with language skills and a basic knowledge of the political situation in the target countries, suggesting a higher level of coordination and forethought,” the report explained.
The apparent objectives of those behind the propaganda efforts included one or more of the following:
- Sowing distrust in political institutions
- Spreading confusion
- Promoting or denigrating a specific cause or issue
Facebook said that it was working on a variety of methods to curb the spread of propaganda on its platform.
These included building new products to help stamp out fake news and creating new systems – some with artificial intelligence capabilities – to help quicken the response to reports of fake accounts or spam.
Authorities in Indian-administered Kashmir have announced a one-month ban on 22 social media services, including Facebook, Twitter and WhatsApp. The state government said the services were being misused by “anti-government elements” to incite violence. Gowhar Geelani reports on life without social media in the valley.
Srinagar-based photojournalist Javed Dar says that the ban on social media has left him “disconnected from people” and has also hampered his work.
Mr Dar says he had to invite friends and colleagues to a book release event, but didn’t have contact numbers for all.
“Normally, I would have invited many through Facebook and Twitter. The ban on social media has made this impossible,” he said.
Other social media services, communications tools and websites banned under the order include YouTube, Skype, Telegram, Snapchat and Reddit. In addition, the faster 3G and 4G mobile phone services have been slow and erratic for more than a week.
Dr Qazi Haroon, a government doctor, says many health awareness campaigns which his department was running on social media have taken a hit.
“Now we have no other medium to promote awareness programmes related to immunisation, mother and child care, neonatal care,” he told the BBC.
The internet is often suspended or restricted in Kashmir to quell civilian protests and anti-India demonstrations, which sometimes turn violent.
According to a report by the Software Freedom Law Centre, the internet was blocked in Kashmir at least 31 times between 2012 and 2016.
However, this is the first time the authorities have placed a complete ban on social networking sites.
Advocacy groups like India’s Centre for Internet and Society have described the ban as a “blow to freedom of speech” and “legally unprecedented in India”.
And the New York-based Committee to Protect Journalists (CPJ) has asked India to revoke the ban.
“The sweeping censorship of social media under the pretext of ‘maintaining peace and order’ will bring neither peace nor order,” the CPJ’s Steven Butler said, according to media reports.
The state government order said social media networking sites were being used by “anti-national and subversive elements” to harm peace in the state.
It said “objectionable content” was being distributed to “spread disaffection” with the authorities.
The latest bout of violence began on 9 April when eight people were killed and scores injured after police clashed with protesters during a by-election in the city of Srinagar.
Since then, hundreds of students have protested on the streets, chanting anti-India slogans and throwing stones at the security forces.
Graphic videos claiming to show abuses on both sides have been shared extensively on social media and have added fuel to the conflict.
Opposition and pro-independence groups have criticised the government’s move.
“Repeated bans on means of communication in this day and age in the hope of restoring so-called peace and normalcy in the Kashmir Valley is ridiculous,” Mirwaiz Umar Farooq, one of the most influential leaders in the Kashmiri separatist movement, said.
Terming the ban “archaic”, the main opposition National Conference party accused the government of meting out “collective punishment to the people of Kashmir for expressing their political aspiration and raising voice against gross human rights excesses”.
Justifying the internet gag, a government representative said that the move was aimed at restoring normalcy and stopping rumours from spreading further.
“It [the ban] is a temporary decision to stop rumours and restore law and order to prevent further loss of life. Also, it is being done to prevent a war of provocative videos from both sides,” Waheed Ur Rehman Parra, leader of the ruling party’s youth wing, told the BBC.
Watch a queue of shoppers for a minute and they will be preparing to pay in very different ways.
Some will pull out a debit or credit card ready to put in a reader. Others might use their smartphone to complete their purchase, the rest will be paying in cash.
The group using notes and coins is still significant but their numbers are starting to dwindle, particularly among the young.
That spells trouble for the operators of cash machines. Time, then, for a reinvention of the humble ATM [automated teller machine] – but in a digital world, what can you do with a hole in the wall whose primary function is dispensing cash?
The answer is a “bank in a box”, a machine that is the alternative to a branch serving all your financial needs with 24/7 opening hours, says one manufacturer.
Other experts are more sceptical. They say all that can be done is to manage the decline of ATMs, and cash in general, until they are replaced entirely by a digital wallet found on our phones.
This debate is a far cry from the hurried signing of contracts, over a pink gin, between Barclays and John Shepherd-Barron in the UK some 50 years ago.
The deal, between bank and inventor, led to the first ever cash machine being installed in London in 1967.
All did not go entirely to plan. When one was installed in Zurich, Switzerland, there was a mysterious malfunction. Eventually, it was found that wires from two intersecting nearby tramlines were interfering with the mechanism.
Yet with other devices also being patented, the ATM soon evolved and its use spread widely.
The latest figures show that ATM numbers in the BRIC nations [Brazil, Russia, India, China] have gone up sharply and machines remain a constant if not growing sight in Western Europe.
Russia has seen rapid growth in recent years, according to a report by Payments UK, while growth in India is coming, in part, from the development of solar-powered ATMs in rural areas.
Portugal has the highest proportion of cash machines in Western Europe with 1,516 machines per one million residents.
Sweden, typical of a Scandinavian shift towards a cashless society, has the lowest with 333 machines per one million inhabitants.
Tellingly, the popularity of ATMs in Portugal may not necessarily have anything to do with cash.
“Cash machines in Portugal are part of a fully integrated cross-bank network,” says Payments UK’s Cash and Cash Machines report.
“This has allowed a number of innovations offering a range of other bank-related functions and services, such as cash and cheque deposits and also other services such as cinema and concert ticket purchases, tax payments, bill payments, and mobile phone top-ups.”
So what new technology is coming along to replace the button-operated ATMs?
One of the major international ATM manufacturers and software suppliers, NCR, has just started marketing its latest “self-service” machine.
It claims users will never look at an ATM in the same way again – mainly because the machine looks and behaves like a tablet computer.
Functions allow the customer to swipe, pinch and zoom on the colourful 19-inch screen, just like on a tablet. And there is a video banking option to allow people to talk to bank staff.
Instead of fighting against the advance of smartphones, it also allows people to complete withdrawals or transactions that they started on their phone.
It has already been installed in drive-throughs in the US, and Rachel Nash, area director of NCR, says that it will soon be seen across the UK, Europe, then Australia and New Zealand and onwards.
“It bridges the gap between the mobile-first customer and those who want nothing to do with it [digital banking],” she says.
ATM use around the world
- As of 2014, Russia had more cash machines per one million inhabitants than any other European country
- In Western Europe, Portugal had 1,516 ATMs per one million people in 2014. The UK had 1,074, while the average in the EU was 960
- The UK recorded 54 cash machine withdrawals per person in 2014 – the highest in the world
- Germany and Belgium have the largest amount of cash withdrawn per adult in Western Europe. Adults in these countries on average withdraw over £3,500 a year from cash machines
- India has the lowest value of cash machine withdrawals with less than £270 per adult a year. However, this value has slowly increased over the past few years
Source: Payments UK
NCR says its research shows that 80% of the transactions typically completed inside a physical branch can be completed through a video teller at an ATM.
“This is perfect to leave behind when a branch closes,” says Ms Nash. “It is a bank in a box.”
This is one of many new ATM designs on the market. But some argue that the ATM will never beat the smartphone as the hardware of choice when managing our financial affairs in the future.
Greg McBride, chief financial analyst at Bankrate.com, says: “ATMs will continue to evolve in terms of functionality, with increased use of the phone to access cash via the ATM.
“While the long-term viability of ATMs might be threatened by a move to a cashless society, it is too soon to put them on the endangered species list.”
He points out that if and when ATM usage declines in the US, fees are likely to rise for customers who use machines not in their bank’s network.
But Robert Wardrop, co-founder of the Cambridge Centre for Alternative Finance based at the Cambridge Judge Business School, believes that as smartphone digital wallets become more convenient, so cash machine use will decline.
Such technology has been available for some time, he says; only privacy concerns and regulation have stopped the shift happening sooner.
Some people still value the anonymity of cash, he says.
“[Digitally] all your transactions are transparent, so those worried about a Big Brother world will push back against that.”
So consumer preferences will ultimately determine the future of cash machines and, indeed, cash itself.
The power lies with those shoppers standing in line, and how they decide to pay.
Plans to introduce superfast broadband to a village in Devon have stalled because the landlord has declined to allow fibre cables on his land, BT has said.
The telecoms giant said it had not reached agreement with the Earl of Iddesleigh, who owns about 2,500 acres near Upton Pyne.
Villagers branded the objection “ridiculous”.
Lord Iddesleigh said he might accept underground cables.
Upton Pyne, which is home to about 300 people, has had broadband speeds of about two megabits per second.
Last November a cabinet to take fibre broadband was put up in the village, but it remains unused.
BT said it was “disappointed that it is currently not viable to provide superfast fibre broadband for the village”.
A BT spokesman said no agreement had been reached with the earl, despite two meetings with him, on “either an overhead or underground cable across the landowner’s fields”.
A spokesman for Lord Iddesleigh said he was not against bringing broadband to Upton Pyne, but would prefer the cables to be buried as telegraph poles would have to cross an “unspoilt valley”.
Villager Fabian King said: “We’ve got the cabinet here and we’ve got the fibre optic cable on the poles.
“There’s one gap of four poles, that’s all we need to sew it up.”
Councillor Bob Short, from Upton Pyne and Cowley Parish Council, said: “In the interests of keeping the countryside looking pretty for everybody, nobody wants poles.
“But it’s the case that they’d not be the only ones in the area, there are poles everywhere. Surely four more isn’t going to be too much of an inconvenience for the countryside?”
Reports are circulating in the media of vulnerable people being encouraged to take their own lives by following a series of online challenges.
In Russia, the deaths of some teenagers have been linked to the ‘Blue Whale’ challenge – though these reports have not been confirmed.
The idea is that individuals are invited to complete a number of tasks within a 50-day period. The tasks become increasingly harmful and end with the individual being challenged to take their own life.
There is concern that the idea is spreading around the world on social media networks.
With questions over whether or not the Blue Whale challenge actually exists, and with no confirmed link between the deaths in Russia and Blue Whale, how concerned should you be?
What is Blue Whale?
There is some confusion about the origin of Blue Whale, but the name is believed to be a reference to the behaviour of some blue whales, which appear to beach themselves on purpose, causing them to die.
The name is apparently being used by an alleged online pressure group, which is said to assign a curator to individual participants who then encourages them to complete tests over the course of 50 days.
These assigned tasks reportedly escalate from straightforward demands such as watching a macabre video or horror film to something more sinister – even leading to suicide.
Unfortunately it is not unusual for teenagers to be drawn to social media groups that ultimately have a detrimental effect on their mental health.
The online group associated with the Blue Whale reports is said to have thousands of members and subscribers on Facebook and YouTube.
The name has cropped up in countries including Russia, Ukraine, Spain, Portugal, France and the UK.
How worried should I be?
Although authorities in Russia are reportedly investigating links between the suicides of a number of teenagers and online pressure groups, there have been no confirmed reports of links to Blue Whale.
What police are looking for in these criminal investigations is evidence of conversations between the deceased and social media users that may have influenced their actions.
There are also reports of suicide cases being investigated in Ukraine, Kazakhstan, Russia and Kyrgyzstan, with a focus on links to internet groups.
How can I spot the signs?
Children’s advice groups such as the NSPCC can offer guidance on how to detect signs of online grooming – the building of an emotional connection to gain trust – and how to protect your child and prevent the situation from escalating.
There are a number of possible signs, but they are not always obvious because offenders exercise discretion in order to avoid being detected or identified.
Among the most common signs to watch out for are children who:
- become very secretive, especially about what they are doing online
- are spending a lot of time on the internet and social media
- are switching screens on their device when approached
- are withdrawn or angry after using the internet or sending text messages
- have lots of new phone numbers or email addresses on their device
What should I do?
The Child Exploitation and Online Protection Centre (Ceop), a UK government agency, points out that sometimes change in a child’s behaviour is completely normal and it is important not to overreact.
Having a calm and open conversation, Ceop says, is an effective way of determining the cause of any behavioural change, tackling any concerns head-on and offering support and reassurance.
An education programme set up by the organisation, ThinkUKnow, also says that it should be made clear to the individual when approached that any discussion is not going to result in punishment.
Children, it says, often avoid reporting their own concerns if they believe that their internet access will be revoked, for example.
Another UK-based advice group, Get Safe Online, told the BBC that it was aware of the “horrific” Blue Whale reports and said that it was unfortunate that groups were “willing to abuse these platforms”.
The chief executive, Tony Neate, said dialogue was essential for addressing issues of peer pressure if a child is “acting strangely”.
“It will allow them to take a step back, away from the pressures,” he said, adding that this will help them to realise that it is “not something they have to, or should, be taking part in”.
Mr Neate also advised against “blanket bans” on internet use and said that the importance of privacy settings should be explained to all users.
He said that speaking openly to other parents and teachers can help raise awareness of potential online threats and “open the path for other instances to be reported”.
“Never shy away from reporting something that has occurred online to the police if you think your child, or someone else’s child is in danger,” Mr Neate said.
“This is how we can warn others and make sure teenagers don’t get caught up in horrific games like this.”
Prof Noriko Arai has spent years training a robot to pass the prestigious University of Tokyo’s entrance exams.
And in 2015 and 2016, her Todai robot outperformed 80% of high-school pupils and was in the top 1% for maths.
But Prof Arai, a mathematician at the National Institute of Informatics, is not happy about how well it is doing.
At the Ted (Technology, Entertainment and Design) conference, in a session called Our Robotic Overlords, she said the results were “alarming”.
“You might think I was delighted, but I was alarmed,” she said.
“This robot, which could not read or understand, was able to outperform thousands of high-school children.”
This led Prof Arai to investigate the reading and writing skills of high-school students, in conjunction with Japan’s ministry of education.
“Most of the students pack in knowledge without understanding, and that is just memorising,” she said.
“AIs can do that better, so we need a new type of education.”
Stuart Russell, a professor of computer science at the University of California, Berkeley, told the session machines would soon be reading and understanding very well.
“And very soon afterwards, they will have read everything that has ever been written,” he said.
Humans needed to start devising rules for how robots related to them, he told the Ted audience, and proposed three basic principles:
- The robot’s only objective is to maximise the realisation of human values
- The robot is initially uncertain about what these values are
- Human behaviour provides information about human values
But Prof Russell acknowledged there might be teething troubles.
“If a robot’s job is to feed hungry kids and it sees the family cat but doesn’t see that the sentimental value of the pet is greater than its nutritional value, that could single-handedly destroy the market for home robots,” he said.
The idea of altruistic robots is one that Tom Gruber, the man who designed Apple’s voice assistant Siri, believes in.
And while some experts, including tech tycoon Elon Musk and Prof Stephen Hawking, worry machines will overtake and destroy mankind, he maintains an optimistic vision of “humanistic” AI.
“The purpose of AI is to empower and augment us,” he told the Ted audience.
“Imagine if AI remembered every person you ever met or could retrieve everything you had ever read or seen.
“Not only would it make us better at remembering people at social occasions – but for those with dementia or Alzheimer’s, it would mean the difference between a life of isolation and one of dignity.”
Authorities in Indian-administered Kashmir have announced a one-month ban on 22 social media services, including Facebook, Twitter and WhatsApp.
The state government said the services were being misused by “anti-government elements” to incite violence.
Graphic videos claiming to show abuses on both sides have been shared extensively.
At least nine people have died in widespread violent clashes with the security forces in the disputed region.
Other social media services, communications tools and websites to have been banned under the order include YouTube, Skype, Telegram, Snapchat and Reddit.
The state government order said “objectionable contents” were being distributed to “spread disaffection” with the authorities.
Confrontations in Indian-administered Kashmir have become frequent since the killing of popular militant leader Burhan Wani by security forces last July.
The latest bout of violence began on 9 April when eight people were killed and scores injured after police clashed with protesters during a by-election in the city of Srinagar.
Since then, hundreds of students have protested on the streets, chanting anti-India slogans and throwing stones at the security forces.
Graphic videos claiming to show abuses on both sides have been shared extensively on social media and have added fuel to the conflict.
In recent weeks, schools have been set on fire and police say three politicians have been killed by unknown gunmen.
Muslim-majority Kashmir is at the centre of a decades-old territorial dispute between India and Pakistan.
India accuses Pakistan of supporting separatist sentiment in Kashmir, but Islamabad denies this. Both countries claim Kashmir in its entirety and control different parts of it.
Belfast and its surrounding area is the best in the UK for overall mobile network performance, according to research.
Research company RootMetrics tested the performance of the four main operators in the UK’s 16 most populated areas.
London ranked fourth from the bottom, with only Bristol, Hull and Cardiff getting lower overall scores.
But RootMetrics said all the areas surveyed had improved over the past few years.
Newcastle and Glasgow were both singled out for improved network performance, thanks in part to increased data reliability and speed.
The research focused on six categories:
- network reliability
- network speed
- data performance
- call performance
- text performance
- overall performance
Belfast, Manchester, and Liverpool took the top three places in four of the six categories.
As well as being ranked best overall, Belfast was found to be best for text performance, data performance and speed.
Manchester was the best performing in terms of network reliability and call performance.
Of the 16 cities RootMetrics studied, Cardiff got the lowest score.
RootMetrics’ research showed that data speeds in the area were getting faster, but the networks’ performance in the other categories was largely unchanged from the first half of 2016.
The research covered not just the named cities but their surrounding areas.
The region covered by the London data, for example, stretches “from Maidenhead to Southend-on-Sea, and from Tunbridge Wells to Cambridge”, according to Scott Stonham, general manager for RootMetrics Europe.
“There’s a high level of transient demand from people travelling through the area, which is difficult to plan for, and the bigger the area the harder that is,” he said.
“With the numbers of people and the area covered, we get quite a wide spectrum of performance. London as a city is much better.”
RootMetrics said that the improved mobile performance in Newcastle was the result of faster download speeds, data reliability, and fewer blocked and dropped calls on all the networks.
In Glasgow, the researchers found “significant improvements” to both the speed and reliability of data services.
RootMetrics overall performance ranking

| Metropolitan area | Second half 2016 | First half 2016 |
| --- | --- | --- |
| Leeds and Bradford | 9 | 12 |
A bug in Microsoft Word was exploited by hackers for months before it was eventually fixed, according to security researchers.
The flaw allowed attackers to take control of a computer via malicious document files.
The zero-day, or previously undetected, vulnerability was patched earlier this month.
However, it has since emerged that Microsoft was told about it in October, nearly six months ago.
A report from the Reuters news agency notes that security researcher Ryan Hanson at Optiv first discovered the problem in July 2016.
Microsoft could have notified customers to make a change to settings in Word that would have prevented the vulnerability from being exploited – but that would also have alerted hackers to its existence.
The decision to wait for a patch seems to have allowed a window of opportunity for hackers to discover the flaw on their own.
In March, cyber-security company FireEye noticed financial hacking software that was being distributed with the Microsoft bug.
And another company, McAfee, found attacks that were exploiting it, too.
McAfee faced some criticism, however, for publishing a blog post about the vulnerability – with details hackers may have found useful – two days before it was fixed.
Yet another company, Proofpoint, found that the vulnerability was being targeted by scammers trying to distribute Dridex malware – which infects a victim’s computer before snooping on banking logins.
There were even reports of hacking after the patch was made available.
Cyber-security outlet Morphisec said that employees at Ben-Gurion University in Israel had had their email accounts compromised by attackers who had then sent infected documents to medical professionals and contacts at technology companies.
“Prior to public disclosure, our engineers were aware of a small number of attempts to use this vulnerability through targeted spam designed to convince users to open a malicious attachment,” a Microsoft spokesman said.
Customers who applied the 11 April security update were already protected, he added.
“In an ideal world, it would have been fixed sooner,” said cyber-security expert Graham Cluley.
However, he pointed out that patching software run on millions of computers around the world was not an easy process.
“There’s always this huge challenge because companies want to patch their software, but they want to do it properly – they want to make sure they’ve been comprehensive with the fix,” he told the BBC.
Police in Wales plan to use facial recognition on fans during the Champions League final in Cardiff on 3 June, according to a government contract posted online.
Faces will be scanned at the Principality Stadium and Cardiff’s central railway station.
They can then be matched against 500,000 “custody images” stored by local police forces.
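Matching of this sort is typically done not on raw images but on numeric face embeddings compared with a similarity score. The sketch below is purely illustrative – the embeddings, record names and threshold are invented, and real systems use vectors of 128 or more dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(probe, database, threshold=0.9):
    """Return the database identity most similar to the probe embedding,
    or None if nothing clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, emb)) for n, emb in database.items()),
        key=lambda t: t[1],
    )
    return name if score >= threshold else None

# Toy 3-D embeddings standing in for a custody-image database.
custody = {"record_001": [0.9, 0.1, 0.4], "record_002": [0.1, 0.8, 0.6]}
print(best_match([0.88, 0.12, 0.41], custody))  # closest to record_001
```

A probe face is reported only if its best similarity clears the threshold, which is how such systems trade off false matches against misses across a database of half a million records.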
South Wales Police confirmed the pilot and said it was a “unique opportunity”.
A report on the plan was first posted by tech news site Motherboard.
Around 70,000 fans are expected in the stadium and Cardiff is preparing for a further 100,000 people to visit the city on the day.
Real-time facial recognition is planned to be used “in and around the Principality Stadium and Cardiff central train station on the day of the UCL Champions League Final”, the contract says.
The BBC understands that the police intend to use the system to scan faces at various locations, but it will not be a condition of entry to the stadium.
The value of the contract is listed at £177,000 and South Wales Police has said it secured Home Office funding for the technology.
“The UEFA Champions League finals in Cardiff give us a unique opportunity to test and prove the concept of this technology in a live operational environment, which will hopefully prove the benefits and the application of such technology across policing,” the force said in a statement.
“This will be one of the largest security operations ever undertaken in the Welsh capital and the use of technology will support the policing operation which aims to keep people safe during what will be a very busy time in Cardiff.”
In 2015, police in Leicestershire used facial recognition to scan the faces of visitors to the Download music festival – a move which was met with criticism from privacy advocates.
The planned pilot for the Champions League final is something we should be “worried about”, said Paul Bernal, an IT law lecturer at the University of East Anglia.
“This one is particularly intrusive – it’s not just about the match itself, but the station and the city centre,” he told the BBC.
“Is the idea that this should become the norm in every situation?”
Dr Bernal also questioned what would happen to data collected during the event.
“[It would be most worrying] if we move to a situation where everyone’s biometric data is stored,” he added. “This needs to be very carefully monitored indeed.”
These views were echoed by the Open Rights Group and Privacy International.
“The police need to explain why this surveillance is justified and how they will use and store these images,” said Jim Killock, executive director of the Open Rights Group.
Ransomware attacks on businesses around the world rose 50% last year, research into successful cyber-breaches shows.
Ransomware’s growing popularity means malware is now involved in 51% of all the incidents analysed in the annual Verizon data breach report.
This analyses almost 2,000 breaches to find out how firms were caught out by cyber-thieves.
It also found that measures taken by some firms after their payment systems were targeted had stopped new breaches.
Glimmer of hope
The rapid rise in the number of successful ransomware attacks was widely expected, said Marc Spitler, senior manager in Verizon’s security research division, simply because so many malicious hacking groups were adopting the tactic.
“Ransomware is all about how can they get more money per infected device,” he said.
A separate report by security firm Symantec found that the average amount paid by victims of ransomware had risen to $1,000 (£775).
Consumers were likely to be hit straight away with ransomware, said Mr Spitler, but attacks on businesses were stealthier. Often, he said, attackers burrowed deeper into a company’s infrastructure to find key databases that were then scrambled before payment was sought.
In most attacks, booby-trapped attachments sent via email were the main delivery mechanism for ransomware and other malware, found the report.
“These attacks are all about getting a foothold on a system,” he said, adding that once attackers were inside an organisation they typically looked to use the back doors for many different types of attack.
Darren Thomson, chief technology officer for Symantec in Europe, said its statistics suggested about one in every 131 email messages was now harbouring some kind of cyber-threat.
“They are arriving in Word documents and Excel spreadsheets,” he said, “the messages people get many times a day.”
The Verizon report also spotted a shift in the targets of cyber-attacks with 61% of victims now being companies with fewer than 1,000 employees.
The good news, said Mr Spitler, was that some industry sectors that had been hit hard before, now appeared less often in its attack statistics – suggesting their digital defences were starting to work.
“The lack of large retailers suffering point-of-sale intrusions was a glimmer of hope,” he said.
Chinese drone maker DJI is offering up to one million yuan (£112,000) for information about drones that disrupted scores of flights at a Chinese airport.
On four days this month – 14, 17, 18 and 21 – drones were blamed for stranding thousands of passengers at Chengdu Shuangliu International.
Chinese reports said they caused 60 flight interruptions on 21 April alone.
One expert said it showed how difficult it is to combat unsafe drone flights.
Initially, it was reported that a reward of 10,000 yuan (£1,124) had been offered by the local public security bureau for information about unmanned aerial vehicles (UAVs) flown near to the transport hub.
However, DJI is now proposing a much bigger bounty.
In a press release in Chinese, the firm said that flying drones so close to an airport was a serious threat to public safety – and also damaged the UAV industry’s image.
Members of the public have until 31 December to make a report to local authorities.
The bounty was a sign that the firm was taking the potential impact on its reputation seriously, suggested Prof David Dunn at the University of Birmingham.
“Clearly they’re concerned about their brand image, given how much they dominate the drone market,” he told the BBC.
Prof Dunn pointed out that using on-board software to restrict where drones can fly – known as geofencing, which DJI uses in its drones – was not always successful.
“There seems to be an inability to deal with the potential drone threat to air traffic – other than through extraordinary measures like this reward,” he said.
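Geofencing of the kind Prof Dunn describes usually reduces to a distance check between the drone’s GPS fix and a set of restricted coordinates. This is a minimal sketch, assuming a circular 8km no-fly zone; the coordinates are approximate and for illustration only, and real implementations use official zone polygons and altitude rules:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def takeoff_allowed(drone_pos, airport_pos, radius_km=8.0):
    """Refuse takeoff inside a circular no-fly zone around an airport."""
    return haversine_km(*drone_pos, *airport_pos) > radius_km

# Approximate position of Chengdu Shuangliu, used here only as an example.
airport = (30.5785, 103.9471)
print(takeoff_allowed((30.60, 103.95), airport))  # ~2.4 km away: inside the zone, so False
```

As Prof Dunn notes, a check like this only works if the on-board software cannot be disabled or fed a spoofed position, which is one reason geofencing is not always successful.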
App-based guides for games, including Fifa and Pokemon Go, were used to target more than 500,000 Android users with malware, a cyber-security company has said.
The apps, discovered on the Google Play Store, were designed to take control of devices before downloading malware.
Unwanted ads could then be displayed to users, for example, according to researchers at Check Point.
Google did not respond to a request for comment.
More than 40 guide apps for popular games were found to be capable of delivering the malware to users’ devices, Check Point said.
It is thought that the apps were downloaded between 528,000 and 1.8 million times, though it is not known how many of these downloads resulted in the deployment of malware.
“Since the actual apps do not contain any malicious code themselves, it’s very hard to trace,” said Daniel Padon, at Check Point.
He added that Google had removed the apps after Check Point notified it about them.
But the researchers said that they continued to find more examples on the Play Store.
Connecting a botnet
Some of the apps were made available as long ago as November last year.
When one is downloaded, it asks users for device admin permission to ensure the software cannot be deleted.
It then attempts to establish a connection with a command and control server, turning the device into a bot in a botnet – a network of devices controlled from afar.
Malicious software can then be downloaded.
Mr Padon told the BBC that this could allow hackers to send illegitimate pop-up ads, use the device as part of a DDoS attack, or snoop on data sent via the device’s network.
He said mobile botnets were becoming more common.
“We, other security vendors and Google have found different mobile botnets spreading via the Play Store,” Mr Padon said.
“This is a hard thing to stop – it could have a devastating impact.”
The approach could indeed be dangerous, agreed Nikolaos Chrysaidos at cyber-security firm Avast.
“At the moment, it seems like the cyber-criminals behind the threat are only interested in making money from ads,” he said.
“The threat currently has very basic functionalities […] However, there is nothing stopping the threat from becoming more sophisticated in the future.”
The British government is protesting against Twitter’s decision to withdraw access to user data used to investigate potential terrorist plots.
The information was previously used by the police and the MI5 intelligence agency.
However, the Telegraph newspaper, citing industry sources in a report on 25 April, said the government’s access had been “blocked”.
Twitter did not immediately respond to a request for comment.
“We are protesting this decision. We are in talks with Twitter on getting access to this data,” a government spokesman said, according to the Reuters news agency.
The BBC understands that the data in question is available to private companies but the Home Office has been denied access to it.
Fitbit has said it is investigating a report from a Wisconsin woman who said she suffered second-degree burns when her fitness tracker caught fire.
Dina Mitchell told ABC News that the Flex 2 began to combust on her wrist while she was reading a book.
“It burned the heck out of my arm,” she said.
Fitbit has said it is “extremely concerned” and is now looking into the issue, though it sees “no reason” for people to stop wearing the Flex 2.
Ms Mitchell said she quickly removed the tracker from her arm and threw it on the floor.
A doctor had to take small pieces of plastic and rubber out of the wound following the incident, she claimed.
“We are not aware of any other complaints of this nature and see no reason for people to stop wearing their Flex 2,” Fitbit said in a statement.
“We will share additional information as we are able.”
The batteries in many electronic devices are susceptible to overheating and have, in some cases, been known to catch fire or explode.
Last year, Samsung had to recall its Galaxy Note 7 smartphones after the handsets were found to be prone to combusting.
Scientists have been able to keep premature lambs alive for weeks using an artificial womb that looks like a plastic bag.
It provides everything the foetus needs to continue growing and maturing, including a nutrient-rich blood supply and a protective sac of amniotic fluid.
The approach might one day help premature human babies have a better chance of survival, experts hope.
Human trials may be possible in a few years, according to researchers.
First, more tests in animals are needed to check it is safe enough to progress, the researchers say in the journal Nature Communications.
The Children’s Hospital of Philadelphia team insists it is not looking to replace mothers or extend the limits of viability – merely to find a better way to support babies who are born too early.
Currently, very premature infants, born at around 23 weeks of gestation, are placed in incubators and put on ventilators to help them breathe, but this can damage their lung development.
Plastic bag womb
The plastic “biobag” womb contains a mixture of warm water and added salts, similar to amniotic fluid, to support and protect the foetus.
This fluid is inhaled and swallowed by the growing foetus, as would normally happen in the womb. Gallons of the mixture are steadily flushed through the bag each day to ensure a continuous fresh supply.
The bagged lamb cannot get a supply of oxygen and nutrients from its mum via the placenta. Instead, it is connected to a special machine by its umbilical cord, which does the job.
The baby lamb’s heart does all the pumping work, sending “old, used” blood out to the machine to be replenished before it returns back to the body again.
The whole system is designed to closely mimic nature and buy the tiniest newborns a few weeks to develop their lungs and other organs.
Researcher Dr Emily Partridge explained: “The challenging age that we are trying to offset is that 23- to 24-week baby who is faced with such a challenge of adapting to life outside of the uterus on dry land, breathing air when they are not supposed to be there yet.”
In babies born preterm, the chance of survival at less than 23 weeks is close to zero, while at 23 weeks it is 15%, at 24 weeks 55% and at 25 weeks about 80%.
The premature lambs in the study, equivalent in age to 23-week-old human infants, appeared to develop normally in their bags.
They opened their eyes, grew a woolly coat and appeared comfortable living in their polyethylene homes.
After 28 days, when their lungs had matured enough, the lambs were released so they could start breathing air.
Shortly after, the lambs were then killed so the researchers could study their brains and organs in detail to see how well they had grown.
In later experiments, however, a few more bagged lambs were allowed to survive and were bottle-fed by the team.
“They appear to have normal development in all respects,” said lead investigator Dr Alan Flake.
There are still many potential problems to overcome, however.
There is a significant risk of infection, even though the biobag is sterile and sealed. Finding the right mix of nutrients and hormones to support a human baby will also be a challenge.
Even if the work can progress, it’s not clear how parents-to-be might feel about it.
Fellow researcher Dr Marcus Davey said: “We envisage the unit will look pretty much like a traditional incubator. It will have a lid and inside that warmed environment would be the baby inside the biobag.”
Prof Colin Duncan, professor of reproductive medicine and science at the University of Edinburgh, said: “This study is a very important step forward. There are still huge challenges to refine the technique, to make good results more consistent and eventually to compare outcomes with current neonatal intensive care strategies.
“This will require a lot of additional pre-clinical research and development and this treatment will not enter the clinic any time soon.”
Boston Dynamics founder Prof Marc Raibert came to the Ted (Technology, Entertainment and Design) conference with a new message about his military-funded robots – they could find ways into homes and onto the streets.
He revealed a video of one of the company’s dog-like robots, SpotMini, delivering parcels to employees’ homes.
Its Atlas robot was also shown lifting and carrying packages.
And SpotMini later put on a show of cuteness to win over the Ted audience.
It demonstrated its ability to negotiate obstacles, walk forwards, backwards and sideways and hop on two legs, and, at the end of its demonstration, rolled over as if to have its stomach tickled.
Boston Dynamics has become synonymous with developing dexterous but terrifying robots, the skills of which are shown off via a series of YouTube videos.
Some have suggested the company’s image is at odds with that of parent company Alphabet – which owns Google – and there have been persistent rumours about it wanting to distance itself from its robotic wing.
Noel Sharkey, a robotics expert from Sheffield University, believes it needs a change of direction.
“They were mainly funded by the military in the past, but Google does not want that,” he said.
“Also, their Big Dog robot was too noisy for the military because of its petrol engine, and so they made an electric version that was not very good and the military cancelled their contract.”
But he remains a fan of the technology.
“Their Atlas robot is now quite incredible – it can open doors, which is impressive, believe it or not, and maintain a fixed goal even when interrupted,” he said.
“But their new robot, Handle, is just not like anything before it – stunning.”
At a previous conference, Prof Raibert described Handle as “nightmare-inducing”.
And it was later dubbed “Terminator on a hoverboard”.
This time, Prof Raibert appeared to want to show a more empathetic side to his robots.
Showing a video of Handle jumping on to a table, he told the Ted audience: “It likes to put on a show.”
And during SpotMini’s demonstration, he said: “It is a little bit of a show-off”.
Later, as SpotMini wandered among the Ted crowd, Prof Raibert was clearly pleased it was going down well.
“People like to pet it,” he said.
The YouTube videos received many “likes” as well as criticisms, Prof Raibert said, and Boston Dynamics was determined to find new ways to appeal to the public – showing robots slipping on a banana peel, for instance.
He said the company’s experiments with robots delivering parcels were “70% there”, but added it was harder for the robots to negotiate the small spaces in homes than the rough outside terrain they were more associated with.
Boston Dynamics has a long history of viral videos showing off its robots. Some of its best known include:
- Big Dog – a quadruped robot designed for the US military with funding from the Defense Advanced Research Projects Agency (Darpa)
- Petman (Protection Ensemble Test Mannequin) – a bipedal device constructed for testing chemical protection suits. It is the first anthropomorphic robot that moves dynamically like a real person. Much of its technology is derived from Big Dog
- Atlas (Agile Anthropomorphic Robot) – a 6ft (183cm) bipedal robot, based on Boston Dynamics’ earlier Petman humanoid robot, and designed for a variety of search and rescue tasks. It is a high mobility, humanoid robot designed to negotiate rough outdoor terrain
- Handle – a research robot on wheels that stands 6.5ft (198cm) tall, travels at 9mph (14.5km/h) and jumps 4ft (122cm) vertically. It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles (24km) on one battery charge.
An MP has urged Facebook to tackle fake news in the run-up to the UK’s general election on 8 June.
Conservative MP Damian Collins expressed his concerns to the Guardian newspaper, saying fake news threatened “the integrity of democracy”.
A parliamentary inquiry examining the “growing phenomenon of fake news” was launched by Mr Collins in January.
Dealing with the problem was “a global priority” and an “evolving challenge”, Facebook said.
Mr Collins told the Guardian that the top 20 fake news stories were shared more than the top 20 true news stories during the three months leading up to the US election last year.
“The danger is, if for many people the main source of news is Facebook and if the news they get on Facebook is mostly fake news, they could be voting based on lies,” he said.
He also claimed that Facebook did not respond quickly enough to the phenomenon, despite the fact that the social media site can detect when news stories go viral.
Facebook said: “Improving news literacy is a global priority and false news runs counter to our mission to connect people with the stories that they find meaningful.
“We understand that we need to do our part to help people understand how to make decisions about which sources to trust.”
The social network is examining various methods for this – including possibly adding labels to stories that have been reported as false by third parties or Facebook users, which would then warn others before sharing them.
And this month it launched an educational tool designed to help people investigate the veracity of a story shared on Facebook.
On 25 April, Google announced that it was altering the way its search engine works to try to curb the spread of fake news and hate speech.
A recent study by economists at Stanford University suggested that while fake news had been viewed by many adults before the US election, there was little evidence to suggest that such stories had had a decisive impact on voting.
An incident as shocking as a man murdering his 11-month-old daughter live on Facebook before killing himself was bound to provoke heated debate.
The 21-year-old man broadcast himself hanging his daughter from a half-finished building on the island of Phuket in Thailand, reportedly after ending a turbulent and sometimes violent relationship with his wife.
The man’s Facebook page has received dozens of comments from Thai people outraged by the death of the little girl. Some men who have also had failed relationships have posted how they got through their problems and rebuilt bonds with their children.
Thais are accustomed to seeing violent scenes on their television news bulletins, which would be deemed unacceptable in many Western countries.
Previous shocking incidents, like appalling car accidents caused by negligent driving, have led to brief national debates, but have quickly dropped from public consciousness. But there have been some reflective responses to this incident, with a number of people urging people not to share the video.
The long period of time the video remained viewable on Facebook – 24 hours – is one area the social media giant may be able to address.
Thai police were aware of the video almost immediately after the crime took place. It is not clear yet when the Thai authorities alerted Facebook.
The police now say that in future they will discuss inappropriate online content with social media companies such as Facebook, YouTube and Instagram, and how to take it down quickly. But stopping offensive and disturbing content on a medium used by so many people, including two-thirds of the Thai population, is a difficult challenge.
The Thai military government does operate a range of censorship regimes, and blocks many thousands of websites, especially those carrying content deemed critical of the monarchy.
On the day this awful incident occurred, the Ministry of Digital Economy and Society demanded that local internet service providers (ISPs) do even more to block anti-monarchy content, and the government is believed to be trying to implement a single digital gateway that would allow it to wall Thailand off from such content.
But until now it has been wary of tampering with Facebook. A clumsy attempt to block Facebook shortly after the military had seized power in 2014 provoked a huge public outcry, and the social media giant remained unavailable for only 30 minutes.
Aside from the general popularity of Facebook for social communication, it is also used by large numbers of Thai businesses to promote their products and services.
Until now there has been little public debate over the negative sides of social media – such as hate speech, trolling and fake news – which have aggravated Thailand’s bitter political polarisation.
Thailand has no law against hate speech. It is therefore less prepared to address issues like those thrown up by this murder-suicide.
A more fruitful area for discussion coming out of this incident might instead be the issue of domestic violence in Thailand, and the high level of suicides related to it.
The Thai Department of Mental Health reports that there are around 350 suicides a month here, a figure it says it is working to reduce.
Four times as many men as women are victims of suicide, and most of those male suicides are related to relationship problems, reactions to being criticised or insulted, or loss of face.
The department says alcohol consumption also plays a big role in encouraging these men to kill themselves, and that it is very common for them to assault others, usually family members, before they do.
An app that can alert parents when a child takes or receives sexually explicit photographs has been developed.
The BBC’s Chris Foxx tested Gallery Guardian ahead of its release on Wednesday, and asked creator Daniel Skowronski how his software detects nude selfies.
The last male northern white rhino on earth has joined the dating app Tinder – as part of fundraising efforts by conservationists to save the species.
At 43 (or 100 in rhino years), Sudan is described as “one of a kind”, who likes to eat grass and chill in the mud.
Attempts to mate the animal – who lives in Kenya – with only two surviving females have failed.
Conservationists say they need to raise $10m (£7.8m) to develop in vitro fertilisation (IVF) methods for rhinos.
“It has never ever been done in rhinos before,” Richard Vigne, head of Ol Pejeta Conservancy, told the BBC.
“This is a 10-year programme to recover that species.
“We’ll hopefully keep him alive as long as we can – but we are in a race against time if we are going to recover this species.”
On Tinder, Sudan’s profile reads: “I don’t mean to be too forward, but the fate of my species literally depends on me.
“I perform well under pressure… 6ft (183cm) tall and 5,000lb (2,268kg) if it matters.”
In a joint campaign launched by Ol Pejeta Conservancy and Tinder, app users now have an option to donate when they open Sudan’s profile.
Scientists in several countries are currently testing how to use IVF techniques on the two northern white rhino females.
They also do not rule out using Sudan’s sperm for IVF with southern white rhinos – although they are a distinct species. Still, the crossing option would be better than extinction, they say.
Sudan – who is often described as “the most eligible bachelor in the world” – has his own team of armed bodyguards, who are protecting him around the clock.
Countless TV shows have been made about the ageing animal.
Northern whites are the only rhinos that can survive in the wild in central Africa.
But they have been hunted into near extinction by poachers who target their horns.
A cheating US husband has been charged with killing his wife after police said data from her wearable fitness tracker contradicted his version of events.
Richard Dabate, 40, claimed to have seen Connie Dabate shot to death more than an hour before her Fitbit device recorded her last movements.
He told detectives that she was killed by a home intruder in the US state of Connecticut on 23 December 2015.
But police say her electronic device tells a different story.
Mr Dabate was charged this month with murder, tampering with physical evidence and making false statements about his 39-year-old wife’s death.
Police say he initially told them he had just returned home at around 09:00 after dropping off his two sons when he was attacked by a home intruder.
He claimed the perpetrator was a “tall, obese man” with a deep voice like actor Vin Diesel’s and wearing “camouflage and a mask”, according to an arrest warrant.
The accused said a .357 revolver registered in his name was used by the purported attacker to shoot Mrs Dabate.
The victim, a pharmaceutical representative, was shot twice, once in the back of her head, with the revolver her husband had purchased two months earlier, say prosecutors.
Mr Dabate, a computer technician, claimed the home intruder then tied him up after a struggle.
Police found Mr Dabate with an arm and leg bound by zip ties to a chair in the kitchen at the crime scene.
He had what police described as superficial knife wounds.
Investigators say physical evidence showed no sign of the struggle described by Mr Dabate.
Data found on a Fitbit that Mrs Dabate had worn to an exercise class that morning shows her last movements were recorded at 10:05, more than an hour after her husband claimed she had been killed.
Also, police sniffer dogs picked up no scent of other people in the house in the hours before Mrs Dabate’s death.
Her husband was in a relationship with another woman, who was expecting his baby – which detectives suspect was a motive for the attack.
According to investigators, Mr Dabate had texted his wife a year before her death saying: “I want a divorce.”
Bank statements obtained by the Hartford Courant newspaper showed credit card charges from hotels and bouquets of flowers for his girlfriend, as well as strip clubs.
The accused is currently not in custody after posting bail of $1m.
His next court date is scheduled for 28 April.
Mr Dabate’s lawyer says his client is innocent.
A Chinese entrepreneur is promoting edible insects and online farmers’ markets in a campaign to improve eating habits in the country.
Matilda Ho spoke at the Ted (Technology, Entertainment and Design) conference about the need to spread the message about healthy eating.
She is backing a range of start-ups, including one that offers protein made from silkworms.
China has a growing problem with obesity and diabetes.
“China has 20% of the world’s population but only 7% of land is arable,” Ms Ho told the BBC.
“One in four diabetics is now Chinese, as is one in five obese people.”
Ms Ho began tackling the issue with an online farmers’ market, which now supplies 240 types of fresh produce from 57 farmers.
It has gained 40,000 subscribers since it was launched 18 months ago.
“I wanted to use technology to shorten the gap between growers and consumers,” Ms Ho told the BBC.
“It is a right to know where your food comes from and it empowers consumers.”
The food is delivered to customers via electric vehicles and in biodegradable boxes to keep the carbon footprint low.
China has a rapidly rising middle class and a culture where it is polite to over-order food for guests in restaurants.
Ms Ho realised that one online start-up was not enough and has now launched an accelerator to promote a range of food tech firms.
It includes a start-up that uses silkworms as a sustainable source of protein.
“In China, silkworms are by-products of the textile industry so they are affordable and accessible,” explained Ms Ho.
“They also don’t sound like a bug so it doesn’t sound as yucky as an insect. As kids we raised silkworms at school.”
There is a history of insect-eating in China, but silkworms are currently the only insect legal to use as a food ingredient there.
There is a push to legalise crickets too but that is likely to take three to five years to become law.
The makers of a face-morphing app have apologised after users said the “hot” filter lightened their skin.
A former railway station in Paris is being turned into the world’s largest incubator for tech start-ups.
Station F cost 250m euros (£212m) to renovate and its central section will be able to seat more than 3,000 entrepreneurs.
The site will open later in 2017.
A computer hacker has been jailed for two years for masterminding global online attacks as a teenager from his bedroom in Hertfordshire.
Adam Mudd, now 20, admitted creating malware in 2013 which was used to carry out 1.7 million cyber attacks.
Among the victims were gaming websites including Minecraft, Xbox Live and the fantasy game Runescape, the Old Bailey heard.
Judge Michael Topolski said Mudd “knew full well this was not a game”.
Mudd, who has autism, will serve his sentence in a young offenders institution.
The judge said he could not suspend the jail term because he needed the sentence to be a “real” deterrent to others.
Mudd was 16 when he developed a program called Titanium Stresser, the court heard. He set it up using a false name and an address in Manchester.
It had 112,000 registered users, who in turn attacked 666,000 IP addresses globally.
The attacks, known as distributed denial of service (DDoS) attacks, left companies paying millions to defend themselves against them.
The teenager earned more than £386,000 worth of US dollars and Bitcoins from selling the program to international cyber criminals.
The Old Bailey heard that he also personally carried out 594 attacks, including one on West Herts College where he was studying computer science.
Mudd also targeted up to 70 schools and colleges, including the University of Cambridge, University of Essex and University of East Anglia, as well as local councils.
Police said that when he was arrested in March 2015, Mudd was in the bedroom of his home in King’s Langley and refused to unlock his computer until his father intervened.
During sentencing Judge Topolski noted that Mudd came from a “perfectly respectable and caring family” but the effect of his crimes had caused damage “from Greenland to New Zealand and from Russia to Chile”.
“I’m entirely satisfied that you knew full well and understood completely this was not a game for fun,” he told Mudd.
“It was a serious money-making business and your software was doing exactly what you created it to do”.
Google is changing the way its core search engine works to help stop the spread of fake news and hate speech.
The changes include new measures for ranking sites and updated guidance for the people who check that results are accurate.
In a blog, Google said the changes should thwart attempts to abuse its algorithms that let extremists promote their content.
Google was criticised last year for giving prominence to groups seeking to deny that the Holocaust took place.
Rate and replace
Ben Gomes, a vice-president of engineering at Google’s search division, said it was making “structural” changes to tackle the new ways people had found to trick its algorithms.
In particular, he said, many groups and organisations were using “fake news” to help spread “blatantly misleading, low quality, offensive or downright false information”.
To combat this, he said, Google had added new metrics to its ranking systems that should help to stop false information entering the top results for particular search terms.
In addition, he said, it had updated the guidelines given to the thousands of human raters it used to give feedback on whether results were accurate.
The guidelines included examples of low quality and fake news websites, said Mr Gomes, to help them pick out “misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories”.
Analysis: Rory Cellan-Jones, BBC technology correspondent
Google has done its best to play down the extent of fake news and hateful material – or what it prefers to call “low quality content” – in search results.
The company keeps repeating that this affects only 0.25% of queries.
But the fact that searches such as “Is Obama planning a coup?” – or even “Who invented stairs?” – produced such questionable results meant it had to act.
These searches threw up a prominent “snippets” box telling you that, yes, President Obama was planning a coup, or that stairs had been invented in 1948.
Now both boxes have gone, and Google’s almighty algorithm has been tweaked so that such content is less likely to rise to the top.
What’s interesting is that a company that has put such faith in technology solutions is turning to 10,000 humans to try to make search a better experience.
This giant focus group, which tests out changes in the search algorithm, has been told to pay more attention to the source of any pages rated highly in results, looking round the web to see whether they seem authoritative and trustworthy.
Questions are bound to be raised about whether this panel, which Google says is representative of its users, is impartial and objective.
Google’s Ben Gomes, a veteran who’s been wrestling with the intricacies of search since arriving as one of the earliest employees, believes it is now on the path to getting this right.
But with so many people trying to game the system, the battle to make search true and fair will never be over.
Google also planned to change its “autocomplete” tool, which suggests search terms, to let users flag up troubling content more easily, he said.
Danny Sullivan, founder of the Search Engine Land news site, said the changes made sense and should not be taken to suggest that Google’s algorithms were failing to correctly index what they found online.
“It’s sort of like saying that a restaurant is a failure if it asks for people to rate the food it makes,” he said.
“The raters don’t rank results,” said Mr Sullivan.
“They simply give feedback about whether the results are good.
“That feedback is then used to reshape the algorithms – the recipes, if you will – that Google uses.”
Russian hackers are targeting the campaign of French presidential candidate Emmanuel Macron, say security experts.
Phishing emails, malware and fake net domains were all being used as attack techniques, said Feike Hacquebord, from security company Trend Micro.
The attackers are believed to be part of the same group that targeted the US election.
Russia has denied that it is behind attacks aimed at Mr Macron.
In a report, Mr Hacquebord said the group behind the “aggressive” attacks was a collective of Russian hackers known widely as Fancy Bear, APT28 and Pawn Storm.
He said the group was using an extensive arsenal of high-tech con tricks to grab the login names, passwords and other credentials of staff aiding Mr Macron’s bid to be the next French president.
Mr Macron got through the first round of the presidential election as did Marine Le Pen.
In particular, said Mr Hacquebord, the hacker group had registered several net domains similar to those already registered by the French politician’s staff.
The fake domains were then used in phishing emails sent to key workers in an attempt to get them to visit the websites so login details could be scooped up.
The hackers were also abusing a system called OAuth that let people log into one service using the credentials they use for another.
Mr Hacquebord said telltale techniques of the group lent weight to the idea that the people involved in the French attacks were behind ones seen last year in the US.
“We have seen that phishing sites were set up, and the fingerprints were really the same actors as in the Democratic National Committee breach,” he told the Reuters news agency.
A spokesman for the French national cyber-security agency, ANSSI, confirmed that it too had seen several attacks on Mr Macron’s staff and back-office systems.
However, a spokesman for the agency said it was difficult to be sure that the Pawn Storm group was behind the attacks.
A spokesman for the Macron campaign said it knew about the range of attacks aimed at it and none had led to the release of sensitive data.
“These are usual cyber-attack tactics,” Mounir Mahjoubi told CNN.
“We have set up a security team and every member of the staff is trained to report these attempts.”
A security researcher called The Grugq, who is known as an expert on operational security, said Mr Macron’s campaign was an easier target than some because of its reliance on the Telegram app for messaging.
“Its security is not particularly strong compared with alternatives, and the defaults guide users towards insecure practices,” he wrote.
The Pawn Storm group is also believed to have been involved in other attacks on political organisations, including the Christian Democratic Union of Germany, the Turkish government and Montenegro’s parliament, as well as the World Anti-Doping Agency and Arabic television channel al-Jazeera.
A Thai man filmed himself killing his baby daughter on Facebook Live, before taking his own life, Thai police say.
The 21-year-old hanged his daughter, and then himself, at a deserted hotel in Phuket on Monday, reportedly after an argument with his wife.
Facebook sent condolences to the family for the “appalling” incident and said that the content had now been removed.
The company pledged a review of its processes after footage of a US killing stayed online for hours this month.
Relatives of the Thai man saw the distressing footage and alerted the police – but the authorities arrived too late to save the man and his daughter.
The footage shows the man tying a knot around his daughter’s neck, before dropping her from a rooftop. He then retrieves the body.
Social media anger
The Facebook Live post was widely reported by Thai media, and went viral on social media, BBC Thai editor Nopporn Wong-Anan reports.
In a statement, a Facebook spokesperson said: “This is an appalling incident and our hearts go out to the family of the victim. There is absolutely no place for content of this kind on Facebook and it has now been removed.”
The footage is still on video sharing website YouTube. The company has not yet commented.
Thai social media users reacted with anger, while offering condolences to the family of the girl, our correspondent says.
Devastated relatives of the child, including the mother, picked up the body of the girl and her father from hospital on Tuesday.
Following the US killing, Facebook said it was “constantly exploring ways that new technologies can help us make sure Facebook is a safe environment”.
“We prioritise reports with serious safety implications for our community, and are working on making that review process go even faster,” blogged one of its executives last week.
Analysis: Leo Kelion, BBC technology desk editor
This latest atrocity comes less than a fortnight after a US man bragged on Facebook Live about his murder of a 74-year-old man in Cleveland, having also posted a video of the killing to the social network.
The platform’s chief, Mark Zuckerberg, subsequently acknowledged he had “a lot of work” to do after it emerged the murder clip had remained online for more than two hours despite Facebook having received complaints in the meantime.
Prior to that, Facebook Live broadcast the death of a Chicago man who was shot in the neck and head last June, and then in July a woman streamed the death of her boyfriend after he was shot by police in Minneapolis.
For its part, Facebook is trying to find ways for its review team – which employs thousands of people – to react to such content more quickly.
In addition, the firm has developed software to prevent such footage being reshared in full on its service at a later point.
And it is also exploring the use of artificial intelligence to automatically flag videos and photos that need to be reviewed rather than waiting for other users to report them.
What it hasn’t discussed is the idea of scrapping Facebook Live altogether.
With Twitter and YouTube, among others, offering rival live-streaming products, doing so could put it at a disadvantage.
But as a result, there will inevitably be further outrages and criticism because Facebook Live’s popularity makes it all but impossible for the firm to keep a human eye over each broadcast.
Garry Kasparov relived his epic chess match with IBM’s Deep Blue computer as he took to the stage at the Ted (Technology, Entertainment and Design) conference to talk about the current battle between man and machine.
The ex-champion said it had been both a “blessing and curse” to be the first to take on an intelligent machine.
“Nobody remembers that I won the first match,” he said.
And he still believed man had ultimately triumphed.
“Machine’s triumph is man’s triumph,” he told the audience.
“Deep Blue was victorious, but was it intelligent?
“Chess could be crunched by brute force once algorithms got smart enough, but it didn’t offer the dreamt-of insights into the mysteries of human intelligence.”
He described how he had felt when he had sat down for his first match, in February 1996.
“When I met Deep Blue I was the world champion, but I immediately sensed something new, something unsettling,” Kasparov said.
“I wasn’t sure what the thing was capable of.”
Now, he said, we face similar challenges in our everyday lives.
“Soon machines will be taxi drivers and doctors, but will they be intelligent?” Kasparov asked.
“What really matters is how humans feel about working and living with these machines.”
Since Kasparov’s match, machines have taken on far more complex challenges, with Google’s DeepMind recently proving successful in the game of Go.
Smart machines are increasingly being integrated into all aspects of society – self-driving vehicles are being tested in cities around the world, and artificial intelligence systems currently work alongside humans in a range of professions, including medicine, law and insurance.
“Eventually every profession will have to feel this pressure or else it will mean humanity has failed to progress,” Kasparov told the Ted audience.
“We don’t get to choose when progress starts, and we cannot slow it down.
“Technology excels at removing difficulties and uncertainties from our lives.
“We must seek out ever more complicated challenges.”
And this has led some to consider whether the future leaves any room for humans.
When Deep Blue triumphed, Kasparov said, he had wondered whether it would signal the end of “my beloved game”, but he had concluded it would not.
“The world of chess still wanted a human world champion,” he said, adding that chess apps more powerful than Deep Blue were now available on phones but people still chose to play each other.
He said society should not let fears of what technology might eventually be capable of affect the drive to make it ever-more powerful.
“We need to conquer those fears if we want to get the best out of humanity,” he said.
He pointed out that machines remained far from perfect – using the example of online translation, which often requires a good deal of human input to get an accurate result.
While his historic match with Deep Blue had come to define a “man v machine battle”, he said, it was important to focus on the differences between the two.
“Machines have instructions, we have purpose,” Kasparov said.
“Machines have objectivity, we have passion.
“We should not worry about what machines can do today, we should worry about what they cannot do.
“If we fail, it is because we grew complacent and limited our ambitions.
“There is one thing only humans can do, and that is dream.”
Google is letting people use its driverless car service for any ride at any time.
The search firm’s sister company Waymo has created a free “early rider” programme in Phoenix, Arizona.
Hundreds of families are expected to take part. Waymo has equipped a fleet of 500 minivans with its self-driving technology to handle ride requests.
The company said testers could ride any time across a test zone in Phoenix twice the size of San Francisco.
The test is the first large-scale public trial of a driverless car system.
In a blog post, John Krafcik, Waymo’s chief executive, said it had been doing small-scale tests of its riding service with a few Phoenix families for the past month.
Now, he said, it wanted more testers, with “diverse backgrounds and transportation needs”.
“We’ll learn things like where people want to go in a self-driving car, how they communicate with our vehicles, and what information and controls they want to see inside,” wrote Mr Krafcik.
Those applying to take part must be over 18 and live inside the large test region, which forms part of the greater Phoenix metropolitan area.
The cars will not be entirely autonomous, as Arizona laws governing the use of autonomous vehicles demand a test driver be behind the wheel to take control in the event of problems or collisions.
Google’s Waymo has been one of the most aggressive developers of autonomous car driving technology and services.
The company’s robot cars have now driven more than 2.5 million miles on public roads without human help.
The cars have also been involved in 14 collisions while logging those miles.
As well as fitting out existing cars with sensing and navigation systems, Waymo has also developed its own small, two-seater vehicles.
For the Phoenix test, Google will use Chrysler Pacifica minivans.
The news of the test project comes a day after the UK announced plans to get driverless cars tested on public roads and motorways by 2019.
It also comes as the Wall Street Journal revealed Amazon has been working on autonomous car services for more than 12 months.
Manu Prakash, a bio-engineer at Stanford University, designs cheap tools that can make a big difference in the poorest parts of the world.
At the Ted (Technology, Entertainment and Design) conference in Vancouver, he showed off his latest gizmo – a cardboard centrifuge that can spot malarial parasites in blood.
Toy-inspired, it costs 20 cents (15p).
He also launched a citizen science project to identify disease-carrying mosquitoes by their sound.
The “abuzz” project asks people to record the sound of mosquitoes’ beating wings using their mobile phone microphones – which are available even on the most basic models.
Acquiring acoustic data on wing beat sounds – the frequency of which varies from species to species – together with the time and location of the human-mosquito encounter creates a “powerful tool” for identifying where disease-carrying mosquitoes may be.
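As a rough sketch of how such acoustic matching could work – the reference frequencies below are illustrative values, not the project’s field data – the dominant frequency of a recorded clip can be extracted and compared with known species:

```python
import numpy as np

# Hypothetical reference table of typical wing-beat frequencies (Hz).
# Real surveys fit statistical models over many recordings; these
# values are for illustration only.
SPECIES_HZ = {
    "Aedes aegypti": 600,
    "Culex quinquefasciatus": 450,
    "Anopheles gambiae": 440,
}

def dominant_frequency(samples, sample_rate):
    """Return the strongest frequency component of a mono audio clip."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def closest_species(freq_hz):
    """Match a wing-beat frequency to the nearest reference species."""
    return min(SPECIES_HZ, key=lambda s: abs(SPECIES_HZ[s] - freq_hz))

# Usage: a synthetic 600 Hz tone standing in for a phone recording.
rate = 8000
t = np.arange(rate) / rate
clip = np.sin(2 * np.pi * 600 * t)
print(closest_species(dominant_frequency(clip, rate)))
```

Pairing the matched species with the time and place of each recording is what turns individual clips into a map of where disease-carrying mosquitoes may be.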
Prof Prakash has a passion for getting low-cost scientific tools with a practical use into the poorest communities.
“There are one billion people who live with no infrastructure, electricity or healthcare,” he told the Ted audience.
“Frugal science is about building solutions for these communities.”
Prof Prakash – who has also designed a paper microscope that costs less than a dollar (80p) – came up with the idea of Paperfuge during a field trip in Uganda.
He spotted a $1,000 centrifuge – a medical tool used to separate liquids such as blood – being used in a remote clinic as a doorstop.
“They had no electricity so it was useless to them,” said Prof Prakash.
On returning to his Stanford lab, he was inspired by toys to create a cheaper option – first a yo-yo, and then a whirligig, also known as a button on a string: a disc suspended on threads that are pulled to make it spin.
“Could we use the physics of these objects to build centrifuges?” he asked.
Prof Prakash and his colleague Saad Bhamla recruited three undergraduate engineering students from Massachusetts Institute of Technology (MIT) and Stanford to build a mathematical model of how the device worked.
The team created a computer simulation to capture design variables such as disc size, string elasticity and pulling force.
They also borrowed equations from the physics of supercoiling DNA strands, and eventually created a prototype that spun at up to 125,000 revolutions per minute.
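The separating power of any centrifuge is usually expressed as relative centrifugal force – how many times the acceleration of gravity a sample experiences. The short sketch below applies the standard formula to the reported 125,000 rpm; the rotor radius is an illustrative assumption, not a published specification:

```python
import math

def relative_centrifugal_force(rpm, radius_cm):
    """g-force at a given spin speed and rotor radius.

    Standard lab formula: RCF = omega^2 * r / g, where omega is the
    angular velocity in rad/s and r the radius in metres.
    """
    omega = 2 * math.pi * rpm / 60.0          # angular velocity, rad/s
    return omega ** 2 * (radius_cm / 100.0) / 9.81

# 125,000 rpm as reported for the prototype; the 0.5 cm radius is a
# hypothetical value chosen for illustration.
print(round(relative_centrifugal_force(125_000, 0.5)))
```

Even at such a small radius, the quadratic dependence on spin speed yields g-forces in the tens of thousands, which is what makes separating blood components in minutes feasible.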
“There are some beautiful mathematics hidden inside this object,” Prof Prakash said.
Using the device to spin blood in a capillary coated with orange dye for 15 minutes separates malarial parasites from red blood cells, enabling them to be spotted under a microscope.
And in 2014, Prof Prakash launched Foldscope, a paper microscope that costs under a dollar.
Foldscope has now sold 50,000 units in 130 countries.
Used by amateurs and children as well as scientists, the projects it has inspired are being shared on a citizen science database.
Prof Prakash plans to ship one million more microscopes this year.
Many people are unsure about exactly what machine learning is. But the reality is that it is already part of everyday life.
A form of artificial intelligence, it allows computers to learn from examples rather than having to follow step-by-step instructions.
The Royal Society believes it will have an increasing impact on people’s lives and is calling for more research, to ensure the UK makes the most of opportunities.
Machine learning is already powering systems from the seemingly mundane to the life-changing. Here are just a few examples.
1. On your phone
Using spoken commands to ask your phone to carry out a search, or make a call, relies on technology supported by machine learning.
Virtual personal assistants – the likes of Siri, Alexa, Cortana and Google Assistant – are able to follow instructions because of voice recognition.
They process natural human speech, match it to the desired command and respond in an increasingly natural way.
The assistants learn over a number of conversations and in many different ways.
They might ask for specific information – for example how to pronounce your name, or whose voice is whose in a household.
Data from large numbers of conversations by all users is also sampled, to help them recognise words with different pronunciations or how to create natural discussion.
2. In your shopping basket
Many of us are familiar with shopping recommendations – think of the supermarket that reminds you to add cheese to your online shop, or the way Amazon suggests books it thinks you might like.
Machine learning is the technology that helps deliver these suggestions, via so-called recommender systems.
By analysing data about what customers have bought before, and any preferences they have expressed, recommender systems can pick up on patterns in purchasing history. They use this to make predictions about the products you might like.
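As a rough sketch of the idea – with made-up shoppers and baskets standing in for real purchase data – items can be scored by how often they appear in the baskets of customers whose past purchases overlap with yours:

```python
from collections import defaultdict

# Toy purchase histories; names and items are illustrative.
baskets = {
    "ann":  {"bread", "cheese", "wine"},
    "bob":  {"bread", "cheese"},
    "cara": {"bread", "wine"},
}

def recommend(user):
    """Suggest an item the user lacks, weighted by how often shoppers
    with overlapping baskets bought it."""
    scores = defaultdict(int)
    mine = baskets[user]
    for other, theirs in baskets.items():
        if other == user:
            continue
        overlap = len(mine & theirs)          # shared purchases = similarity
        for item in theirs - mine:
            scores[item] += overlap
    return max(scores, key=scores.get) if scores else None

print(recommend("bob"))
```

Production systems replace the overlap count with more sophisticated similarity measures and learn from ratings and browsing as well as purchases, but the pattern-matching principle is the same.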
3. On your TV
Similar systems are used to recommend films or TV shows on streaming services like Netflix.
Recommender systems use machine learning to analyse viewing habits and pick out patterns in who watches – and enjoys – which shows.
By understanding which users like which films – and what shows you have watched or awarded high ratings – recommender systems can identify your tastes.
They are also used to suggest music on streaming services, like Spotify, and articles to read on Facebook.
4. In your email
Machine learning can also be used to distinguish between different categories of objects or items.
This makes it useful when sorting out the emails you want to see from those you don’t.
Spam detection systems use a sample of emails to work out what is junk – learning to detect the presence of specific words, the names of certain senders, or other characteristics.
Once deployed, the system uses this learning to direct emails to the right folder. It continues to learn as users flag emails, or move them between folders.
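A minimal sketch of this kind of learning – using a tiny hand-made sample where a real filter would train on millions of messages – is the classic word-frequency (naive Bayes) approach:

```python
from collections import Counter
import math

# Tiny labelled training sample; real filters use far more data.
spam = ["win cash now", "free cash prize", "claim your prize now"]
ham = ["meeting notes attached", "lunch tomorrow", "project update attached"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    """Log-odds that a message is spam, with add-one smoothing so
    unseen words do not zero out the probabilities."""
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score  # positive = more spam-like

print(spam_score("free cash") > 0, spam_score("meeting tomorrow") > 0)
```

When a user drags a message between folders, the filter adds it to the relevant sample and the word counts shift accordingly – which is the continued learning the article describes.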
5. On your social media
Ever wondered how Facebook knows who is in your photos and can automatically label your pictures?
The image recognition systems that Facebook – and other social media platforms – use to automatically tag photos are based on machine learning.
When users upload images and tag their friends and family, these systems learn to spot faces that recur across pictures and assign them to categories – or people.
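One common design – sketched here with hand-picked numbers standing in for the vectors a real neural network would produce – is to match each new face against stored “embeddings” of known people:

```python
import math

# Illustrative face "embeddings": real systems derive high-dimensional
# vectors from a deep network; these 3-D points are invented.
known_faces = {
    "alice": (0.9, 0.1, 0.2),
    "bora":  (0.1, 0.8, 0.3),
}

def tag(embedding, threshold=0.5):
    """Label a new face with the closest known person, or None if no
    stored face is similar enough."""
    name, vec = min(known_faces.items(),
                    key=lambda kv: math.dist(kv[1], embedding))
    return name if math.dist(vec, embedding) <= threshold else None

print(tag((0.85, 0.15, 0.25)))   # close to the stored "alice" vector
print(tag((0.5, 0.5, 0.9)))      # no stored face is close enough
```

Each confirmed tag adds another example vector for that person, which is how the system improves as users keep labelling photos.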
6. At your bank
By analysing large amounts of data and looking for patterns, machine learning systems can identify activity that might not otherwise be visible to human analysts.
One common application of this ability is in the fight against debit and credit card fraud.
Machine learning systems can be trained to recognise typical spending patterns and which characteristics of a transaction – location, amount, or timing – make it more or less likely to be fraudulent.
When a transaction seems out of the ordinary, an alarm can be raised – and a message sent to the user.
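A toy illustration of the idea, using an invented transaction history: a purchase can be flagged when its amount falls far outside a customer’s usual range.

```python
import statistics

# One customer's recent transaction amounts in pounds (illustrative).
history = [12.50, 30.00, 8.99, 25.40, 18.75, 22.10, 15.00, 27.30]

def is_suspicious(amount, threshold=3.0):
    """Flag a transaction whose amount sits more than `threshold`
    standard deviations from the customer's usual spending."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

print(is_suspicious(24.00), is_suspicious(950.00))
```

Real fraud systems learn across many features at once – location, timing and merchant as well as amount – but the principle of scoring each transaction against a learned notion of “normal” is the same.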
7. In hospitals
Doctors are just starting to use machine learning to make better diagnoses – for example, to spot cancer and eye disease.
Learning from images that have been labelled by doctors, computers can analyse new pictures of a patient’s retina, a skin spot, or an image of cells taken under a microscope.
In doing so, they look for visual clues that indicate the presence of medical conditions.
This type of image recognition system is increasingly important in healthcare diagnostics.
8. In science
Machine learning is also powering scientists’ ability to make new discoveries.
In particle physics it has allowed them to find patterns in immense data sets generated from the Large Hadron Collider at Cern.
It was instrumental in the discovery of the Higgs boson, for example, and is now being used to search for “new physics” that no-one has yet imagined.
Similar ideas are being used to search for new medicines, for example by looking for new small molecules and antibodies to fight diseases.
In the near term, the focus will be on making systems that perform specific tasks well, and which could therefore be thought of as helpers.
In schools they could track student performance and develop personal learning plans.
They could help us reduce energy usage by making better use of resources and improve care for the elderly by finding more time for meaningful human contact.
In the area of transport, machine learning will power autonomous vehicles.
Many industries could turn to algorithms to increase productivity. Financial services could become increasingly automated and law firms may use machine learning to carry out basic research.
Routine tasks will be done faster, challenging business models that rely on charging hourly rates.
Over the next 10 years machine learning technologies will increasingly be part of our lives, transforming the way we work and live.
About this piece
This analysis piece was commissioned by the BBC from an expert working for an outside organisation.
Dr Sabine Hauert is a member of the Royal Society’s machine learning working group. Dr Hauert is also co-founder and president of Robohub.org and assistant professor in robotics at the University of Bristol. Follow her @sabinehauert.
Uber reportedly used a tactic called fingerprinting to track iPhones in order to fight fraud – despite Apple banning the practice.
The New York Times reports that in 2015 Apple discovered that the ride-sharing company had broken its privacy rules by collecting iPhone serial numbers.
Boss Tim Cook told Uber founder Travis Kalanick to remove the “fingerprinting” code or he would ban the app from the Apple Store, the paper claims.
Apple declined to comment.
Uber said the practice of fingerprinting deterred criminals from installing its app on stolen phones, using stolen credit cards to book journeys, then wiping the phone and doing it again.
“Being able to recognise known bad actors when they try to get back on to our network is an important security measure for both Uber and our users,” it said.
Security researcher Will Strafach told the news site TechCrunch that code in the 2014 iPhone version of the app showed it was recording the device’s serial number.
The New York Times also claimed Uber geofenced Apple’s Cupertino headquarters so that Apple employees using the app there would not see the fingerprinting code.
Cyber-security expert Prof Alan Woodward, from the University of Surrey, said fingerprinting was fairly common and generally not blocked by other operators. For example, if you sign into a service from a different device and get an email warning you about it, that is because a device ID is linked to your account.
“Digital fingerprinting can be effective in tracking who goes where on the web, and it can be used to prevent fraud, but also it has the potential to invade your privacy,” he said.
“Whether it should be allowed ultimately will be a matter for the legislators and not all jurisdictions will necessarily agree.”
The practice is still banned by Apple.
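In outline, fingerprinting of this general kind works by deriving a stable identifier from device attributes that survive a reinstall or reset. The sketch below uses invented attribute names and does not reflect Uber’s or Apple’s actual implementations:

```python
import hashlib

def device_fingerprint(attributes):
    """Derive a stable identifier by hashing device attributes.

    The attribute names here are illustrative only; real fingerprints
    combine whatever stable signals an app can read on a platform.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

phone = {"model": "iPhone 6", "os": "iOS 8.1",
         "locale": "en_US", "screen": "750x1334"}

# The same attributes always yield the same ID, even after the app is
# reinstalled or the phone is wiped - which is what makes the technique
# useful against fraud, and risky for privacy.
print(device_fingerprint(phone) == device_fingerprint(dict(phone)))
```

The privacy concern follows directly: an identifier the user cannot reset can track them across accounts and reinstalls, which is why Apple restricts the practice.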