Google says machine learning is the future. So I tried it myself

The world is quietly being reshaped by machine learning. We no longer need to teach computers how to perform complex tasks like image recognition or text translation: instead, we build systems that let them learn how to do it themselves.

“It’s not magic,” says Greg Corrado, a senior research scientist at Google. “It’s just a tool. But it’s a really important tool.”

The most powerful form of machine learning being used today, called “deep learning”, builds a complex mathematical structure called a neural network based on vast quantities of data. Designed to be analogous to how a human brain works, neural networks themselves were first described in the 1940s. But it’s only in the last three or four years that computers have become powerful enough to use them effectively.
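
At its smallest scale, that structure is less mysterious than it sounds. The sketch below (plain Python, with made-up numbers purely for illustration) shows a single artificial neuron: a weighted sum of its inputs squashed through a nonlinearity. A deep network stacks millions of these, and “learning” means nudging the weights until the network’s outputs line up with its training data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid nonlinearity

# Illustrative values only: three inputs, three weights and a bias, all made up.
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```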

Corrado says he thinks it is as big a change for tech as the internet was. “Before internet technologies, if you worked in computer science, networking was some weird thing that weirdos did. And now everyone, regardless of whether they’re an engineer or a software developer or a product designer or a CEO, understands how internet connectivity shapes their product, shapes the market, what they could possibly build.”

He says that same kind of transformation is going to happen with machine learning. “It ends up being something that everybody can do a little of. They don’t have to do the detailed things, but they need to understand ‘well, wait a minute, maybe we could do this if we had data to learn from.’”

Google’s own implementation of the idea, an open-source software suite called TensorFlow, was built from the ground up to be usable both by the researchers at the company attempting to understand the powerful models they create and by the engineers who are already taking them, bottling them up, and using them to categorise photos or let people search with their voice.
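
To give a flavour of what that looks like in practice, here is a minimal sketch of how a small character-level model might be declared with TensorFlow’s bundled Keras API. The layer sizes are arbitrary placeholders of my own, not anything Google actually ships:

```python
import tensorflow as tf

# A toy character-level model declared with TensorFlow's bundled Keras API.
# All sizes are assumptions for illustration.
VOCAB_SIZE = 128  # number of distinct characters the model can see (assumption)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),         # characters -> vectors
    tf.keras.layers.LSTM(256, return_sequences=True),  # learns sequence structure
    tf.keras.layers.Dense(VOCAB_SIZE),                 # a score for each possible next character
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```

Declaring the model is the easy part; TensorFlow’s job is to work out, from data, what the millions of numbers inside those layers should be.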

Machine learning is still a complex beast. Away from simplified playgrounds, there’s not much you can do with neural networks yourself unless you have a strong background in coding. But I wanted to put Corrado’s claims to the test: if machine learning will be something “everybody can do a little of” in the future, how close is it to that today?

One of the nice things about the machine learning community right now is how open it is to sharing ideas and research. When Google made TensorFlow open to anyone to use, it wrote: “By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products”. And it’s not alone in that: every major machine learning implementation is available for free to use and modify, meaning it’s possible to set up a simple machine intelligence with nothing more than a laptop and a web connection.

Which is what I did.

Following the lead of writer and technologist Robin Sloan, I trained a simple neural network on 119MB of Guardian leader columns. It wasn’t easy. Even with a detailed readme, it took me a few hours to set up a computer to the point where it could start learning from the corpus of text. And once it reached that point, I realised I had vastly underrated the amount of time it takes for a machine to learn. After running the training software for 30 minutes, and getting around 1% of the way through, I realised I would need a much faster computer.
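
Most of that setup is bookkeeping. A character-level network can’t read text directly, so the corpus first has to be turned into sequences of integer IDs, one per character. The Python below is my own rough illustration of that step (the filename is hypothetical, and this is not the preprocessing script that ships with the package I used); it also hints at why training drags: 119MB of text is more than 100 million characters, and the network works through all of them on every pass.

```python
import numpy as np

# Illustrative preprocessing for a character-level model (filename is hypothetical).
with open("guardian_leaders.txt", encoding="utf-8") as f:
    text = f.read()

chars = sorted(set(text))                         # the corpus defines the "vocabulary"
char_to_id = {c: i for i, c in enumerate(chars)}
ids = np.array([char_to_id[c] for c in text], dtype=np.int32)

# Training pairs: each character's target is simply the character that follows it.
inputs, targets = ids[:-1], ids[1:]
print(f"{len(chars)} distinct characters, {len(ids):,} characters to learn from")
```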


Robin Sloan (@robinsloan), 26 April 2016: “Finally got this running: snappy in-editor ‘autocomplete’ powered by a neural net trained on old sci-fi stories.” pic.twitter.com/Cu4GCZdUEl

So I spent another few hours configuring a server on Amazon’s cloud to do the learning for me. It cost $0.70 an hour, but meant that the whole thing was done in about 8 hours.

I’m not the only one to play around with the technology. Quietly, starting a few years ago, Google itself has undergone a metamorphosis. The search giant has torn out the guts of some of its biggest services, from image search to voice recognition, and recreated them from the ground up. Now, it wants the rest of the world to follow suit.

On 16 June, it announced that it was opening a dedicated Machine Learning group in its Zurich engineering office, the largest collection of Google developers outside of the US, to lead research into three areas: machine intelligence, natural language processing, and machine perception. That is, building systems that can think, listen, and see.

But while computer scientists know enough about how to wrangle neural networks to use them to identify speech or create psychedelic images, they don’t really know all there is to know about how they actually work. They sort of just … do. Part of the job of Google DeepMind, the research arm which most famously led an algorithm to victory over a world champion in the ancient Asian board game Go, is to work out a little bit more about why and how they are so good. And the new machine learning group is straddling the line between research and product development, attempting to build new algorithms that can tackle unprecedented challenges.

My own attempt to do the same didn’t go so well. The results were … not perfect. While Google’s machine learning demonstrations involve solving problems which were described as “virtually impossible” just two years ago, mine could barely string a sentence together.

Following Sloan’s example, I set my model up to run as an autocomplete engine. I write the first half-sentence of a theoretical Guardian editorial, the system gets fed it as an input, and it’s asked what it thinks will come next. Don’t like that? Ask it for another response.
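
Under the hood, “autocomplete” is just repeated next-character prediction: feed in the seed, ask the model for a probability distribution over what comes next, pick a character, append it, and go again. The sketch below is a generic illustration of that loop, with a hypothetical predict_next_char_probs function standing in for whatever trained model you happen to have; asking for another response simply means sampling again and letting the random draw land somewhere else.

```python
import random

def complete(seed, predict_next_char_probs, length=200):
    """Extend `seed` one character at a time by sampling from a model.

    `predict_next_char_probs(text)` is a stand-in for a trained model: it is
    assumed to return a dict mapping each candidate next character to its
    probability.
    """
    text = seed
    for _ in range(length):
        probs = predict_next_char_probs(text)
        chars, weights = zip(*probs.items())
        text += random.choices(chars, weights=weights, k=1)[0]
    return text
```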

I tried to use it to read Guardian editorials from a parallel universe. I used “Thursday’s momentous vote to stay in the EU was” as the seed, and tried to get the system to imagine what the rest of the sentence would look like:

  • Thursday’s momentous vote to stay in the EU was more contracts in the 1970s.
  • Thursday’s momentous vote to stay in the EU was a man of a legal opposition in 2015.
  • Thursday’s momentous vote to stay in the EU was the discrimination and the first way the failure to declare the government is not a single constitutional conservative policy.
  • Thursday’s momentous vote to stay in the EU was on the promise of the pronouncements.

It’s terrible. Of course it’s terrible: if I could train a machine to write a convincing Guardian editorial, or even a convincing sentence extract from a Guardian editorial, in two days by copying a readme and fiddling around with complex software which I don’t really understand even after having successfully used it, then my job would be much less secure than it is. Hell, everyone’s jobs would be much less secure than they are.

I’m not even the first to fall at this hurdle: the Atlantic’s Adrienne LaFrance tried a similar experiment, also using Sloan’s kit, earlier in June, but was hampered by the size of her corpus. Half a million words, the total quantity of her writing from the Atlantic, isn’t quite enough for a machine to learn from, but the 20m words sitting in the Guardian’s archive of editorials are better. (I could have run the system on every story in the archive, but it learns better if there’s a consistent tone and style for it to emulate – something leader columns, which are all written in the voice of the paper, have.)

While the results are unimpressive on the face of it, they’re also … kind of amazing. The specific package I used, called Torch-rnn, is designed for training character-level neural networks. That is, before it’s trained, it doesn’t even know the concept of a word, let alone have a specific vocabulary or understanding of English grammar.

Now, I have a model that knows all those things. And it taught itself with nothing more than a huge quantity of Guardian editorials.

It still can’t actually create meaning. That makes sense: a Guardian editorial has meaning in relation to the real world, not as a collection of words existing in its own right. And so to properly train a neural network to write one, you’d also have to feed in information about the world, and then you’ve got less of a weekend project and more of a startup pitch.

So it’s not surprising to see the number of startup pitches that do involve “deep learning” skyrocket. My inbox has consistently seen one or two a day for the past year, from an “online personal styling service” which uses deep learning to match people to clothes, to a “knowledge discovery engine” which aims to beat Google at its own game.

Where the archetypal startup of 2008 was “x but on a phone” and the startup of 2014 was “uber but for x”, this year is the year of “doing x with machine learning”. And Google seems happy to be leading the way, not only with its own products, but also by making the tools which the rest of the ecosystem is relying on.

But why now? Corrado has an answer. “The maths for deep learning was done in the 1980s and 1990s… but until now, computers were too slow for us to understand that the math worked well.

“The fact that they’re getting faster and cheaper is part of what’s making this possible.” Right now, he says, doing machine learning yourself is like trying to go online by manually coding a TCP/IP stack.

But that’s going to change. It will get quicker, easier and more effective, and slowly move from something the engineers know about, to something the whole development team know about, then the whole tech industry, and then, eventually, everyone. And when it does, it’s going to change a lot else with it.

• AlphaGo taught itself how to win, but without humans it would have run out of time
