
Learning and some explorations

Machine learning and cognitive computing are dominating today's IT world. For a long time, I had the impression that this is too much rocket science for my brain, and especially the big data part put me off. But as usual, when you start to dig into a topic, you figure out that the rocket science is already done and the basics are easy to consume. Or in other words: there are giants, and you can stand on their shoulders!

But another component also got my attention. Once I understood the basics of a neural network, I realized that many interesting books about learning, beyond IT-related stuff, that I had already read suddenly became relevant. One of my favorites is Daniel Kahneman's Thinking, Fast and Slow. By the way, this book is easy to read and very interesting, which is not always the case with authors who have earned a Nobel Prize! Kahneman describes how we learn, and that once we reach a certain maturity in a learned topic, our thinking becomes incredibly faster. Let me explain this with an example:

Shifting gears! In Switzerland, most cars are manual, so one of the first things you have to learn is how to shift gears. I remember very clearly how I trained this: going through the complete sequence with full attention. But after a while it became routine, and then I didn't even have to think about it. It changed from something I did with huge effort into an effortless "no-brainer".

Kahneman states in his book that you can reach this effortless "no-brainer" maturity in your core expertise. So writing Java code is a no-brainer for me, but I'm currently learning TypeScript, which still needs more of my attention.

But what are our brains doing while we train? We are using our own neural network to learn and recognize patterns. Once we have learned a pattern (which could be the sequence of actions to shift a gear, or writing Java code), our brain switches from learning to executing, using the "programmed" neural network. And in execution, this network is incredibly fast! And through execution, we gain even more maturity in what we are doing. Now combine this knowledge with the observation that Kathy Sierra is presenting here:

(I know I have already shared this, but it's so cool…)

But here is the point: machine learning has a lot to do with how we learn, so it is important to understand our learning approaches. That leads to the next observation: we are not continuously learning. Once we have achieved a certain maturity, we execute very fast, and we need some kind of reflection to review whether what we have learned is still accurate. And maybe we have to train our neural network for the next challenge.
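This learn-slowly-then-execute-fast split can be sketched in a few lines of code. The example below is purely illustrative (it is not from the post itself): a single perceptron, about the smallest neural network there is, learns the logical AND pattern by trial and error, and once trained, "executing" it is nothing more than one cheap weighted sum.

```python
# Minimal sketch of "train slowly, execute fast" with a single perceptron.
# Illustrative example only; names and the AND task are my own choices.

def train(samples, epochs=20, lr=0.1):
    """Slow phase: adjust weights by trial and error over many passes."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def execute(w, b, x1, x2):
    """Fast phase: the learned pattern is just one weighted sum."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The pattern to learn: logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print([execute(w, b, x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The training loop stumbles through many corrections, just like practicing gear shifts, while the trained `execute` call is a single arithmetic step.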

But let me share some observations and play the prophet a bit.

  1. We will see trained neural networks come to devices we could not imagine today, where the networks will "ennoble" data and support fast decisions.
  2. We will build feedback loops (in human terms: reflection and reviews) to train and improve these networks, but we will split the workload between training and execution. Training will be done in huge data centers, while the execution of a network will happen as close as possible to the point where the data is measured.
  3. We will see training start to depend on already "trained" basic functions, like letters to words to sentences.
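The second prediction, splitting training from execution, can be sketched as well. The example below is hypothetical (the weight values and function names are my own, assuming a tiny AND-perceptron was trained elsewhere): only the learned weights travel from the data center to the device, and the device-side "execution" is a trivially cheap function.

```python
import json

# Hypothetical sketch of prediction 2: train in the data center,
# ship only the learned weights, execute near where the data happens.

# --- data center side: result of some (assumed) training run ---
trained = {"weights": [0.2, 0.2], "bias": -0.3}   # learned AND pattern
payload = json.dumps(trained)                     # all that travels to the device

# --- device side: execution is just one cheap weighted sum ---
model = json.loads(payload)

def predict(x1, x2):
    s = model["weights"][0] * x1 + model["weights"][1] * x2 + model["bias"]
    return 1 if s > 0 else 0

print(predict(1, 1))  # prints 1
```

The heavy, iterative part stays in the data center; the device only ever sees a small, fixed set of numbers and a fast function over them.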

But there is another thing I'm observing: my sons Simon and David have different learning approaches. Simon is the trial-and-error type of learner. When he learned to walk, he stood up and tumbled until he got it. David is the "I can't until I can" type, meaning he does NOT train himself with trial and error. He did not stand up and tumble; he stood up and started walking. He did not practice saying words; he started to talk.

So there must also be a way to jump over this "trial and error" phase, straight to success, meaning learning could be even more effortless. And one last thing I want to share: David is not in school yet and cannot read, but he is a passionate "Ticket to Ride" player, a game where you have to "read" locations and build connections between them by acquiring routes. His neural network is capable of recognizing the patterns of city names and mapping them to the spoken city names without reading the letters. That's very cool, and that's what computers do as well 🙂

And as always

Have Fun!

(Because fun is the best precondition for learning)

 

 

Posted by on July 27, 2017 in ThinkTank