- Machine Learning with Swift
- Alexander Sosnovshchenko
The motivation behind ML
Let's start with an analogy. There are two ways of learning an unfamiliar language:
- Learning the language rules by heart, using textbooks, dictionaries, and so on. That's how college students usually do it.
- Observing the language in use: communicating with native speakers, reading books, and watching movies. That's how children do it.
In both cases, you build a model of the language in your mind or, as some prefer to say, develop a sense of the language.
In the first case, you are trying to build a logical system based on rules. Here you will encounter many problems: exceptions to the rules, different dialects, borrowings from other languages, idioms, and much more. Someone else, not you, derived the rules and structure of the language and described them for you.
In the second case, you derive the same rules from the available data. You may not even be aware that these rules exist, yet you gradually adjust to the hidden structure and internalize the laws. You use special brain cells called mirror neurons to try to mimic native speakers, an ability honed by millions of years of evolution. After some time, when you encounter incorrect word usage, you simply feel that something is wrong, even if you can't immediately say what exactly.
In either case, the next step is to apply the resulting language model in the real world, and the results may differ. In the first case, you will be bothered by every missing hyphen or comma, but you may be able to get a job as a proofreader at a publishing house. In the second case, everything depends on the quality, diversity, and amount of the data you were trained on. Just imagine a person in the center of New York who learned English only from Shakespeare. Would they be able to hold a normal conversation with the people around them?
Now let's put a computer in place of the person in our example. The two approaches correspond to two programming techniques. The first corresponds to writing ad hoc algorithms made of conditionals, loops, and so on, through which a programmer expresses rules and structures explicitly. The second represents ML, where the computer itself identifies the underlying structure and rules based on the available data.
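To make the contrast concrete, here is a minimal, hypothetical Swift sketch (not from the book; all names and the spam-detection scenario are made up for illustration). The first function encodes hand-written rules; the second pair derives its "rules" (word weights) from labeled examples.

```swift
// 1. Rule-based: a programmer writes the rules by hand.
func isSpamRuleBased(_ message: String) -> Bool {
    let bannedWords = ["winner", "free", "prize"]   // rules chosen by a human expert
    let words = message.lowercased().split(separator: " ").map(String.init)
    return words.contains(where: bannedWords.contains)
}

// 2. Data-driven: the "rules" (word weights) are derived from labeled examples.
func trainWordScores(examples: [(text: String, isSpam: Bool)]) -> [String: Int] {
    var scores: [String: Int] = [:]
    for example in examples {
        let words = example.text.lowercased().split(separator: " ").map(String.init)
        for word in words {
            // Words seen in spam get a positive weight, words seen in normal mail a negative one.
            scores[word, default: 0] += example.isSpam ? 1 : -1
        }
    }
    return scores
}

func isSpamLearned(_ message: String, scores: [String: Int]) -> Bool {
    let words = message.lowercased().split(separator: " ").map(String.init)
    let total = words.reduce(0) { $0 + (scores[$1] ?? 0) }
    return total > 0
}

// Usage: the learned version picks up whatever patterns the (toy) data contains.
let data: [(text: String, isSpam: Bool)] = [
    (text: "you are a winner claim your prize", isSpam: true),
    (text: "free tickets inside", isSpam: true),
    (text: "meeting moved to monday", isSpam: false),
    (text: "lunch on friday", isSpam: false)
]
let scores = trainWordScores(examples: data)
print(isSpamRuleBased("claim your free prize"))               // true, because a hand-written rule matches
print(isSpamLearned("claim your free prize", scores: scores)) // true, because the training data says so
```

The point of the sketch is not the (deliberately naive) scoring scheme but where the knowledge lives: in the first function it is typed in by the programmer; in the second it is extracted from the data and changes whenever the data does.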
The analogy goes deeper than it seems at first glance. For many tasks, building the algorithm directly is prohibitively hard because of the variability of the real world. It may require domain experts who must describe all the rules and edge cases explicitly, and the resulting models can be fragile and rigid. On the other hand, the same task can often be solved by letting the computer figure out the rules on its own from a reasonable amount of data. Face recognition is an example of such a task: it is virtually impossible to formalize in terms of conventional imperative algorithms and data structures, and only recently was it successfully solved with the help of ML.