- Hands-On Meta Learning with Python
- Sudharsan Ravichandiran
Algorithm
Now, we will better understand the Gaussian prototypical network by going through it step by step:
- Let's say we have a dataset, D = {(x1, y1), (x2, y2), ..., (xn, yn)}, where x is the feature and y is the label. Let's say we have binary labels, which means we have only two classes, 0 and 1. We will sample data points at random without replacement from each of the classes in our dataset, D, and create our support set, S.
- Similarly, we sample data points at random per class and create the query set, Q.
- We will pass the support set to our embedding function, f(). The embedding function will generate the embeddings for our support set, along with a covariance matrix for each data point.
- We calculate the inverse of the covariance matrix.
- We compute the prototype of each class in the support set as follows:

$$p^c = \frac{\sum_{i} s_i^c \circ x_i^c}{\sum_{i} s_i^c}$$

In this equation, $s_i^c$ is the diagonal of the inverse covariance matrix, $x_i^c$ denotes the embeddings of the support set, $\circ$ denotes element-wise multiplication, and the superscript $c$ denotes the class.
- After computing the prototype of each class in the support set, we learn the embeddings for the query set, Q. Let's say x' is the embedding of the query point.
- We calculate the distance of the query point embedding to each class prototype as follows:

$$d_c(x') = (x' - p^c)^T S^c (x' - p^c)$$

Here, $S^c$ is the inverse covariance matrix of class $c$, so this is a Mahalanobis-style distance rather than a plain Euclidean one.
- After calculating the distances between the class prototypes and the query point embedding, we predict the class of the query point as the class with the minimum distance, as follows:

$$\hat{y} = \arg\min_c \, d_c(x')$$
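The steps above can be sketched in NumPy. This is a minimal illustration, not the book's implementation: the embedding network f() is omitted (we work directly with toy embeddings), the covariances are assumed diagonal (here simply all-ones, so the prototype reduces to a precision-weighted mean), and all function and variable names are hypothetical.

```python
import numpy as np

def class_prototype(embeddings, precisions):
    # Inverse-covariance-weighted prototype for one class:
    # p^c = sum_i(s_i^c * x_i^c) / sum_i(s_i^c), element-wise.
    # embeddings: (n, d) support embeddings; precisions: (n, d) diagonals s_i^c
    return (precisions * embeddings).sum(axis=0) / precisions.sum(axis=0)

def distance(query, prototype, precision):
    # Squared Mahalanobis-style distance with a diagonal precision matrix:
    # d_c(x') = (x' - p^c)^T S^c (x' - p^c)
    diff = query - prototype
    return np.sum(precision * diff * diff)

rng = np.random.default_rng(0)

# Toy support set: 5 points per class, embedding dim 3,
# class 0 centered near 0 and class 1 centered near 1.
support = {c: rng.normal(c, 0.1, size=(5, 3)) for c in (0, 1)}
# Diagonal precisions; all-ones here, so prototypes are plain means.
prec = {c: np.ones((5, 3)) for c in (0, 1)}

protos = {c: class_prototype(support[c], prec[c]) for c in (0, 1)}
class_prec = {c: prec[c].mean(axis=0) for c in (0, 1)}

# A query point drawn near class 1's center.
query = rng.normal(1, 0.1, size=3)
# Predict the class whose prototype is closest under the weighted distance.
pred = min(protos, key=lambda c: distance(query, protos[c], class_prec[c]))
print(pred)  # → 1
```

With all-ones precisions the distance degenerates to squared Euclidean distance; in a real Gaussian prototypical network the precisions come from the embedding network itself, so noisy support points are down-weighted in both the prototype and the distance.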