
Algorithm

Now, we will better understand the Gaussian prototypical network by going through it step by step:

  1. Let's say we have a dataset, D = {(x1, y1), (x2, y2), ..., (xn, yn)}, where x is the feature and y is the label. Let's say we have a binary label, which means we have only two classes, 0 and 1. We sample data points at random, without replacement, from each class in our dataset, D, and create our support set, S.
  2. Similarly, we sample data points at random per class and create the query set, Q.
  3. We will pass the support set to our embedding function, f(). The embedding function will generate the embeddings for our support set, along with the covariance matrix.
  4. We calculate the inverse of the covariance matrix.
  5. We compute the prototype of each class in the support set as follows:

$$p^c = \frac{\sum_{i} s_i^c \circ x_i^c}{\sum_{i} s_i^c}$$

In this equation, $s_i^c$ is the diagonal of the inverse covariance matrix, $x_i^c$ denotes the embeddings of the support set, and the superscript c denotes the class.
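To make this step concrete, here is a minimal sketch of the prototype computation using NumPy. The function name class_prototype and the randomly generated arrays are illustrative assumptions rather than code from the original text; they only show how the inverse covariance diagonals weight each support embedding:

    import numpy as np

    def class_prototype(embeddings, inv_cov_diag):
        # embeddings   : (k, d) array, one row per support point of the class
        # inv_cov_diag : (k, d) array, diagonal of the inverse covariance matrix
        #                predicted for each support point (illustrative assumption)
        # Weight each embedding component-wise by its inverse covariance diagonal,
        # then normalize by the summed weights.
        weighted_sum = (inv_cov_diag * embeddings).sum(axis=0)
        total_weight = inv_cov_diag.sum(axis=0)
        return weighted_sum / total_weight

    # Example: 5 support points with 64-dimensional embeddings
    embeddings = np.random.randn(5, 64)
    inv_cov_diag = np.random.rand(5, 64) + 1e-3   # positive diagonals
    prototype = class_prototype(embeddings, inv_cov_diag)   # shape (64,)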

  6. After computing the prototype of each class in the support set, we learn the embeddings for the query set, Q. Let's say x' is the embedding of a query point.
  7. We calculate the distance of the query point embedding to each class prototype as follows:

$$d_c(x') = (x' - p^c)^T S^c (x' - p^c)$$

Here, $S^c$ is the inverse covariance matrix of class c.

  8. After calculating the distance between the class prototypes and the query point embedding, we predict the class of the query point as the class that has the minimum distance, as follows:

$$\hat{y} = \arg\min_{c} d_c(x')$$
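The following sketch, continuing the assumptions from the previous snippet, shows how the distance in step 7 and the prediction in step 8 could be computed. The helper names gaussian_distance and predict are hypothetical, and the class-level inverse covariance diagonals are assumed to be given:

    import numpy as np

    def gaussian_distance(query_embedding, prototype, class_inv_cov_diag):
        # Squared distance to a class prototype, scaled component-wise by the
        # class's inverse covariance diagonal.
        diff = query_embedding - prototype
        return np.sum(class_inv_cov_diag * diff * diff)

    def predict(query_embedding, prototypes, inv_cov_diags):
        # prototypes    : dict mapping class label -> prototype vector
        # inv_cov_diags : dict mapping class label -> class-level inverse covariance diagonal
        # Returns the class whose prototype is closest under the scaled distance.
        distances = {c: gaussian_distance(query_embedding, prototypes[c], inv_cov_diags[c])
                     for c in prototypes}
        return min(distances, key=distances.get)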