{
"source_id": "glossary",
"url": "https://developers.google.com/machine-learning/glossary",
"sections": [
{
"name": "class",
"split": "test",
"text": [
[
"class",
"类别"
],
[
"One of a set of enumerated target values for a label.",
"为标签枚举的一组目标值中的一个。"
],
[
"For example, in a binary classification model that detects spam, the two classes are spam and not spam.",
"例如,在检测垃圾邮件的二元分类模型中,两种类别分别是“垃圾邮件”和“非垃圾邮件”。"
],
[
"In a multi-class classification model that identifies dog breeds, the classes would be poodle, beagle, pug, and so on.",
"在识别狗品种的多类别分类模型中,类别可以是“贵宾犬”、“小猎犬”、“哈巴犬”等等。"
]
]
},
{
"name": "classification model",
"split": "test",
"text": [
[
"classification model",
"分类模型"
],
[
"A type of machine learning model for distinguishing among two or more discrete classes.",
"一种机器学习模型,用于区分两种或多种离散类别。"
],
[
"For example, a natural language processing classification model could determine whether an input sentence was in French, Spanish, or Italian.",
"例如,某个自然语言处理分类模型可以确定输入的句子是法语、西班牙语还是意大利语。"
],
[
"Compare with regression model.",
"请与回归模型进行比较。"
]
]
},
{
"name": "classification threshold",
"split": "test",
"text": [
[
"classification threshold",
"分类阈值"
],
[
"A scalar-value criterion that is applied to a model's predicted score in order to separate the positive class from the negative class.",
"一种标量值条件,应用于模型预测的得分,旨在将正类别与负类别区分开。"
],
[
"Used when mapping logistic regression results to binary classification.",
"将逻辑回归结果映射到二元分类时使用。"
],
[
"For example, consider a logistic regression model that determines the probability of a given email message being spam.",
"以某个逻辑回归模型为例,该模型用于确定指定电子邮件是垃圾邮件的概率。"
],
[
"If the classification threshold is 0.9, then logistic regression values above 0.9 are classified as spam and those below 0.9 are classified as not spam.",
"如果分类阈值为 0.9,那么逻辑回归值高于 0.9 的电子邮件将被归类为“垃圾邮件”,低于 0.9 的则被归类为“非垃圾邮件”。"
]
]
},
{
"name": "clustering",
"split": "test",
"text": [
[
"clustering",
"聚类"
],
[
"Grouping related examples, particularly during unsupervised learning.",
"将关联的样本分成一组,一般用于非监督式学习。"
],
[
"Once all the examples are grouped, a human can optionally supply meaning to each cluster.",
"在所有样本均分组完毕后,相关人员便可选择性地为每个聚类赋予含义。"
],
[
"Many clustering algorithms exist.",
"聚类算法有很多。"
],
[
"For example, the k-means algorithm clusters examples based on their proximity to a centroid, as in the following diagram:",
"例如,k-means 算法会基于样本与形心的接近程度聚类样本,如下图所示:"
],
[
"A human researcher could then review the clusters and, for example, label cluster 1 as \"dwarf trees\" and cluster 2 as \"full-size trees.\"",
"之后,研究人员便可查看这些聚类并进行其他操作,例如,将聚类 1 标记为“矮型树”,将聚类 2 标记为“全尺寸树”。"
],
[
"As another example, consider a clustering algorithm based on an example's distance from a center point, illustrated as follows:",
"再举一个例子,例如基于样本与中心点距离的聚类算法,如下所示:"
]
]
},
{
"name": "convolutional neural network",
"split": "test",
"text": []
},
{
"name": "cost",
"split": "test",
"text": []
},
{
"name": "data augmentation",
"split": "test",
"text": []
},
{
"name": "data set or dataset",
"split": "test",
"text": []
},
{
"name": "decision boundary",
"split": "test",
"text": []
},
{
"name": "dropout regularization",
"split": "test",
"text": []
},
{
"name": "example",
"split": "test",
"text": []
},
{
"name": "fairness constraint",
"split": "test",
"text": []
},
{
"name": "few-shot learning",
"split": "test",
"text": []
},
{
"name": "hyperparameter",
"split": "test",
"text": []
},
{
"name": "implicit bias",
"split": "test",
"text": []
},
{
"name": "items",
"split": "test",
"text": []
},
{
"name": "label",
"split": "test",
"text": []
},
{
"name": "landmarks",
"split": "test",
"text": []
},
{
"name": "learning rate",
"split": "test",
"text": []
},
{
"name": "loss surface",
"split": "test",
"text": []
},
{
"name": "majority class",
"split": "test",
"text": []
},
{
"name": "Markov decision process (MDP)",
"split": "test",
"text": []
},
{
"name": "ML",
"split": "test",
"text": []
},
{
"name": "MNIST",
"split": "test",
"text": []
},
{
"name": "NaN trap",
"split": "test",
"text": []
},
{
"name": "negative class",
"split": "test",
"text": []
},
{
"name": "node (TensorFlow graph)",
"split": "test",
"text": []
},
{
"name": "output layer",
"split": "test",
"text": []
},
{
"name": "preprocessing",
"split": "test",
"text": []
},
{
"name": "quantization",
"split": "test",
"text": []
},
{
"name": "queue",
"split": "test",
"text": []
},
{
"name": "representation",
"split": "test",
"text": []
},
{
"name": "reward",
"split": "test",
"text": []
},
{
"name": "Saver",
"split": "test",
"text": []
},
{
"name": "step",
"split": "test",
"text": []
},
{
"name": "timestep",
"split": "test",
"text": []
},
{
"name": "trajectory",
"split": "test",
"text": []
},
{
"name": "true positive rate (TPR)",
"split": "test",
"text": []
},
{
"name": "wide model",
"split": "test",
"text": []
}
]
}