Upload dataset_chunk_38.csv with huggingface_hub
dataset_chunk_38.csv ADDED (+2 -0)
@@ -0,0 +1,2 @@
+text
", respectively, and the “joints”inthehiddenunitsareatpositions1/6,2/6,and4/6. findvaluesofϕ ,ϕ ,ϕ ,andϕ 0 1 2 3 that will combine the hidden unit activations as ϕ +ϕ h +ϕ h +ϕ h to create a function 0 1 1 2 2 3 3 with four linear regions that oscillate between output values of zero and one. the slope of the leftmost region should be positive, the next one negative, and so on. how many linear regions will we create if we compose this network with itself? how many will we create if we compose it with itself k times? problem4.9∗ followingproblem4.8,isitpossibletocreateafunctionwiththreelinearregions that oscillates back and forth between output values of zero and one using a shallow network withtwohiddenunits? isitpossibletocreateafunctionwithfivelinearregionsthatoscillates in the same way using a shallow network with four hidden units? figure 4.9 hidden unit activations for problem 4.8. a) first hidden unit has a jointatpositionx=1/6andaslopeofoneintheactiveregion. b)secondhidden unit has a joint at position x = 2/6 and a slope of one in the active region. c) thirdhiddenunithasajointatpositionx=4/6andaslopeofminusoneinthe active region. problem 4.10 consider a deep neural network with a single input, a single output, and k hiddenlayers,eachofwhichcontainsd hiddenunits. showthatthisnetworkwillhaveatotal of 3d+1+(k−1)d(d+1) parameters. problem 4.11∗ consider two neural networks that map a scalar input x to a scalar output y. the first network is shallow and has d=95 hidden units. the second is deep and has k =10 layers, each containing d = 5 hidden units. how many parameters does each network have? how many linear regions can each network make? which would run faster? draft: please send errata to [email protected] 5 loss functions the last three chapters described linear regression, shallow neural networks, and deep neural networks. each represents a family of functions that map input to output, where the particular member of the family is determined by the model parameters ϕ. when we train these models, we seek the parameters that produce the best possible mapping frominputtooutputforthetaskweareconsidering. thischapterdefineswhatismeant by the “best possible” mapping. that definition requires a training dataset {x ,y } of input/output pairs. a loss i i function or cost function l[ϕ] returns a single number that describes the mismatch betweenthemodelpredictionsf[x ,ϕ]andtheircorrespondingground-truthoutputsy . i i during training, we seek parameter values ϕ that minimize the loss and hence map the training inputs to the outputs as closely as possible. we saw one example of a loss function in chapter 2; the least squares loss function is suitable for univariate regression problemsforwhichthetargetisarealnumbery ∈r. itcomputesthesumofthesquares appendixa of the deviations between the model predictions f[x ,ϕ] and the true values y . numbersets i i this chapter provides a framework that both justifies the choice of the least squares criterionforreal-valuedoutputsandallowsustobuildlossfunctionsforotherprediction types. we consider binary classification, where the prediction y ∈ {0,1} is one of two categories, multiclass classification, where the prediction y ∈ {1,2,...,k} is one of k categories, and more complex cases. in the following two chapters, we address model training,wherethegoalistofindtheparametervaluesthatminimizetheselossfunctions. 5.1 maximum likelihood in this section, we develop a recipe for constructing loss functions. 
5 Loss functions

The last three chapters described linear regression, shallow neural networks, and deep neural networks. Each represents a family of functions that map input to output, where the particular member of the family is determined by the model parameters ϕ. When we train these models, we seek the parameters that produce the best possible mapping from input to output for the task we are considering. This chapter defines what is meant by the “best possible” mapping.

That definition requires a training dataset {x_i, y_i} of input/output pairs. A loss function or cost function L[ϕ] returns a single number that describes the mismatch between the model predictions f[x_i, ϕ] and their corresponding ground-truth outputs y_i. During training, we seek parameter values ϕ that minimize the loss and hence map the training inputs to the outputs as closely as possible. We saw one example of a loss function in Chapter 2; the least squares loss function is suitable for univariate regression problems for which the target is a real number y ∈ ℝ (see Appendix A, Number sets). It computes the sum of the squares of the deviations between the model predictions f[x_i, ϕ] and the true values y_i.

This chapter provides a framework that both justifies the choice of the least squares criterion for real-valued outputs and allows us to build loss functions for other prediction types. We consider binary classification, where the prediction y ∈ {0, 1} is one of two categories; multiclass classification, where the prediction y ∈ {1, 2, ..., K} is one of K categories; and more complex cases. In the following two chapters, we address model training, where the goal is to find the parameter values that minimize these loss functions.

5.1 Maximum likelihood

In this section, we develop a recipe for constructing loss functions. Consider a model f[x, ϕ] with parameters ϕ that computes an output from input x. Until now, we have implied that the model directly computes a prediction y. We now shift perspective and consider the model as computing a conditional probability distribution Pr(y|x) over possible outputs y given input x (see Appendix C.1.3, Conditional probability)."
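To make this shift in perspective concrete, here is a small illustrative sketch (ours, not from the excerpt). One common choice for a real-valued output, not yet specified at this point in the text, is to treat the model output f[x, ϕ] as the mean of a normal distribution over y; under that assumption, minimizing the negative log-likelihood of the training targets matches the least squares criterion mentioned above up to constants.

```python
import numpy as np

def negative_log_likelihood(predictions, targets, sigma=1.0):
    """Negative log-likelihood of targets under Normal(mean=predictions, std=sigma).

    Here the model output f[x_i, phi] is assumed to parameterize the mean of
    Pr(y|x_i). With sigma fixed, minimizing this quantity over phi is the same
    (up to additive and multiplicative constants) as minimizing the least
    squares loss sum_i (f[x_i, phi] - y_i)**2.
    """
    const = 0.5 * np.log(2.0 * np.pi * sigma ** 2)
    return np.sum(const + (targets - predictions) ** 2 / (2.0 * sigma ** 2))

# Toy example with three training pairs (hypothetical numbers)
predictions = np.array([0.9, 0.2, 0.4])  # model outputs f[x_i, phi]
targets = np.array([1.0, 0.0, 0.5])      # ground-truth outputs y_i
print(negative_log_likelihood(predictions, targets))
```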