@@ -42,7 +42,7 @@ a lot of data has become available. The idea of machine learning
is to make use of this data.
A formal definition of the field of Machine Learning is given
by Tom Mitchell [Mit97]:
A computer program is said to learn from experience E with respect to some class of tasks T and
performance measure P, if its performance at tasks
in T, as measured by P, improves with experience E.
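For instance, T could be recognizing handwritten digits, P the fraction of digits classified correctly, and E a collection of labeled example images.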
@@ -64,8 +64,8 @@ xi are the input signals and wi are
weights which have to be learned.
Each input signal gets multiplied
with its weight, everything gets
summed up and the activation function ϕ is applied.
(b) A visualization of a simple feed-forward neural network. The 5 input nodes are red, the 2 bias nodes
are gray, the 3 hidden units are
green and the single output node
is blue.
@@ -79,7 +79,7 @@ adjust them without having to re-program everything. Machine
learning programs should generally improve when they are fed
with more data.
The field of machine learning is related to statistics. Some
algorithms directly try to find models based on distribution assumptions well known to the developer; others are
more general.
A common misunderstanding among people not familiar
with this field is that the developers don’t understand what their
@@ -97,7 +97,7 @@ basic building blocks is a time-intensive and difficult task.
An important group of machine learning algorithms was
inspired by biological neurons and is thus called artificial
neural networks. Those networks are based on mathematical
functions called artificial neurons which take n ∈ N numbers x1, . . . , xn ∈ R as input, multiply them with weights
w1, . . . , wn ∈ R, add them and apply a so-called activation
function ϕ as visualized in Figure 1(a). One example of such
an activation function is the sigmoid function ϕ(x) = 1/(1 + e^(−x)).
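A minimal sketch of such an artificial neuron in Python (the concrete inputs and weights below are made-up values for illustration):

    import math

    def sigmoid(x):
        # The activation function phi(x) = 1 / (1 + e^(-x))
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(xs, ws):
        # Multiply each input x_i with its weight w_i, sum everything up
        # and apply the activation function.
        return sigmoid(sum(x * w for x, w in zip(xs, ws)))

    print(neuron([1.0, 0.5, -1.0], [0.2, -0.4, 0.1]))  # about 0.475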
@@ -235,7 +235,7 @@ a lower level. Using such a predictor, one can generate texts
character by character. If the model is good, the text can have
the correct punctuation. This would not be possible with a
word predictor.
Character predictors can be implemented with RNNs. In contrast to standard feed-forward neural networks like multilayer
perceptrons (MLPs) like the one shown in Figure 1(b), those
networks feed their own output from an earlier time step back in, alongside
the normal input. This means they can keep some information
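A minimal sketch of one step of such a recurrent network in Python with NumPy (the vocabulary size, hidden size and random weights are illustrative assumptions, not a trained character predictor):

    import numpy as np

    vocab_size, hidden_size = 65, 100  # illustrative sizes
    rng = np.random.default_rng(0)
    W_xh = 0.01 * rng.standard_normal((hidden_size, vocab_size))
    W_hh = 0.01 * rng.standard_normal((hidden_size, hidden_size))
    W_hy = 0.01 * rng.standard_normal((vocab_size, hidden_size))

    def rnn_step(x, h):
        # x: one-hot encoded current character, h: previous hidden state.
        # The new hidden state mixes the current input with the old state,
        # which is how the network keeps information across time steps.
        h = np.tanh(W_xh @ x + W_hh @ h)
        y = W_hy @ h  # unnormalized scores for the next character
        return y, h

    h = np.zeros(hidden_size)
    x = np.zeros(vocab_size)
    x[0] = 1.0  # one-hot encoding of some character
    y, h = rnn_step(x, h)

The hidden state h is fed back in at every step; this feedback loop is exactly what a feed-forward network like an MLP lacks.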
@@ -327,14 +327,14 @@ highly authentic replications and novel music compositions”.
The reader might want to listen to [Cop12] to get an impression
of the beauty of the created music.
According to Cope, an essential part of music is “a set of
instructions for creating different, but highly related self-replications”. Emmy was programmed to find this set of
instructions. It tries to find the “signature” of a composer,
which Cope describes as “contiguous patterns that recur in two
or more works of the composer”.
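The idea of recurring contiguous patterns can be sketched in Python; this is only an illustration of the concept, not Cope’s actual algorithm, and the two pitch sequences are hypothetical MIDI note numbers:

    def common_patterns(a, b, n=4):
        # Contiguous length-n subsequences that occur in both works.
        grams_a = {tuple(a[i:i + n]) for i in range(len(a) - n + 1)}
        grams_b = {tuple(b[i:i + n]) for i in range(len(b) - n + 1)}
        return grams_a & grams_b

    work1 = [60, 62, 64, 65, 67, 65, 64, 62]
    work2 = [67, 65, 64, 62, 60, 62, 64, 65]
    # Finds the shared patterns (60, 62, 64, 65) and (67, 65, 64, 62).
    print(common_patterns(work1, work2))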
The new feature of Emily Howell compared to Emmy is that
Emily Howell does not necessarily remain in a single, already
known style.
Emily Howell makes use of an association network. Cope emphasizes that this is not a form of a neural network. However, it
is not clear from [Cop13] how exactly an association network
is trained. Cope mentions that Emily Howell is explained in
detail in [Cop05].
@@ -392,7 +392,7 @@ Available: https://www.youtube.com/watch?v=jLR- c uCwI
composition,” XRDS: Crossroads, The ACM Magazine for
Students, vol. 19, no. 4, pp. 16–20, 2013. [Online]. Available:
http://dl.acm.org/citation.cfm?id=2460444
[Cur14] A. Curtis, “Now then,” BBC, Jul. 2014. [Online]. Available: http://www.bbc.co.uk/blogs/adamcurtis/entries/
78691781-c9b7-30a0-9a0a-3ff76e8bfe58
[Gad06] A. Gadsby, Ed., Dictionary of Contemporary English. Pearson
Education Limited, 2006.
@@ -413,7 +413,7 @@ for image recognition,” arXiv preprint arXiv:1512.03385, 2015.
[Joh15a] D. Johnson, “Biaxial recurrent neural network for music
composition,” GitHub, Aug. 2015. [Online]. Available: https:
//github.com/hexahedria/biaxial-rnn-music-composition
[Joh15b] ——, “Composing music with recurrent neural networks,” Personal Blog, Aug. 2015. [Online]. Available: http://www.hexahedria.com/2015/08/03/
composing-music-with-recurrent-neural-networks/
[Joh16] J. Johnson, “neural-style,” GitHub, Jan. 2016. [Online]. Available:
https://github.com/jcjohnson/neural-style
@@ -432,7 +432,7 @@ computer science. McGraw-Hill, 1997.
deeper into neural networks,” googleresearch.blogspot.co.uk,
Jun. 2015. [Online]. Available: http://googleresearch.blogspot.de/
2015/06/inceptionism-going-deeper-into-neural.html
[Nie15] M. A. Nielsen, Neural Networks and Deep Learning. Determination Press, 2015. [Online]. Available: http://neuralnetworksanddeeplearning.com/chap6.html#
introducing_convolutional_networks
[NV15] A. Nayebi and M. Vitelli, “GRUV: Algorithmic music generation
using recurrent neural networks,” 2015. [Online]. Available:
@@ -463,7 +463,7 @@ http://arxiv.org/abs/1506.05869v2
Available: https://github.com/MattVitelli/GRUV
[Wei76] J. Weizenbaum, Computer Power and Human Reason: From
Judgement to Calculation. W. H. Freeman & Co Ltd, 1976.
[ZF14] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in Computer Vision–ECCV 2014. Springer,
2014, pp. 818–833.
APPENDIX A