Sunday, May 3, 2020

Deep Learning for Foundations and Trends

Question: Discuss Deep Learning for Foundations and Trends.

Answer:

Introduction

Deep learning is a kind of machine learning technique that teaches computers to perform tasks which come naturally to humans (LeCun, Bengio and Hinton 2015). It is also known as hierarchical learning or deep structured learning. Deep learning forms part of the broader family of machine learning methods; these approaches are based on learning representations of the data rather than on task-specific algorithms. Deep learning models are loosely inspired by information processing and communication patterns in biological nervous systems, and the technology is the key behind many modern recognition and prediction systems. In deep learning, a computer model learns to perform tasks directly from text, images or sound, and it can reach levels of accuracy that exceed human performance. Deep learning models are trained using labeled data and neural network architectures that contain many layers (Schmidhuber 2015).

Importance of Deep Learning

Deep learning has achieved high levels of recognition accuracy, which helps systems meet consumer expectations. It performs machine learning with an artificial neural network composed of several levels arranged in a hierarchy (Sutskever et al. 2013). The technology has attracted wide attention because of its usefulness in real-world applications: deep networks can be applied to big data for knowledge discovery, knowledge application and knowledge-based prediction, which makes deep learning a powerful tool for producing actionable results. Deep learning requires very large amounts of labeled data and substantial computing power. High-performance GPUs have a parallel architecture that is efficient for deep learning, and when they are combined with clusters or cloud computing, development teams can greatly reduce the time needed to train a deep learning network (Goodfellow et al. 2016).

Deep Learning Methods

Deep learning methods are a cluster of machine learning methods that learn features from lower levels to higher levels by building a deep architecture, and they can learn features at multiple levels automatically. The defining property of deep learning models is their deep architecture, meaning a network with multiple hidden layers; a shallow architecture, by contrast, has only a small number of hidden layers (Deng and Yu 2014). Deep convolutional neural networks have been applied in many areas, including motion modeling, texture modeling, information retrieval, natural language processing, dimensionality reduction, regression and classification, fault diagnosis and many others.
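To make the distinction between deep and shallow architectures concrete, the following is a minimal sketch in PyTorch (an assumed framework choice, not one taken from the cited papers; all layer sizes are illustrative only) of a shallow network with one hidden layer next to a deeper network with several stacked hidden layers.

```python
# Minimal sketch (assumes PyTorch is installed); layer sizes are illustrative only.
import torch
import torch.nn as nn

# A shallow architecture: a single hidden layer between input and output.
shallow_net = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),   # hidden layer -> output layer
)

# A deep architecture: multiple hidden layers stacked in a hierarchy,
# so features can be learned from lower to higher levels.
deep_net = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64),  nn.ReLU(),
    nn.Linear(64, 10),
)

# Both map a 784-dimensional input (e.g. a flattened 28x28 image) to 10 class scores.
x = torch.randn(32, 784)      # a batch of 32 random inputs
print(deep_net(x).shape)      # torch.Size([32, 10])
```

Training either network would additionally require labeled data, a loss function and an optimizer, which is where the large labeled data sets and GPU computing discussed above come in.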
Deep Learning Algorithms

Deep learning algorithms have been studied and applied widely in recent years (Van Merriënboer et al. 2015), and as a result there is a large number of related approaches. Based on their architectures, these algorithms can be grouped into two categories: Restricted Boltzmann Machines (RBMs) and Convolutional Neural Networks (CNNs).

Restricted Boltzmann Machines

An RBM is a probabilistic, energy-based generative model composed of one layer of visible units and one layer of hidden units. The visible units represent the input vector of a data sample, while the hidden units represent features abstracted from the visible units. Every visible unit is connected to every hidden unit, but no connections exist within the visible layer or within the hidden layer (Kuremoto et al. 2014).

Convolutional Neural Networks

In recent years, the quality of image classification and object detection has improved dramatically thanks to deep learning, and the CNN has brought about a major revolution in computer vision. It has not only advanced the accuracy of image classification but also plays a major role in generic feature extraction for tasks such as scene classification, object detection, image retrieval and image captioning. CNNs are among the most powerful classes of deep neural networks for image processing tasks; they are extremely effective and are commonly used in computer vision applications. A convolutional neural network is composed of three kinds of layers: convolution layers, subsampling layers and full connection layers (Ji et al. 2013); a minimal sketch of such a network is given after the application examples below.

Current Applications of Deep Learning

Deep learning is applied in many fields, including signal processing, speech recognition and computer vision. Deep learning methods are used mainly in image recognition systems (Deng, Hinton and Kingsbury 2013).

CNN Based Applications in Visual Computing

Convolutional Neural Networks are extremely powerful tools for image recognition and classification. CNN-based methods have been applied to object detection, object and image recognition, and object segmentation.

CNN Based Applications for Face Recognition

Face recognition is one of the most important tasks in computer vision. A face recognition system generally consists of four steps. First, a face detector locates the one or more faces in a given image and isolates them. Each face is then processed and aligned using 2D or 3D modelling methods. Next, a feature extractor extracts features from the pre-aligned face to obtain a low-dimensional representation. Finally, a classifier makes its predictions based on that low-dimensional representation.

Colorization of Black and White Images

Deep learning can be used to recognize the objects and their context in a photograph in order to color the image. The approach involves very large convolutional neural networks and supervised layers that recreate the image with color added.
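As a complement to the description above, here is a minimal sketch of a small convolutional neural network in PyTorch (the framework choice, the input size of 28x28 single-channel images and all layer sizes are assumptions for illustration, not taken from the cited papers). It shows the three kinds of layers mentioned earlier: convolution layers, subsampling (pooling) layers and fully connected layers, arranged as a simple image classifier.

```python
# Minimal CNN sketch (assumes PyTorch); sizes chosen for 1-channel 28x28 images.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolution layers learn local image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                               # subsampling: 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # 14x14 -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                               # subsampling: 14x14 -> 7x7
        )
        # Fully connected (full connection) layers map the features to class scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SimpleCNN()
batch = torch.randn(8, 1, 28, 28)   # a batch of 8 single-channel 28x28 images
print(model(batch).shape)           # torch.Size([8, 10])
```

The application examples above, such as object detection, face recognition and colorization, combine these same building blocks in much larger configurations trained on correspondingly larger data sets.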
The Future of Deep Learning

Deep learning is an active area of research, and future implementations of neural networks should be able to learn while the data sets are still being collected. Training neural networks on large data sets such as ImageNet requires large, high-performance SSDs and GPUs with sufficient physical memory. As modern algorithms mature, new data samples can be translated into low-dimensional spaces so that huge data sets can be stored compactly, which may change the way natural images are compressed in the future (Najafabadi et al. 2015). As both the neural networks and the data sets grow in size, exhaustive labeling becomes unrealistic and unreasonable. As the technology improves, deep learning will adopt a core set of standard tools for its operation and will find a stable place within the open analytics ecosystem. Most deep learning deployments depend on Spark, Hadoop, Kafka and other data analytics platforms; with these kinds of platforms, anyone will be able to train, manage and deploy deep learning algorithms without specialized big data analytics expertise. Many developers of deep learning technology already use Spark clusters for specialized pipeline tasks such as fast in-memory training, data cleansing, preprocessing and hyperparameter optimization (Ling et al. 2015). Deep learning tools will incorporate simplified programming frameworks so that coding can be done efficiently, and the application developer community will insist on APIs and other programming abstractions for quickly coding the core algorithmic capabilities. Deep learning toolkits will also support the visual development of reusable components, and these tools will be embedded in every surface of the design process (Witten et al. 2016). As the deep learning market advances towards mass adoption, it will follow in the footsteps of the data visualization, business intelligence and predictive analytics markets.

Conclusion

Based on the above discussion, it can be concluded that deep learning will be a very useful technology for the future. This report has provided an overview of deep learning algorithms and their applications. Classic deep learning algorithms, including restricted Boltzmann machines and convolutional neural networks, were introduced, and in addition to the algorithms, deep learning applications were reviewed and compared with other machine learning methods.
References

Deng, L. and Yu, D., 2014. Deep learning: methods and applications. Foundations and Trends in Signal Processing, 7(3-4), pp.197-387.

Deng, L., Hinton, G. and Kingsbury, B., 2013, May. New types of deep neural network learning for speech recognition and related applications: An overview. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on (pp. 8599-8603). IEEE.

Goodfellow, I., Bengio, Y. and Courville, A., 2016. Deep learning. Cambridge: MIT Press.

Ji, S., Xu, W., Yang, M. and Yu, K., 2013. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), pp.221-231.

Kuremoto, T., Kimura, S., Kobayashi, K. and Obayashi, M., 2014. Time series forecasting using a deep belief network with restricted Boltzmann machines. Neurocomputing, 137, pp.47-56.

LeCun, Y., Bengio, Y. and Hinton, G., 2015. Deep learning. Nature, 521(7553), pp.436-444.

Ling, Z.H., Kang, S.Y., Zen, H., Senior, A., Schuster, M., Qian, X.J., Meng, H.M. and Deng, L., 2015. Deep learning for acoustic modeling in parametric speech generation: A systematic review of existing techniques and future trends. IEEE Signal Processing Magazine, 32(3), pp.35-52.

Najafabadi, M.M., Villanustre, F., Khoshgoftaar, T.M., Seliya, N., Wald, R. and Muharemagic, E., 2015. Deep learning applications and challenges in big data analytics. Journal of Big Data, 2(1), p.1.

Schmidhuber, J., 2015. Deep learning in neural networks: An overview. Neural Networks, 61, pp.85-117.

Sutskever, I., Martens, J., Dahl, G. and Hinton, G., 2013, February. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning (pp. 1139-1147).

Van Merriënboer, B., Bahdanau, D., Dumoulin, V., Serdyuk, D., Warde-Farley, D., Chorowski, J. and Bengio, Y., 2015. Blocks and fuel: Frameworks for deep learning. arXiv preprint arXiv:1506.00619.

Witten, I.H., Frank, E., Hall, M.A. and Pal, C.J., 2016. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann.
