
Tuesday, April 17, 2018

Computation & Computing

Group Members:
Achmad Irfan
Bobby Monginsidi
Ghailan Aldi M
M. Taufik
Tubagus Aji P

Topic: Computation & Computing



Introduction
In general, computation can be understood as a way of solving a problem from input data by using an algorithm. Computing is a sub-field of computer science and mathematics. For thousands of years, calculations were generally done with pen and paper, or chalk and slate, or mentally, sometimes with the help of tables. Today, most computing is done with computers, and this computer-based computing is called modern computing.
History of Modern Computation
Computing began with humans themselves. People have known numbers and calculation for centuries; the Romans, for example, were already able to compute calendars and track constellations. As civilization developed, people needed to perform increasingly complex calculations. Because the human brain is limited in how many digits it can handle, the abacus was created as a counting aid, and it later developed into the calculator. As tools improved and the amount of data to be processed grew, the idea arose of building a computer as a calculating machine, which became the basis of modern computing. The computers built since then are not merely calculating tools: they can also store, edit, and process text, and have many other uses and advantages.

Theory of Computation
The theory of computation is a branch of theoretical computer science. It studies how problems can be solved on a model of computation using algorithms. The theory of computation is divided into three branches:
- Automata theory
- Computability theory
- Computational complexity theory
Computability theory examines whether a computational problem can be solved on a theoretical model of computation; in other words, it classifies problems as solvable or unsolvable. Complexity theory examines the time and space needed to solve a problem with different approaches; in other words, it classifies problems as easy or hard. Computability theory introduces several concepts that are reused in complexity theory. Automata theory covers the definitions and properties of models of computation. In the theory of computation, the most commonly used model is the Turing machine.
Some models of computation:
- Finite State Automata (FSA) / Finite State Machine (FSM), sketched below
- Push Down Automata (PDA)
- Turing Machine (TM)
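As a rough, invented illustration of the first of these models, the sketch below implements a tiny deterministic finite state automaton in Python that accepts binary strings containing an even number of 1s; the states and transition table are made up for the example and are not from the original material.

# Minimal deterministic finite state automaton (DFA) sketch.
# It accepts binary strings that contain an even number of 1s.

TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
START_STATE = "even"
ACCEPT_STATES = {"even"}

def accepts(word: str) -> bool:
    # Run the DFA over the word and report whether it ends in an accepting state.
    state = START_STATE
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPT_STATES

print(accepts("1010"))  # True: two 1s
print(accepts("1011"))  # False: three 1s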

Implementation of Computing in the Fields of Physics, Chemistry, Mathematics, Economics, Geology, and Geography
1. Implementation of Modern Computation in the Field of Physics
The implementation of modern computing in physics is computational physics, which combines physics, computer science, and applied mathematics to provide solutions to complex real-world problems, either through simulation or through the appropriate use of algorithms. Understanding of physics through theory, experiment, and computation must be kept in balance in order to produce the numerical solutions and visualizations/models needed to understand a physics problem.
Computational physics is applied to tasks such as evaluating integrals, solving differential equations, solving systems of simultaneous equations, plotting functions or data, developing series expansions of functions, finding the roots of equations, and working with complex numbers. Many software packages and languages, including MatLab, Visual Basic, Fortran, Open Source Physics (OSP), LabVIEW, and Mathematica, are used to understand and find numerical solutions to problems in computational physics.
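For instance, a task such as finding the root of an equation can be sketched in a few lines of Python; the bisection routine below is a generic, illustrative example and is not taken from any of the packages mentioned above.

# Bisection method: find a root of f(x) = 0 on an interval [a, b]
# where f(a) and f(b) have opposite signs. Illustrative sketch only.

def bisect(f, a, b, tol=1e-8, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2.0
        fm = f(mid)
        if abs(fm) < tol or (b - a) / 2.0 < tol:
            return mid
        if fa * fm < 0:
            b, fb = mid, fm
        else:
            a, fa = mid, fm
    return (a + b) / 2.0

# Example: root of x^2 - 2, i.e. sqrt(2) ~ 1.41421356
print(bisect(lambda x: x * x - 2, 0.0, 2.0))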
2. Implementation of Modern Computation in the Field of Chemistry
The implementation of modern computation in chemistry is computational chemistry, the use of computer science to help solve chemical problems, for example using supercomputers to calculate molecular structures and properties. The term theoretical chemistry can be defined as a mathematical description of chemistry, whereas the term computational chemistry is typically used when the mathematical methods are developed well enough to be used in computer programs. Note that the words "exact" or "perfect" do not appear here, because very few aspects of chemistry can be computed exactly; nevertheless, almost every aspect of chemistry can be described in a qualitative or approximate quantitative computational scheme.
3. Implementation of Modern Computation in the Field of Mathematics
Mathematics has always involved solving problems through calculation, but in the context of modern computing the focus is on systems that solve mathematical problems with a computer, by preparing algorithms the computer can follow that are useful for solving human problems.
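As a small, hedged illustration of what "an algorithm a computer can follow" looks like (this example is not from the source material), the sketch below computes the greatest common divisor of two integers using Euclid's algorithm.

# Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
# until the remainder is zero; the last non-zero value is the GCD.

def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21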
4. Implementation of Modern Computing in the Field of Economics
This covers programming designed specifically for economic computation, as well as the development of tools for teaching computational economics. The field of economics has many problems that must be solved algorithmically, for example applying statistical theory to solve financial problems.
One example of computing in economics is statistical computing. Statistical computing studies techniques for data processing, programming, and data analysis, as well as techniques for building statistical information systems, such as database design, data communications, network systems, and the dissemination of statistical data.
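As a minimal, invented illustration of the kind of data processing statistical computing involves, the sketch below computes the mean and sample standard deviation of a small made-up data set in Python.

import math

# Toy data set, e.g. monthly sales figures (the values are made up).
data = [120, 135, 128, 150, 142, 138]

mean = sum(data) / len(data)
# Sample variance uses n - 1 in the denominator.
variance = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
std_dev = math.sqrt(variance)

print(f"mean = {mean:.2f}, standard deviation = {std_dev:.2f}")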
5. Implementation of Computing in the Fields of Geology and Geography
In geology and geography, computing can be used, for example, to model the geographic state of a monitored surface area when movement or vibration occurs. In addition, weather forecast information is very useful for many activities, especially air and sea transportation.

1. Introduction to Cloud Computing
A. Introduction
Technology in this era is developing around concepts such as social networking, openness, sharing, collaboration, mobility, easy maintenance, one-click access, distribution, scalability, concurrency, and transparency. The trend of cloud computing technology is still being investigated by researchers and IT experts. With its various advantages and disadvantages, cloud computing offers easy access to data from anywhere at any time: by using the internet, a fixed or mobile device can treat the cloud as a place to store data, applications, and more. This technology provides many advantages both for the service provider and for the user, and its adoption has had a very significant impact on the development of the technology itself, on the user side as well as the industry side.
Users benefit from the growing ease of obtaining or downloading data quickly, as more and more services are opened up by industry. The advantages for industry are no less significant, because advances in cloud computing make it easier to market products and spread information widely throughout the world. In general, cloud computing is the combination of computer technology (computing) with internet-based development (the cloud); it runs programs or applications on many computers connected at the same time, although not everything that connects through the internet uses cloud computing.
Cloud-based computing makes the internet the central server for managing data and user applications. It allows users to run programs without installing them and to access their personal data from any computer with internet access.

B. Introduction to Grid Computing
Grid computing is essentially an application developed from computer networking. Unlike conventional computer networks, which focus on device-to-device communication, grid computing applications are designed to take advantage of the resources of the terminals in the network. Grid computing is usually applied to run a function that is too complex or too computationally intensive for a single system. Just as internet users access various websites and use various protocols as if they were in a single stand-alone system, users of grid computing applications appear to be using one virtual computer with enormous data-processing capacity.
By definition, grid computing is a type of parallel computing, because its use of resources involves many computers that are geographically dispersed but connected through communication channels (including the internet) to solve large-scale computational problems. The faster the communication channels become, the greater the opportunity to combine the computing power of separate computer resources, so the scale of distributed computing can be extended geographically further, across administrative domain boundaries.
C. Virtualization
Two terms are currently popular in computing technology: virtualization and cloud computing. Many people seem to think they are the same thing, when in fact cloud computing is more than just virtualization.
Virtualization is a technology that lets you create virtual versions of something physical, such as an operating system, data storage, or a network resource. The process is carried out by software or firmware called a hypervisor. The hypervisor is the heart of virtualization, because it is the layer that "pretends" to be the infrastructure on which multiple virtual machines run. In practice, by buying and owning one machine you effectively get several servers, so you can reduce IT spending on new servers, components, storage, and other supporting software.
In hardware virtualization, the software creates a virtual machine that acts as if it were a real computer with an operating system installed on it. As a simple example, suppose one computer has GNU/Linux Ubuntu installed. Using virtualization software such as VirtualBox, we can then install two other operating systems on it, for example Windows XP and FreeBSD.
The operating system installed physically on the computer, in this case GNU/Linux, is called the host machine, while an operating system installed on top of it is called a guest machine. The terms host and guest make it easy to distinguish the physical operating system installed on the computer from the virtual operating systems installed above it.
The software used to create virtual machines on a host machine is commonly called a hypervisor or Virtual Machine Monitor (VMM). According to Robert P. Goldberg in his thesis "Architectural Principles for Virtual Computer Systems" (p. 23), there are two types of VMM:
- Type 1 runs directly on the physical computer. In this type, the hypervisor/VMM actually controls the hardware of the host computer, including controlling its guest operating systems. An existing implementation that I have tried directly is VMware ESXi; another example is Microsoft Hyper-V.
- Type 2 runs on top of an existing operating system; in this type, the guest operating systems sit on the layer above that again.
D. Distributed Computing in Cloud Computing
Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of several autonomous computers that communicate over a computer network and interact with each other to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. Distributed computing also refers to the use of distributed systems to solve computational problems: the problem is divided into many tasks, each of which is solved by one computer.
This activity involves a collection of connected computers performing distributed work, such as sending and receiving data and other interactions, which requires a network so that the computers can communicate with one another. All of this relies on cloud computing as we know it, which provides services where information is stored permanently on servers and only temporarily on the client computer.
Distributed computing is one of the goals of cloud computing, because it offers parallel access to resources, users can use it simultaneously (without having to wait in a queue for service), and it consists of many systems, so that if one system crashes the others are not affected; it can also save operational costs because it does not require large dedicated resources.
Distributed computing can be defined as the study of the coordinated use of physically separate, or distributed, computers. In distributed computing, a program is split into several parts that run simultaneously on many computers connected through the internet.
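A rough single-machine sketch of the idea, assuming Python's multiprocessing module: the problem is split into independent tasks and each task is handled by a separate worker process. In a real distributed system the workers would run on different computers on a network; here they only run as local processes, and the work function is invented for the example.

from multiprocessing import Pool

# Each worker independently sums one chunk of the data (one independent task).
def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1, 1001))  # the full problem
    # Split the problem into four tasks of 250 numbers each.
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)  # each task solved by one worker
    print(sum(partials))  # combine the partial results: 500500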
E. MapReduce and NoSQL
MapReduce is one of the most important technical concepts in cloud technology, mainly because it can be implemented in a distributed computing environment and thus guarantees the scalability of our applications. One real implementation of map-reduce in a product is what Google does: inspired by the map and reduce functions of functional programming, Google built a highly scalable distributed filesystem and Google BigTable. Also inspired by Google, the open source world saw the accelerated development of another distributed framework using the same concept, the open source project named Apache Hadoop.
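A minimal word-count sketch of the map-reduce idea in plain Python (this illustrates the concept only, not Hadoop's or Google's actual implementation): the map step emits (word, 1) pairs for every word, the pairs are grouped by key, and the reduce step sums the counts for each word.

from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Group the pairs by key and sum the counts for each word.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

documents = ["the cloud stores data", "the grid shares data"]
all_pairs = []
for doc in documents:
    all_pairs.extend(map_phase(doc))  # map step (could run in parallel per document)
print(reduce_phase(all_pairs))        # reduce step combines the results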
NoSQL is a term for everything from simple databases of keys and values, like caches, to more sophisticated non-relational databases such as MongoDB, Cassandra, CouchDB, and others.
Wikipedia describes NoSQL as a class of database management systems that differ from the classic relational database management system in several ways. NoSQL databases may not require table schemas, generally avoid join operations, and scale horizontally. Academics also call this kind of database structured storage, a term that includes classic relational database management systems as well.
F. NoSQL Databases
A NoSQL database, also called a Not Only SQL database, is an approach to data management and database design that is useful for very large sets of distributed data. NoSQL, which covers a variety of technologies and architectures, seeks to solve the scalability and performance problems of big data that relational databases were not designed to handle. NoSQL is especially useful when a company needs to access and analyze large amounts of unstructured data or data stored remotely on multiple virtual servers in the cloud.
Contrary to the misunderstanding caused by its name, NoSQL does not prohibit structured query language (SQL). While it is true that some NoSQL systems are completely non-relational, others simply avoid selected relational features such as fixed table schemas and join operations. For example, instead of using tables, a NoSQL database might organize data into objects, key/value pairs, or tuples.
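As a hedged illustration of the key/value and non-tabular organization mentioned above, the sketch below stores records as key-value pairs and as nested documents using ordinary Python dictionaries; it is not tied to any particular NoSQL product, and the keys and field names are invented.

# Key-value style: a single opaque value looked up by key.
kv_store = {
    "user:1001": '{"name": "Andi", "city": "Jakarta"}',
}

# Document style: a nested structure per key, with no fixed table schema,
# so different documents can carry different fields.
document_store = {
    "user:1001": {"name": "Andi", "city": "Jakarta"},
    "user:1002": {"name": "Budi", "orders": [501, 502]},  # extra field, no schema change
}

print(kv_store["user:1001"])
print(document_store["user:1002"]["orders"])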


Source :
https://adhibarfan.wordpress.com/2016/04/21/pengantar-komputasi-cloud/
https://diioradhitya.blogspot.co.id/2017/03/implementasi-komputasi-dalam-bidang.html
https://anggasaputro.wordpress.com/2016/03/15/teori-komputasi-dan-penerapannya/