Theoretical computer science

A problem is regarded as inherently difficult if its solution requires significant (computational) resources, whatever the [[algorithm]] used. The theory formalizes this intuition by introducing mathematical [[model of computation|models of computation]] to study such problems and quantifying the amount of resources needed to solve them, such as processing time and storage. Other measures of [[computational complexity theory|complexity]] are also used, such as the amount of communication (used in [[communication complexity]]), the number of [[logic gate|logic gates]] in a circuit (used in [[circuit complexity]]) and the number of processors (used in [[parallel computing]]). One of the roles of complexity theory is to determine the practical limits on what [[computer]]s can and cannot do.
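
For illustration (a sketch, not drawn from the cited sources), the Python snippet below counts the comparisons two textbook algorithms spend searching the same sorted list; complexity theory studies how such costs grow with the input size ''n'':

<syntaxhighlight lang="python">
# Counting the "time" resource (comparisons) of two algorithms for the
# same problem: membership search in a sorted list of n items.

def linear_search_steps(items, target):
    """Linear scan: O(n) comparisons in the worst case."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(items, target):
    """Binary search: O(log n) comparisons in the worst case."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))   # 1000000 comparisons
print(binary_search_steps(data, 999_999))   # about 20 comparisons
</syntaxhighlight>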

===Distributed computation===
{{main|Distributed computation}}
[[Distributed computing]] studies distributed systems. A distributed system is a software system in which components located on [[computer network|networked computers]] communicate and coordinate their actions by [[message passing|passing messages]].<ref name="Coulouris">{{cite book|last=Coulouris|first=George|author2=Jean Dollimore|author3=Tim Kindberg|author4=Gordon Blair|title=Distributed Systems: Concepts and Design (5th Edition)|publisher = Addison-Wesley|year=2011|location=Boston|isbn=0-132-14301-1}}</ref> The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components.<ref name="Coulouris"/> Examples of distributed systems vary from [[Service-oriented architecture|SOA-based systems]] to [[massively multiplayer online game]]s to [[Peer-to-peer| peer-to-peer applications]].

A [[computer program]] that runs in a distributed system is called a '''distributed program''', and distributed programming is the process of writing such programs.<ref>{{harvtxt|Andrews|2000}}. {{harvtxt|Dolev|2000}}. {{harvtxt|Ghosh|2007}}, p. 10.</ref> There are many alternatives for the message passing mechanism, including [[Remote procedure call|RPC-like]] connectors and [[Message-oriented middleware|message queues]]. An important goal and challenge of distributed systems is [[location transparency]].
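
The following sketch shows the message-passing style using Python's multiprocessing module (an illustration; real distributed systems communicate over a network, but the coordination pattern is the same):

<syntaxhighlight lang="python">
# Two components (here OS processes) that share no memory and interact
# only by passing messages over a pipe.
from multiprocessing import Process, Pipe

def worker(conn):
    request = conn.recv()                 # block until a message arrives
    conn.send("processed: " + request)    # reply with a result message
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("task-42")            # coordinate by messages only
    print(parent_end.recv())              # -> processed: task-42
    p.join()
</syntaxhighlight>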

===Parallel computation===
{{main|Parallel computation}}
[[Parallel computing]] is a form of [[computing|computation]] in which many calculations are carried out simultaneously,<ref>{{cite book|last=Gottlieb|first=Allan|title=Highly parallel computing|year=1989|publisher=Benjamin/Cummings|location=Redwood City, Calif.|isbn=0-8053-0177-1|url=http://dl.acm.org/citation.cfm?id=160438|author2=Almasi, George S.}}</ref> operating on the principle that large problems can often be divided into smaller ones, which are then solved [[Concurrency (computer science)|concurrently]] ("in parallel"). There are several different forms of parallel computing: [[bit-level parallelism|bit-level]], [[instruction level parallelism|instruction level]], [[data parallelism|data]], and [[task parallelism]]. Parallelism has been employed for many years, mainly in [[high performance computing|high-performance computing]], but interest in it has grown lately due to the physical constraints preventing [[frequency scaling]].<ref>S.V. Adve et al. (November 2008). [http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf "Parallel Computing Research at Illinois: The UPCRC Agenda"] (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits&nbsp;– increased clock frequency and smarter but increasingly complex architectures&nbsp;– are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."</ref> As power consumption (and consequently heat generation) by computers has become a concern in recent years,<ref>Asanovic et al. Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".</ref> parallel computing has become the dominant paradigm in [[computer architecture]], mainly in the form of [[multi-core processor]]s.<ref name="View-Power">Asanovic, Krste et al. (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance&nbsp;... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."</ref>

[[Parallel algorithm|Parallel computer programs]] are more difficult to write than sequential ones,<ref>{{cite book|last=Hennessy|first=John L.|title=Computer organization and design : the hardware/software interface|year=1999|publisher=Kaufmann|location=San Francisco|isbn=1-55860-428-6|edition=2. ed., 3rd print.|author2=Patterson, David A. |author3=Larus, James R. }}</ref> because concurrency introduces several new classes of potential [[software bug]]s, of which [[race condition]]s are the most common. [[Computer networking|Communication]] and [[Synchronization (computer science)|synchronization]] between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

The maximum possible [[speedup|speed-up]] of a single program as a result of parallelization is known as [[Amdahl's law]].
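
A small worked example of Amdahl's law, assuming a 95% parallelizable fraction: if a fraction ''p'' of a program can be parallelized across ''n'' processors, the overall speed-up is at most 1 / ((1 − p) + p/n):

<syntaxhighlight lang="python">
# Amdahl's law: the sequential fraction (1 - p) caps the speed-up,
# no matter how many processors are added.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 2 -> 1.9, 8 -> 5.93, 64 -> 15.42, 1024 -> 19.64,
# approaching the ceiling of 1 / 0.05 = 20x as n grows
</syntaxhighlight>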

===Very-large-scale integration===
{{main|VLSI}}
[[Very-large-scale integration]] ('''VLSI''') is the process of creating an [[integrated circuit]] (IC) by combining thousands of [[transistors]] into a single chip. VLSI began in the 1970s when complex [[semiconductor]] and [[communication]] technologies were being developed. The [[microprocessor]] is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An [[electronic circuit]] might consist of a [[central processing unit|CPU]], [[Read-only memory|ROM]], [[Random Access Memory|RAM]] and other [[glue logic]]. VLSI lets IC makers add all of these into one chip.

===Machine learning===
{{main|Machine learning}}
[[Machine learning]] is a [[academic disciplines|scientific discipline]] that deals with the construction and study of [[algorithm]]s that can [[learning|learn]] from data.<ref>{{cite journal |title=Glossary of terms |author1=Ron Kovahi |author2=Foster Provost |journal=[[Machine Learning (journal)|Machine Learning]] |volume=30 |pages=271–274 |year=1998 |url=http://ai.stanford.edu/~ronnyk/glossary.html}}</ref> Such algorithms operate by building a [[Statistical model|model]] based on inputs<ref name="bishop">{{cite book |author=C. M. Bishop |authorlink=Christopher M. Bishop |year=2006 |title=Pattern Recognition and Machine Learning |publisher=Springer |isbn=0-387-31073-8}}</ref>{{rp|2}} and using that to make predictions or decisions, rather than following only explicitly programmed instructions.

Machine learning can be considered a subfield of computer science and [[statistics]]. It has strong ties to [[artificial intelligence]] and [[mathematical optimization|optimization]], which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based [[algorithm]]s is infeasible. Example applications include [[spam filter]]ing, [[optical character recognition]] (OCR),<ref name=Wernick-Signal-Proc-July-2010>Wernick, Yang, Brankov, Yourganov and Strother, Machine Learning in Medical Imaging, ''[[IEEE Signal Processing Society|IEEE Signal Processing Magazine]]'', vol. 27, no. 4, July 2010, pp. 25-38</ref> [[Learning to rank|search engines]] and [[computer vision]]. Machine learning is sometimes conflated with [[data mining]],<ref>{{cite conference |last=Mannila |first=Heikki |title=Data mining: machine learning, statistics, and databases |conference=Int'l Conf. Scientific and Statistical Database Management |publisher=IEEE Computer Society |year=1996}}</ref> although that focuses more on exploratory data analysis.<ref>{{cite journal |last=Friedman |first=Jerome H. |authorlink=Jerome H. Friedman |title=Data Mining and Statistics: What's the connection? |journal=Computing Science and Statistics |volume=29 |issue=1 |year=1998 |pages=3–9}}</ref> Machine learning and [[pattern recognition]] "can be viewed as two facets of the same field."<ref name="bishop"/>{{rp|vii}}
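
One of the simplest such learners, a one-nearest-neighbour classifier over made-up data, illustrates "building a model from inputs and using it to make predictions" (a sketch):

<syntaxhighlight lang="python">
# The "model" is simply the stored labeled examples; a query point is
# predicted to have the label of its closest training point.

def nearest_neighbour(train, query):
    """train: list of ((x, y), label) pairs; query: an (x, y) point."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

training_data = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
                 ((4.0, 4.2), "ham"), ((4.5, 3.9), "ham")]
print(nearest_neighbour(training_data, (1.1, 0.9)))  # -> spam
print(nearest_neighbour(training_data, (4.2, 4.0)))  # -> ham
</syntaxhighlight>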

===Computational biology===
{{main|Computational biology}}
[[Computational biology]] involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems.<ref name="nih">
{{cite web
| url = http://www.bisti.nih.gov/docs/compubiodef.pdf
| title = NIH working definition of bioinformatics and computational biology
| date = 17 July 2000
| accessdate = 18 August 2012
| publisher = Biomedical Information Science and Technology Initiative
}}
</ref> The field is broadly defined and includes foundations in computer science, [[applied mathematics]], [[animation]], [[statistics]], [[biochemistry]], [[chemistry]], [[biophysics]], [[molecular biology]], [[genetics]], [[genomics]], [[ecology]], [[evolution]], [[anatomy]], [[neuroscience]], and [[scientific visualization|visualization]].<ref name="brown">
{{cite web
| url = http://www.brown.edu/research/projects/computational-molecular-biology/
| title = About the CCMB
| accessdate = 18 August 2012
| publisher = Center for Computational Molecular Biology
}}
</ref>

Computational biology is different from [[biological computation]], which is a subfield of computer science and [[computer engineering]] using [[bioengineering]] and [[biology]] to build [[computer]]s, but is similar to [[bioinformatics]], which is an interdisciplinary science using computers to store and process biological data.

===Computational geometry===
{{main|Computational geometry}}
[[Computational geometry]] is a branch of computer science devoted to the study of algorithms that can be stated in terms of [[geometry]]. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing, with a history stretching back to antiquity. An ancient precursor is the [[Sanskrit]] treatise [[Shulba Sutras]], or "Rules of the Chord", a book of algorithms written in 800 BCE. The book prescribes step-by-step procedures for constructing geometric objects like altars using a peg and chord.

The main impetus for the development of computational geometry as a discipline was progress in [[computer graphics]] and computer-aided design and manufacturing ([[CAD]]/[[Computer-aided manufacturing|CAM]]), but many problems in computational geometry are classical in nature, and may come from [[mathematical visualization]].

Other important applications of computational geometry include [[robotics]] (motion planning and visibility problems), [[geographic information system]]s (GIS) (geometrical location and search, route planning), [[integrated circuit]] design (IC geometry design and verification), [[computer-aided engineering]] (CAE) (mesh generation), [[computer vision]] (3D reconstruction).
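
Underlying many of these algorithms, from convex hulls to segment intersection, is the orientation predicate; a minimal sketch: the sign of a cross product tells whether point ''c'' lies to the left of, to the right of, or on the directed line from ''a'' to ''b'':

<syntaxhighlight lang="python">
def orientation(a, b, c):
    """Sign of the cross product of vectors a->b and a->c."""
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    if cross > 0:
        return "left turn"      # counter-clockwise
    if cross < 0:
        return "right turn"     # clockwise
    return "collinear"

print(orientation((0, 0), (4, 0), (2, 3)))    # left turn
print(orientation((0, 0), (4, 0), (2, -3)))   # right turn
print(orientation((0, 0), (4, 0), (8, 0)))    # collinear
</syntaxhighlight>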

===Information theory===
{{main|Information theory}}
[[Information theory]] is a branch of [[applied mathematics]], [[electrical engineering]], and [[computer science]] involving the [[Quantification (science)|quantification]] of [[information]]. Information theory was developed by [[Claude E. Shannon]] to find fundamental limits on [[signal processing]] operations such as [[data compression|compressing data]] and on reliably [[Computer data storage|storing]] and [[Telecommunication|communicating]] data. Since its inception it has broadened to find applications in many other areas, including [[statistical inference]], [[natural language processing]], [[cryptography]], [[neurobiology]],<ref>{{cite book|author=F. Rieke, D. Warland, R Ruyter van Steveninck, W Bialek|title=Spikes: Exploring the Neural Code|publisher=The MIT press|year=1997|isbn=978-0262681087}}</ref> the evolution<ref>cf. Huelsenbeck, J. P., F. Ronquist, R. Nielsen and J. P. Bollback (2001) Bayesian inference of phylogeny and its impact on evolutionary biology, ''Science'' '''294''':2310-2314</ref> and function<ref>Rando Allikmets, Wyeth W. Wasserman, Amy Hutchinson, Philip Smallwood, Jeremy Nathans, Peter K. Rogan, [http://alum.mit.edu/www/toms/ Thomas D. Schneider], Michael Dean (1998) Organization of the ABCR gene: analysis of promoter and splice junction sequences, ''Gene'' '''215''':1, 111-122</ref> of molecular codes, model selection in ecology,<ref>Burnham, K. P. and Anderson D. R. (2002) ''Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition'' (Springer Science, New York) ISBN 978-0-387-95364-9.</ref> thermal physics,<ref>Jaynes, E. T. (1957) [http://bayes.wustl.edu/ Information Theory and Statistical Mechanics], ''Phys. Rev.'' '''106''':620</ref> [[quantum computing]], [[linguistics]], plagiarism detection,<ref>Charles H. Bennett, Ming Li, and Bin Ma (2003) [http://sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75 Chain Letters and Evolutionary Histories], ''Scientific American'' '''288''':6, 76-81</ref> [[pattern recognition]], [[anomaly detection]] and other forms of [[data analysis]].<ref>
{{Cite web
| author = David R. Anderson
| title = Some background on why people in the empirical sciences may want to better understand the information-theoretic methods
| date = November 1, 2003
| url = http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf
| format = pdf
| accessdate = 2010-06-23}}
</ref>

Applications of fundamental topics of information theory include [[lossless data compression]] (e.g. [[ZIP (file format)|ZIP files]]), [[lossy data compression]] (e.g. [[MP3]]s and [[JPEG]]s), and [[channel capacity|channel coding]] (e.g. for [[DSL|Digital Subscriber Line (DSL)]]). The field is at the intersection of [[mathematics]], [[statistics]], [[computer science]], [[physics]], [[neurobiology]], and [[electrical engineering]]. Its impact has been crucial to the success of the [[Voyager program|Voyager]] missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the [[Internet]], the study of [[linguistics]] and of human perception, the understanding of [[black hole]]s, and numerous other fields. Important sub-fields of information theory are [[source coding]], [[channel coding]], [[algorithmic complexity theory]], [[algorithmic information theory]], [[information-theoretic security]], and measures of information.
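
The field's central quantity, the Shannon entropy H(X) = −Σ p(x) log₂ p(x), measures the average number of bits per symbol a source requires, and hence the limit of lossless compression; a short sketch:

<syntaxhighlight lang="python">
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit   (fair coin: incompressible)
print(entropy([0.9, 0.1]))   # ~0.47 bits (biased coin: compressible)
print(entropy([0.25] * 4))   # 2.0 bits  (fair four-sided die)
</syntaxhighlight>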

===Cryptography===
{{main|Cryptography}}
[[Cryptography]] is the practice and study of techniques for [[secure communication]] in the presence of third parties (called [[adversary (cryptography)|adversaries]]).<ref name="rivest90">{{cite book|first=Ronald L.|last=Rivest|authorlink=Ron Rivest|editor=J. Van Leeuwen|title=Handbook of Theoretical Computer Science|chapter=Cryptology|volume=1|publisher=Elsevier|year=1990}}</ref> More generally, it is about constructing and analyzing [[communications protocol|protocol]]s that overcome the influence of adversaries<ref name="modern-crypto">{{Cite book|first1=Mihir|last1=Bellare|first2=Phillip|last2=Rogaway|title=Introduction to Modern Cryptography|chapter=Introduction|page=10|date=21 September 2005}}</ref> and that are related to various aspects in [[information security]] such as data [[confidentiality]], [[data integrity]], [[authentication]], and [[non-repudiation]].<ref name="hac">{{cite book |first=A. J. |last=Menezes |first2=P. C. |last2=van Oorschot |first3=S. A. |last3=Vanstone |url=https://web.archive.org/web/20050307081354/www.cacr.math.uwaterloo.ca/hac/ |title=Handbook of Applied Cryptography |publisher= |isbn=0-8493-8523-7}}</ref> Modern cryptography intersects the disciplines of [[mathematics]], [[computer science]], and [[electrical engineering]]. Applications of cryptography include [[automated teller machine|ATM cards]], [[password|computer passwords]], and [[electronic commerce]].

Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around [[computational hardness assumption]]s, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in [[integer factorization]] algorithms, and faster computing technology require these solutions to be continually adapted. There exist [[Information theoretic security|information-theoretically secure]] schemes that {{not a typo|provably}} cannot be broken even with unlimited computing power—an example is the [[one-time pad]]—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
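
The one-time pad mentioned above can be sketched in a few lines: the ciphertext is the XOR of the message with a random key of equal length, and decryption is the same XOR (its security requires that the key be truly random, kept secret, as long as the message, and never reused):

<syntaxhighlight lang="python">
import secrets

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # one-time, uniformly random key
ciphertext = xor_bytes(message, key)
print(xor_bytes(ciphertext, key))         # -> b'attack at dawn'
</syntaxhighlight>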

===Quantum computation===
{{main|Quantum computation}}
A [[quantum computer]] is a [[computation]] system that makes direct use of [[quantum mechanics|quantum-mechanical]] [[phenomena]], such as [[quantum superposition|superposition]] and [[quantum entanglement|entanglement]], to perform [[Instruction (computer science)|operations]] on [[data]].<ref>"[http://cba.mit.edu/docs/papers/98.06.sciqc.pdf Quantum Computing with Molecules]" article in ''[[Scientific American]]'' by [[Neil Gershenfeld]] and [[Isaac L. Chuang]]</ref> Quantum computers are different from digital computers based on [[transistor]]s. Whereas digital computers require data to be encoded into binary digits ([[bit]]s), each of which is always in one of two definite states (0 or 1), quantum computation uses [[qubits]] (quantum bits), which can be in [[quantum superposition|superpositions]] of states. A theoretical model is the [[quantum Turing machine]], also known as the universal quantum computer. Quantum computers share theoretical similarities with [[Non-deterministic Turing machine|non-deterministic]] and [[probabilistic automaton|probabilistic computers]]; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by [[Yuri Manin]] in 1980<ref name="manin1980vychislimoe">{{cite book| author=Manin, Yu. I.| title=Vychislimoe i nevychislimoe |trans_title=Computable and Noncomputable | year=1980| publisher=Sov.Radio| url=http://publ.lib.ru/ARCHIVES/M/MANIN_Yuriy_Ivanovich/Manin_Yu.I._Vychislimoe_i_nevychislimoe.(1980).%5Bdjv%5D.zip| pages=13–15| language=Russian| accessdate=4 March 2013}}</ref> and [[Richard Feynman]] in 1982.<ref name="Feynman82">{{cite journal |last=Feynman |first=R. P. |title=Simulating physics with computers |journal=[[International Journal of Theoretical Physics]] |year=1982 |volume=21 |issue=6 |pages=467–488 |doi=10.1007/BF02650179 }}</ref><ref>{{cite journal |title=Quantum computation |authorlink=David Deutsch |first=David |last=Deutsch |journal=Physics World |date=1992-01-06 }}</ref> A quantum computer with spins as quantum bits was also formulated for use as a quantum [[space–time]] in 1968.<ref>{{cite book |first=David |last=Finkelstein |chapter=Space-Time Structure in High Energy Interactions |title=Fundamental Interactions at High Energy |editor1-first=T. |editor1-last=Gudehus |editor2-first=G. |editor2-last=Kaiser |location=New York |publisher=Gordon & Breach |year=1968 }}</ref>
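
A qubit can be simulated classically as a two-component state vector (a sketch with NumPy, not a real quantum device): a Hadamard gate sends |0⟩ to an equal superposition, and measurement probabilities are the squared magnitudes of the amplitudes:

<syntaxhighlight lang="python">
import numpy as np

ket0 = np.array([1.0, 0.0])                   # basis state |0>
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)      # Hadamard gate

state = H @ ket0                              # (|0> + |1>) / sqrt(2)
print(np.abs(state) ** 2)                     # [0.5 0.5]: equal superposition
</syntaxhighlight>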

{{as of|2014}}, quantum computing is still in its infancy but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.<ref>{{cite web|url=http://phys.org/news/2013-01-qubit-bodes-future-quantum.html|title=New qubit control bodes well for future of quantum computing|publisher=|accessdate=26 October 2014}}</ref> Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum [[computer]]s for both civilian and national security purposes, such as [[cryptanalysis]].<ref>[http://qist.lanl.gov/qcomp_map.shtml Quantum Information Science and Technology Roadmap] for a sense of where the research is heading.</ref>

===Computational number theory===
{{main|Computational number theory}}
[[Computational number theory]], also known as '''algorithmic number theory''', is the study of [[algorithm]]s for performing [[number theory|number theoretic]] [[computation]]s. The best known problem in the field is [[integer factorization]].
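
Trial division, sketched below, factors ''n'' in time exponential in its bit-length, which is precisely why factoring large integers is believed to be hard and why faster algorithms are a central research topic:

<syntaxhighlight lang="python">
def factorize(n):
    """Prime factorization by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize(8051))       # [83, 97]
print(factorize(2**16 + 1))  # [65537]: a prime
</syntaxhighlight>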

===Symbolic computation===
{{main|Symbolic computation}}
[[Computer algebra]], also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of [[algorithm]]s and [[software]] for manipulating [[expression (mathematics)|mathematical expressions]] and other [[mathematical object]]s. Although, properly speaking, computer algebra should be a subfield of [[scientific computing]], they are generally considered as distinct fields, because scientific computing is usually based on [[numerical computation]] with approximate [[floating point number]]s, while symbolic computation emphasizes ''exact'' computation with expressions containing [[variable (mathematics)|variable]]s that have no given value and are thus manipulated as symbols (hence the name ''symbolic computation'').

[[Software]] applications that perform symbolic calculations are called ''[[computer algebra system]]s'', with the term ''system'' alluding to the complexity of the main applications. These include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a [[user interface]] for the input/output of mathematical expressions, and a large set of [[function (computer science)|routines]] to perform usual operations, like simplification of expressions, [[differentiation (mathematics)|differentiation]] using the [[chain rule]], [[polynomial factorization]], and [[indefinite integration]].
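
A short sketch using the SymPy computer algebra system: the variable x below has no numeric value and is manipulated as a symbol, so the results are exact expressions rather than floating-point approximations:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
print(sp.diff(x**2 * sp.sin(x), x))  # x**2*cos(x) + 2*x*sin(x)
print(sp.factor(x**2 - 2*x + 1))     # (x - 1)**2
print(sp.integrate(sp.cos(x), x))    # sin(x)
</syntaxhighlight>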

===Program semantics===
{{main|Program semantics}}
In [[programming language theory]], '''semantics''' is the field concerned with the rigorous mathematical study of the meaning of [[programming language]]s. It does so by evaluating the meaning of [[programming language syntax|syntactically]] legal [[String (computer science)|strings]] defined by a specific programming language, showing the computation involved; evaluating a syntactically illegal string yields no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain [[computer platform|platform]], hence creating a [[model of computation]].
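
A toy denotational semantics can be sketched directly: the interpreter below maps every syntactically legal term of a small expression language to its mathematical meaning, a number, by structural recursion over the syntax tree:

<syntaxhighlight lang="python">
def meaning(term):
    """term is a number, or a tuple (op, left, right)."""
    if isinstance(term, (int, float)):
        return term                          # a literal denotes itself
    op, left, right = term
    if op == "+":
        return meaning(left) + meaning(right)
    if op == "*":
        return meaning(left) * meaning(right)
    raise ValueError("illegal term: no meaning")  # non-computation

# the meaning of (1 + 2) * 4 is the number 12
print(meaning(("*", ("+", 1, 2), 4)))        # -> 12
</syntaxhighlight>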

===Formal methods===
{{main|Formal methods}}
[[Formal methods]] are a particular kind of [[mathematics|mathematically]] based techniques for the [[formal specification|specification]], development and [[formal verification|verification]] of [[software]] and [[computer hardware|hardware]] systems.<ref name="butler">{{cite web|author=R. W. Butler|title=What is Formal Methods?|url=http://shemesh.larc.nasa.gov/fm/fm-what.html|date=2001-08-06|accessdate=2006-11-16}}</ref> The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.<ref>{{cite journal|author=C. Michael Holloway|title=Why Engineers Should Consider Formal Methods|url=http://klabs.org/richcontent/verification/holloway/nasa-97-16dasc-cmh.pdf| publisher=16th Digital Avionics Systems Conference (27–30 October 1997)|accessdate=2006-11-16}}</ref>

Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular [[logic in computer science|logic]] calculi, [[formal language]]s, [[automata theory]], and [[program semantics]], but also [[type systems]] and [[algebraic data types]] to problems in software and hardware specification and verification.<ref>Monin, pp.3-4</ref>
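
One such technique, exhaustive state exploration (the idea behind model checking), can be sketched by verifying that an invariant holds in every reachable state of a tiny transition system:

<syntaxhighlight lang="python">
def reachable_states(initial, step):
    """All states reachable from `initial` under the transition relation."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state not in seen:
            seen.add(state)
            frontier.extend(step(state))
    return seen

step = lambda n: [(n + 1) % 4]        # a counter that wraps modulo 4
invariant = lambda n: 0 <= n < 4      # the property to verify

states = reachable_states(0, step)
print(all(invariant(s) for s in states))   # True: holds in every state
</syntaxhighlight>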

===Automata theory===
{{main|Automata theory}}
[[Automata theory]] is the study of ''[[abstract machine]]s'' and ''[[automaton|automata]]'', as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under [[Discrete mathematics]] (a section of [[Mathematics]] and also of [[Computer Science]]). ''Automata'' comes from the Greek word αὐτόματα meaning "self-acting".

Automata theory is the study of self-operating virtual machines, used to aid in the logical understanding of the process from input to output, with or without intermediate stage(s) of [[computation]] (or any [[Function (engineering)|function]]/process).
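
A deterministic finite automaton is just a transition table plus a set of accepting states; the sketch below accepts exactly the binary strings containing an even number of 1s:

<syntaxhighlight lang="python">
transitions = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd",  "0"): "odd",  ("odd",  "1"): "even"}
accepting = {"even"}

def dfa_accepts(string):
    state = "even"                        # start state
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

print(dfa_accepts("1001"))   # True  (two 1s)
print(dfa_accepts("1011"))   # False (three 1s)
</syntaxhighlight>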

===Coding theory===
{{main|Coding theory}}
[[Coding theory]] is the study of the properties of codes and their fitness for a specific application. Codes are used for [[data compression]], [[cryptography]], [[error-correction]] and more recently also for [[network coding]]. Codes are studied by various scientific disciplines—such as [[information theory]], [[electrical engineering]], [[mathematics]], and [[computer science]]—for the purpose of designing efficient and reliable [[data transmission]] methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
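
The 3-fold repetition code, sketched below, adds redundancy so that any single flipped bit per triple is corrected by majority vote, trading a 3x expansion of the data for reliability:

<syntaxhighlight lang="python">
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    triples = zip(received[0::3], received[1::3], received[2::3])
    return [1 if sum(t) >= 2 else 0 for t in triples]

codeword = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
codeword[4] = 1                # the channel flips one bit
print(decode(codeword))        # -> [1, 0, 1]: the error is corrected
</syntaxhighlight>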

===Computational learning theory===
{{main|Computational learning theory}}
Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance, such as minimizing the number of mistakes made on new samples.
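
A sketch with made-up mushroom data: the learner below induces a classifier (here a simple majority-vote rule per feature value) from previously labeled samples:

<syntaxhighlight lang="python">
from collections import Counter

samples = [("smooth", "edible"), ("smooth", "edible"),
           ("spotted", "poisonous"), ("spotted", "poisonous"),
           ("spotted", "edible")]          # labeled mushroom descriptions

def induce_classifier(labeled):
    votes = {}
    for feature, label in labeled:
        votes.setdefault(feature, Counter())[label] += 1
    rule = {f: c.most_common(1)[0][0] for f, c in votes.items()}
    return lambda feature: rule[feature]   # the induced classifier

classify = induce_classifier(samples)
print(classify("spotted"))   # -> poisonous (the majority training label)
</syntaxhighlight>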

== Organizations ==
* [[European Association for Theoretical Computer Science]]
* [[SIGACT]]

== Journals and newsletters ==
* ''[[Information and Computation]]''
* ''[[Theory of Computing (journal)|Theory of Computing]]'' ([[Open access (publishing)|open access]] journal)
* ''[[Formal Aspects of Computing]]''
* ''[[Journal of the ACM]]''
* ''[[SIAM Journal on Computing]]'' (SICOMP)
* ''[[SIGACT News]]''
* ''[[Theoretical Computer Science (journal)|Theoretical Computer Science]]''
* ''[[Theory of Computing Systems]]''
* ''[[International Journal of Foundations of Computer Science]]''
* ''[[Chicago Journal of Theoretical Computer Science]]'' ([[Open access (publishing)|open access]] journal)
* ''[[Foundations and Trends in Theoretical Computer Science]]''
* ''[[Journal of Automata, Languages and Combinatorics]]''
* ''[[Acta Informatica]]''
* ''[[Fundamenta Informaticae]]''
* ''[[ACM Transactions on Computation Theory]]''
* ''[[Computational Complexity]]''
* ''[[ACM Transactions on Algorithms]]''
* ''[[Information Processing Letters]]''

== Conferences ==
* Annual ACM [[Symposium on Theory of Computing]] (STOC)<ref name="core-a-plus">The [http://www.core.edu.au/rankings/Conference%20Ranking%20Main.html 2007 Australian Ranking of ICT Conferences]: tier A+.</ref>
* Annual IEEE [[Symposium on Foundations of Computer Science]] (FOCS)<ref name="core-a-plus"/>
* ACM–SIAM [[Symposium on Discrete Algorithms]] (SODA)<ref name="core-a-plus"/>
* Annual [[Symposium on Computational Geometry]] (SoCG)<ref name="core-a">The [http://www.core.edu.au/rankings/Conference%20Ranking%20Main.html 2007 Australian Ranking of ICT Conferences]: tier A.</ref>
* [[International Colloquium on Automata, Languages and Programming]] (ICALP)<ref name="core-a"/>
* [[Symposium on Theoretical Aspects of Computer Science]] (STACS)<ref name="core-a"/>
* [[International Conference on Theory and Applications of Models of Computation]] (TAMC)
* [[European Symposium on Algorithms]] (ESA)<ref name="core-a"/>
* IEEE [[Symposium on Logic in Computer Science]] (LICS)<ref name="core-a-plus"/>
* [[International Symposium on Algorithms and Computation]] (ISAAC)<ref name="core-a"/>
* [[Workshop on Approximation Algorithms for Combinatorial Optimization Problems]] (APPROX)<ref name="core-a"/>
* [[Workshop on Randomization and Computation]] (RANDOM)<ref name="core-a"/>
* [[Computational Complexity Conference]] (CCC)<ref name="core-a"/>
* ACM [[Symposium on Parallelism in Algorithms and Architectures]] (SPAA)<ref name="core-a"/>
* ACM [[Symposium on Principles of Distributed Computing]] (PODC)<ref name="core-a-plus"/>
* [[International Symposium on Fundamentals of Computation Theory]] (FCT)<ref>[http://fct11.ifi.uio.no/ FCT 2011] (retrieved 2013-06-03)</ref>
* [[Annual Conference on Learning Theory]] (COLT)<ref name="core-a"/>
* [[International Workshop on Graph-Theoretic Concepts in Computer Science]] (WG)

==See also==
* [[Formal science]]
* [[Unsolved problems in computer science]]
* [[List of important publications in theoretical computer science]]

== Notes ==
<references/>

== Further reading ==

* [[Martin Davis]], Ron Sigal, Elaine J. Weyuker, ''Computability, complexity, and languages: fundamentals of theoretical computer science'', 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1. Covers [[theory of computation]], but also [[program semantics]] and [[quantification theory]]. Aimed at graduate students.

== External links ==
* [http://www.sigact.org/webpages.php SIGACT directory of additional theory links]
* [http://theorymatters.org/ Theory Matters Wiki] Theoretical Computer Science (TCS) Advocacy Wiki
* [news://comp.theory Usenet comp.theory]
* [http://www.confsearch.org/confsearch/faces/pages/topic.jsp?topic=Theory&sortMode=1&graphicView=1 List of academic conferences in the area of theoretical computer science] at [http://www.confsearch.org confsearch]
* [http://cstheory.stackexchange.com/ Theoretical Computer Science - StackExchange], a Question and Answer site for researchers in theoretical computer science
* [http://www.csanimated.com/browse.php Computer Science Animated]
* http://theory.csail.mit.edu/ @ [[Massachusetts Institute of Technology]]

{{Computer science}}

{{Authority control}}
{{DEFAULTSORT:Theoretical Computer Science}}
[[Category:Theoretical computer science|*]]
[[Category:Formal sciences]]

Revision as of 18:45, 24 May 2015

An artistic representation of a Turing machine. Turing machines are used to model computing devices.

Theoretical computer science is a division or subset of general computer science and mathematics that focuses on more abstract or mathematical aspects of computing, and includes the theory of computation.

It is not easy to circumscribe the theoretical areas precisely; the ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) describes its mission as the promotion of theoretical computer science and notes:[1]

The field of theoretical computer science is interpreted broadly so as to include algorithms, data structures, computational complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

To the above list, the ACM's journal Transactions on Computation Theory adds coding theory, computational learning theory, and theoretical computer science aspects of areas such as databases, information retrieval, economic models, and networks.[2] Despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people." Some characterize themselves as doing the "(more fundamental) 'science(s)' underlying the field of computing."[3] Other "theory-applied people" suggest that it is impossible to separate theory and application. This means that the so-called "theory people" regularly use experimental science(s) in less-theoretical areas such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application.

History

While formal algorithms have existed for millennia (Euclid's algorithm for determining the greatest common divisor of two numbers is still used in computation), it was not until 1936 that Alan Turing, Alonzo Church and Stephen Kleene formalized the definition of an algorithm in terms of computation. While binary and logical systems of mathematics had existed before 1703, it was Gottfried Leibniz who formalized logic with binary values for true and false. While logical inference and mathematical proof had existed in ancient times, in 1931 Kurt Gödel proved with his incompleteness theorem that there were fundamental limitations on what theorems could be proved or disproved.

These developments have led to the modern study of logic and computability, and indeed the field of theoretical computer science as a whole. Information theory was added to the field in 1948 with Claude Shannon's mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis, with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and Leonid Levin, working independently, proved that there exist practically relevant problems that are NP-complete – a landmark result that gave rise to computational complexity theory.

With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, which took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render most modern public key cryptography systems uselessly insecure.[citation needed]

Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed.


Topics

Algorithms

Main article: Algorithm

An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.

An algorithm is an effective method expressed as a finite list[4] of well-defined instructions[5] for calculating a function.[6] Starting from an initial state and initial input (perhaps empty),[7] the instructions describe a computation that, when executed, proceeds through a finite[8] number of well-defined successive states, eventually producing "output"[9] and terminating at a final state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[10]
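
Euclid's algorithm, mentioned in the history above, fits this definition exactly; a sketch:

<syntaxhighlight lang="python">
def gcd(a, b):
    """Euclid's algorithm for the greatest common divisor."""
    while b != 0:            # each iteration is one well-defined state change
        a, b = b, a % b      # the state is the pair (a, b)
    return a                 # final state: the output

print(gcd(1071, 462))        # -> 21
</syntaxhighlight>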

Data structures

Main article: Data structure

A data structure is a particular way of organizing data in a computer so that it can be used efficiently.[11][12]

Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes for small percentages of data retrieval, and compilers and databases use dynamic hash tables as lookup tables.

Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory.
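
A sketch of why the choice matters: the same membership query answered by scanning a list and by probing a hash table, the way compilers and databases use lookup tables:

<syntaxhighlight lang="python">
records = ["user%d" % i for i in range(100_000)]

as_list = records              # lookup may scan all 100,000 entries: O(n)
as_hash = set(records)         # lookup hashes to the entry: O(1) expected

print("user99999" in as_list)  # True, after ~100,000 comparisons
print("user99999" in as_hash)  # True, after ~1 hash probe
</syntaxhighlight>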

Computational complexity theory

Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and on relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.



Notes

  1. "SIGACT". Retrieved 29 March 2009.
  2. "ToCT". Retrieved 9 June 2010.
  3. "Challenges for Theoretical Computer Science: Theory as the Scientific Foundation of Computing". Retrieved 29 March 2009.
  4. "Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2).
  5. Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1987:2).
  6. "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) ... this limitation (to numerical functions) results in no loss of generality" (Rogers 1987:1).
  7. "An algorithm has zero or more inputs, i.e., quantities which are given to it initially before the algorithm begins" (Knuth 1973:5).
  8. "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5).
  9. "An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5).
  10. Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a computation is carried out in a discrete stepwise fashion, without use of continuous methods or analogue devices ... carried forward deterministically, without resort to random methods or devices, e.g., dice" (Rogers 1987:2).
  11. Paul E. Black (ed.), entry for data structure in Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. 15 December 2004. Online version accessed 21 May 2009.
  12. Entry data structure in the Encyclopædia Britannica (2009). Online entry accessed 21 May 2009.
  13. Coulouris, George; Dollimore, Jean; Kindberg, Tim; Blair, Gordon (2011). Distributed Systems: Concepts and Design (5th ed.). Boston: Addison-Wesley. ISBN 0-132-14301-1.
  14. Andrews (2000). Dolev (2000). Ghosh (2007), p. 10.
  15. Gottlieb, Allan; Almasi, George S. (1989). Highly Parallel Computing. Redwood City, Calif.: Benjamin/Cummings. ISBN 0-8053-0177-1.
  16. S.V. Adve et al. (November 2008). "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits – increased clock frequency and smarter but increasingly complex architectures – are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."
  17. Asanovic et al. Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".
  18. Asanovic, Krste et al. (18 December 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance ... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."
  19. Hennessy, John L.; Patterson, David A.; Larus, James R. (1999). Computer Organization and Design: The Hardware/Software Interface (2nd ed., 3rd printing). San Francisco: Kaufmann. ISBN 1-55860-428-6.
  20. Ron Kohavi; Foster Provost (1998). "Glossary of terms". Machine Learning 30: 271–274. http://ai.stanford.edu/~ronnyk/glossary.html.
  21. C. M. Bishop (2006). Pattern Recognition and Machine Learning. Springer. ISBN 0-387-31073-8.
  22. Wernick, Yang, Brankov, Yourganov and Strother, Machine Learning in Medical Imaging, IEEE Signal Processing Magazine, vol. 27, no. 4, July 2010, pp. 25–38.
  23. Mannila, Heikki (1996). "Data mining: machine learning, statistics, and databases". Int'l Conf. Scientific and Statistical Database Management. IEEE Computer Society.
  24. Friedman, Jerome H. (1998). "Data Mining and Statistics: What's the connection?". Computing Science and Statistics 29 (1): 3–9.
  25. "NIH working definition of bioinformatics and computational biology" (PDF). Biomedical Information Science and Technology Initiative. 17 July 2000. Retrieved 18 August 2012.
  26. "About the CCMB". Center for Computational Molecular Biology. Retrieved 18 August 2012.
  27. F. Rieke; D. Warland; R. Ruyter van Steveninck; W. Bialek (1997). Spikes: Exploring the Neural Code. The MIT Press. ISBN 978-0262681087.
  28. cf. Huelsenbeck, J. P., F. Ronquist, R. Nielsen and J. P. Bollback (2001) Bayesian inference of phylogeny and its impact on evolutionary biology, Science 294:2310–2314.
  29. Rando Allikmets, Wyeth W. Wasserman, Amy Hutchinson, Philip Smallwood, Jeremy Nathans, Peter K. Rogan, Thomas D. Schneider, Michael Dean (1998) Organization of the ABCR gene: analysis of promoter and splice junction sequences, Gene 215:1, 111–122.
  30. Burnham, K. P. and Anderson, D. R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition (Springer Science, New York) ISBN 978-0-387-95364-9.
  31. Jaynes, E. T. (1957) Information Theory and Statistical Mechanics, Phys. Rev. 106:620.
  32. Charles H. Bennett, Ming Li, and Bin Ma (2003) Chain Letters and Evolutionary Histories, Scientific American 288:6, 76–81.
  33. David R. Anderson (1 November 2003). "Some background on why people in the empirical sciences may want to better understand the information-theoretic methods" (PDF). Retrieved 23 June 2010.
  34. Rivest, Ronald L. (1990). "Cryptology". In: J. Van Leeuwen. Handbook of Theoretical Computer Science. 1. Elsevier.
  35. Bellare, Mihir; Rogaway, Phillip (21 September 2005). "Introduction". Introduction to Modern Cryptography. p. 10.
  36. Menezes, A. J.; van Oorschot, P. C.; Vanstone, S. A. Handbook of Applied Cryptography. ISBN 0-8493-8523-7.
  37. "Quantum Computing with Molecules" article in Scientific American by Neil Gershenfeld and Isaac L. Chuang.
  38. Manin, Yu. I. (1980). Vychislimoe i nevychislimoe [Computable and Noncomputable] (in Russian). Sov. Radio. pp. 13–15. Retrieved 4 March 2013.
  39. Feynman, R. P. (1982). "Simulating physics with computers". International Journal of Theoretical Physics 21 (6): 467–488. doi:10.1007/BF02650179.
  40. Deutsch, David (6 January 1992). "Quantum computation". Physics World.
  41. Finkelstein, David (1968). "Space-Time Structure in High Energy Interactions". In: Gudehus, T.; Kaiser, G. Fundamental Interactions at High Energy. New York: Gordon & Breach.
  42. "New qubit control bodes well for future of quantum computing". Retrieved 26 October 2014.
  43. Quantum Information Science and Technology Roadmap for a sense of where the research is heading.
  44. R. W. Butler (6 August 2001). "What is Formal Methods?". Retrieved 16 November 2006.
  45. C. Michael Holloway. Why Engineers Should Consider Formal Methods. 16th Digital Avionics Systems Conference (27–30 October 1997). http://klabs.org/richcontent/verification/holloway/nasa-97-16dasc-cmh.pdf. Retrieved 16 November 2006.
  46. Monin, pp. 3–4.
  47. The 2007 Australian Ranking of ICT Conferences: tier A+.
  48. The 2007 Australian Ranking of ICT Conferences: tier A.
  49. FCT 2011 (retrieved 3 June 2013).

Further reading

External links
