Foundations of mathematics

From Wikipedia, the free encyclopedia

(Image: the Grundlagen der Mathematik, the text by Hilbert and Bernays)

The foundations of mathematics is the study of basic mathematical concepts such as numbers, geometrical figures, sets, functions, etc., and of how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics: formulas, theories and their models (which give meaning to formulas), definitions, proofs, algorithms, etc., also called metamathematical concepts, with attention to the philosophical aspects and to the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges.

But the foundations of mathematics as a whole do not aim to contain the foundations of every mathematical topic. Generally, the foundations of a field of study refer to a more or less systematic analysis of its most basic or fundamental concepts, its conceptual unity and its natural ordering or hierarchy of concepts, which may help to connect it with the rest of human knowledge. The development, emergence and clarification of foundations can come late in the history of a field, and may not be viewed by everyone as its most interesting part.

Mathematics has always played a special role in scientific thought, serving since ancient times as a model of truth and rigor for rational inquiry, and providing tools or even foundations for other sciences (especially physics). But the drive of much of mathematics toward ever higher abstractions in the 19th century brought new paradoxes and challenges, demanding a deeper and more systematic examination of the nature and criteria of mathematical truth, as well as a unification of the diverse branches of mathematics into a coherent whole.

The systematic search for the foundations of mathematics began at the end of the 19th century and formed a new mathematical discipline called mathematical logic, with strong links to theoretical computer science. It went through a series of crises with paradoxical results, until the discoveries stabilized during the 20th century as a large and coherent body of mathematical knowledge with several aspects or components (set theory, model theory, proof theory, ...), whose detailed properties and possible variants are still an active research field. Its high level of technical sophistication inspired many philosophers to conjecture that it can serve as a model or pattern for the foundations of other sciences.

Foundational crisis

The foundational crisis of mathematics (in the original German, Grundlagenkrise der Mathematik) was a term coined at the beginning of the 20th century for the theoretical situation that led to a systematic and deep investigation of the foundations of mathematics, and that ended up inaugurating a new branch of mathematics.

Several philosophical schools of mathematics ran into difficulties one after the other, as the assumption that the foundations of mathematics could be consistently justified within mathematics itself was called into question by the discovery of various paradoxes (among them the famous Russell's paradox).

The term "paradox" should not be confused with "contradiction". A contradiction in a formal theory is a formal proof of an absurdity inside the theory (such as 2 + 2 = 5), as a result of an inappropriate set of assumptions; a set of axioms or a theory that yields a contradiction is classified as inconsistent and must be rejected as a useful theory (since within it every proposition would become provable). A paradox, by contrast, may be either a counterintuitive but true result, or an informal argument leading to a contradiction, so that a candidate theory in which a formalization of the argument is attempted must disallow at least one of its steps; in that case the problem is to find a satisfactory theory without contradiction. Both meanings may apply if the formalized version of the argument forms the proof of a surprising truth. For instance, Russell's paradox can be expressed as "there is no set containing all sets" (except in a few marginal axiomatic theories).
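Russell's paradox itself is a one-line argument: if naive comprehension allowed the "set of all sets that are not members of themselves", that set could neither belong to itself nor fail to belong to itself:

```latex
R = \{\, x \mid x \notin x \,\} \;\Longrightarrow\; \bigl( R \in R \iff R \notin R \bigr).
```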

Several schools of thought on the right approach to the foundations of mathematics fiercely opposed each other. The leading school was the formalist approach, of which David Hilbert was the foremost proponent, culminating in what is known as Hilbert's program, which sought to ground mathematics on a small basis of a logical system proved sound by metamathematical finitistic means. The main opponent was the intuitionist school, led by L. E. J. Brouwer, who resolutely discarded formalism as a futile game with symbols (van Dalen, 2008). The fight was acrimonious. In 1920 Hilbert succeeded in having Brouwer, whom he considered a threat to mathematics, removed from the editorial board of Mathematische Annalen, the leading mathematical journal of the time.

Philosophical perspectives

At the beginning of the 20th century, three schools of the philosophy of mathematics held opposing views on the foundations of mathematics: formalism, intuitionism and logicism.

Formalism

The position of the formalists, as stated by David Hilbert (1862–1943), is that mathematics is only a formal language and a series of games. Indeed, Hilbert used the expression "formula game" in his 1927 response to the criticisms of L. E. J. Brouwer:

"And to what has the formula game thus made possible been successful? This formula game enables us to express the entire thought-content of the science of mathematics in a uniform manner and develop it in such a way that, at the same time, the interconnections between the individual propositions and facts become clear . . . The formula game that Brouwer so deprecates has, besides its mathematical value, an important general philosophical significance. For this formula game is carried out according to certain definite rules, in which the technique of our thinking is expressed. These rules form a closed system that can be discovered and definitively stated."[1]

Hilbert thus insisted that mathematics is not an "arbitrary" game with "arbitrary" rules, but rather a game that must agree with our thinking, which is the starting point of our spoken and written exposition.[1]

"We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise".[2]

The early philosophy of formalism, as exemplified by David Hilbert, is a response to the paradoxes of axiomatic set theory, and is based on formal logic. Virtually all mathematical theorems today can be formulated as theorems of set theory. The truth of a mathematical statement, in this view, is represented by the fact that the statement can be derived from the axioms of set theory using the rules of formal logic.

The use of formalism alone does not settle several questions: why we should use these axioms and not others, why we should employ these logical rules and not others, why "true" mathematical statements (e.g., the laws of arithmetic) appear to be true, and so on. Hermann Weyl put these very questions to Hilbert:

"What "truth" or objectivity can be ascribed to this theoretic construction of the world, which presses far beyond the given, is a profound philosophical problem. It is closely connected with the further question: what impels us to take as a basis precisely the particular axiom system developed by Hilbert? Consistency is indeed a necessary but not a sufficient condition. For the time being we probably cannot answer this question . . .."[3]

In some cases these questions may be sufficiently answered through the study of formal theories, in disciplines such as reverse mathematics and computational complexity theory. As noted by Weyl, formal logical systems also run the risk of inconsistency; in Peano arithmetic, this has arguably been settled by several consistency proofs, but there is debate over whether they are sufficiently finitistic to be meaningful. Gödel's second incompleteness theorem establishes that logical systems of arithmetic can never contain a valid proof of their own consistency. What Hilbert wanted to do was prove that a logical system S is consistent, based on principles P that form only a small part of S. But Gödel proved that the principles P could not even prove their own consistency, let alone that of S.
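Gödel's obstruction to Hilbert's plan can be stated compactly. Writing Con(T) for the arithmetical sentence expressing the consistency of a theory T, the second incompleteness theorem says:

```latex
\text{If } T \text{ is consistent, recursively axiomatizable, and contains elementary arithmetic, then } T \nvdash \mathrm{Con}(T).
```

In particular a finitistic fragment P of S cannot prove Con(S), since P cannot even prove Con(P).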

Intuitionism

Intuitionists, such as Brouwer (1882–1966), hold that mathematics is a creation of the human mind. Numbers, like fairy-tale characters, are merely mental entities, which would not exist if human minds never thought of them.

The foundational philosophy of intuitionism or constructivism, as exemplified in the extreme by Brouwer and more coherently by Stephen Kleene, requires proofs to be "constructive" in nature: the existence of an object must be demonstrated, rather than inferred from a demonstration of the impossibility of its non-existence. As a consequence, for example, the form of proof known as reductio ad absurdum is viewed with suspicion.

Some modern theories in the philosophy of mathematics deny the existence of foundations in the original sense. Some theories tend to focus on mathematical practice, and aim to describe and analyze the actual working of mathematicians as a social group. Others try to create a cognitive science of mathematics, focusing on human cognition as the origin of the reliability of mathematics when applied to the real world. These theories would propose to find foundations only in human thought, not in any objective external construct. The matter remains controversial.

Logicism

Logicism is one of the schools of thought in the philosophy of mathematics, holding that mathematics is essentially an extension of logic and that therefore some or all of mathematics is reducible to logic. Bertrand Russell and Alfred North Whitehead championed this line of thought, inaugurated by Gottlob Frege.

Set-theoretic Platonism

Many researchers in axiomatic set theory have subscribed to what is known as set-theoretic Platonism, exemplified by the mathematician Kurt Gödel.

Several set theorists followed this approach and actively searched for possible axioms that may be considered as true for heuristic reasons and that would decide the continuum hypothesis. Many large cardinal axioms were studied, but the continuum hypothesis remained independent of them. Other kinds of axioms were considered, but none of them has so far reached consensus as a solution to the continuum problem.

Indispensability argument for realism

This argument by Willard Quine and Hilary Putnam says (in Putnam's shorter words),

quantification over mathematical entities is indispensable for science...; therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question.

However, Putnam was not a Platonist.

Rough-and-ready realism

Few mathematicians are typically concerned, on a daily working basis, with logicism, formalism or any other philosophical position. Instead, their primary concern is that the mathematical enterprise as a whole always remains productive. Typically, they see this as ensured by remaining open-minded, practical and busy, and as potentially threatened by becoming overly ideological, fanatically reductionistic or lazy. Such a view was expressed by the Physics Nobel Prize laureate Richard Feynman:

People say to me, “Are you looking for the ultimate laws of physics?” No, I’m not… If it turns out there is a simple ultimate law which explains everything, so be it — that would be very nice to discover. If it turns out it’s like an onion with millions of layers… then that’s the way it is. But either way there’s Nature and she’s going to come out the way She is. So therefore when we go to investigate we shouldn’t predecide what it is we’re looking for only to find out more about it. Now you ask: “Why do you try to find out more about it?” If you began your investigation to get an answer to some deep philosophical question, you may be wrong. It may be that you can’t get an answer to that particular question just by finding out more about the character of Nature. But that’s not my interest in science; my interest in science is to simply find out about the world and the more I find out the better it is, I like to find out…[4]
Philosophers, incidentally, say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong.[5]

and also by Steven Weinberg:[6]

The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion—by protecting them from the preconceptions of other philosophers.(...) without some guidance from our preconceptions one could do nothing at all. It is just that philosophical principles have not generally provided us with the right preconceptions.
Physicists do of course carry around with them a working philosophy. For most of us, it is a rough-and-ready realism, a belief in the objective reality of the ingredients of our scientific theories. But this has been learned through the experience of scientific research and rarely from the teachings of philosophers. (...) we should not expect [the philosophy of science] to provide today's scientists with any useful guidance about how to go about their work or about what they are likely to find. (...)
After a few years' infatuation with philosophy as an undergraduate I became disenchanted. The insights of the philosophers I studied seemed murky and inconsequential compared with the dazzling successes of physics and mathematics. From time to time since then I have tried to read current work on the philosophy of science. Some of it I found to be written in a jargon so impenetrable that I can only think that it aimed at impressing those who confound obscurity with profundity. (...) But only rarely did it seem to me to have anything to do with the work of science as I knew it. (...)
I am not alone in this; I know of no one who has participated actively in the advance of physics in the postwar period whose research has been significantly helped by the work of philosophers. I raised in the previous chapter the problem of what Wigner calls the "unreasonable effectiveness" of mathematics; here I want to take up another equally puzzling phenomenon, the unreasonable ineffectiveness of philosophy.
Even where philosophical doctrines have in the past been useful to scientists, they have generally lingered on too long, becoming of more harm than ever they were of use.

Gödel believed that any undecidability in mathematics, such as the continuum hypothesis, could potentially be resolved, despite the incompleteness theorem, by finding suitable further axioms to add to set theory.

Philosophical consequences of the completeness theorem

The completeness theorem establishes an equivalence, in first-order logic, between the formal provability of a formula and its truth in all possible models. Precisely, for any consistent first-order theory it gives an "explicit construction" of a model described by the theory; this model will be countable if the language of the theory is countable. However, this "explicit construction" is not algorithmic. It is based on an iterative process of completion of the theory, where each step of the iteration consists in adding a formula to the axioms if doing so keeps the theory consistent; but this consistency question is only semi-decidable (an algorithm is available to find any contradiction, but if there is none, the fact of consistency can remain unprovable).
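In symbols, together with the soundness theorem, the result equates provability with truth in all models, and its proof yields a model of any consistent theory (countable whenever the language is countable):

```latex
T \vdash \varphi \;\iff\; T \models \varphi, \qquad
T \text{ consistent} \;\Longrightarrow\; T \text{ has a model.}
```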

This can be seen as giving a sort of justification to the Platonist view that the objects of our mathematical theories are real. More precisely, it shows that the mere assumption of the existence of the set of natural numbers as a totality (an actual infinity) suffices to imply the existence of a model (a world of objects) of any consistent theory. However, several difficulties remain:

  • For any consistent theory this usually does not give just one world of objects, but an infinity of possible worlds that the theory might equally describe, with a possible diversity of truths between them.
  • In the case of set theory, none of the models obtained by this construction resemble the intended model, as they are countable while set theory intends to describe uncountable infinities. Similar remarks can be made in many other cases. For example, with theories that include arithmetic, such constructions generally give models that include non-standard numbers, unless the construction method was specifically designed to avoid them.
  • As it gives models to all consistent theories without distinction, it gives no reason to accept or reject any axiom as long as the theory remains consistent, but regards all consistent axiomatic theories as referring to equally existing worlds. It gives no indication on which axiomatic system should be preferred as a foundation of mathematics.
  • As claims of consistency are usually unprovable, they remain a matter of belief or of non-rigorous kinds of justification. Hence the existence of models as given by the completeness theorem in fact requires two philosophical assumptions: the actual infinity of natural numbers and the consistency of the theory.

Another consequence of the completeness theorem is that it justifies the conception of infinitesimals as actual infinitely small nonzero quantities, based on the existence of non-standard models, which are as legitimate as standard ones. This idea was formalized by Abraham Robinson into the theory of non-standard analysis. However, the theory did not turn out to be so simple and has not had much success.

More paradoxes

1920: Thoralf Skolem corrected Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox, discussed in 1922 (the existence of countable models of ZF, making infinite cardinalities a relative property).

1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements.

1927: Werner Heisenberg published the uncertainty principle of quantum mechanics.

1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. They showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system (such as is necessary to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers) a statement that formally expresses its own unprovability, which Gödel then proved equivalent to the claim of the consistency of the theory; so that (assuming the consistency to be true) the system is not powerful enough to prove its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly which axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a weaker system than the system whose consistency it was supposed to prove).

1935: Publication of the article by Albert Einstein, Boris Podolsky and Nathan Rosen arguing that quantum mechanics was incomplete, as its formalism was non-local, which the authors assumed could not reflect the true underlying mechanism, which remained to be discovered.

1936: Alfred Tarski proved his truth undefinability theorem.

1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
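Turing's impossibility proof is a diagonal argument. The following minimal Python sketch (illustrative only; the names are not from the original text) shows the core of it: given any purported halting decider, one can build a program the decider must misjudge, so no correct general decider can exist:

```python
def diagonalize(claims_to_halt):
    """Given any purported halting decider for zero-argument programs,
    build a program that the decider must misjudge."""
    def g():
        if claims_to_halt(g):
            while True:       # predicted to halt -> loop forever
                pass
        return "halted"       # predicted to loop -> halt immediately
    return g

# A decider that answers "never halts" for every program is refuted
# by its own diagonal program, which promptly halts:
pessimist = lambda prog: False
g = diagonalize(pessimist)
print(g())  # prints "halted", contradicting the decider's verdict
```

A decider answering "halts" would instead produce a diagonal program that loops forever; either way its verdict is wrong on the constructed program.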

1938: Gödel proved the consistency of the axiom of choice and of the Generalized Continuum-Hypothesis.

1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable, as given by the completeness theorem).

1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable.

1963: Paul Cohen showed that the Continuum Hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.

1964: John Stewart Bell published his inequalities showing that the predictions of quantum mechanics in the EPR thought experiment are significantly different from the predictions of a particular class of hidden-variable theories (the local hidden-variable theories). Inspired by the fundamental randomness in physics, Gregory Chaitin started publishing results on algorithmic information theory (measuring incompleteness and randomness in mathematics).

1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements.

1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers.

1971: Suslin's problem is proven to be independent from ZFC.

Partial resolution of the crisis

Starting in 1935, the Bourbaki group of French mathematicians began to publish a series of books to formalize many areas of mathematics on the new foundation of set theory.

The intuitionistic school did not attract many adherents among working mathematicians, due to the difficulties of constructive mathematics.

We may consider that Hilbert's program has been partially completed, so that the crisis is essentially resolved, if we satisfy ourselves with lower requirements than Hilbert's original ambitions. His ambitions were expressed in a time when nothing was clear: it was not known whether mathematics could have a rigorous foundation at all. Now we can say that mathematics has a clear and satisfying foundation made of set theory and model theory, which are clearly defined and provide the right foundation for each other.

There are many possible variants of set theory which differ in consistency strength, where stronger versions (postulating higher types of infinities) contain formal proofs of the consistency of weaker versions, but none contains a formal proof of its own consistency. Thus the only thing we don't have is a formal proof of consistency of whatever version of set theory we may prefer, such as ZF. But it is still possible to justify the consistency of ZF in informal ways.[7]

In practice, most mathematicians either do not work from axiomatic systems or, if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the incompleteness and paradoxes of the underlying formal theories never played a role anyway; in those branches in which they do, or whose formalization attempts would run the risk of forming inconsistent theories (such as logic and category theory), they can be treated carefully.

Toward the middle of the 20th century it turned out that set theory (ZFC or otherwise) was inadequate as a foundation for some of the emerging new fields, such as homological algebra,[citation needed] and category theory was proposed as an alternative foundation by Samuel Eilenberg and others.[citation needed]

Historical development


Mathematics in Ancient Greece

Although the practical use of mathematics had already developed in Bronze Age civilizations, specific interest in its foundational and theoretical aspects seems to go back to Hellenic mathematics. The early Greek philosophers disputed at length over which branch of mathematics was older, arithmetic or geometry. Zeno of Elea (c. 490 – c. 430 BC) formulated four paradoxes that appear to show that change is impossible, and which, in essence, were not satisfactorily resolved until the development of modern mathematics.

The Pythagorean school of mathematics originally insisted that only natural and rational numbers exist. The discovery of the irrationality of √2, the ratio of the diagonal of a square to its side (dating from the 5th century BC), was a philosophical blow to that school, which accepted it only reluctantly. The discrepancy between rationals and reals was finally resolved by Eudoxus of Cnidus, a student of Plato, who reduced the comparison of irrational ratios to comparisons of multiples of rational ratios, thereby anticipating Richard Dedekind's definition of the real numbers.
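The Pythagorean discovery admits the classical short argument: if \(\sqrt{2} = p/q\) with the fraction in lowest terms, then

```latex
p^2 = 2q^2 \;\Longrightarrow\; 2 \mid p \;\Longrightarrow\; 4 \mid p^2 = 2q^2 \;\Longrightarrow\; 2 \mid q,
```

so p and q share the factor 2, contradicting the assumption of lowest terms.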

In his Posterior Analytics, Aristotle (384–322 BC) laid down the axiomatic method: logically organizing a field of knowledge in terms of primitive concepts, axioms, postulates, definitions and theorems, taking a majority of his examples from arithmetic and geometry. This method reached its high point with Euclid's Elements (c. 300 BC), a monumental treatise on geometry structured to very high standards of rigor: each proposition is justified by a demonstration in the form of chains of syllogisms (though they do not always conform strictly to Aristotelian templates). Aristotle's syllogistic logic, together with the axiomatic method exemplified by Euclid's Elements, are universally recognized as towering scientific achievements of ancient Greece.

Platonism as a traditional philosophy of mathematics

Starting from the end of the 19th century, a Platonist view of mathematics became common among practicing mathematicians.

The objects of mathematics are abstract and remote from everyday perceptual experience: geometrical figures are conceived as idealities to be distinguished from effective drawings and shapes of objects, and numbers are not confused with the counting of concrete objects. Their existence and nature present special philosophical challenges: How do mathematical objects differ from their concrete representation? Are they located in their representation, or in our minds, or somewhere else? How can we know them?

The ancient Greek philosophers took such questions very seriously. Indeed, many of their general philosophical discussions were carried on with extensive reference to geometry and arithmetic. Plato (424/423 BC – 348/347 BC) insisted that mathematical objects, like other platonic Ideas (forms or essences), must be perfectly abstract and have a separate, non-material kind of existence, in a world of mathematical objects independent of humans. He believed that the truths about these objects also exist independently of the human mind, and are discovered by humans. In the Meno, Plato's teacher Socrates asserts that it is possible to come to know this truth by a process akin to memory retrieval.

Above the gateway to Plato's academy appeared a famous inscription: "Let no one who is ignorant of geometry enter here".

In this way Plato indicated his high opinion of geometry. He regarded geometry as "the first essential in the training of philosophers", because of its abstract character.

This philosophy of Platonist mathematical realism is shared by many mathematicians. It can be argued that Platonism somehow comes as a necessary assumption underlying any mathematical work.[8]

In this view, the laws of nature and the laws of mathematics have a similar status, and the effectiveness ceases to be unreasonable. Not our axioms, but the very real world of mathematical objects forms the foundation.

Aristotle dissected and rejected this view in his Metaphysics. These questions provide much fuel for philosophical analysis and debate.

Middle Ages and Renaissance

For over 2,000 years, Euclid’s Elements stood as a perfectly solid foundation for mathematics, as its methodology of rational exploration guided mathematicians, philosophers, and scientists well into the 19th century.

The Middle Ages saw a dispute over the ontological status of the universals (platonic Ideas): realism asserted their existence independently of perception; conceptualism asserted their existence within the mind only; nominalism denied both, seeing universals only as names of collections of individual objects (following older speculations that they are words, "logos").

René Descartes published La Géométrie (1637), which aimed to reduce geometry to algebra by means of coordinate systems, giving algebra a more foundational role (whereas the Greeks had embedded arithmetic into geometry by identifying whole numbers with evenly spaced points on a line). It became famous after 1649 and paved the way to infinitesimal calculus.
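A toy example in the Cartesian spirit (illustrative, not from the original text): the geometric question "where does the line y = x meet the unit circle?" becomes the algebraic equation 2x² = 1, which can be solved and checked numerically:

```python
import math

# Reduce a geometry question to algebra, following Descartes:
# the line y = x and the unit circle x^2 + y^2 = 1 meet where 2x^2 = 1.
x = 1 / math.sqrt(2)          # positive root of 2x^2 = 1
point = (x, x)

# Verify the point lies on both curves.
assert abs(point[0] - point[1]) < 1e-12            # on the line y = x
assert abs(point[0]**2 + point[1]**2 - 1) < 1e-12  # on the unit circle
print(point)
```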

Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus, based on heuristic methods that were highly efficient but sorely lacking in rigorous justification. Leibniz even went so far as to explicitly describe infinitesimals as actual infinitely small numbers (close to zero). Leibniz also worked on formal logic, but most of his writings on it remained unpublished until 1903.

The Christian philosopher George Berkeley (1685–1753), in his campaign against the religious implications of Newtonian mechanics, wrote a pamphlet on the lack of rational justifications of infinitesimal calculus:[9] “They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?”

Mathematics then developed very rapidly and successfully in physical applications, but with little attention to logical foundations.

19th century

In the 19th century, mathematics became increasingly abstract. Concerns about logical gaps and inconsistencies in different fields led to the development of axiomatic systems.

Real analysis

Cauchy (1789–1857) started the project of proving the theorems of infinitesimal calculus on a rigorous basis, rejecting the principle of the generality of algebra used by many mathematicians during the 18th century. In his 1821 Cours d'Analyse ('Course of Analysis'), Cauchy defined infinitesimal quantities as decreasing sequences converging to 0, which can then be used to define continuity, although he did not formalize his notion of convergence.

The modern (ε, δ) definition and the notion of continuous function were first developed by Bolzano in 1817, but remained relatively little known for some time. These notions give a rigorous foundation to infinitesimal calculus based on the set of real numbers, and they clearly resolve both Zeno's paradoxes and Berkeley's arguments.
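For reference, the (ε, δ) criterion states that a function f is continuous at a point a when:

```latex
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x : \;\; |x - a| < \delta \;\Longrightarrow\; |f(x) - f(a)| < \varepsilon .
```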

Mathematicians such as Karl Weierstrass (1815 – 1897) discovered pathological functions, such as continuous, nowhere-differentiable functions. Previous conceptions of a function as a rule for computation, or as a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis: axiomatizing analysis using properties of the natural numbers. In 1858, Dedekind proposed a definition of the real numbers as cuts of rational numbers. This reduction of real numbers and continuous functions to rational numbers, and thus to natural numbers, was later integrated by Cantor into his set theory, and axiomatized in terms of second-order arithmetic by Hilbert and Bernays.

Group theory

For the first time, the limits of mathematics were explored. Niels Henrik Abel (1802 – 1829), a Norwegian, and Évariste Galois (1811 – 1832), a Frenchman, investigated the solutions of various polynomial equations, and proved that there is no general algebraic solution to equations of degree greater than four (the Abel–Ruffini theorem). With these concepts, Pierre Wantzel (1837) proved that straightedge and compass alone cannot trisect an arbitrary angle, double a cube, or construct a square equal in area to a given circle. Mathematicians had vainly attempted to solve all of these problems since the time of the ancient Greeks.

Abel's and Galois's work opened the way for the development of group theory (which would later be used to study symmetry in physics and other fields) and of abstract algebra. Concepts of vector spaces emerged from the conception of barycentric coordinates by Möbius in 1827, and culminated in the modern definition of vector spaces and linear maps by Peano in 1888. Geometry was no longer limited to three dimensions. These concepts do not generalize numbers but combine notions of functions and sets, which were not yet formalized, breaking away from familiar mathematical objects.

Non-Euclidean geometries

After many failed attempts to derive the parallel postulate from other axioms, the study of the still hypothetical hyperbolic geometry by Johann Heinrich Lambert (1728 – 1777) led him to introduce the hyperbolic functions and compute the area of a hyperbolic triangle (where the sum of angles is less than 180°). Then the Russian mathematician Nikolai Lobachevsky (1792–1856) established in 1826 (and published in 1829) the coherence of this geometry (thus the independence of the parallel postulate), in parallel with the Hungarian mathematician János Bolyai (1802–60) in 1832, and with Gauss. Later in the 19th century, the German mathematician Bernhard Riemann developed Elliptic geometry, another non-Euclidean geometry where no parallel can be found and the sum of angles in a triangle is more than 180°. It was proved consistent by defining point to mean a pair of antipodal points on a fixed sphere and line to mean a great circle on the sphere. At that time, the main method for proving the consistency of a set of axioms was to provide a model for it.

Projective geometry

One of the traps in a deductive system is circular reasoning, a problem that seemed to befall projective geometry until it was resolved by Karl von Staudt. As explained by Laptev & Rosenfeld (1996):

In the mid-nineteenth century there was an acrimonious controversy between the proponents of synthetic and analytic methods in projective geometry, the two sides accusing each other of mixing projective and metric concepts. Indeed the basic concept that is applied in the synthetic presentation of projective geometry, the cross-ratio of four points of a line, was introduced through consideration of the lengths of intervals.

The purely geometric approach of von Staudt was based on the complete quadrilateral to express the relation of projective harmonic conjugates. He then created a means of expressing the familiar numeric properties with his Algebra of Throws. English-language versions of this process of deducing the properties of a field can be found either in the book by Oswald Veblen and John Young, Projective Geometry (1938), or, more recently, in John Stillwell's Four Pillars of Geometry (2005). Stillwell writes on page 120:

...projective geometry is simpler than algebra in a certain sense, because we use only five geometric axioms to derive the nine field axioms.

The algebra of throws is commonly seen as a feature of cross-ratios, since students ordinarily rely upon numbers without worrying about their basis. However, cross-ratio calculations use metric features of geometry, features not admitted by purists. For instance, in 1961 Coxeter wrote Introduction to Geometry without mention of the cross-ratio.
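Concretely, the metric definition at issue expresses the cross-ratio of four collinear points A, B, C, D in terms of signed segment lengths:

```latex
(A, B;\, C, D) \;=\; \frac{\overline{AC} \cdot \overline{BD}}{\overline{BC} \cdot \overline{AD}}
```

Every factor here is a length, which is exactly the metric notion that von Staudt's purely projective construction was designed to avoid.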

Boolean algebra and logic

Attempts at a formal treatment of mathematics had started with Leibniz and Lambert (1728 – 1777), and continued with works by algebraists such as George Peacock (1791 – 1858). Systematic mathematical treatments of logic came with the British mathematician George Boole (1847), who devised an algebra that soon evolved into what is now called Boolean algebra, in which the only numbers are 0 and 1 and logical combinations (conjunction, disjunction, implication and negation) are operations similar to the addition and multiplication of integers. De Morgan published his laws in 1847, and logic thus became a branch of mathematics. Boolean algebra is the starting point of mathematical logic and has important applications in computer science.
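The arithmetic flavor of Boole's system is easy to illustrate. The following sketch (function names are mine, not Boole's) encodes the logical combinations on {0, 1} as integer arithmetic and checks De Morgan's laws:

```python
# Boolean algebra on {0, 1}, expressed through ordinary integer arithmetic.

def neg(x):
    # negation: 0 becomes 1, 1 becomes 0
    return 1 - x

def conj(x, y):
    # conjunction (AND) behaves like multiplication
    return x * y

def disj(x, y):
    # disjunction (OR): x + y - x*y keeps the result inside {0, 1}
    return x + y - x * y

def impl(x, y):
    # implication: (not x) or y
    return disj(neg(x), y)

# De Morgan's laws (1847) hold for every pair of truth values:
for x in (0, 1):
    for y in (0, 1):
        assert neg(conj(x, y)) == disj(neg(x), neg(y))
        assert neg(disj(x, y)) == conj(neg(x), neg(y))
```

The two assertions passing for all four input pairs is a truth-table verification of De Morgan's laws, carried out entirely with integer operations.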

Charles Sanders Peirce built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885.

The German mathematician Gottlob Frege (1848–1925) presented an independent development of logic with quantifiers in his Begriffsschrift (concept script), published in 1879, a work generally considered as marking a turning point in the history of logic. He exposed deficiencies in Aristotle's logic, and pointed out the three expected properties of a mathematical theory:

  1. Consistency: the impossibility of proving contradictory statements.
  2. Completeness: any statement is either provable or refutable (i.e. its negation is provable).
  3. Decidability: there is a decision procedure to test any statement in the theory.

He then showed in Grundgesetze der Arithmetik (Basic Laws of Arithmetic) how arithmetic could be formalised in his new logic.

Frege's work was popularized by Bertrand Russell near the turn of the century, but Frege's two-dimensional notation never gained acceptance. The popular notations were (x) for the universal and (∃x) for the existential quantifier, coming from Giuseppe Peano and William Ernest Johnson, until the ∀ symbol was introduced by Gentzen in 1935 and became canonical in the 1960s.

From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.

Peano arithmetic

The formalization of arithmetic (the theory of the natural numbers) as an axiomatic theory started with Peirce in 1881, and continued with Richard Dedekind and Giuseppe Peano in 1888. This was still a second-order axiomatization (expressing induction in terms of arbitrary subsets, thus with an implicit use of set theory), as concerns about expressing theories in first-order logic were not yet understood. In Dedekind's work, this approach appears as completely characterizing the natural numbers and providing recursive definitions of addition and multiplication from the successor function and mathematical induction.
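Dedekind's recursive definitions can be sketched directly. The following illustrative Python functions (the names are mine) build addition and multiplication from the successor function alone, mirroring the recursion equations of the axiomatization:

```python
# Recursive definitions of addition and multiplication from the successor
# function, in the style of Dedekind and Peano.

def succ(n):
    # the successor function S(n)
    return n + 1

def add(m, n):
    # add(m, 0) = m ;  add(m, S(n)) = S(add(m, n))
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0 ;  mul(m, S(n)) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)
```

Each call unwinds one application of the successor, which is exactly how mathematical induction guarantees that these definitions determine the operations on all natural numbers.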


Notes

  1. Hilbert 1927, The Foundations of Mathematics, in van Heijenoort 1967:475
  2. p. 14 in Hilbert, D. (1919–20), Natur und Mathematisches Erkennen: Vorlesungen, gehalten 1919–1920 in Göttingen. Nach der Ausarbeitung von Paul Bernays (Edited and with an English introduction by David E. Rowe), Basel, Birkhauser (1992).
  3. Weyl 1927 Comments on Hilbert's second lecture on the foundations of mathematics in van Heijenoort 1967:484. Although Weyl the intuitionist believed that "Hilbert's view" would ultimately prevail, this would come with a significant loss to philosophy: "I see in this a decisive defeat of the philosophical attitude of pure phenomenology, which thus proves to be insufficient for the understanding of creative science even in the area of cognition that is most primal and most readily open to evidence – mathematics" (ibid).
  4. Richard Feynman, The Pleasure of Finding Things Out p. 23
  5. Richard Feynman, Lectures on Physics, volume I, chapter 2.
  6. Steven Weinberg, chapter Against Philosophy in Dreams of a final theory
  7. A philosophical proof of consistency of ZF
  8. Karlis Podnieks, Platonism, intuition and the nature of mathematics: 1. Platonism - the Philosophy of Working Mathematicians
  9. The Analyst, A Discourse Addressed to an Infidel Mathematician

References

  • Avigad, Jeremy (2003) Number theory and elementary arithmetic, Philosophia Mathematica Vol. 11, pp. 257–284
  • Eves, Howard (1990), Foundations and Fundamental Concepts of Mathematics Third Edition, Dover Publications, INC, Mineola NY, ISBN 0-486-69609-X (pbk.) cf §9.5 Philosophies of Mathematics pp. 266–271. Eves lists the three with short descriptions prefaced by a brief introduction.
  • Goodman, N.D. (1979), "Mathematics as an Objective Science", in Tymoczko (ed., 1986).
  • Hart, W.D. (ed., 1996), The Philosophy of Mathematics, Oxford University Press, Oxford, UK.
  • Hersh, R. (1979), "Some Proposals for Reviving the Philosophy of Mathematics", in (Tymoczko 1986).
  • Hilbert, D. (1922), "Neubegründung der Mathematik. Erste Mitteilung", Hamburger Mathematische Seminarabhandlungen 1, 157–177. Translated, "The New Grounding of Mathematics. First Report", in (Mancosu 1998).
  • Katz, Robert (1964), Axiomatic Analysis, D. C. Heath and Company.
  • Kleene, Stephen C. (1991) [1952]. Introduction to Meta-Mathematics (Tenth impression, 1991 edition). Amsterdam NY: North-Holland Pub. Co. ISBN 0-7204-2103-9.
In Chapter III A Critique of Mathematic Reasoning, §11. The paradoxes, Kleene discusses Intuitionism and Formalism in depth. Throughout the rest of the book he treats, and compares, both Formalist (classical) and Intuitionist logics with an emphasis on the former. Extraordinary writing by an extraordinary mathematician.
  • Laptev, B.L. & B.A. Rozenfel'd (1996) Mathematics of the 19th Century: Geometry, page 40, Birkhäuser ISBN 3-7643-5048-2 .
  • Mancosu, P. (ed., 1998), From Hilbert to Brouwer. The Debate on the Foundations of Mathematics in the 1920s, Oxford University Press, Oxford, UK.
  • Putnam, Hilary (1967), "Mathematics Without Foundations", Journal of Philosophy 64/1, 5–22. Reprinted, pp. 168–184 in W.D. Hart (ed., 1996).
  • Putnam, Hilary (1975), "What is Mathematical Truth?", in Tymoczko (ed., 1986).
  • Sudac, Olivier (Apr 2001). "The prime number theorem is PRA-provable". Theoretical Computer Science 257 (1–2): pp. 185–239. doi:10.1016/S0304-3975(00)00116-X.
  • Troelstra, A. S. (no date but later than 1990), "A History of Constructivism in the 20th Century", http://staff.science.uva.nl/~anne/hhhist.pdf, A detailed survey for specialists: §1 Introduction, §2 Finitism & §2.2 Actualism, §3 Predicativism and Semi-Intuitionism, §4 Brouwerian Intuitionism, §5 Intuitionistic Logic and Arithmetic, §6 Intuitionistic Analysis and Stronger Theories, §7 Constructive Recursive Mathematics, §8 Bishop's Constructivism, §9 Concluding Remarks. Approximately 80 references.
  • Tymoczko, T. (1986), "Challenging Foundations", in Tymoczko (ed., 1986).
  • Tymoczko, T. (ed., 1986), New Directions in the Philosophy of Mathematics, 1986. Revised edition, 1998.
  • van Dalen D. (2008), "Brouwer, Luitzen Egbertus Jan (1881–1966)", in Biografisch Woordenboek van Nederland. URL:http://www.inghist.nl/Onderzoek/Projecten/BWN/lemmata/bwn2/brouwerle [13-03-2008]
  • Weyl, H. (1921), "Über die neue Grundlagenkrise der Mathematik", Mathematische Zeitschrift 10, 39–79. Translated, "On the New Foundational Crisis of Mathematics", in (Mancosu 1998).
  • Wilder, Raymond L. (1952), Introduction to the Foundations of Mathematics, John Wiley and Sons, New York, NY.

External links


Foundations of mathematics is a term sometimes used for certain fields of mathematics, such as mathematical logic, axiomatic set theory, proof theory, model theory, and recursion theory. The search for foundations of mathematics is also a central question of the philosophy of mathematics.

Philosophical foundations of mathematics

Summary of the three philosophies:

  • Platonism: Platonists, such as Kurt Gödel (1906–1978), hold that numbers are abstract, necessarily existing objects, independent of the human mind.
  • Formalism: Formalists, such as David Hilbert (1862–1943), hold that mathematics is no more and no less than a mathematical language; it is simply a series of games.
  • Intuitionism: Intuitionists, such as L. E. J. Brouwer (1882–1966), hold that mathematics is a creation of the human mind. Numbers, like fairy-tale characters, are merely mental entities, which would not exist had there never been human minds to think about them.

Platonism

The foundational philosophy of Platonist mathematical realism, exemplified by the mathematician Kurt Gödel, proposes the existence of a world of mathematical objects independent of human beings; the truths about these objects are discovered by human beings. On this view, the laws of nature and the laws of mathematics have a similar status, and their effectiveness ceases to be unreasonable. Not our axioms, but the real world of mathematical objects, constitutes the foundation. The obvious question, then, is: how do we access this world?

Formalism

The foundational philosophy of formalism, exemplified by David Hilbert, is based on axiomatic set theory and formal logic. Virtually all mathematical theorems today can be formulated as theorems of set theory. The truth of a mathematical statement, on this view, is nothing but the claim that the statement can be derived from the axioms of set theory using the rules of formal logic.

Formalism alone does not answer several questions: why we should use the axioms we do and not others, why we should employ the logical rules we do and not others, why true mathematical statements (such as the laws of arithmetic) appear to be true, and so on. In some cases these questions may be sufficiently answered through the study of formal theories, in disciplines such as reverse mathematics and computational complexity theory.

Formal logical systems also run the risk of inconsistency; in the case of Peano arithmetic, this has arguably been settled by several consistency proofs, but there is debate over whether they are sufficiently meaningful. Gödel's second incompleteness theorem establishes that a logical system for arithmetic can never contain a valid proof of its own consistency. What Hilbert wanted to do was to prove a logical system S consistent on the basis of principles P that made up only a small part of S. But Gödel showed that the principles P could not even prove P itself consistent, let alone S!

Intuitionism

The foundational philosophy of intuitionism or constructivism, exemplified in the extreme by Brouwer and more coherently by Stephen Kleene, requires proofs to be "constructive" in nature: the existence of an object must be demonstrated, not inferred from a demonstration of the impossibility of its non-existence. As an immediate consequence of this, intuitionism does not accept as valid the method of proof known as reductio ad absurdum.

Some modern theories in the philosophy of mathematics deny the existence of foundations in the original sense. Some of these theories tend to focus on mathematical practice, and aim to describe and analyze the actual work of mathematicians as a social group. Others try to create a cognitive science of mathematics, focusing on human cognition as the origin of the reliability of mathematics when applied to the real world. These theories would propose to find foundations only in human thought, not in any objective construct outside it. The matter remains under debate.

Logicism

Logicism is one of the schools of thought in the philosophy of mathematics, which holds that mathematics is an extension of logic and that, therefore, all or part of mathematics is reducible to logic. Bertrand Russell and Alfred North Whitehead championed this theory, conceived by Gottlob Frege.

Constructivism

It is a variant of empiricism; one of its defenders is Roger Apéry.[1]

  • They criticize the formalism carried to its extreme by the group of mathematicians known as Nicolas Bourbaki.
  • They accept the succession of the natural numbers, but not the set of all natural numbers.
  • They question the logic on which Bourbaki's mathematics is founded.
  • They proclaim a third option regarding the principle of the excluded middle: besides p and ~p, another outcome is possible.


References

  1. Roger Apéry (1984). "Matemática constructiva", in Pensar la matemática – Seminario de Filosofía y Matemática de la École Normale Supérieure de París, directed by J. Dieudonné, M. Loi, and R. Thom. Barcelona: Éditions du Seuil. ISBN 8472236145.