How to catch proof errors during self study?
I completed a Bachelor's in Mathematics in May 2018 with a 3.6 major GPA. I had trouble with real analysis, scoring B-, B, B, and B+ in the four courses I took on the subject, despite significant effort and paying a PhD candidate as a tutor.
My goal is to go to graduate school in Machine Learning. I want to learn as much theory as I can on my own while working to afford graduate school. Over the next ~6 years, I want to self-study up to 12 graduate-level topics related to Probability / Point Estimation / Optimization or Control / Dynamics / Statistical Learning, covering as much as I can finish in that time.
I will also schedule time over those ~6 years to program at least 6 non-trivial personal projects in Machine Learning and to replicate one peer-reviewed academic paper every 1-2 months. After that, I'll crack open the Deep Learning book and a Reinforcement Learning book over another year and study them thoroughly, using my experience and the theory I've studied, while applying to graduate schools.
The answer here (https://www.quora.com/How-can-I-self-study-functional-analysis) raises an important point:
"if you want to understand it [Functional Analysis] in depth, you have to solve problems, which usually means proving stuff (as opposed to calculating stuff), and that's pretty hard for anyone to self-critique.
You'll want someone to help you out of tight spots as you're reading the text, and look over your solutions to see if you're actually getting it. It's not too hard to delude yourself into thinking that you've proved something while in fact you did not. If you miss a subtlety or fail to understand a definition, you might be proving the wrong thing or nothing at all - and you may have no way of even realizing that."
Partial progress is still amazing. However, what proactive strategies help avoid falling into these pitfalls? I'm not always an A student, and I want to avoid spending more than 8 months on average per subject. Should I post every problem I attempt to prove on Stack Exchange for feedback on correctness, and occasionally contact a professor from my alma mater when I am really stuck?
soft-question self-learning
May I know why you want to learn functional analysis? I'm in the same boat as you. I'm a third-year undergrad student and I'm determined to improve my problem-solving skills before I go for a PhD in data science and machine learning. But as far as I know, functional analysis doesn't really have much to do with machine learning. So, why do you want to learn it? It's more useful for people who want to study mathematical physics or PDEs, I think. But I'm not an expert. I'm just asking.
– stressed out
Jan 2 at 0:13
Hey stressed out, sorry for the wait. I went on a walk. The functional analysis was for this book: Foundations of Modern Probability by Olav Kallenberg. It says you should have experience with functional analysis, complex variables, and topology. The text covers measure theory, distributions, random sequences, characteristic functions and the CLT, conditioning, martingales, Markov processes, random walks, ergodic theory, Poisson processes, Gaussian processes and Brownian motion, embeddings, convergence theorems, and stochastic calculus. Functional analysis also has ties to control theory, which I want to study.
– Kalkirin
Jan 2 at 1:10
Thanks for the answer. I know nothing about control theory and ergodic theory, but the other things you mentioned mostly require measure theory, I think. I say that because I took a course in stochastic differential equations (which I passed with a C- :P). I think you might want a course in PDEs more than functional analysis. Read a PDE book like Evans. I hope one day I can read it. I suggest you ask a separate question about what prerequisites you need for machine learning before you study a difficult and time-consuming subject like functional analysis.
– stressed out
Jan 2 at 1:19
Here is the complete list of topics I want to study (ideally): Measure Theory, Complex Analysis, Fourier Series, Topology for Analysis, Functional Analysis, Foundations of Modern Probability, Nonlinear Dynamical Systems, Point Estimation, Hypothesis Testing, Statistical Learning, Pattern Recognition, Differential Geometry, Nonlinear Dimensionality Reduction, Matrix Computations (Applied Linear Algebra), Linear/Nonlinear Programming, and Information Theory. After that it's the Deep Learning book and Reinforcement Learning. Then onto signal processing, control theory, and feature engineering/selection.
– Kalkirin
Jan 2 at 1:20
Well, I hadn't read your comment when I wrote the answer. I have heard from people that information geometry is more or less theoretical; many top engineers have never studied it. You can find Goodfellow's article about GANs on arXiv. It doesn't use information geometry, does it? I mean, mathematicians can develop many impractical tools, but the ones that usually make it into the engineering world use little math.
– stressed out
Jan 2 at 1:56
5 Answers
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in stochastic processes. Engineering students take this course too. If you can, take it from the engineering department, because they usually avoid measure theory and, depending on the lecturer, you may learn some things about signals and systems along the way.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to learn that most computer engineers know little about the rigorous treatment of the things they work with every day. A book that engineers use for a more or less mathematical treatment is Gallager's Stochastic Processes, which is a terrible book in my opinion. It doesn't satisfy mathematicians, nor does it explain the beautiful intuitions that engineering sometimes offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You may also need to learn the calculus of variations at some point. Overall, the list of things you could learn is endless. You may like to learn differential geometry to understand information geometry, which is more theoretical than practical. Also, some knowledge from physics, like thermodynamics, can be helpful when you study things like the Boltzmann machine. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook, and some of the creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
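(Not part of the answer above: to make the Markov chain Monte Carlo suggestion concrete, here is a minimal random-walk Metropolis sketch in Python, assuming only NumPy. The function name, step size, and Gaussian target are illustrative choices, not a prescribed implementation.)

    import numpy as np

    def metropolis_sample(log_density, x0=0.0, n_samples=5000, step=1.0, seed=0):
        """Random-walk Metropolis sampler for a 1-D target density (sketch)."""
        rng = np.random.default_rng(seed)
        samples = np.empty(n_samples)
        x, log_p = x0, log_density(x0)
        for i in range(n_samples):
            proposal = x + step * rng.normal()      # symmetric proposal
            log_p_new = log_density(proposal)
            # Accept with probability min(1, p(proposal) / p(x)).
            if np.log(rng.uniform()) < log_p_new - log_p:
                x, log_p = proposal, log_p_new
            samples[i] = x
        return samples

    # Example: sample from a standard normal via its unnormalized log-density.
    draws = metropolis_sample(lambda x: -0.5 * x**2)
    print(draws.mean(), draws.std())                # roughly 0 and 1

Comparing log-densities in the acceptance test avoids numerical underflow for sharply peaked targets, which is why samplers are usually written this way.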
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
– Kalkirin
Jan 2 at 2:13
Well, many of the topics you mentioned have enormous overlaps. I don't think you need topology; the amount of topology one learns when studying analysis is enough. I don't think learning about non-Hausdorff spaces will be useful in the future, but I could be wrong. It's not easy to learn stochastic calculus because it's too broad. For example, there is Itô calculus with many results like the Feynman-Kac formula, and there's also Malliavin calculus. I think a fair amount of measure theory, complex analysis, linear algebra, and PDEs should give you a reasonable mathematical background.
– stressed out
Jan 2 at 2:20
Thank you so much! I'll rework a budget of my available time per day with my job and a budget of my growing savings when I can comfortably go to graduate school. I'll use that to decide if I can add any other subjects onto this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in Machine Vision where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
– Kalkirin
Jan 2 at 2:31
I also highly recommend Bishop's Pattern Recognition. Fantastic resource and easily understandable.
– Alex L
Jan 2 at 6:47
This is an amazing answer. Thanks!
– Elie Louis
Jan 2 at 22:35
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion: rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
Jan 2 at 0:22
@stressedout What about Galois?
– John Douma
Jan 2 at 0:33
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
Jan 2 at 0:44
By not stopping. If you keep thinking about the topic with a misconception in your head (even if it's a small untrue lemma that you never stated explicitly), you will eventually prove something you know to be untrue. Then you can have lots of fun* going over everything you thought you knew with a fine-toothed comb, and trying to find the bug.
*: Your mileage may vary.
A "proof" of an untrue proposition will, theoretically, eventually lead to a clearly false conclusion. But this requires that all further logic be perfect; it is hardly certain that this will occur within a practical period of time (where "practical" means not only "within a person's lifetime" but "before the heat death of the universe"); and it doesn't address the issue of invalid proofs of true propositions.
– Acccumulation
Jan 2 at 17:29
I agree, but this seems to work very well in practice.
– SolveIt
Jan 3 at 0:12
Put it away for a while and come back to it later after you've forgotten most of it. Then either grade it yourself with a fresh brain, or try reproving it cold and compare the two proofs.
Learn a language and software for automated proof checking and have a computer check your proof for you. Previously: State of the progress of automated proof checking.
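(Not part of the answer above: as a taste of what a proof assistant checks, here is a minimal sketch in Lean 4 syntax. The kernel rejects the file unless every step type-checks, which is exactly the kind of external error-catching the question asks about; the theorem name is my own.)

    -- The contrapositive lemma, justified term by term; any gap is rejected.
    theorem contrapositive {p q : Prop} (h : p → q) : ¬q → ¬p :=
      fun hnq hp => hnq (h hp)

    -- A small arithmetic fact, checked mechanically against a library lemma.
    example (n : Nat) : n + 0 = n := Nat.add_zero n

Even toy examples like these force every definition and quantifier to be spelled out, which is where self-study proofs most often go wrong.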
Find a study partner, or even better a study group, and critique each other's proofs. If you go to professors' office hours and wait until they have no other students there, they may be willing to go over your proofs.
If you can't find other people, user3067860's suggestion of coming back to proofs later is good. Also, try to look at what other conclusions you can derive from whatever claims you introduce in your proofs. If you have a proof that the number of primes is infinite, but your proof can also be used to show that the number of primes less than 1000 is infinite, you know you messed up somewhere. Also, looking at the contrapositive can be helpful sometimes: if you have a proof $a \rightarrow b \rightarrow c \rightarrow d$, see what happens when you change it to $\neg d \rightarrow \neg c \rightarrow \neg b \rightarrow \neg a$.
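(A restatement of that last check, added for clarity rather than quoted from the answer: each link in an implication chain is logically equivalent to its contrapositive, so a sound chain forces the reversed chain of negations to be provable as well.)
$$(a \Rightarrow b) \wedge (b \Rightarrow c) \wedge (c \Rightarrow d) \iff (\neg d \Rightarrow \neg c) \wedge (\neg c \Rightarrow \neg b) \wedge (\neg b \Rightarrow \neg a)$$
If one of the negated links resists proof, the corresponding forward step is the one to re-examine.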
$endgroup$
add a comment |
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "69"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3059014%2fhow-to-catch-proof-errors-during-self-study%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to know that most computer engineers know little about the rigorous treatment of the stuff they work with everyday. A book that engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes which is a terrible book in my opinion. It doesn't satisfy mathematicians, neither does it explain the beautiful intuitions that sometimes engineering offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You also may need to learn about calculus of variation at some point. Overall, the list of things that you can learn is endless. You may like to learn differential geometry to understand information geometry which is more theoretical than practical. Also, some knowledge from physics like thermodynamics can be helpful when you study things like the Boltzmann machine, etc. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Just some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook and some creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
$endgroup$
$begingroup$
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
$endgroup$
– Kalkirin
Jan 2 at 2:13
$begingroup$
Well, many of those topics you mentioned have enormous overlaps. I don't think you need topology. The amount of topology one learns when learning analysis is enough. I don't think learning about non-Hausdorff spaces would be useful in future, but I can be wrong. It's not easy to learn stochastic calculus because it's too broad. For example, there is Ito's calculus with many results like Feynman-Kac formula, etc. There's also Malliavin's calculus. I think a fair amount of measure theory, complex analysis, linear algebra and PDEs should provide you with a reasonable mathematical background.
$endgroup$
– stressed out
Jan 2 at 2:20
$begingroup$
Thank you so much! I'll rework a budget of my available time per day with my job and a budget of my growing savings when I can comfortably go to graduate school. I'll use that to decide if I can add any other subjects onto this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in Machine Vision where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
$endgroup$
– Kalkirin
Jan 2 at 2:31
1
$begingroup$
I also highly recommend Bishop's Pattern Recognition. Fantastic resource and easily understandable.
$endgroup$
– Alex L
Jan 2 at 6:47
1
$begingroup$
This is an amazing answer. Thanks!
$endgroup$
– Elie Louis
Jan 2 at 22:35
|
show 9 more comments
$begingroup$
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to know that most computer engineers know little about the rigorous treatment of the stuff they work with everyday. A book that engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes which is a terrible book in my opinion. It doesn't satisfy mathematicians, neither does it explain the beautiful intuitions that sometimes engineering offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You also may need to learn about calculus of variation at some point. Overall, the list of things that you can learn is endless. You may like to learn differential geometry to understand information geometry which is more theoretical than practical. Also, some knowledge from physics like thermodynamics can be helpful when you study things like the Boltzmann machine, etc. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Just some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook and some creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
$endgroup$
$begingroup$
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
$endgroup$
– Kalkirin
Jan 2 at 2:13
$begingroup$
Well, many of those topics you mentioned have enormous overlaps. I don't think you need topology. The amount of topology one learns when learning analysis is enough. I don't think learning about non-Hausdorff spaces would be useful in future, but I can be wrong. It's not easy to learn stochastic calculus because it's too broad. For example, there is Ito's calculus with many results like Feynman-Kac formula, etc. There's also Malliavin's calculus. I think a fair amount of measure theory, complex analysis, linear algebra and PDEs should provide you with a reasonable mathematical background.
$endgroup$
– stressed out
Jan 2 at 2:20
$begingroup$
Thank you so much! I'll rework a budget of my available time per day with my job and a budget of my growing savings when I can comfortably go to graduate school. I'll use that to decide if I can add any other subjects onto this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in Machine Vision where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
$endgroup$
– Kalkirin
Jan 2 at 2:31
1
$begingroup$
I also highly recommend Bishop's Pattern Recognition. Fantastic resource and easily understandable.
$endgroup$
– Alex L
Jan 2 at 6:47
1
$begingroup$
This is an amazing answer. Thanks!
$endgroup$
– Elie Louis
Jan 2 at 22:35
|
show 9 more comments
$begingroup$
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to know that most computer engineers know little about the rigorous treatment of the stuff they work with everyday. A book that engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes which is a terrible book in my opinion. It doesn't satisfy mathematicians, neither does it explain the beautiful intuitions that sometimes engineering offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You also may need to learn about calculus of variation at some point. Overall, the list of things that you can learn is endless. You may like to learn differential geometry to understand information geometry which is more theoretical than practical. Also, some knowledge from physics like thermodynamics can be helpful when you study things like the Boltzmann machine, etc. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Just some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook and some creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
$endgroup$
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to know that most computer engineers know little about the rigorous treatment of the stuff they work with everyday. A book that engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes which is a terrible book in my opinion. It doesn't satisfy mathematicians, neither does it explain the beautiful intuitions that sometimes engineering offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You also may need to learn about calculus of variation at some point. Overall, the list of things that you can learn is endless. You may like to learn differential geometry to understand information geometry which is more theoretical than practical. Also, some knowledge from physics like thermodynamics can be helpful when you study things like the Boltzmann machine, etc. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Just some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook and some creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
edited Jan 2 at 2:07
answered Jan 2 at 1:49
stressed outstressed out
4,1211533
4,1211533
$begingroup$
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
$endgroup$
– Kalkirin
Jan 2 at 2:13
$begingroup$
Well, many of those topics you mentioned have enormous overlaps. I don't think you need topology. The amount of topology one learns when learning analysis is enough. I don't think learning about non-Hausdorff spaces would be useful in future, but I can be wrong. It's not easy to learn stochastic calculus because it's too broad. For example, there is Ito's calculus with many results like Feynman-Kac formula, etc. There's also Malliavin's calculus. I think a fair amount of measure theory, complex analysis, linear algebra and PDEs should provide you with a reasonable mathematical background.
$endgroup$
– stressed out
Jan 2 at 2:20
$begingroup$
Thank you so much! I'll rework a budget of my available time per day with my job and a budget of my growing savings when I can comfortably go to graduate school. I'll use that to decide if I can add any other subjects onto this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in Machine Vision where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
$endgroup$
– Kalkirin
Jan 2 at 2:31
1
$begingroup$
I also highly recommend Bishop's Pattern Recognition. Fantastic resource and easily understandable.
$endgroup$
– Alex L
Jan 2 at 6:47
1
$begingroup$
This is an amazing answer. Thanks!
$endgroup$
– Elie Louis
Jan 2 at 22:35
|
show 9 more comments
$begingroup$
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion. Rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
$endgroup$
answered Jan 2 at 0:17
Ethan Bolker
42.1k548111
2
$begingroup$
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
$endgroup$
– stressed out
Jan 2 at 0:22
$begingroup$
@stressedout What about Galois?
$endgroup$
– John Douma
Jan 2 at 0:33
$begingroup$
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
$endgroup$
– stressed out
Jan 2 at 0:44
add a comment |
$begingroup$
By not stopping. If you keep thinking about the topic with a misconception in your head (even if it's a small untrue lemma that you never stated explicitly), you will eventually prove something you know to be untrue. Then you can have lots of fun* going over everything you thought you knew with a fine-toothed comb, and trying to find the bug.
*: Your mileage may vary.
$endgroup$
answered Jan 2 at 4:31
SolveIt
434514
$begingroup$
A "proof" of an untrue proposition will, theoretically, eventually lead to a clearly false conclusion, but this requires that all further logic be perfect, it is hardly certain that this will occur within a practical period of time (where "practical" means not only "within a person's lifetime", but "before the heat death of the universe"), and it doesn't address the issue of invalid proofs of true propositions.
$endgroup$
– Acccumulation
Jan 2 at 17:29
$begingroup$
I agree, but this seems to work very well in practice.
$endgroup$
– SolveIt
Jan 3 at 0:12
add a comment |
$begingroup$
Put it away for a while and come back to it later after you've forgotten most of it. Then either grade it yourself with a fresh brain, or try reproving it cold and compare the two proofs.
Learn the language and software for doing automated proof checking and have a computer check your proof for you. Previously: State of the progress of the automated proof checking
$endgroup$
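As a small illustration of what that looks like, here is a self-contained Lean 4 file (a minimal sketch; the theorem names are just examples, and a reasonably recent Lean 4 toolchain is assumed). The checker verifies every step, so a proof with a gap or a wrong inference simply does not compile:

```lean
-- Implications compose: from a → b and b → c we get a → c.
-- If the term after := did not actually prove the statement, Lean would
-- reject the file instead of letting the error slip through.
theorem imp_trans (a b c : Prop) (hab : a → b) (hbc : b → c) : a → c :=
  fun ha => hbc (hab ha)

-- Small computational facts can be certified by reduction.
example : 2 ^ 10 = 1024 := rfl
```

Proof assistants such as Lean, Coq, or Isabelle all work on this principle; the trade-off is that formalizing a textbook proof usually takes noticeably longer than writing it informally.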
answered Jan 2 at 14:41
user3067860
1713
add a comment |
$begingroup$
Find a study partner or, even better, a group, and critique each other's proofs. If you go to professors' office hours and wait for them to not have any students there, they may be willing to go over your proofs.
If you can't find other people, user3067860's suggestion of coming back to them is good. Also, try to look at what other conclusions you can derive from whatever claims you introduce in your proofs. If you have a proof that the number of primes is infinite, but your proof can also be used to show the number of primes less than 1000 is infinite, you know you messed up somewhere. Also, looking at the contrapositive can be helpful sometimes: if you have a proof $a \rightarrow b \rightarrow c \rightarrow d$, see what happens when you change it to $\neg c \rightarrow \neg b \rightarrow \neg a$.
$endgroup$
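Spelled out with the same letters, the contrapositive check replaces each link of the chain by its logically equivalent contrapositive, so the argument can be re-read from the conclusion backwards and a shaky link shows up from the other direction:

```latex
\[
  (a \Rightarrow b) \land (b \Rightarrow c) \land (c \Rightarrow d)
  \;\iff\;
  (\neg d \Rightarrow \neg c) \land (\neg c \Rightarrow \neg b) \land (\neg b \Rightarrow \neg a)
\]
% The first two links alone give the check suggested above: if assuming
% \neg c does not actually force \neg b, then the step b \Rightarrow c of the
% original argument is the one to re-examine.
```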
answered Jan 2 at 17:41
Acccumulation
6,8142618
add a comment |
3
$begingroup$
May I know why you want to learn functional analysis? I'm in the same boat as you: I'm a third-year undergrad student and I'm determined to improve my problem-solving skills before I go for a PhD in data science and machine learning. But as far as I know, functional analysis doesn't really have much to do with machine learning. So, why do you want to learn it? It's more useful for people who want to study mathematical physics or PDEs, I think. But I'm not an expert; I'm just asking.
$endgroup$
– stressed out
Jan 2 at 0:13
$begingroup$
Hey Stressed Out, sorry for the wait. I went on a walk. The functional analysis was for this book: Foundations of Modern Probability by Olav Kallenberg. It says you should have experience with functional analysis, complex variables and topology. The text covers things like measure theory, distributions, random sequences, characteristic functions + the CLT, conditioning, martingales, Markov processes, random walks, ergodic theory, Poisson processes, Gaussian/Brownian motion, embeddings, convergence theorems, and stochastic calculus. Functional analysis also has ties to control theory, which I want to study.
$endgroup$
– Kalkirin
Jan 2 at 1:10
1
$begingroup$
Thanks for the answer. I know nothing about control theory and ergodic theory, but the other things you mentioned mostly require measure theory, I think. I say that because I took a course in stochastic differential equations (which I passed with a C- :P). I think you might want to take a course in PDEs more than functional analysis. Read a PDE book like Evans; I hope one day I can read it. I suggest you ask a separate question about what prerequisites you need to study machine learning before you commit to a difficult and time-consuming subject like functional analysis.
$endgroup$
– stressed out
Jan 2 at 1:19
$begingroup$
Here is the complete list of topics I want to study (ideally): Measure Theory, Complex Analysis, Fourier Series, Topology for Analysis, Functional Analysis, Found. of Modern Probability, Nonlinear Dynamical Systems, Point Estimation, Hypothesis Testing, Statistical Learning, Pattern Recognition, Differential Geometry, Nonlinear Dimensional Reduction, Matrix Computations (Applied Lin Algebra), Linear/Nonlinear Programming, and Information Theory. After that it's the Deep Learning book and Reinforcement Learning. Then onto signal processing, control theory and feature engineering/selection.
$endgroup$
– Kalkirin
Jan 2 at 1:20
1
$begingroup$
Well, I hadn't read your comment when I wrote the answer. I have heard from people that information geometry is more or less theoretical. Many top engineers have never studied it. You can find Goodfellow's article about GANs on arxiv. It doesn't use information geometry. Does it? I mean, mathematicians can develop many impractical tools, but the ones that usually make it into the engineering world use little math.
$endgroup$
– stressed out
Jan 2 at 1:56