Applications of basic linear algebra concepts to computer science? [closed]















I'm trying to explain linear algebra to some programmers with computer science backgrounds. They took a course on it long ago, but don't seem to remember much. They can follow basic formalism, but none of it seems to stick when they don't see anything in "real life" to relate the concepts to. Do people know any good examples of applications in computer science using basic linear algebra concepts? Some things I was thinking of include:



  • Eigenvalues and eigenspaces

  • Image and kernel

  • Determinants

  • Dual space

  • Transpose

  • Inner products

  • Singular value decomposition












closed as off-topic by Wojowu, Igor Belegradek, Chris Godsil, Suvrit, YCor Feb 8 at 17:00


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "This question does not appear to be about research level mathematics within the scope defined in the help center." – Wojowu, Igor Belegradek, Chris Godsil, Suvrit, YCor
If this question can be reworded to fit the rules in the help center, please edit the question.











  • 15

    It seems to me that this is a question you should ask computer scientists.
    – Najib Idrissi, Feb 7 at 16:49

  • 2

    pi.math.cornell.edu/~mec/Winter2009/RalucaRemus/Lecture3/…
    – Liviu Nicolaescu, Feb 7 at 17:11

  • 2

    math.boisestate.edu/~wright/courses/m297/google_talk.pdf
    – Liviu Nicolaescu, Feb 7 at 17:13

  • 2

    Brown CS had an entire course on this topic, titled "The Matrix in Computer Science": cs.brown.edu/courses/cs053/current/index.htm
    – setholopolus, Feb 8 at 5:54

  • There's a detail which isn't clear to me. PageRank gives a way to rank pages. But where does the search input get used? Does Google do a global ranking via PageRank and then just return the highest-ranked pages containing the searched phrase?
    – Kim, Feb 8 at 8:56















linear-algebra computer-science mathematics-education






asked Feb 7 at 16:46









Kim

9 Answers



















For eigenvalues and eigenvectors, Google's original PageRank algorithm is a good example. The idea is to build a matrix $A$ where $A_{ij}$ is the probability that you would end up at page $i$ by following a randomly chosen hyperlink on page $j$. An eigenvector of this matrix with eigenvalue $1$ is a stable probability distribution over web pages. Generally, a page which is linked to by many sites which are themselves linked to by many sites will have a high PageRank.
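The PageRank idea above can be sketched with power iteration. The three-page link structure below is made up for illustration, and a real implementation would also add a damping factor:

```python
import numpy as np

# Column-stochastic link matrix: A[i, j] = probability of moving from page j
# to page i. Hypothetical web: page 0 links to pages 1 and 2, page 1 links
# to page 2, and page 2 links back to page 0.
A = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 1.0, 0.0],
])

# Power iteration: repeatedly apply A to a uniform start distribution.
rank = np.full(3, 1.0 / 3.0)
for _ in range(100):
    rank = A @ rank

# The limit is an eigenvector of A with eigenvalue 1, i.e. a stationary
# probability distribution over the pages.
assert np.allclose(A @ rank, rank)
```

For this particular graph the stationary distribution works out to $(0.4, 0.2, 0.4)$: pages 0 and 2 collect the most "random surfer" probability because each is fed by the other.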



Singular value decomposition is widely used for image processing. You can represent a black-and-white image by the matrix of values of its pixels. The $k$ largest singular values represent the most significant contributions to the image. You can then compress the image by replacing $A = UDV^T$ with $U'D'(V')^T$, where $D'$ is the $k \times k$ submatrix of $D$ containing the largest singular values, $U'$ is the $n \times k$ submatrix of $U$ consisting of the corresponding $k$ columns of $U$, and $(V')^T$ is the submatrix of $V^T$ consisting of the corresponding $k$ rows of $V^T$.
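The truncation just described is a few lines of NumPy; a small random matrix stands in for the image here:

```python
import numpy as np

# A stand-in "image": any grayscale picture is just such a matrix.
rng = np.random.default_rng(0)
A = rng.random((6, 5))

# Full SVD with singular values in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the k largest singular values and the matching columns/rows.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A_k has rank k, and by the Eckart-Young theorem it is the best rank-k
# approximation of A; its Frobenius error is exactly the energy in the
# discarded singular values.
assert np.linalg.matrix_rank(A_k) == k
assert np.isclose(np.linalg.norm(A - A_k), np.sqrt(np.sum(s[k:] ** 2)))
```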



Computer graphics is another area where linear algebra plays a huge role. Consider the problem of finding the shadow cast by a triangle onto some plane by a light source infinitely far away. This is just a projection map.






answered Feb 7 at 17:22 by James








  • 3

    Yes, and more generally the linear algebra that arises from graph theory, via the adjacency matrix, graph Laplacian, etc.
    – Somatic Custard, Feb 8 at 1:00

  • SVDs are actually more widely used than this. They are used for many kinds of compression based on low-rank approximation. Consider a case where we have $n$ data entries, each with $d$ dimensions, and we wish to compress this. If we believe that this data is "low rank" - i.e. the rank of the associated $d \times n$ matrix is low - then the SVD truncated to $k$ values as described above gives a provably optimal low-rank approximation. It turns out many modern data sets are low rank, so this is widely used. In terms of computational efficiency, computing an exact SVD is impossible, ...
    – DreamConspiracy, Feb 8 at 13:25

  • ... but greedy $\epsilon$-approximations are relatively fast.
    – DreamConspiracy, Feb 8 at 13:26



















The whole field of linear codes is a huge application of linear algebra (albeit over finite fields rather than $\mathbb{R}$ or $\mathbb{C}$).
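As a concrete sketch (the choice of example is mine, not the answer's): the classic [7,4] Hamming code encodes 4 message bits into 7, and both encoding and error detection are matrix multiplications over $\mathbb{F}_2$, i.e. ordinary linear algebra with arithmetic mod 2:

```python
import numpy as np

# Generator matrix G (4x7) and parity-check matrix H (3x7) of the
# systematic [7,4] Hamming code: G = [I | P], H = [P^T | I].
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2          # encoding is a linear map over F_2

# Valid codewords lie in the kernel of H: the syndrome of a codeword is zero.
assert np.all(H @ codeword % 2 == 0)

# Flipping a single bit produces a nonzero syndrome (in fact the syndrome
# equals the corresponding column of H, which locates the error).
corrupted = codeword.copy()
corrupted[2] ^= 1
assert np.any(H @ corrupted % 2 != 0)
```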



























    Here are a few further suggestions:



    (1) The intersection of planes and lines (relevant in computer graphics) leads to linear equations and thus to kernels of linear maps.



    (2) The area of parallelograms (and thus of triangles) can be computed in terms of determinants. That's maybe a bit too simple because you only need very small matrices, but on the other hand many surfaces in computer graphics are constructed from triangles.



    If you compute the area of triangles in three-dimensional space you'll also need to multiply $3 \times 2$ matrices with their transposes.



    (3) A simple way to represent directed graphs on a computer is to store their adjacency matrix $A$ as a two-dimensional array. Transposing the matrix means reversing the direction of all edges.



    Moreover, the entries of the $k$-th power $A^k$ tell you how many paths of length $k$ exist between any two vertices of the graph.



    (4) The pseudo-inverse of a matrix (which is closely related to singular value decomposition) is very important for linear regression/least square fitting.



    (5) Consider a least square problem where you have, in addition, a linear restriction on the fitting parameters (probably, it would be helpful to present this situation in terms of a more concrete example). This means you restrict your parameter space to the kernel of a linear map. Now, use for instance the Gauss algorithm to represent your kernel as the image of another linear map. This yields a coordinate transform which represents your fitting problem without any restriction - so now you can simply solve it by using the pseudo-inverse approach.



    (6) Again an application from computer graphics: have your students build some nice three-dimensional geometric object (for instance, consisting of many triangles) and let them make it "visible" for instance by a simple raytracing algorithm. (This is going to be a nice exercise in computing intersections of lines and planes).



    Now, you want to rotate the object - so your students will need to compute rotation matrices and apply them to the coordinates of the triangles. This is a nice illustration of orthogonal matrices and of the action of matrices as linear maps.



    (7) An illustration (though, admittedly not an application) of eigenvalues and eigenspaces: Have your students draw an ellipse with axes parallel to the coordinate axes in $\mathbb{R}^2$, then have them rotate the ellipse by using certain orthogonal matrices.



    Afterwards, give them a quadratic equation in two variables, and make them plot the solution set (which should be chosen by you to agree with the rotated ellipse from above). Then have them diagonalize the symmetric matrix representing the corresponding quadratic form and make them compare their result with the rotation matrix above.



    (8) Another example from image processing (in addition to James' example of image compression): many image "improvement" techniques are based on applying filters to the image - which is simply an application of a linear map.



    (9) The discrete Fourier transform - which can be represented by a unitary matrix - is frequently used in image processing.



    (10) In case that you wish to go into numerical analysis: discretization methods for linear partial differential equations usually yield linear equations in finite dimensions which have to be solved numerically.
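A minimal sketch of items (3) and (4) above; the graph and the data points are made up for illustration:

```python
import numpy as np

# (3) Adjacency matrix of a made-up directed graph: edges 0->1, 0->2, 1->2.
A = np.array([
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
])

# Transposing reverses every edge: A.T has edges 1->0, 2->0, 2->1.
assert np.array_equal(A.T, np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]]))

# Entries of A^k count directed paths of length k.
A2 = np.linalg.matrix_power(A, 2)
assert A2[0, 2] == 1  # exactly one length-2 path from 0 to 2 (via 1)

# (4) Least-squares line fit via the pseudo-inverse: minimize ||Xb - y||_2.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])          # roughly y = 2x + 1
X = np.column_stack([x, np.ones_like(x)])   # design matrix [x | 1]
slope, intercept = np.linalg.pinv(X) @ y
assert abs(slope - 2.0) < 0.2 and abs(intercept - 1.0) < 0.3
```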


















    • "nice exercise in computing intersections of lines and planes because they need to determine which triangles are in front of the others" - triangle occlusion (and even line occlusion in 3D) is in general cyclic (and thus non-transitive). In the old Doom and Quake games you'd instead use binary space partitioning and backface culling; these days you go straight to depth buffers (which, incidentally, are also the way to go for shadow maps - you render the depth map from the point of view of the light source, then transform each pixel into that space and compare the depths).
      – John Dvorak, Feb 8 at 7:45

    • @JohnDvorak: You're right, of course. I was actually thinking of a simple version of raytracing; I corrected the post accordingly.
      – Jochen Glueck, Feb 14 at 20:17



















    It's a standard prerequisite in machine learning. For example, dimensionality reduction techniques such as PCA and Johnson-Lindenstrauss all require a basic understanding of projections (and the former relies on the spectral theorem).
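A minimal PCA sketch on made-up two-dimensional data: center the data, diagonalize the covariance matrix (possible by the spectral theorem, since it is symmetric), and project onto the top eigenvector:

```python
import numpy as np

# Made-up data: 200 points lying roughly along the direction (1, 1).
rng = np.random.default_rng(1)
t = rng.normal(size=200)
X = np.column_stack([t + 0.1 * rng.normal(size=200),
                     t + 0.1 * rng.normal(size=200)])

# PCA via eigen-decomposition of the (symmetric) covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top = eigvecs[:, -1]                     # principal direction

# The principal direction should be close to (1, 1)/sqrt(2) (up to sign).
assert abs(abs(top @ np.array([1.0, 1.0]) / np.sqrt(2)) - 1.0) < 0.01

# Dimensionality reduction: project the 2-D data down to 1 dimension.
reduced = Xc @ top
assert reduced.shape == (200,)
```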



























      Numerical linear algebra is a field of computer science.














  • 2

    Numerical linear algebra is applied in computer science. It is an active research field of (applied) mathematics.
    – daw, Feb 8 at 11:17



















      Spectral graph theory is a big part of theoretical computer science. Some concrete examples are:



      1) A big trend in the past decade in TCS has been to utilize the properties of the Laplacian matrix to make classical graph algorithms faster. Usually, the arguments boil down to showing that the problem reduces to solving the linear system $Lx = y$ where $L$ is the Laplacian. Since solving these systems is fast, the resulting algorithms are also fast. Some concrete examples are max flow and graph sparsification.



      2) Expander graphs: These are graphs that 'expand' in some way. For example, the number of neighbors of any subset $S$ of vertices is of size at least $c|S|$ for some fixed constant $c$. They 'look like' the complete graph but are much sparser. They have nice spectral properties which means random walks converge fast on them. They are also connected to other branches of TCS through error correcting codes, random graphs, etc.



      3) Drawing graphs! It turns out if you have a complicated graph and want to visualize it, a good way to do so is to take the second and third eigenvectors of the Laplacian matrix and use them as $(x,y)$ coordinates for graph vertices. There are some cool examples here. The math behind this is essentially that the eigenvectors are connected to minimizing some form of 'energy' through the Rayleigh quotient.
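The graph-drawing trick in 3) can be sketched in a few lines; a 6-cycle serves as a made-up example:

```python
import numpy as np

# A small made-up graph: the cycle on 6 vertices.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)     # eigenvalues in ascending order

# The smallest eigenvalue is 0 (constant eigenvector); use the next two
# eigenvectors as (x, y) coordinates for the vertices.
assert abs(eigvals[0]) < 1e-9
coords = eigvecs[:, 1:3]

# For a cycle this recovers a regular hexagon up to rotation and scale:
# every vertex lands at the same distance from the origin.
radii = np.linalg.norm(coords, axis=1)
assert np.allclose(radii, radii[0], atol=1e-6)
```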



























        The answer to this depends on what you classify as computer science. I'd expect that a lot of modern algorithms and automata theory involves linear algebra.



        The answer to the following question involves linear algebra, for example.




        Suppose $\mathcal{A}$ is a weighted automaton over a semiring $\mathbb{S}$, where $\mathbb{S}$ can be embedded in a field. Is the support language of $\mathcal{A}$ nonempty?




        Linear programming techniques can also be used to give approximate solutions to several NP-complete problems. See this pdf.



























          It's unlikely that CS students will have encountered this need before, or indeed necessarily will in future. Some of the structures (matrices, etc.) are useful, but it is fairly rare to need a good understanding of the theory.
          The only CS-type problem I have encountered that needed an understanding of linear algebra was numerical simulation, in which large systems of simultaneous equations needed solving. The basic theory (half a day of googling) can get to a solution, but improving stability and rate of convergence, and debugging those sorts of issues, would have been hard without at least some theoretical linear algebra.



          I gather the same is true for machine learning, but have no experience of this.



























            One application of linear algebra is in oil and gas exploration. To analyze acoustic data gathered during exploration, geophysicists use algorithms like full waveform inversion. This algorithm needs to invert and process a very large matrix (over 200k x 200k), usually on parallel-processing supercomputers.


















            • How is this computer science as opposed to - say - general applied mathematics that needs simulations done with a computer?
              – Qfwfq, Feb 8 at 15:45


















            9 Answers
            9






            active

            oldest

            votes








            9 Answers
            9






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            14












            $begingroup$

            For eigenvalues and eigenvectors, Google's original PageRank algorithm is a good example. The idea is to build a matrix $A$ where $A_ij$ is the probability that you would end up at page $i$ by following a randomly chosen hyperlink on page $j$. An eigenvector with eigenvalue $1$ of this matrix is a stable probability distribution over web pages. Generally a page which is linked to by many sites which are themselves linked to by many sites will have a high page rank.



            Singular value decomposition is widely used for image processing. You can represent a black-and-white image by the matrix of values for its pixels. The $k$ largest singular values represent the most significant contributions to the image. You can then compress an image by replacing $A = UDV^T$ with $U'D'(V')^T$, where $D'$ is the $k times k$ matrix submatrix of $D$ of the largest singular values, $U'$ is the $n times k$ submatrix of $U$ consisting of the corresponding $k$ columns of $U$, and $(V')^T$ is the submatrix of $V^T$ consisting of the corresponding $k$ rows of $V^T$.



            Computer graphics is another area where linear algebra plays a huge role. Consider the problem of finding the shadow of triangle onto some plane from a light source infinitely far away. This is just a projection map.






            share|cite|improve this answer









            $endgroup$








            • 3




              $begingroup$
              Yes, and more generally the linear algebra that arises from graph theory, via the adjacency matrix, graph Laplacian, etc.
              $endgroup$
              – Somatic Custard
              Feb 8 at 1:00










            • $begingroup$
              SVD are actually more widely used than this. They are used for many kinds of compression based on low rank approximation. Consider a case where we have $n$ data entries, each with $d$ dimensions, and we wish to compress this. If we believe that this data is "low rank"- i.e. the rank of the associated $dcrossn$ matrix is low, then the SVD truncated to $k$ values as described above gives a provably optimal low rank approximation. It turns out many modern data sets are low rank, so this is widely used. In terms of computational efficiency, computing an exact SVD is impossible, ...
              $endgroup$
              – DreamConspiracy
              Feb 8 at 13:25










            • $begingroup$
              ... but greedy $epsilon$-approximations are relatively fast.
              $endgroup$
              – DreamConspiracy
              Feb 8 at 13:26















            14












            $begingroup$

            For eigenvalues and eigenvectors, Google's original PageRank algorithm is a good example. The idea is to build a matrix $A$ where $A_ij$ is the probability that you would end up at page $i$ by following a randomly chosen hyperlink on page $j$. An eigenvector with eigenvalue $1$ of this matrix is a stable probability distribution over web pages. Generally a page which is linked to by many sites which are themselves linked to by many sites will have a high page rank.



            Singular value decomposition is widely used for image processing. You can represent a black-and-white image by the matrix of values for its pixels. The $k$ largest singular values represent the most significant contributions to the image. You can then compress an image by replacing $A = UDV^T$ with $U'D'(V')^T$, where $D'$ is the $k times k$ matrix submatrix of $D$ of the largest singular values, $U'$ is the $n times k$ submatrix of $U$ consisting of the corresponding $k$ columns of $U$, and $(V')^T$ is the submatrix of $V^T$ consisting of the corresponding $k$ rows of $V^T$.



            Computer graphics is another area where linear algebra plays a huge role. Consider the problem of finding the shadow of triangle onto some plane from a light source infinitely far away. This is just a projection map.






            share|cite|improve this answer









            $endgroup$








            • 3




              $begingroup$
              Yes, and more generally the linear algebra that arises from graph theory, via the adjacency matrix, graph Laplacian, etc.
              $endgroup$
              – Somatic Custard
              Feb 8 at 1:00










            • $begingroup$
              SVD are actually more widely used than this. They are used for many kinds of compression based on low rank approximation. Consider a case where we have $n$ data entries, each with $d$ dimensions, and we wish to compress this. If we believe that this data is "low rank"- i.e. the rank of the associated $dcrossn$ matrix is low, then the SVD truncated to $k$ values as described above gives a provably optimal low rank approximation. It turns out many modern data sets are low rank, so this is widely used. In terms of computational efficiency, computing an exact SVD is impossible, ...
              $endgroup$
              – DreamConspiracy
              Feb 8 at 13:25










            • $begingroup$
              ... but greedy $epsilon$-approximations are relatively fast.
              $endgroup$
              – DreamConspiracy
              Feb 8 at 13:26













            14












            14








            14





            $begingroup$

            For eigenvalues and eigenvectors, Google's original PageRank algorithm is a good example. The idea is to build a matrix $A$ where $A_ij$ is the probability that you would end up at page $i$ by following a randomly chosen hyperlink on page $j$. An eigenvector with eigenvalue $1$ of this matrix is a stable probability distribution over web pages. Generally a page which is linked to by many sites which are themselves linked to by many sites will have a high page rank.



            Singular value decomposition is widely used for image processing. You can represent a black-and-white image by the matrix of values for its pixels. The $k$ largest singular values represent the most significant contributions to the image. You can then compress an image by replacing $A = UDV^T$ with $U'D'(V')^T$, where $D'$ is the $k times k$ matrix submatrix of $D$ of the largest singular values, $U'$ is the $n times k$ submatrix of $U$ consisting of the corresponding $k$ columns of $U$, and $(V')^T$ is the submatrix of $V^T$ consisting of the corresponding $k$ rows of $V^T$.



            Computer graphics is another area where linear algebra plays a huge role. Consider the problem of finding the shadow of triangle onto some plane from a light source infinitely far away. This is just a projection map.






            share|cite|improve this answer









            $endgroup$



            For eigenvalues and eigenvectors, Google's original PageRank algorithm is a good example. The idea is to build a matrix $A$ where $A_ij$ is the probability that you would end up at page $i$ by following a randomly chosen hyperlink on page $j$. An eigenvector with eigenvalue $1$ of this matrix is a stable probability distribution over web pages. Generally a page which is linked to by many sites which are themselves linked to by many sites will have a high page rank.



            Singular value decomposition is widely used for image processing. You can represent a black-and-white image by the matrix of values for its pixels. The $k$ largest singular values represent the most significant contributions to the image. You can then compress an image by replacing $A = UDV^T$ with $U'D'(V')^T$, where $D'$ is the $k times k$ matrix submatrix of $D$ of the largest singular values, $U'$ is the $n times k$ submatrix of $U$ consisting of the corresponding $k$ columns of $U$, and $(V')^T$ is the submatrix of $V^T$ consisting of the corresponding $k$ rows of $V^T$.



            Computer graphics is another area where linear algebra plays a huge role. Consider the problem of finding the shadow of triangle onto some plane from a light source infinitely far away. This is just a projection map.







            share|cite|improve this answer












            share|cite|improve this answer



            share|cite|improve this answer










            answered Feb 7 at 17:22









            JamesJames

            1,264613




            1,264613







            • Yes, and more generally the linear algebra that arises from graph theory, via the adjacency matrix, graph Laplacian, etc. – Somatic Custard, Feb 8 at 1:00

            • SVD are actually more widely used than this. They are used for many kinds of compression based on low-rank approximation. Consider a case where we have $n$ data entries, each with $d$ dimensions, and we wish to compress this. If we believe that this data is "low rank", i.e. the rank of the associated $d\times n$ matrix is low, then the SVD truncated to $k$ values as described above gives a provably optimal low-rank approximation. It turns out many modern data sets are low rank, so this is widely used. In terms of computational efficiency, computing an exact SVD is impossible, ... – DreamConspiracy, Feb 8 at 13:25

            • ... but greedy $\epsilon$-approximations are relatively fast. – DreamConspiracy, Feb 8 at 13:26
            10












            The whole field of linear codes is a huge application of linear algebra (albeit over finite fields rather than $\mathbb{R}$ or $\mathbb{C}$)
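To make this concrete, here is a small sketch of encoding with the $[7,4]$ Hamming code: the code is exactly the image of the generator matrix $G$, with all arithmetic done mod 2 (the particular $G$ below is one common systematic form):

```python
import numpy as np

# Generator matrix of the [7,4] Hamming code in systematic form [I | P].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    """Codeword = msg . G over GF(2): an ordinary linear map, taken mod 2."""
    return (np.asarray(msg) @ G) % 2

codeword = encode([1, 0, 1, 1])  # 4 message bits -> 7-bit codeword
```

Every nonzero codeword of this code has Hamming weight at least 3, which is why a single flipped bit can be detected and corrected.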






                answered Feb 7 at 17:33









                Robert Israel

                    10












                    Here are a few further suggestions:



                    (1) The intersection of planes and lines (relevant in computer graphics) leads to linear equations and thus to kernels of linear maps.



                    (2) The area of parallelograms (and thus of triangles) can be computed in terms of determinants. That's maybe a bit too simple because you only need very small matrices, but on the other hand many surfaces in computer graphics are constructed from triangles.



                    If you compute the area of triangles in three-dimensional space you'll also need to multiply $3\times 2$ matrices with their transposed matrices.
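For instance, with $A$ the $3\times 2$ matrix whose columns are two edge vectors of the triangle, the area is $\tfrac{1}{2}\sqrt{\det(A^T A)}$; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def triangle_area(p, q, r):
    """Area of triangle pqr in R^3 via the Gram determinant: sqrt(det(A^T A)) / 2."""
    p = np.asarray(p, dtype=float)
    A = np.column_stack([np.asarray(q) - p, np.asarray(r) - p])  # 3x2 edge matrix
    return 0.5 * np.sqrt(np.linalg.det(A.T @ A))

# Right triangle with legs 3 and 4, lifted into the plane z = 1: area 6.
area = triangle_area([0, 0, 1], [3, 0, 1], [0, 4, 1])
```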



                    (3) A simple way to represent directed graphs on a computer is to store their adjacency matrix $A$ as a double array. Transposing the matrix means inverting the direction of all edges.



                    Moreover, the entries of the $k$-th power $A^k$ tell you how many paths of length $k$ exist between any two vertices of the graph.
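Both facts are easy to demonstrate; a sketch on a directed 4-cycle:

```python
import numpy as np

# Adjacency matrix of the directed cycle 0 -> 1 -> 2 -> 3 -> 0.
A = np.zeros((4, 4), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = 1

A3 = np.linalg.matrix_power(A, 3)  # A3[i, j] = number of length-3 paths i -> j
reversed_graph = A.T               # transpose: every edge's direction flipped
```

On the cycle there is exactly one length-3 path out of vertex 0, and it ends at vertex 3; after four steps every vertex is back where it started, i.e. $A^4 = I$.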



                    (4) The pseudo-inverse of a matrix (which is closely related to singular value decomposition) is very important for linear regression/least square fitting.



                    (5) Consider a least square problem where you have, in addition, a linear restriction on the fitting parameters (probably, it would be helpful to present this situation in terms of a more concrete example). This means you restrict your parameter space to the kernel of a linear map. Now, use for instance the Gauss algorithm to represent your kernel as the image of another linear map. This yields a coordinate transform which represents your fitting problem without any restriction - so now you can simply solve it by using the pseudo-inverse approach.



                    (6) Again an application from computer graphics: have your students build some nice three-dimensional geometric object (for instance, consisting of many triangles) and let them make it "visible" for instance by a simple raytracing algorithm. (This is going to be a nice exercise in computing intersections of lines and planes).



                    Now, you want to rotate the object - so your students will need to compute rotation matrices and apply them to the coordinates of the triangles. This is a nice illustration of orthogonal matrices and of the action of matrices as linear maps.
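A minimal sketch of that rotation step, for a rotation about the z-axis (the helper name is illustrative):

```python
import numpy as np

def rotation_z(theta):
    """Rotation by theta about the z-axis: an orthogonal matrix with det 1."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_z(np.pi / 2)
vertices = np.array([[1.0, 0.0, 0.0],   # triangle vertices, one per row
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
rotated = vertices @ R.T  # apply the linear map to every vertex at once
```

Orthogonality ($R^T R = I$) is exactly what guarantees the rotated object keeps its lengths and angles.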



                    (7) An illustration (though, admittedly, not an application) of eigenvalues and eigenspaces: Have your students draw an ellipse with axes parallel to the coordinate axes in $\mathbb{R}^2$, then have them rotate the ellipse by using certain orthogonal matrices.



                    Afterwards, give them a quadratic equation in two variables, and make them plot the solution set (which should be chosen by you to agree with the rotated ellipse from above). Then have them diagonalize the symmetric matrix representing the corresponding quadratic form and make them compare their result with the rotation matrix above.



                    (8) Another example from image processing (in addition to James' example of image compression): many image "improvement" techniques are based on applying filters to the image - which is simply an application of a linear map.



                    (9) The discrete Fourier transform - which can be represented by a unitary matrix - is frequently used in image processing.



                    (10) In case that you wish to go into numerical analysis: discretization methods for linear partial differential equations usually yield linear equations in finite dimensions which have to be solved numerically.






                    • "nice exercise in computing intersections of lines and planes because they need to determine which triangles are in front of the others" - triangle occlusion (and even line occlusion in 3D) is in general cyclic (and thus non-transitive). In the old Doom and Quake games you'd instead use binary space partitioning and backface culling; these days you go straight to depth buffers (which, incidentally, are also the way to go for shadow maps - you render the depth map from the point of view of the light source, then transform each pixel into that space and compare the depths). – John Dvorak, Feb 8 at 7:45

                    • @JohnDvorak: You're right, of course. I was actually thinking of a simple version of raytracing; I corrected the post accordingly. – Jochen Glueck, Feb 14 at 20:17















                    edited Feb 14 at 20:16

























                    answered Feb 7 at 22:52









                    Jochen Glueck

                    9












                    It's a standard prerequisite in machine learning. For example, dimensionality-reduction techniques such as PCA and Johnson-Lindenstrauss all require a basic understanding of projections (and the former relies on the spectral theorem).
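As a rough sketch of the PCA idea (not any particular library's API): the sample covariance matrix is symmetric, so by the spectral theorem it has an orthonormal eigenbasis; projecting onto the top eigenvector gives the one-dimensional reduction:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points in R^2 that vary mostly along the direction (1, 1), plus noise.
X = rng.normal(size=(200, 1)) @ np.array([[1.0, 1.0]]) + 0.05 * rng.normal(size=(200, 2))
X = X - X.mean(axis=0)                      # center the data

cov = X.T @ X / len(X)                      # sample covariance (symmetric)
eigvals, eigvecs = np.linalg.eigh(cov)      # spectral theorem: orthonormal eigenbasis
top = eigvecs[:, -1]                        # principal direction (largest eigenvalue)
Z = X @ top                                 # project: 2-D data reduced to 1-D
```

The recovered principal direction should line up (up to sign) with $(1,1)/\sqrt{2}$, the direction the data was generated along.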






                        answered Feb 7 at 22:44









                        Aryeh Kontorovich

                            5












                            Numerical linear algebra is a field of computer science.






                            • Numerical linear algebra is applied in computer science. It is an active research field of (applied) mathematics. – daw, Feb 8 at 11:17















                            answered Feb 7 at 17:52









                            Ben McKay







3

$begingroup$

Spectral graph theory is a big part of theoretical computer science. Some concrete examples:

1) A big trend in TCS over the past decade has been to exploit properties of the Laplacian matrix to make classical graph algorithms faster. Usually the argument boils down to showing that the problem reduces to solving the linear system $Lx = y$, where $L$ is the Laplacian. Since these systems can be solved fast, the resulting algorithms are fast too. Concrete examples include max flow and graph sparsification.

2) Expander graphs: these are graphs that 'expand' in some sense; for example, every subset $S$ of vertices has at least $c|S|$ neighbors for some fixed constant $c$. They 'look like' the complete graph but are much sparser. Their nice spectral properties mean that random walks converge fast on them, and they connect to other branches of TCS through error-correcting codes, random graphs, etc.

3) Drawing graphs! It turns out that if you have a complicated graph and want to visualize it, a good way to do so is to take the eigenvectors of the second and third smallest eigenvalues of the Laplacian matrix and use them as $(x,y)$ coordinates for the vertices. There are some cool examples here. The math behind this is essentially that these eigenvectors minimize a form of 'energy' via the Rayleigh quotient.

$endgroup$

answered Feb 8 at 15:37 – Sandeep Silwal

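The graph-drawing idea in point 3 is easy to demonstrate for a programmer audience. A minimal sketch (assuming an unweighted, undirected graph given by its adjacency matrix): build the Laplacian $L = D - A$, take the eigenvectors of its second and third smallest eigenvalues, and use them as $(x, y)$ coordinates. On a cycle graph the layout recovers the "obvious" circular drawing.

```python
import numpy as np

def spectral_layout(adj):
    """Spectral drawing: 2nd and 3rd Laplacian eigenvectors as coordinates."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                   # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return vecs[:, 1], vecs[:, 2]     # skip the constant (eigenvalue-0) vector

# 6-cycle: adjacency matrix of the cycle graph C_6.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

x, y = spectral_layout(adj)
# Each vertex lands on a common circle: x_k^2 + y_k^2 is the same (2/n) for all k.
print(np.round(x**2 + y**2, 3))
```

For the cycle the relevant eigenvectors are (an orthogonal mix of) sampled sine and cosine waves, which is why the vertices come out equally spaced on a circle.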
2

$begingroup$

The answer to this depends on what you classify as computer science. I'd expect that a lot of modern algorithms and automata theory involves linear algebra.

The answer to the following question involves linear algebra, for example.

Suppose $\mathcal{A}$ is a weighted automaton over a semiring $\mathbb{S}$, where $\mathbb{S}$ can be embedded in a field. Is the support language of $\mathcal{A}$ nonempty?

Linear programming techniques can also be used to give approximate solutions to several NP-complete problems. See this pdf.

$endgroup$

answered Feb 8 at 3:44 – Agnishom Chattopadhyay

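The LP-approximation point can be made concrete with the textbook 2-approximation for minimum vertex cover: solve the LP relaxation (minimize $\sum_v x_v$ subject to $x_u + x_v \ge 1$ for every edge, $0 \le x_v \le 1$), then take every vertex with fractional value at least $1/2$. A sketch on a toy graph, using `scipy.optimize.linprog` (the graph and variable names here are illustrative, not from the linked pdf):

```python
import numpy as np
from scipy.optimize import linprog

# Toy graph: a 4-cycle 0-1-2-3-0 plus the diagonal edge (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# Encode x_u + x_v >= 1 as -x_u - x_v <= -1 for linprog's A_ub x <= b_ub form.
A_ub = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1
b_ub = -np.ones(len(edges))

res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * n, method="highs")

# Rounding: each edge constraint forces max(x_u, x_v) >= 1/2,
# so taking every vertex with value >= 1/2 always yields a cover,
# and it is at most twice the LP (hence the integer) optimum.
cover = [v for v in range(n) if res.x[v] >= 0.5]
assert all(u in cover or v in cover for u, v in edges)
print(cover)
```

The same relax-and-round pattern underlies many LP-based approximation algorithms; the linear-algebra content is that the relaxation is just a system of linear inequalities.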
1

$begingroup$

It's unlikely that CS students will have encountered this need before, or necessarily will in future. Some of the structures are useful (matrices etc.), but it is fairly rare to need a deep understanding of the theory.

The only CS-type problem I have encountered that needed an understanding of linear algebra was numerical simulation, in which large systems of simultaneous equations had to be solved. The basic theory (half a day of googling) can get you to a solution, but improving stability and rate of convergence, and debugging those sorts of issues, would have been hard without at least some theoretical linear algebra.

I gather the same is true for machine learning, but I have no experience of this.

$endgroup$

edited Feb 8 at 16:00 · answered Feb 8 at 12:02 – drjpizzle

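The stability/convergence point is where the theory earns its keep: iterative solvers such as conjugate gradient solve $Ax = b$ for symmetric positive definite $A$, and their convergence rate is governed by the eigenvalues (the condition number) of $A$, which you need spectral theory to see. A minimal sketch, not a production solver:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for SPD A; returns (solution, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    for i in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, i + 1
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, max_iter

# A well-conditioned SPD system: the diagonal shift keeps the condition
# number small, so CG converges in far fewer than n iterations.
n = 50
M = np.random.default_rng(0).standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = np.ones(n)
x, iters = conjugate_gradient(A, b)
print(iters, np.linalg.norm(A @ x - b))
```

Drop the diagonal shift and the condition number blows up, the iteration count climbs, and rounding error starts to matter: exactly the kind of debugging that is hard without the eigenvalue picture.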
0

$begingroup$

One application of linear algebra is in oil and gas. To analyze acoustic data gathered during oil exploration, geophysicists use algorithms like full waveform inversion. This algorithm needs to invert and process a very large matrix (over 200k x 200k), usually on parallel-processing supercomputers.

$endgroup$

answered Feb 8 at 11:49 – Bard



  • $begingroup$
    How is this computer science as opposed to -say- general applied mathematics that needs simulations done with a computer?
    $endgroup$
    – Qfwfq
    Feb 8 at 15:45
