From d4731984b0162b362694629d543ec74239be9c73 Mon Sep 17 00:00:00 2001 From: Yuchen Pei Date: Wed, 12 Dec 2018 09:19:48 +0100 Subject: added front matters to engine; removed site/ --- site/assets | 1 - site/blog-feed.xml | 328 --------------------- site/blog.html | 62 ---- site/index.html | 33 --- site/links.html | 84 ------ site/microblog-feed.xml | 291 ------------------ site/microblog.html | 184 ------------ site/postlist.html | 67 ----- .../2013-06-01-q-robinson-schensted-paper.html | 32 -- ...-04-01-q-robinson-schensted-symmetry-paper.html | 33 --- ...ghted-interpretation-super-catalan-numbers.html | 32 -- site/posts/2015-04-01-unitary-double-products.html | 29 -- site/posts/2015-04-02-juggling-skill-tree.html | 32 -- ...y-words-containing-repetitions-odd-periods.html | 49 --- ...015-07-01-causal-quantum-product-levy-area.html | 30 -- ...ald-polynomials-macdonald-superpolynomials.html | 41 --- ...6-10-13-q-robinson-schensted-knuth-polymer.html | 38 --- site/posts/2017-04-25-open_research_toywiki.html | 33 --- site/posts/2017-08-07-mathematical_bazaar.html | 80 ----- site/posts/2018-04-10-update-open-research.html | 77 ----- .../2018-06-03-automatic_differentiation.html | 76 ----- 21 files changed, 1632 deletions(-) delete mode 120000 site/assets delete mode 100644 site/blog-feed.xml delete mode 100644 site/blog.html delete mode 100644 site/index.html delete mode 100644 site/links.html delete mode 100644 site/microblog-feed.xml delete mode 100644 site/microblog.html delete mode 100644 site/postlist.html delete mode 100644 site/posts/2013-06-01-q-robinson-schensted-paper.html delete mode 100644 site/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html delete mode 100644 site/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html delete mode 100644 site/posts/2015-04-01-unitary-double-products.html delete mode 100644 site/posts/2015-04-02-juggling-skill-tree.html delete mode 100644 site/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html delete mode 100644 site/posts/2015-07-01-causal-quantum-product-levy-area.html delete mode 100644 site/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html delete mode 100644 site/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html delete mode 100644 site/posts/2017-04-25-open_research_toywiki.html delete mode 100644 site/posts/2017-08-07-mathematical_bazaar.html delete mode 100644 site/posts/2018-04-10-update-open-research.html delete mode 100644 site/posts/2018-06-03-automatic_differentiation.html (limited to 'site') diff --git a/site/assets b/site/assets deleted file mode 120000 index bae6859..0000000 --- a/site/assets +++ /dev/null @@ -1 +0,0 @@ -../assets/ \ No newline at end of file diff --git a/site/blog-feed.xml b/site/blog-feed.xml deleted file mode 100644 index 82d8c31..0000000 --- a/site/blog-feed.xml +++ /dev/null @@ -1,328 +0,0 @@ - - - Yuchen Pei's Blog - https://ypei.me/blog-feed.xml - 2018-06-03T00:00:00Z - - - - Yuchen Pei - - PyAtom - - Automatic differentiation - posts/2018-06-03-automatic_differentiation.html - 2018-06-03T00:00:00Z - - - Yuchen Pei - - <p>This post is meant as a documentation of my understanding of autodiff. I benefited a lot from <a href="http://www.cs.toronto.edu/%7Ergrosse/courses/csc321_2018/slides/lec10.pdf">Toronto CSC321 slides</a> and the <a href="https://github.com/mattjj/autodidact/">autodidact</a> project which is a pedagogical implementation of <a href="https://github.com/hips/autograd">Autograd</a>. 
That said, any mistakes in this note are mine (especially since some of the knowledge is obtained from interpreting slides!), and if you do spot any I would be grateful if you could let me know.</p> -<p>Automatic differentiation (AD) is a way to compute derivatives. It does so by traversing a computational graph using the chain rule.</p> -<p>There are two modes, forward mode AD and reverse mode AD, which are roughly symmetric to each other, and understanding one of them makes understanding the other straightforward.</p> -<p>In the language of neural networks, one can say that the forward mode AD is used when one wants to compute the derivatives of functions at all layers with respect to input layer weights, whereas the reverse mode AD is used to compute the derivatives of output functions with respect to weights at all layers. Therefore reverse mode AD (rmAD) is the one to use for gradient descent, and the one we focus on in this post.</p> -<p>Basically rmAD requires the computation to be sufficiently decomposed, so that in the computational graph, each node as a function of its parent nodes is an elementary function that the AD engine has knowledge about.</p> -<p>For example, the Sigmoid activation <span class="math inline">\(a&#39; = \sigma(w a + b)\)</span> is quite simple, but it should be decomposed into simpler computations:</p> -<ul> -<li><span class="math inline">\(a&#39; = 1 / t_1\)</span></li> -<li><span class="math inline">\(t_1 = 1 + t_2\)</span></li> -<li><span class="math inline">\(t_2 = \exp(t_3)\)</span></li> -<li><span class="math inline">\(t_3 = - t_4\)</span></li> -<li><span class="math inline">\(t_4 = t_5 + b\)</span></li> -<li><span class="math inline">\(t_5 = w a\)</span></li> -</ul> -<p>Thus the function <span class="math inline">\(a&#39;(a)\)</span> is decomposed into elementary operations like addition, subtraction, multiplication, reciprocation, exponentiation, logarithm etc., and the rmAD engine stores the Jacobians of these elementary operations.</p> -<p>Since in neural networks we want to find derivatives of a single loss function <span class="math inline">\(L(x; \theta)\)</span>, we can omit <span class="math inline">\(L\)</span> when writing derivatives and denote, say <span class="math inline">\(\bar \theta_k := \partial_{\theta_k} L\)</span>.</p> -<p>In implementations of rmAD, one can represent the Jacobian as a transformation <span class="math inline">\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)</span>. <span class="math inline">\(j\)</span> is called the <em>Vector Jacobian Product</em> (VJP).
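-<p>As a toy illustration, here is a minimal sketch of how such an engine can run the decomposed Sigmoid above: a forward pass recording values on a tape, then a backward pass applying VJPs. This is my own mock-up, not autodidact’s actual API; the names <code>fwd</code>, <code>vjps</code>, <code>push</code> and <code>sigmoid_grads</code> are made up for this sketch.</p>
-<pre><code>import math

# Forward rules and vector-Jacobian products (VJPs) for each elementary
# operation this toy engine knows about.
fwd = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b,
       'neg': lambda a: -a, 'exp': math.exp, 'recip': lambda a: 1.0 / a}
vjps = {'add':   lambda y, ybar, a, b: (ybar, ybar),
        'mul':   lambda y, ybar, a, b: (ybar * b, ybar * a),
        'neg':   lambda y, ybar, a: (-ybar,),
        'exp':   lambda y, ybar, a: (ybar * y,),       # y = exp(a)
        'recip': lambda y, ybar, a: (-ybar * y * y,)}  # y = 1 / a

def sigmoid_grads(w, a, b):
    vals, tape = [w, a, b, 1.0], []      # leaves: w, a, b and the constant 1
    def push(op, *ids):                  # forward pass: store value, record op
        vals.append(fwd[op](*[vals[i] for i in ids]))
        tape.append((op, ids))
        return len(vals) - 1
    t5 = push('mul', 0, 1)               # t5 = w a
    t4 = push('add', t5, 2)              # t4 = t5 + b
    t3 = push('neg', t4)                 # t3 = -t4
    t2 = push('exp', t3)                 # t2 = exp(t3)
    t1 = push('add', 3, t2)              # t1 = 1 + t2
    out = push('recip', t1)              # a' = 1 / t1
    bars = {out: 1.0}                    # seed the backward pass
    for i in reversed(range(len(tape))): # visit each node exactly once
        op, ids = tape[i]
        nid = 4 + i                      # node id of the i-th tape entry
        xbars = vjps[op](vals[nid], bars.get(nid, 0.0),
                         *[vals[j] for j in ids])
        for j, xb in zip(ids, xbars):
            bars[j] = bars.get(j, 0.0) + xb
    return bars[0], bars[1], bars[2]     # adjoints of w, a and b</code></pre>
-<p>One can check that <code>sigmoid_grads(w, a, b)</code> agrees with the analytic gradients <span class="math inline">\(a \sigma&#39;(wa+b)\)</span>, <span class="math inline">\(w \sigma&#39;(wa+b)\)</span> and <span class="math inline">\(\sigma&#39;(wa+b)\)</span>. The <code>vjps</code> table is where the engine’s knowledge of the elementary Jacobians lives.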
For example, <span class="math inline">\(j(\exp)(y, \bar y, x) = y \bar y\)</span> since given <span class="math inline">\(y = \exp(x)\)</span>,</p> -<p><span class="math inline">\(\partial_x L = \partial_x y \cdot \partial_y L = \partial_x \exp(x) \cdot \partial_y L = y \bar y\)</span></p> -<p>As another example, <span class="math inline">\(j(+)(y, \bar y, x_1, x_2) = (\bar y, \bar y)\)</span> since given <span class="math inline">\(y = x_1 + x_2\)</span>, <span class="math inline">\(\bar{x_1} = \bar{x_2} = \bar y\)</span>.</p> -<p>Similarly,</p> -<ol type="1"> -<li><span class="math inline">\(j(/)(y, \bar y, x_1, x_2) = (\bar y / x_2, - \bar y x_1 / x_2^2)\)</span></li> -<li><span class="math inline">\(j(\log)(y, \bar y, x) = \bar y / x\)</span></li> -<li><span class="math inline">\(j((A, \beta) \mapsto A \beta)(y, \bar y, A, \beta) = (\bar y \otimes \beta, A^T \bar y)\)</span>.</li> -<li>etc.</li> -</ol> -<p>In the third one, the function is a matrix <span class="math inline">\(A\)</span> multiplied on the right by a column vector <span class="math inline">\(\beta\)</span>, and <span class="math inline">\(\bar y \otimes \beta\)</span> is the tensor product, which is a fancy way of writing <span class="math inline">\(\bar y \beta^T\)</span>. See <a href="https://github.com/mattjj/autodidact/blob/master/autograd/numpy/numpy_vjps.py">numpy_vjps.py</a> for the implementation in autodidact.</p> -<p>So, given a node say <span class="math inline">\(y = y(x_1, x_2, ..., x_n)\)</span>, and given the values of <span class="math inline">\(y\)</span>, <span class="math inline">\(x_{1 : n}\)</span> and <span class="math inline">\(\bar y\)</span>, rmAD computes the values of <span class="math inline">\(\bar x_{1 : n}\)</span> by using the Jacobians.</p> -<p>This is the gist of rmAD. It stores the values of each node in a forward pass, and computes the derivatives of each node exactly once in a backward pass.</p> -<p>It is a nice exercise to derive backpropagation for fully connected feedforward neural networks (e.g. <a href="http://neuralnetworksanddeeplearning.com/chap2.html#the_four_fundamental_equations_behind_backpropagation">the one for MNIST in Neural Networks and Deep Learning</a>) using rmAD.</p> -<p>AD is an approach lying between the extremes of numerical approximation (e.g. finite difference) and symbolic evaluation. It uses exact formulas (VJPs) at each elementary operation like symbolic evaluation, while evaluating each VJP numerically rather than lumping all the VJPs into an unwieldy symbolic formula.</p> -<p>Things to look further into: the higher-order functional currying form <span class="math inline">\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)</span> begs for a functional programming implementation.</p> - - - - Updates on open research - posts/2018-04-10-update-open-research.html - 2018-04-29T00:00:00Z - - - Yuchen Pei - - <p>It has been 9 months since I last wrote about open (maths) research.
Since then two things happened which prompted me to write an update.</p> -<p>As always I discuss open research only in mathematics, not because I think it should not be applied to other disciplines, but simply because I have neither experience in nor sufficient interest in non-mathematical subjects.</p> -<p>First, I read about Richard Stallman, the founder of the free software movement, in <a href="http://shop.oreilly.com/product/9780596002879.do">his biography by Sam Williams</a> and his own collection of essays <a href="https://shop.fsf.org/books-docs/free-software-free-society-selected-essays-richard-m-stallman-3rd-edition"><em>Free software, free society</em></a>, from which I learned a bit more about the context and philosophy of free software and its relation to that of open source software. For anyone interested in open research, I highly recommend having a look at these two books. I am also reading Levy’s <a href="http://www.stevenlevy.com/index.php/books/hackers">Hackers</a>, which documents the development of the hacker culture predating Stallman. I can see the connection of ideas from the hacker ethic to the free software philosophy and to the open source philosophy. My guess is that the software world is fortunate to have had pioneers who advocated for various kinds of freedom and openness from the beginning, whereas in academia, which has a much longer history, credit protection has always been a bigger concern.</p> -<p>Also, a month ago I attended a workshop called <a href="https://www.perimeterinstitute.ca/conferences/open-research-rethinking-scientific-collaboration">Open research: rethinking scientific collaboration</a>. That was the first time I met a group of people (mostly physicists) who also want open research to happen, and we had some stimulating discussions. Many thanks to the organisers at Perimeter Institute for organising the event, and special thanks to <a href="https://www.perimeterinstitute.ca/people/matteo-smerlak">Matteo Smerlak</a> and <a href="https://www.perimeterinstitute.ca/people/ashley-milsted">Ashley Milsted</a> for the invitation and hosting.</p> -<p>Both of these made me feel I should write an updated post on open research.</p> -<h3 id="freedom-and-community">Freedom and community</h3> -<p>Ideals matter. Stallman’s struggles stemmed from the frustration of a denied request for source code (a frustration I have shared in academia, except with source code replaced by maths knowledge), and revolved around two things that underlie the free software movement: freedom and community. That is, the freedom to use, modify and share a work, and by sharing, to help the community.</p> -<p>Likewise, as for open research, apart from the utilitarian view that open research is more efficient / harder for credit theft, we should not ignore the ethical aspect: open research is right and fair. In particular, I think freedom and community can also serve as principles in open research. One way to make this argument more concrete is to describe what I feel are the central problems: NDAs (non-disclosure agreements) and reproducibility.</p> -<p><strong>NDAs</strong>. It is assumed that when establishing a research collaboration, or just having a discussion, all those involved own the joint work in progress, and no one has the freedom to disclose any information, e.g. intermediate results, without getting permission from all collaborators. In effect this amounts to signing an NDA.
NDAs are harmful because they restrict people’s freedom to share information that can benefit their own or others’ research. Considering that, in contrast to the private sector, the primary goal of academia is knowledge rather than profit, NDAs in research are unacceptable.</p> -<p><strong>Reproducibility</strong>. Research papers are not necessarily reproducible, even though they appear in peer-reviewed journals. This is because the peer-review process is opaque and the proofs in the papers may not be clear to everyone. To make things worse, there are no open channels to discuss results in these papers and one may have to rely on interacting with the small circle of the informed. One example is folk theorems. Another is trade secrets required to decipher published works.</p> -<p>I should clarify that freedom works both ways. One should have the freedom to disclose maths knowledge, but they should also be free to withhold any information that does not hamper the reproducibility of published works (e.g. results in ongoing research yet to be published), even though it may not be nice to do so when such information can help others with their research.</p> -<p>Similar to the solution offered by the free software movement, we need a community that promotes and respects the free flow of maths knowledge, in the spirit of the <a href="https://www.gnu.org/philosophy/">four essential freedoms</a>, a community that rejects NDAs and upholds reproducibility.</p> -<p>Here are some ideas on how to tackle these two problems and build the community:</p> -<ol type="1"> -<li>Free licensing. It solves the NDA problem - free licenses permit redistribution and modification of works, so if you adopt them in your joint work, then you have the freedom to modify and distribute the work; it also helps with reproducibility - if a paper is not clear, anyone can write their own version and publish it. Bonus points with the use of copyleft licenses like <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Share-Alike</a> or the <a href="https://www.gnu.org/licenses/fdl.html">GNU Free Documentation License</a>.</li> -<li>A forum for discussions of mathematics. It helps solve the reproducibility problem - public interaction may help quickly clarify problems. By the way, Math Overflow is not a forum.</li> -<li>An infrastructure of mathematical knowledge. Like the GNU system, a mathematics encyclopedia under a copyleft license, maintained Github-style rather than Wikipedia-style by a “Free Mathematics Foundation”, and drawing contributions from the public (inside or outside of academia). To begin with, crowd-source (again, Github-style) the proofs of, say, 1000 foundational theorems covered in the curriculum of a bachelor’s degree. Perhaps start by taking contributions from people with some credentials (e.g. having a bachelor’s degree in maths) and then expand the contribution permission to the public, or take advantage of an existing corpus under a free license, like Wikipedia.</li> -<li>Citing with care: if a work is considered authoritative but you couldn’t reproduce the results, whereas another paper which tries to explain or discuss similar results makes the first paper understandable to you, give both papers due attribution (something like: see [1], but I couldn’t reproduce the proof in [1], and the proofs in [2] helped clarify it).
No one should be offended if you say you cannot reproduce something - there may be causes on both sides, whereas citing [2] is fairer and helps readers with a similar background.</li> -</ol> -<h3 id="tools-for-open-research">Tools for open research</h3> -<p>The open research workshop revolved around how to lead academia towards a more open culture. There were discussions on open research tools, improving credit attribution, the peer-review process and the path to adoption.</p> -<p>During the workshop many efforts for open research were mentioned, and afterwards I was also made aware of more of them, like the following:</p> -<ul> -<li><a href="https://osf.io">OSF</a>, an online research platform. It has a clean and simple interface with commenting, wiki, citation generation, DOI generation, tags, license generation etc. Like Github it supports private and public repositories (but defaults to private), and version control, with the ability to fork or bookmark a project.</li> -<li><a href="https://scipost.org/">SciPost</a>, physics journals whose peer review reports and responses are public (peer-witnessed refereeing), and which allow comments (post-publication evaluation). Like arXiv, it requires some academic credential (PhD or above) to register.</li> -<li><a href="https://knowen.org/">Knowen</a>, a platform to organise knowledge in directed acyclic graphs. Could be useful for building the infrastructure of mathematical knowledge.</li> -<li><a href="https://fermatslibrary.com/">Fermat’s Library</a>, the journal club website that crowd-annotates one notable paper per week, released a Chrome extension <a href="https://fermatslibrary.com/librarian">Librarian</a> that overlays a commenting interface on arXiv. As an example, Ian Goodfellow did an <a href="https://fermatslibrary.com/arxiv_comments?url=https://arxiv.org/pdf/1406.2661.pdf">AMA (ask me anything) on his GAN paper</a>.</li> -<li><a href="https://polymathprojects.org/">The Polymath project</a>, the famous massive collaborative mathematical project. Not exactly new, the Polymath project is the only open maths research project that has gained some traction and recognition. However, it does not have many active projects (<a href="http://michaelnielsen.org/polymath1/index.php?title=Main_Page">currently only one active project</a>).</li> -<li><a href="https://stacks.math.columbia.edu/">The Stacks Project</a>. I was made aware of this project by <a href="https://people.kth.se/~yitingl/">Yiting</a>. Its data is hosted on Github, it accepts contributions via pull requests, and it is licensed under the GNU Free Documentation License, ticking many boxes of the free and open source model.</li> -</ul> -<h3 id="an-anecdote-from-the-workshop">An anecdote from the workshop</h3> -<p>In a conversation during the workshop, one of the participants called open science “normal science”, because reproducibility, open access, collaborations, and fair attributions are all what science is supposed to be, and practices like treating the readers as buyers rather than users should be called “bad science”, rather than “closed science”.</p> -<p>To which an organiser replied: maybe we should rename the workshop “Not-bad science”.</p> - - - - The Mathematical Bazaar - posts/2017-08-07-mathematical_bazaar.html - 2017-08-07T00:00:00Z - - - Yuchen Pei - - <p>In this essay I describe some problems in mathematical academia and propose an open source model, which I call open research in mathematics.</p> -<p>This essay is a work in progress - comments and criticisms are welcome!
<a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a></p> -<p>Before I start I should point out that</p> -<ol type="1"> -<li>Open research is <em>not</em> open access. In fact the latter is a prerequisite to the former.</li> -<li>I am not proposing to replace the current academic model with the open model - I know academia works well for many people and I am happy for them, but I think an open research community is long overdue since the wide adoption of the World Wide Web more than two decades ago. In fact, I fail to see why an open model can not run in tandem with the academia, just like open source and closed source software development coexist today.</li> -</ol> -<h2 id="problems-of-academia">problems of academia</h2> -<p>Open source projects are characterised by publicly available source codes as well as open invitations for public collaborations, whereas closed source projects do not make source codes accessible to the public. How about mathematical academia then, is it open source or closed source? The answer is neither.</p> -<p>Compared to some other scientific disciplines, mathematics does not require expensive equipments or resources to replicate results; compared to programming in conventional software industry, mathematical findings are not meant to be commercial, as credits and reputation rather than money are the direct incentives (even though the former are commonly used to trade for the latter). It is also a custom and common belief that mathematical derivations and theorems shouldn't be patented. Because of this, mathematical research is an open source activity in the sense that proofs to new results are all available in papers, and thanks to open access e.g. the arXiv preprint repository most of the new mathematical knowledge is accessible for free.</p> -<p>Then why, you may ask, do I claim that maths research is not open sourced? Well, this is because 1. mathematical arguments are not easily replicable and 2. mathematical research projects are mostly not open for public participation.</p> -<p>Compared to computer programs, mathematical arguments are not written in an unambiguous language, and they are terse and not written in maximum verbosity (this is especially true in research papers as journals encourage limiting the length of submissions), so the understanding of a proof depends on whether the reader is equipped with the right background knowledge, and the completeness of a proof is highly subjective. More generally speaking, computer programs are mostly portable because all machines with the correct configurations can understand and execute a piece of program, whereas humans are subject to their environment, upbringings, resources etc. to have a brain ready to comprehend a proof that interests them. (these barriers are softer than the expensive equipments and resources in other scientific fields mentioned before because it is all about having access to the right information)</p> -<p>On the other hand, as far as the pursuit of reputation and prestige (which can be used to trade for the scarce resource of research positions and grant money) goes, there is often little practical motivation for career mathematicians to explain their results to the public carefully. And so the weird reality of the mathematical academia is that it is not an uncommon practice to keep trade secrets in order to protect one's territory and maintain a monopoly. 
This is doable because as long as a paper passes the opaque and sometimes political peer review process and is accepted by a journal, it is considered work done, is accepted by the whole academic community and adds to the reputation of the author(s). Just like in the software industry, trade secrets and monopolies hinder the development of research as a whole, as well as demoralise outsiders who are interested in participating in related research.</p> -<p>Apart from trade secrets and territoriality, another reason for the nonexistence of an open research community is an elitist tradition in the mathematical academia, which goes as follows:</p> -<ul> -<li>Whoever is not good at mathematics or does not possess a degree in maths is not eligible to do research, or else they run a high risk of being labelled a crackpot.</li> -<li>Mistakes made by established mathematicians are more tolerable than those made by less established ones.</li> -<li>Good mathematical writings should be deep, and expositions of non-original results are viewed as inferior work and do not add to (and in some cases may even damage) one’s reputation.</li> -</ul> -<p>All these customs potentially discourage public participation in mathematical research, and I do not see them going away easily unless an open source community gains momentum.</p> -<p>To solve the above problems, I propose an open source model of mathematical research, which has high levels of openness and transparency and also has some added benefits listed in the last section of this essay. This model tries to achieve two major goals:</p> -<ul> -<li>Open and public discussions and collaborations of mathematical research projects online</li> -<li>Open review to validate results, where author name, reviewer name, comments and responses are all publicly available online.</li> -</ul> -<p>To this end, a Github model is fitting. Let me first describe how open source collaboration works on Github.</p> -<h2 id="open-source-collaborations-on-github">open source collaborations on Github</h2> -<p>On <a href="https://github.com">Github</a>, every project is publicly available in a repository (we do not consider private repos). The owner can update the project by &quot;committing&quot; changes, which include a message of what has been changed, the author of the changes and a timestamp. Each project has an issue tracker, which is basically a discussion forum about the project, where anyone can open an issue (start a discussion), and the owner of the project as well as the original poster of the issue can close it if it is resolved, e.g. bug fixed, feature added, or out of the scope of the project. Closing the issue is like ending the discussion, except that the thread is still open to more posts for anyone interested. People can react to each issue post, e.g. upvote, downvote, celebration, and importantly, all the reactions are public too, so you can find out who upvoted or downvoted your post.</p> -<p>When one is interested in contributing code to a project, they fork it, i.e. make a copy of the project, and make the changes they like in the fork. Once they are happy with the changes, they submit a pull request to the original project. The owner of the original project may accept or reject the request, and they can comment on the code in the pull request, asking for clarification, pointing out problematic parts of the code etc., and the author of the pull request can respond to the comments. Anyone, not just the owner, can participate in this review process, turning it into a public discussion.
In fact, a pull request is a special issue thread. Once the owner is happy with the pull request, they accept it and the changes are merged into the original project. The author of the changes will show up in the commit history of the original project, so they get the credits.</p> -<p>As an alternative to forking, if one is interested in a project but has a different vision, or if the maintainer has stopped working on it, they can clone it and make their own version. This is a more independent kind of fork because there is no longer an intention to contribute back to the original project.</p> -<p>Moreover, on Github there is no way to send private messages, which forces people to interact publicly. If, say, you want someone to see and reply to your comment in an issue post or pull request, you simply mention them by <code>@someone</code>.</p> -<h2 id="open-research-in-mathematics">open research in mathematics</h2> -<p>All this points to a promising direction of open research. A maths project may have a wiki / collection of notes, the paper being written, computer programs implementing the results etc. The issue tracker can serve as a discussion forum about the project as well as a platform for open review (bugs are analogous to mistakes, enhancements are possible ways of improving the main results etc.), and anyone can make their own version of the project, and (optionally) contribute back by making pull requests, which will also be openly reviewed. One may want to add an extra &quot;review this project&quot; functionality, so that people can comment on the original project like they do in a pull request. This may or may not be necessary, as anyone can make comments or point out mistakes in the issue tracker.</p> -<p>One may doubt this model due to concerns about credit, because work in progress is available to anyone. Well, since all the contributions are trackable in the project commit history and in public discussions in issues and pull request reviews, there is in fact <em>less</em> room for cheating than in the current model in academia, where scooping can happen without any witnesses. What we need is a platform with a good amount of trust like arXiv, so that the open research community honours (and cannot ignore) the commit history, and the chance of mis-attribution can be reduced to a minimum.</p> -<p>Compared to the academic model, open research also has the following advantages:</p> -<ul> -<li>Anyone in the world with Internet access will have a chance to participate in research, whether they are affiliated to a university, have the financial means to attend conferences, or are colleagues of one of the handful of experts in a specific field.</li> -<li>The problem of replicating / understanding maths results will be solved, as people help each other out. This will also remove the burden of answering queries about one’s research. For example, say one has a project &quot;Understanding the fancy results in [paper name]&quot;, they write up some initial notes but get stuck understanding certain arguments. In this case they can simply post the questions on the issue tracker, and anyone who knows the answer, or just has a speculation, can participate in the discussion.
In the end the problem may be resolved without bothering the authors of the paper, who may be too busy to answer.</li> -<li>Similarly, the burden of peer review can also be shifted from a few appointed reviewers to the crowd.</li> -</ul> -<h2 id="related-readings">related readings</h2> -<ul> -<li><a href="http://www.catb.org/esr/writings/cathedral-bazaar/">The Cathedral and the Bazaar by Eric Raymond</a></li> -<li><a href="http://michaelnielsen.org/blog/doing-science-online/">Doing science online by Michael Nielsen</a></li> -<li><a href="https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/">Is massively collaborative mathematics possible? by Timothy Gowers</a></li> -</ul> -<section class="footnotes"> -<hr /> -<ol> -<li id="fn1"><p>Please send your comments to my email address - I am still looking for ways to add a comment functionality to this website.<a href="#fnref1" class="footnote-back">↩</a></p></li> -</ol> -</section> - - - - Open mathematical research and launching toywiki - posts/2017-04-25-open_research_toywiki.html - 2017-04-25T00:00:00Z - - - Yuchen Pei - - <p>As an experimental project, I am launching toywiki.</p> -<p>It hosts a collection of my research notes.</p> -<p>It takes some ideas from the open source culture and applies them to mathematical research: 1. It uses a very permissive license (CC-BY-SA). For example, anyone can fork the project and make their own version if they have a different vision and want to build upon the project. 2. All edits will be done with maximum transparency, and discussions of any of the notes should also be as public as possible (e.g. Github issues). 3. Anyone can suggest changes by opening issues and submitting pull requests.</p> -<p>Here are the links: <a href="http://toywiki.xyz">toywiki</a> and <a href="https://github.com/ycpei/toywiki">github repo</a>.</p> -<p>Feedback is welcome by email.</p> - - - - A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer - posts/2016-10-13-q-robinson-schensted-knuth-polymer.html - 2016-10-13T00:00:00Z - - - Yuchen Pei - - <p>(Latest update: 2017-01-12) In <a href="http://arxiv.org/abs/1504.00666">Matveev-Petrov 2016</a> a \(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was introduced. In this article we give reformulations of this algorithm in terms of Noumi-Yamada description, growth diagrams and local moves. We show that the algorithm is symmetric, namely the output tableaux pair are swapped in a sense of distribution when the input matrix is transposed. We also formulate a \(q\)-polymer model based on the \(q\)RSK and prove the corresponding Burke property, which we use to show a strong law of large numbers for the partition function given stationary boundary conditions and \(q\)-geometric weights. We use the \(q\)-local moves to define a generalisation of the \(q\)RSK taking a Young diagram-shape of array as the input. We write down the joint distribution of partition functions in the space-like direction of the \(q\)-polymer in \(q\)-geometric environment, formulate a \(q\)-version of the multilayer polynuclear growth model (\(q\)PNG) and write down the joint distribution of the \(q\)-polymer partition functions at a fixed time.</p> -<p>This article is available at <a href="https://arxiv.org/abs/1610.03692">arXiv</a>. It seems to me that one difference between arXiv and Github is that on arXiv each preprint has only a few versions.
On Github many projects have a “dev” branch hosting continuous updates, whereas the master branch is where the stable releases live.</p> -<p><a href="%7B%7B%20site.url%20%7D%7D/assets/resources/qrsklatest.pdf">Here</a> is a “dev” version of the article, which I shall push to arXiv when it stabilises. Below is the changelog.</p> -<ul> -<li>2017-01-12: Typos and grammar, arXiv v2.</li> -<li>2016-12-20: Added remarks on the geometric \(q\)-pushTASEP. Added remarks on the converse of the Burke property. Added natural language description of the \(q\)RSK. Fixed typos.</li> -<li>2016-11-13: Fixed some typos in the proof of Theorem 3.</li> -<li>2016-11-07: Fixed some typos. The \(q\)-Burke property is now stated in a more symmetric way, so is the law of large numbers Theorem 2.</li> -<li>2016-10-20: Fixed a few typos. Updated some references. Added a reference: <a href="http://web.mit.edu/~shopkins/docs/rsk.pdf">a set of notes titled “RSK via local transformations”</a>. It is written by <a href="http://web.mit.edu/~shopkins/">Sam Hopkins</a> in 2014 as an expository article based on MIT combinatorics preseminar presentations of Alex Postnikov. It contains an idea (applying local moves to a general Young-diagram shaped array in the order that matches any growth sequence of the underlying Young diagram) which I thought I was the first one to write down.</li> -</ul> - - - - AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu - posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html - 2015-07-15T00:00:00Z - - - Yuchen Pei - - <p>A Macdonald superpolynomial (introduced in [O. Blondeau-Fournier et al., Lett. Math. Phys. <span class="bf">101</span> (2012), no. 1, 27–47; <a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&amp;s1=2935476&amp;loc=fromrevtext">MR2935476</a>; J. Comb. <span class="bf">3</span> (2012), no. 3, 495–561; <a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&amp;s1=3029444&amp;loc=fromrevtext">MR3029444</a>]) in \(N\) Grassmannian variables indexed by a superpartition \(\Lambda\) is said to be stable if \({m (m + 1) \over 2} \ge |\Lambda|\) and \(N \ge |\Lambda| - {m (m - 3) \over 2}\), where \(m\) is the fermionic degree. A stable Macdonald superpolynomial (corresponding to a bisymmetric polynomial) is also called a double Macdonald polynomial (dMp). The main result of this paper is the factorisation of a dMp into plethysms of two classical Macdonald polynomials (Theorem 5). Based on this result, this paper</p> -<ol type="1"> -<li><p>shows that the dMp has a unique decomposition into bisymmetric monomials;</p></li> -<li><p>calculates the norm of the dMp;</p></li> -<li><p>calculates the kernel of the Cauchy-Littlewood-type identity of the dMp;</p></li> -<li><p>shows the specialisation of the aforementioned factorisation to the Jack, Hall-Littlewood and Schur cases.
One of the three Schur specialisations, denoted as \(s_{\lambda, \mu}\), also appears in (7) and (9) below;</p></li> -<li><p>defines the \(\omega\) -automorphism in this setting, which was used to prove an identity involving products of four Littlewood-Richardson coefficients;</p></li> -<li><p>shows an explicit evaluation of the dMp motivated by the most general evaluation of the usual Macdonald polynomials;</p></li> -<li><p>relates dMps with the representation theory of the hyperoctahedral group \(B_n\) via the double Kostka coefficients (which are defined as the entries of the transition matrix from the bisymmetric Schur functions \(s_{\lambda, \mu}\) to the modified dMps);</p></li> -<li><p>shows that the double Kostka coefficients have the positivity and the symmetry property, and can be written as sums of products of the usual Kostka coefficients;</p></li> -<li><p>defines an operator \(\nabla^B\) as an analogue of the nabla operator \(\nabla\) introduced in [F. Bergeron and A. M. Garsia, in <em>Algebraic methods and \(q\)-special functions</em> (Montréal, QC, 1996), 1–52, CRM Proc. Lecture Notes, 22, Amer. Math. Soc., Providence, RI, 1999; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=1726826&amp;loc=fromrevtext">MR1726826</a>]. The action of \(\nabla^B\) on the bisymmetric Schur function \(s_{\lambda, \mu}\) yields the dimension formula \((h + 1)^r\) for the corresponding representation of \(B_n\) , where \(h\) and \(r\) are the Coxeter number and the rank of \(B_n\) , in the same way that the action of \(\nabla\) on the \(n\) th elementary symmetric function leads to the same formula for the group of type \(A_n\) .</p></li> -</ol> -<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3306078, its copyright owned by the AMS.</p> - - - - On a causal quantum double product integral related to Lévy stochastic area. - posts/2015-07-01-causal-quantum-product-levy-area.html - 2015-07-01T00:00:00Z - - - Yuchen Pei - - <p>In <a href="https://arxiv.org/abs/1506.04294">this paper</a> with <a href="http://homepages.lboro.ac.uk/~marh3/">Robin</a> we study the family of causal double product integrals \[ \prod_{a &lt; x &lt; y &lt; b}\left(1 + i{\lambda \over 2}(dP_x dQ_y - dQ_x dP_y) + i {\mu \over 2}(dP_x dP_y + dQ_x dQ_y)\right) \]</p> -<p>where <span class="math inline">\(P\)</span> and <span class="math inline">\(Q\)</span> are the mutually noncommuting momentum and position Brownian motions of quantum stochastic calculus. The evaluation is motivated heuristically by approximating the continuous double product by a discrete product in which infinitesimals are replaced by finite increments. The latter is in turn approximated by the second quantisation of a discrete double product of rotation-like operators in different planes due to a result in <a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&amp;vol=46&amp;page=1851">(Hudson-Pei2015)</a>. The main problem solved in this paper is the explicit evaluation of the continuum limit <span class="math inline">\(W\)</span> of the latter, and showing that <span class="math inline">\(W\)</span> is a unitary operator. 
The kernel of <span class="math inline">\(W\)</span> is written in terms of Bessel functions, and the evaluation is achieved by working on a lattice path model and enumerating linear extensions of related partial orderings, where the enumeration turns out to be heavily related to Dyck paths and generalisations of Catalan numbers.</p> - - - - AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore - posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html - 2015-05-30T00:00:00Z - - - Yuchen Pei - - <p>This paper is about the existence of pattern-avoiding infinite binary words, where the patterns are squares, cubes and \(3^+\)-powers. There are mainly two kinds of results, positive (existence of an infinite binary word avoiding a certain pattern) and negative (non-existence of such a word). Each positive result is proved by the construction of a word with finitely many squares and cubes which are listed explicitly. First a synchronising (also known as comma-free) uniform morphism \(g\: \Sigma_3^* \to \Sigma_2^*\) is constructed. Then an argument is given to show that the length of squares in the code \(g(w)\) for a squarefree \(w\) is bounded, hence all the squares can be obtained by examining all \(g(s)\) for \(s\) of bounded lengths. The argument resembles that of the proof of, e.g., Theorem 1, Lemma 2, Theorem 3 and Lemma 4 in [N. Rampersad, J. O. Shallit and M. Wang, Theoret. Comput. Sci. <strong>339</strong> (2005), no. 1, 19–34; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=2142071&amp;loc=fromrevtext">MR2142071</a>]. The negative results are proved by traversing all possible finite words satisfying the conditions.</p> -<p>Let \(L(n_2, n_3, S)\) be the maximum length of a word with \(n_2\) distinct squares, \(n_3\) distinct cubes, and whose squares have periods taking values only in \(S\), where \(n_2, n_3 \in \Bbb N \cup \{\infty, \omega\}\) and \(S \subset \Bbb N_+\). \(n_k = 0\) corresponds to \(k\)-free, \(n_k = \infty\) means no restriction on the number of distinct \(k\)-powers, and \(n_k = \omega\) means \(k^+\)-free.</p> -<p>Below is the summary of the positive and negative results:</p> -<ol type="1"> -<li><p>(Negative) \(L(\infty, \omega, 2 \Bbb N) &lt; \infty\): \(\nexists\) an infinite \(3^+\)-free binary word avoiding all squares of odd periods (Proposition 1).</p></li> -<li><p>(Negative) \(L(\infty, 0, 2 \Bbb N + 1) \le 23\): \(\nexists\) an infinite 3-free binary word avoiding squares of even periods. The longest one has length \(\le 23\) (Proposition 2).</p></li> -<li><p>(Positive) \(L(\infty, \omega, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even periods (Theorem 1).</p></li> -<li><p>(Positive) \(L(\infty, \omega, \{1, 3\}) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word containing only squares of period 1 or 3 (Theorem 2).</p></li> -<li><p>(Negative) \(L(6, 1, 2 \Bbb N + 1) = 57\): \(\nexists\) an infinite binary word avoiding squares of even period containing \(&lt; 7\) squares and \(&lt; 2\) cubes.
The longest one containing 6 squares and 1 cube has length 57 (Proposition 6).</p></li> -<li><p>(Positive) \(L(7, 1, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even period with 1 cube and 7 squares (Theorem 3).</p></li> -<li><p>(Positive) \(L(4, 2, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even period and containing 2 cubes and 4 squares (Theorem 4).</p></li> -</ol> -<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3313467, its copyright owned by the AMS.</p> - - - - jst - posts/2015-04-02-juggling-skill-tree.html - 2015-04-02T00:00:00Z - - - Yuchen Pei - - <p>jst = juggling skill tree</p> -<p>If you have ever played a computer role playing game, you may have noticed the protagonist sometimes has a skill “tree” (most of the time it is actually a directed acyclic graph), where certain skills lead to others. For example, <a href="http://hydra-media.cursecdn.com/diablo.gamepedia.com/3/37/Sorceress_Skill_Trees_%28Diablo_II%29.png?version=b74b3d4097ef7ad4e26ebee0dcf33d01">here</a> is the skill tree of the sorceress in <a href="https://en.wikipedia.org/wiki/Diablo_II">Diablo II</a>.</p> -<p>Now suppose our hero embarks on a quest for learning all the possible juggling patterns. Everyone would agree she should start with cascade, the simplest nontrivial 3-ball pattern, but what afterwards? A few other accessible patterns for beginners are juggler’s tennis, two in one and even reverse cascade, but what to learn after that? The encyclopaedic <a href="http://libraryofjuggling.com/">Library of Juggling</a> serves as a good guide, as it records more than 160 patterns, some of which are very aesthetically appealing. On this website almost all the patterns have a “prerequisite” section, indicating what one should learn beforehand. I have therefore written a script using <a href="http://python.org">Python</a>, <a href="http://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> and <a href="http://pygraphviz.github.io/">pygraphviz</a> to generate a jst (graded by difficulty, which is the leftmost column) from the Library of Juggling (click the image for the full size):</p> -<p><a href="../assets/resources/juggling.png"><img src="../assets/resources/juggling.png" alt="The juggling skill tree" style="width:38em" /></a></p> - - - - Unitary causal quantum stochastic double products as universal interactions I - posts/2015-04-01-unitary-double-products.html - 2015-04-01T00:00:00Z - - - Yuchen Pei - - <p>In <a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&amp;vol=46&amp;page=1851">this paper</a> with <a href="http://homepages.lboro.ac.uk/~marh3/">Robin</a> we show the explicit formulae for a family of unitary triangular and rectangular double product integrals which can be described as second quantisations.</p> - - - - AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc - posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html - 2015-01-20T00:00:00Z - - - Yuchen Pei - - <p>The super Catalan numbers are defined as $$ T(m,n) = {(2 m)! (2 n)! \over 2\, m!\, n!\, (m + n)!}. $$</p> -<p>This paper has two main results.
First a combinatorial interpretation of the super Catalan numbers is given: $$ T(m,n) = P(m,n) - N(m,n) $$ where \(P(m,n)\) enumerates the number of 2-Motzkin paths whose \(m\) -th step begins at an even level (called \(m\)-positive paths) and \(N(m,n)\) those with \(m\)-th step beginning at an odd level (\(m\)-negative paths). The proof uses a recursive argument on the number of \(m\)-positive and -negative paths, based on a recursion of the super Catalan numbers appearing in [I. M. Gessel, J. Symbolic Comput. <strong>14</strong> (1992), no. 2-3, 179–194; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=1187230&amp;loc=fromrevtext">MR1187230</a>]: $$ 4T(m,n) = T(m+1, n) + T(m, n+1). $$ This result gives an expression for the super Catalan numbers in terms of numbers counting the so-called ballot paths. The latter sometimes are also referred to as the generalised Catalan numbers forming the entries of the Catalan triangle.</p> -<p>   Based on the first result, the second result is a combinatorial interpretation of the super Catalan numbers \(T(2,n)\) in terms of counting certain Dyck paths. This is equivalent to a theorem, which represents \(T(2,n)\) as counting of certain pairs of Dyck paths, in [I. M. Gessel and G. Xin, J. Integer Seq. <strong>8</strong> (2005), no. 2, Article 05.2.3, 13 pp.; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=2134162&amp;loc=fromrevtext">MR2134162</a>], and the equivalence is explained at the end of the paper by a bijection between the Dyck paths and the pairs of Dyck paths. The proof of the theorem itself is also done by constructing two bijections between Dyck paths satisfying certain conditions. All the three bijections are formulated by locating, removing and adding steps.</p> -<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3275875, its copyright owned by the AMS.</p> - - - - Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms - posts/2014-04-01-q-robinson-schensted-symmetry-paper.html - 2014-04-01T00:00:00Z - - - Yuchen Pei - - <p>In <a href="http://link.springer.com/article/10.1007/s10801-014-0505-x">this paper</a> a symmetry property analogous to the well known symmetry property of the normal Robinson-Schensted algorithm has been shown for the \(q\)-weighted Robinson-Schensted algorithm. The proof uses a generalisation of the growth diagram approach introduced by Fomin. This approach, which uses “growth graphs”, can also be applied to a wider class of insertion algorithms which have a branching structure.</p> -<figure> -<img src="../assets/resources/1423graph.jpg" alt="Growth graph of q-RS for 1423" /><figcaption>Growth graph of q-RS for 1423</figcaption> -</figure> -<p>Above is the growth graph of the \(q\)-weighted Robinson-Schensted algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).</p> - - - - A \(q\)-weighted Robinson-Schensted algorithm - posts/2013-06-01-q-robinson-schensted-paper.html - 2013-06-01T00:00:00Z - - - Yuchen Pei - - <p>In <a href="https://projecteuclid.org/euclid.ejp/1465064320">this paper</a> with <a href="http://www.bristol.ac.uk/maths/people/neil-m-oconnell/">Neil</a> we construct a \(q\)-version of the Robinson-Schensted algorithm with column insertion. Like the <a href="http://en.wikipedia.org/wiki/Robinson–Schensted_correspondence">usual RS correspondence</a> with column insertion, this algorithm could take words as input. 
Unlike the usual RS algorithm, the output is a set of weighted pairs of semistandard and standard Young tableaux \((P,Q)\) with the same shape. The weights are rational functions of the indeterminate \(q\).</p> -<p>If \(q\in[0,1]\), the algorithm can be considered as a randomised RS algorithm, with 0 and 1 being two interesting cases. When \(q\to0\), it reduces to the usual RS algorithm with column insertion; while when \(q\to1\), with proper scaling, it should scale to the directed random polymer model in <a href="http://arxiv.org/abs/0910.0069">(O’Connell 2012)</a>. When the input word \(w\) is a random walk:</p> -<p>\begin{align*}\mathbb P(w=v)=\prod_{i=1}^na_{v_i},\qquad\sum_ja_j=1\end{align*}</p> -<p>the shape of the output evolves as a Markov chain with kernel related to \(q\)-Whittaker functions, which are, up to a factor, Macdonald functions with \(t=0\).</p> - - - diff --git a/site/blog.html b/site/blog.html deleted file mode 100644 index 3222e3a..0000000 --- a/site/blog.html +++ /dev/null @@ -1,62 +0,0 @@ - - - - - Yuchen's Blog - - - - - -
- - -
- -
-
-

Automatic differentiation

-

Posted on 2018-06-03

-

This post is meant as a documentation of my understanding of autodiff. I benefited a lot from Toronto CSC321 slides and the autodidact project which is a pedagogical implementation of Autograd. That said, any mistakes in this note are mine (especially since some of the knowledge is obtained from interpreting slides!), and if you do spot any I would be grateful if you could let me know.

- - Continue reading -
-
-

Updates on open research

-

Posted on 2018-04-29

-

It has been 9 months since I last wrote about open (maths) research. Since then two things happened which prompted me to write an update.

- - Continue reading -
-
-

The Mathematical Bazaar

-

Posted on 2017-08-07

-

In this essay I describe some problems in mathematical academia and propose an open source model, which I call open research in mathematics.

- - Continue reading -
-
-

Open mathematical research and launching toywiki

-

Posted on 2017-04-25

-

As an experimental project, I am launching toywiki.

- - Continue reading -
-
-

A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer

-

Posted on 2016-10-13

-

(Latest update: 2017-01-12) In Matveev-Petrov 2016 a \(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was introduced. In this article we give reformulations of this algorithm in terms of Noumi-Yamada description, growth diagrams and local moves. We show that the algorithm is symmetric, namely the output tableaux pair are swapped in a sense of distribution when the input matrix is transposed. We also formulate a \(q\)-polymer model based on the \(q\)RSK and prove the corresponding Burke property, which we use to show a strong law of large numbers for the partition function given stationary boundary conditions and \(q\)-geometric weights. We use the \(q\)-local moves to define a generalisation of the \(q\)RSK taking a Young diagram-shape of array as the input. We write down the joint distribution of partition functions in the space-like direction of the \(q\)-polymer in \(q\)-geometric environment, formulate a \(q\)-version of the multilayer polynuclear growth model (\(q\)PNG) and write down the joint distribution of the \(q\)-polymer partition functions at a fixed time.

- - Continue reading -
- - -
- - diff --git a/site/index.html b/site/index.html deleted file mode 100644 index a167ab1..0000000 --- a/site/index.html +++ /dev/null @@ -1,33 +0,0 @@ - - - - - Yuchen Pei - - - - - -
- - -
- -
-
-

Yuchen is a post-doctoral researcher in mathematics at the KTH RMSMA group. Before KTH he did a PhD at the MASDOC program at Warwick, and spent two years in a postdoc position at CMSA at Harvard.

-

He is interested in machine learning and functional programming.

-

He is also interested in the idea of open research and open sourced his research in Robinson-Schensted algorithms as a wiki.

-

He can be reached at: ypei@kth.se | hi@ypei.me | Github | LinkedIn

-

This website is made using a handmade static site generator.

-

Unless otherwise specified, all contents on this website are licensed under Creative Commons Attribution-NoDerivatives 4.0 International License.

- -
-
- - - diff --git a/site/links.html b/site/links.html deleted file mode 100644 index fdff77a..0000000 --- a/site/links.html +++ /dev/null @@ -1,84 +0,0 @@ - - - - - Links - - - - - -
- - -
- -
-
-

Here are some links I find interesting or helpful, or both. Listed in no particular order.

- - -
-
- - - diff --git a/site/microblog-feed.xml b/site/microblog-feed.xml deleted file mode 100644 index 4563861..0000000 --- a/site/microblog-feed.xml +++ /dev/null @@ -1,291 +0,0 @@ - - - Yuchen Pei's Microblog - https://ypei.me/microblog-feed.xml - 2018-05-30T00:00:00Z - - - - Yuchen Pei - - PyAtom - - 2018-05-30 - microblog.html - 2018-05-30T00:00:00Z - - - Yuchen Pei - - <p>Roger Grosse’s post <a href="https://metacademy.org/roadmaps/rgrosse/learn_on_your_own">How to learn on your own (2015)</a> is an excellent modern guide on how to learn and research technical stuff (especially machine learning and maths) on one’s own.</p> - - - - 2018-05-25 - microblog.html - 2018-05-25T00:00:00Z - - - Yuchen Pei - - <p><a href="http://jdlm.info/articles/2018/03/18/markov-decision-process-2048.html">This post</a> models 2048 as an MDP and solves it using policy iteration and backward induction.</p> - - - - 2018-05-22 - microblog.html - 2018-05-22T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>ATS (Applied Type System) is a programming language designed to unify programming with formal specification. ATS has support for combining theorem proving with practical programming through the use of advanced type systems. A past version of The Computer Language Benchmarks Game has demonstrated that the performance of ATS is comparable to that of the C and C++ programming languages. By using theorem proving and strict type checking, the compiler can detect and prove that its implemented functions are not susceptible to bugs such as division by zero, memory leaks, buffer overflow, and other forms of memory corruption by verifying pointer arithmetic and reference counting before the program compiles. Additionally, by using the integrated theorem-proving system of ATS (ATS/LF), the programmer may make use of static constructs that are intertwined with the operative code to prove that a function attains its specification.</p> -</blockquote> -<p><a href="https://en.wikipedia.org/wiki/ATS_(programming_language)">Wikipedia entry on ATS</a></p> - - - - 2018-05-20 - microblog.html - 2018-05-20T00:00:00Z - - - Yuchen Pei - - <p>(5-second fame) I sent a picture of my kitchen sink to BBC and got mentioned in the <a href="https://www.bbc.co.uk/programmes/w3cswg8c">latest Boston Calling episode</a> (listen at 25:54).</p> - - - - 2018-05-18 - microblog.html - 2018-05-18T00:00:00Z - - - Yuchen Pei - - <p><a href="https://colah.github.io/">colah’s blog</a> has a cool feature that allows you to comment on any paragraph of a blog post. Here’s an <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/">example</a>. If it is doable on a static site hosted on Github pages, I suppose it shouldn’t be too hard to implement. This also seems to work more seamlessly than <a href="https://fermatslibrary.com/">Fermat’s Library</a>, because the latter has to embed pdfs in webpages. Now fantasy time: imagine that one day arXiv shows html versions of papers (through author uploading or conversion from TeX) with this feature.</p> - - - - 2018-05-15 - microblog.html - 2018-05-15T00:00:00Z - - - Yuchen Pei - - <h3 id="notes-on-random-froests">Notes on random forests</h3> -<p><a href="https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info">Stanford Lagunita’s statistical learning course</a> has some excellent lectures on random forests. It starts with explanations of decision trees, followed by bagged trees and random forests, and ends with boosting. From these lectures it seems that:</p> -<ol type="1"> -<li>The term “predictors” in statistical learning = “features” in machine learning.</li> -<li>The main idea of random forests, namely dropping predictors for individual trees and aggregating by majority or average, is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers are dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is: in random forests, all but roughly the square root of the total number of features are dropped, whereas the dropout ratio in neural networks is usually a half (see the code sketch after the figure below).</li> -</ol> -<p>By the way, here’s a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:</p> -<p><a href="../assets/resources/sl-vs-ml.png"><img src="../assets/resources/sl-vs-ml.png" alt="SL vs ML" style="width:38em" /></a></p>
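-<p>To spell out the comparison in point 2 in code: below is a sketch of my own, assuming scikit-learn is available; the dropout line in the comment refers to the Keras API.</p>
-<pre><code>from sklearn.ensemble import RandomForestClassifier

# Random forest: each split of each tree considers only a random subset of
# about sqrt(n_features) predictors; trees are aggregated by majority vote.
rf = RandomForestClassifier(n_estimators=100, max_features='sqrt')

# Dropout, by contrast, drops a random half of the hidden units at each
# training step, e.g. keras.layers.Dropout(rate=0.5), effectively
# averaging over an ensemble of subnetworks.</code></pre>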
From these lectures it seems that:</p> -<ol type="1"> -<li>The term “predictors” in statistical learning = “features” in machine learning.</li> -<li>The main idea of random forests of dropping predictors for individual trees and aggregate by majority or average is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers are dropped temporarily during different minibatches of training, effectively averaging over an emsemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is: in random forests, all but a square root number of the total number of features are dropped, whereas the dropout ratio in neural networks is usually a half.</li> -</ol> -<p>By the way, here’s a comparison between statistical learning and machine learning from the slides of the Statistcal Learning course:</p> -<p><a href="../assets/resources/sl-vs-ml.png"><img src="../assets/resources/sl-vs-ml.png" alt="SL vs ML" style="width:38em" /></a></p> - - - - 2018-05-14 - microblog.html - 2018-05-14T00:00:00Z - - - Yuchen Pei - - <h3 id="open-peer-review">Open peer review</h3> -<p>Open peer review means peer review process where communications e.g. comments and responses are public.</p> -<p>Like <a href="https://scipost.org/">SciPost</a> mentioned in <a href="/posts/2018-04-10-update-open-research.html">my post</a>, <a href="https://openreview.net">OpenReview.net</a> is an example of open peer review in research. It looks like their focus is machine learning. Their <a href="https://openreview.net/about">about page</a> states their mission, and here’s <a href="https://openreview.net/group?id=ICLR.cc/2018/Conference">an example</a> where you can click on each entry to see what it is like. We definitely need this in the maths research community.</p> - - - - 2018-05-11 - microblog.html - 2018-05-11T00:00:00Z - - - Yuchen Pei - - <h3 id="some-notes-on-rnn-fsm-fa-tm-and-utm">Some notes on RNN, FSM / FA, TM and UTM</h3> -<p>Related to <a href="#neural-turing-machine">a previous micropost</a>.</p> -<p><a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf">These slides from Toronto</a> are a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.</p> -<p><a href="http://www.deeplearningbook.org/contents/rnn.html">Goodfellow et. al.’s book</a> (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the <em>universal</em> Turing machine abbr. 
UTM (the book referenced <a href="https://www.sciencedirect.com/science/article/pii/S0022000085710136">Siegelmann-Sontag</a>), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).</p> -<p>By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in <a href="https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview">Hinton’s video</a>.</p> -<p>From what I have learned, the universality of RNN and feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.</p> - - - - 2018-05-10 - microblog.html - 2018-05-10T00:00:00Z - - - Yuchen Pei - - <h3 id="writing-readable-mathematics-like-writing-an-operating-system">Writing readable mathematics like writing an operating system</h3> -<p>One way to write readable mathematics is to decouple concepts. One idea is the following template. First write a toy example with all the important components present in this example, then analyse each component individually and elaborate how (perhaps more complex) variations of the component can extend the toy example and induce more complex or powerful versions of the toy example. Through such incremental development, one should be able to arrive at any result in cutting edge research after a pleasant journey.</p> -<p>It’s a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc, and modify / improve each module individually (H/t <a href="http://nand2tetris.org/">NAND2Tetris</a>).</p> -<p>The book <a href="http://neuralnetworksanddeeplearning.com/">Neutral networks and deep learning</a> by Michael Nielsen is an example of such approach. It begins the journey with a very simple neutral net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component including cost functions, the back propagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN) individually and improve the toy example incrementally. Over the course the accuracy of the example of mnist grows incrementally from 95.42% to 99.67%.</p> - - - - 2018-05-09 - microblog.html - 2018-05-09T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>What makes the rectified linear activation function better than the sigmoid or tanh functions? At present, we have a poor understanding of the answer to this question. Indeed, rectified linear units have only begun to be widely used in the past few years. The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments. They got good results classifying benchmark data sets, and the practice has spread. In an ideal world we’d have a theory telling us which activation function to pick for which application. But at present we’re a long way from such a world. I should not be at all surprised if further major improvements can be obtained by an even better choice of activation function. And I also expect that in coming decades a powerful theory of activation functions will be developed. 
Today, we still have to rely on poorly understood rules of thumb and experience.</p> -</blockquote> -<p>Michael Nielsen, <a href="http://neuralnetworksanddeeplearning.com/chap6.html#convolutional_neural_networks_in_practice">Neutral networks and deep learning</a></p> - - - - 2018-05-09 - microblog.html - 2018-05-09T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages. <a href="https://arxiv.org/abs/1410.4615">A 2014 paper</a> developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output. Informally, the network is learning to “understand” certain Python programs. <a href="https://arxiv.org/abs/1410.5401">A second paper, also from 2014</a>, used RNNs as a starting point to develop what they called a neural Turing machine (NTM). This is a universal computer whose entire structure can be trained using gradient descent. They trained their NTM to infer algorithms for several simple problems, such as sorting and copying.</p> -<p>As it stands, these are extremely simple toy models. Learning to execute the Python program <code>print(398345+42598)</code> doesn’t make a network into a full-fledged Python interpreter! It’s not clear how much further it will be possible to push the ideas. Still, the results are intriguing. Historically, neural networks have done well at pattern recognition problems where conventional algorithmic approaches have trouble. Vice versa, conventional algorithmic approaches are good at solving problems that neural nets aren’t so good at. No-one today implements a web server or a database program using a neural network! It’d be great to develop unified models that integrate the strengths of both neural networks and more traditional approaches to algorithms. RNNs and ideas inspired by RNNs may help us do that.</p> -</blockquote> -<p>Michael Nielsen, <a href="http://neuralnetworksanddeeplearning.com/chap6.html#other_approaches_to_deep_neural_nets">Neural networks and deep learning</a></p> - - - - 2018-05-08 - microblog.html - 2018-05-08T00:00:00Z - - - Yuchen Pei - - <p>Primer Science is a tool by a startup called Primer that uses NLP to summarize contents (but not single papers, yet) on arxiv. A developer of this tool predicts in <a href="https://twimlai.com/twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon/#">an interview</a> that progress on AI’s ability to extract meanings from AI research papers will be the biggest accelerant on AI research.</p> - - - - 2018-05-08 - microblog.html - 2018-05-08T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>no-one has yet developed an entirely convincing theoretical explanation for why regularization helps networks generalize. Indeed, researchers continue to write papers where they try different approaches to regularization, compare them to see which works better, and attempt to understand why different approaches work better or worse. And so you can view regularization as something of a kludge. While it often helps, we don’t have an entirely satisfactory systematic understanding of what’s going on, merely incomplete heuristics and rules of thumb.</p> -<p>There’s a deeper set of issues here, issues which go to the heart of science. It’s the question of how we generalize. 
Regularization may give us a computational magic wand that helps our networks generalize better, but it doesn’t give us a principled understanding of how generalization works, nor of what the best approach is.</p> -</blockquote> -<p>Michael Nielsen, <a href="http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting">Neural networks and deep learning</a></p> - - - - 2018-05-08 - microblog.html - 2018-05-08T00:00:00Z - - - Yuchen Pei - - <p>Computerphile has some brilliant educational videos on computer science, like <a href="https://www.youtube.com/watch?v=ciNHn38EyRc">a demo of SQL injection</a>, <a href="https://www.youtube.com/watch?v=eis11j_iGMs">a toy example of the lambda calculus</a>, and <a href="https://www.youtube.com/watch?v=9T8A89jgeTI">explaining the Y combinator</a>.</p> - - - - 2018-05-07 - microblog.html - 2018-05-07T00:00:00Z - - - Yuchen Pei - - <h3 id="learning-via-knowledge-graph-and-reddit-journal-clubs">Learning via knowledge graph and reddit journal clubs</h3> -<p>It is a natural idea to look for ways to learn things like going through a skill tree in a computer RPG.</p> -<p>For example I made a <a href="https://ypei.me/posts/2015-04-02-juggling-skill-tree.html">DAG for juggling</a>.</p> -<p>Websites like <a href="https://knowen.org">Knowen</a> and <a href="https://metacademy.org">Metacademy</a> explore this idea with added flavour of open collaboration.</p> -<p>The design of Metacademy looks quite promising. It also has a nice tagline: “your package manager for knowledge”.</p> -<p>There are so so many tools to assist learning / research / knowledge sharing today, and we should keep experimenting, in the hope that eventually one of them will scale.</p> -<p>On another note, I often complain about the lack of a place to discuss math research online, but today I found on Reddit some journal clubs on machine learning: <a href="https://www.reddit.com/r/MachineLearning/comments/8aluhs/d_machine_learning_wayr_what_are_you_reading_week/">1</a>, <a href="https://www.reddit.com/r/MachineLearning/comments/8elmd8/d_anyone_having_trouble_reading_a_particular/">2</a>. If only we had this for maths. On the other hand r/math does have some interesting recurring threads as well: <a href="https://www.reddit.com/r/math/wiki/everythingaboutx">Everything about X</a> and <a href="https://www.reddit.com/r/math/search?q=what+are+you+working+on?+author:automoderator+&amp;sort=new&amp;restrict_sr=on&amp;t=all">What Are You Working On?</a>. Hopefully these threads can last for years to come.</p> - - - - 2018-05-02 - microblog.html - 2018-05-02T00:00:00Z - - - Yuchen Pei - - <h3 id="pastebin-for-the-win">Pastebin for the win</h3> -<p>The lack of maths rendering in major online communication platforms like instant messaging, email or Github has been a minor obsession of mine for quite a while, as I saw it as a big factor preventing people from talking more maths online. But today I realised this is totally a non-issue. Just do what people on IRC have been doing since the inception of the universe: use a (latex) pastebin.</p> - - - - 2018-05-01 - microblog.html - 2018-05-01T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. 
By contrast, in a neural network we don’t tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.</p> -</blockquote> -<p>Michael Nielsen - <a href="http://neuralnetworksanddeeplearning.com/about.html">What this book (Neural Networks and Deep Learning) is about</a></p> -<p>Unrelated to the quote, note that Nielsen’s book is licensed under <a href="https://creativecommons.org/licenses/by-nc/3.0/deed.en_GB">CC BY-NC</a>, so one can build on it and redistribute non-commercially.</p> - - - - 2018-04-30 - microblog.html - 2018-04-30T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>But, users have learned to accommodate to Google not the other way around. We know what kinds of things we can type into Google and what we can’t and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.</p> -</blockquote> -<p>Roger Schank - <a href="http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI">Fraudulent claims made by IBM about Watson and AI</a></p> - - - - 2018-04-06 - microblog.html - 2018-04-06T00:00:00Z - - - Yuchen Pei - - <blockquote> -<ul> -<li>Access to computers—and anything that might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!</li> -<li>All information should be free.</li> -<li>Mistrust Authority—Promote Decentralization.</li> -<li>Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.</li> -<li>You can create art and beauty on a computer.</li> -<li>Computers can change your life for the better.</li> -</ul> -</blockquote> -<p><a href="https://en.wikipedia.org/wiki/Hacker_ethic">The Hacker Ethic</a>, <a href="https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Computer_Revolution">Hackers: Heroes of Computer Revolution</a>, by Steven Levy</p> - - - - 2018-03-23 - microblog.html - 2018-03-23T00:00:00Z - - - Yuchen Pei - - <blockquote> -<p>“Static site generators seem like music databases, in that everyone eventually writes their own crappy one that just barely scratches the itch they had (and I’m no exception).”</p> -</blockquote> -<p><a href="https://news.ycombinator.com/item?id=7747651">__david__@hackernews</a></p> -<p>So did I.</p> - - - diff --git a/site/microblog.html b/site/microblog.html deleted file mode 100644 index 2444f82..0000000 --- a/site/microblog.html +++ /dev/null @@ -1,184 +0,0 @@ - - - - - Yuchen's Microblog - - - - - -
- - -
- -
-
-

2018-05-30

-

Roger Grosse’s post How to learn on your own (2015) is an excellent modern guide on how to learn and research technical stuff (especially machine learning and maths) on one’s own.

- -
-
-

2018-05-25

-

This post models 2048 as an MDP and solves it using policy iteration and backward induction.
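As a reminder of what policy iteration does, here is a toy sketch on a hypothetical two-state, two-action MDP (all numbers are made up for illustration; nothing here is taken from the linked post):

    import numpy as np

    P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']: transition probabilities
                  [[0.5, 0.5], [0.1, 0.9]]])
    R = np.array([[1.0, 0.0],                 # R[s, a]: expected rewards
                  [0.0, 2.0]])
    gamma, policy = 0.9, np.zeros(2, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
        P_pi, r_pi = P[np.arange(2), policy], R[np.arange(2), policy]
        v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
        # policy improvement: act greedily on the one-step lookahead values
        new_policy = (R + gamma * P @ v).argmax(axis=1)
        if (new_policy == policy).all():
            break
        policy = new_policy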

- -
-
-

2018-05-22

-
-

ATS (Applied Type System) is a programming language designed to unify programming with formal specification. ATS has support for combining theorem proving with practical programming through the use of advanced type systems. A past version of The Computer Language Benchmarks Game has demonstrated that the performance of ATS is comparable to that of the C and C++ programming languages. By using theorem proving and strict type checking, the compiler can detect and prove that its implemented functions are not susceptible to bugs such as division by zero, memory leaks, buffer overflow, and other forms of memory corruption by verifying pointer arithmetic and reference counting before the program compiles. Additionally, by using the integrated theorem-proving system of ATS (ATS/LF), the programmer may make use of static constructs that are intertwined with the operative code to prove that a function attains its specification.

-
-

Wikipedia entry on ATS

- -
-
-

2018-05-20

-

(5-second fame) I sent a picture of my kitchen sink to BBC and got mentioned in the latest Boston Calling episode (listen at 25:54).

- -
-
-

2018-05-18

-

colah’s blog has a cool feature that allows you to comment on any paragraph of a blog post. Here’s an example. If it is doable on a static site hosted on Github pages, I suppose it shouldn’t be too hard to implement. This also seems to work more seamlessly than Fermat’s Library, because the latter has to embed pdfs in webpages. Now fantasy time: imagine that one day arXiv shows html versions of papers (through author uploading or conversion from TeX) with this feature.

- -
-
-

2018-05-15

-

Notes on random forests

-

Stanford Lagunita’s statistical learning course has some excellent lectures on random forests. It starts with explanations of decision trees, followed by bagged trees and random forests, and ends with boosting. From these lectures it seems that:

-
1. The term “predictors” in statistical learning = “features” in machine learning.
2. The main idea of random forests, namely dropping predictors for individual trees and aggregating by majority or average, is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers are dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is: in random forests, all but a square root number of the total number of features are dropped, whereas the dropout ratio in neural networks is usually a half. A code sketch contrasting the two follows this list.
-
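Here is a minimal sketch of the two tricks side by side, assuming NumPy and scikit-learn are available (the data is synthetic, purely for illustration):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 16))
    y = (X[:, 0] + X[:, 1] > 1).astype(int)

    # random forest: each split considers only sqrt(16) = 4 candidate features
    rf = RandomForestClassifier(n_estimators=100, max_features="sqrt").fit(X, y)

    def dropout(h, p=0.5):
        # zero each activation with probability p, rescale the survivors
        mask = rng.random(h.shape) >= p
        return h * mask / (1 - p)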

By the way, here’s a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:

-

SL vs ML

- -
-
-

2018-05-14

-

Open peer review

-

Open peer review means a peer review process where communications, e.g. comments and responses, are public.

-

Like SciPost mentioned in my post, OpenReview.net is an example of open peer review in research. It looks like their focus is machine learning. Their about page states their mission, and here’s an example where you can click on each entry to see what it is like. We definitely need this in the maths research community.

- -
-
-

2018-05-11

-

Some notes on RNN, FSM / FA, TM and UTM

-

Related to a previous micropost.

-

These slides from Toronto are a nice introduction to RNN (recurrent neural network) from a computational point of view. They state that an RNN can simulate any FSM (finite state machine, a.k.a. finite automaton, abbr. FA), with a toy example computing the parity of a binary string.
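To make the toy example concrete, here is a hand-wired sketch (no learning involved, and my own illustration rather than the slides') of a one-unit recurrent cell whose hidden state tracks the parity of the bits seen so far, i.e. it simulates the two-state parity FSM:

    def parity_rnn(bits):
        h = 0  # hidden state: the parity of the bits consumed so far
        for x in bits:
            h = (h + x) % 2  # XOR update; realisable by a threshold unit
        return h

    assert parity_rnn([1, 0, 1, 1]) == 1  # three ones, so odd parity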

-

Goodfellow et al.’s book (see pages 372 and 374) goes one step further, stating that an RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the universal Turing machine, abbr. UTM (the book references Siegelmann-Sontag), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).

-

By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in Hinton’s video.

-

From what I have learned, the universality of RNNs and that of feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.

- -
-
-

2018-05-10

-

Writing readable mathematics like writing an operating system

-

One way to write readable mathematics is to decouple concepts. One idea is the following template. First write a toy example with all the important components present in this example, then analyse each component individually and elaborate how (perhaps more complex) variations of the component can extend the toy example and induce more complex or powerful versions of the toy example. Through such incremental development, one should be able to arrive at any result in cutting edge research after a pleasant journey.

-

It’s a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc, and modify / improve each module individually (H/t NAND2Tetris).

-

The book Neural networks and deep learning by Michael Nielsen is an example of such an approach. It begins the journey with a very simple neural net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component, including the cost functions, the back propagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN), individually, and improves the toy example incrementally. Over the course of the book the accuracy on the MNIST example grows incrementally from 95.42% to 99.67%.

- -
-
-

2018-05-09

-
-

What makes the rectified linear activation function better than the sigmoid or tanh functions? At present, we have a poor understanding of the answer to this question. Indeed, rectified linear units have only begun to be widely used in the past few years. The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments. They got good results classifying benchmark data sets, and the practice has spread. In an ideal world we’d have a theory telling us which activation function to pick for which application. But at present we’re a long way from such a world. I should not be at all surprised if further major improvements can be obtained by an even better choice of activation function. And I also expect that in coming decades a powerful theory of activation functions will be developed. Today, we still have to rely on poorly understood rules of thumb and experience.

-
-

Michael Nielsen, Neural networks and deep learning

- -
-
-

2018-05-09

-
-

One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages. A 2014 paper developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output. Informally, the network is learning to “understand” certain Python programs. A second paper, also from 2014, used RNNs as a starting point to develop what they called a neural Turing machine (NTM). This is a universal computer whose entire structure can be trained using gradient descent. They trained their NTM to infer algorithms for several simple problems, such as sorting and copying.

-

As it stands, these are extremely simple toy models. Learning to execute the Python program print(398345+42598) doesn’t make a network into a full-fledged Python interpreter! It’s not clear how much further it will be possible to push the ideas. Still, the results are intriguing. Historically, neural networks have done well at pattern recognition problems where conventional algorithmic approaches have trouble. Vice versa, conventional algorithmic approaches are good at solving problems that neural nets aren’t so good at. No-one today implements a web server or a database program using a neural network! It’d be great to develop unified models that integrate the strengths of both neural networks and more traditional approaches to algorithms. RNNs and ideas inspired by RNNs may help us do that.

-
-

Michael Nielsen, Neural networks and deep learning

- -
-
-

2018-05-08

-

Primer Science is a tool by a startup called Primer that uses NLP to summarize contents (but not single papers, yet) on arxiv. A developer of this tool predicts in an interview that progress on AI’s ability to extract meanings from AI research papers will be the biggest accelerant on AI research.

- -
-
-

2018-05-08

-
-

no-one has yet developed an entirely convincing theoretical explanation for why regularization helps networks generalize. Indeed, researchers continue to write papers where they try different approaches to regularization, compare them to see which works better, and attempt to understand why different approaches work better or worse. And so you can view regularization as something of a kludge. While it often helps, we don’t have an entirely satisfactory systematic understanding of what’s going on, merely incomplete heuristics and rules of thumb.

-

There’s a deeper set of issues here, issues which go to the heart of science. It’s the question of how we generalize. Regularization may give us a computational magic wand that helps our networks generalize better, but it doesn’t give us a principled understanding of how generalization works, nor of what the best approach is.

-
-

Michael Nielsen, Neural networks and deep learning

- -
-
-

2018-05-08

-

Computerphile has some brilliant educational videos on computer science, like a demo of SQL injection, a toy example of the lambda calculus, and explaining the Y combinator.
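Since the Y combinator video invites tinkering, here is a quick rendition in Python (my own illustration; it uses Z, the eager variant, since Python evaluates arguments strictly):

    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    assert fact(5) == 120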

- -
-
-

2018-05-07

-

Learning via knowledge graph and reddit journal clubs

-

It is a natural idea to look for ways to learn things like going through a skill tree in a computer RPG.

-

For example I made a DAG for juggling.

-

Websites like Knowen and Metacademy explore this idea with an added flavour of open collaboration.

-

The design of Metacademy looks quite promising. It also has a nice tagline: “your package manager for knowledge”.

-

There are so so many tools to assist learning / research / knowledge sharing today, and we should keep experimenting, in the hope that eventually one of them will scale.

-

On another note, I often complain about the lack of a place to discuss math research online, but today I found on Reddit some journal clubs on machine learning: 1, 2. If only we had this for maths. On the other hand r/math does have some interesting recurring threads as well: Everything about X and What Are You Working On?. Hopefully these threads can last for years to come.

- -
-
-

2018-05-02

-

Pastebin for the win

-

The lack of maths rendering in major online communication platforms like instant messaging, email or Github has been a minor obsession of mine for quite a while, as I saw it as a big factor preventing people from talking more maths online. But today I realised this is totally a non-issue. Just do what people on IRC have been doing since the inception of the universe: use a (latex) pastebin.

- -
-
-

2018-05-01

-
-

Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. By contrast, in a neural network we don’t tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.

-
-

Michael Nielsen - What this book (Neural Networks and Deep Learning) is about

-

Unrelated to the quote, note that Nielsen’s book is licensed under CC BY-NC, so one can build on it and redistribute non-commercially.

- -
-
-

2018-04-30

-
-

But, users have learned to accommodate to Google not the other way around. We know what kinds of things we can type into Google and what we can’t and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.

-
-

Roger Schank - Fraudulent claims made by IBM about Watson and AI

- -
-
-

2018-04-06

-
-
• Access to computers—and anything that might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!
• All information should be free.
• Mistrust Authority—Promote Decentralization.
• Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
• You can create art and beauty on a computer.
• Computers can change your life for the better.
-
-

The Hacker Ethic, Hackers: Heroes of the Computer Revolution, by Steven Levy

- -
-
-

2018-03-23

-
-

“Static site generators seem like music databases, in that everyone eventually writes their own crappy one that just barely scratches the itch they had (and I’m no exception).”

-
-

__david__@hackernews

-

So did I.

- -
- -
- - - diff --git a/site/postlist.html b/site/postlist.html deleted file mode 100644 index 0ee5d77..0000000 --- a/site/postlist.html +++ /dev/null @@ -1,67 +0,0 @@ - - - - - All posts - - - - - -
- - -
- -
- -
- - diff --git a/site/posts/2013-06-01-q-robinson-schensted-paper.html b/site/posts/2013-06-01-q-robinson-schensted-paper.html deleted file mode 100644 index 0d81693..0000000 --- a/site/posts/2013-06-01-q-robinson-schensted-paper.html +++ /dev/null @@ -1,32 +0,0 @@ - - - - - A \(q\)-weighted Robinson-Schensted algorithm - - - - - -
- - -
- -
-
-

A \(q\)-weighted Robinson-Schensted algorithm

-

Posted on 2013-06-01

-

In this paper with Neil we construct a \(q\)-version of the Robinson-Schensted algorithm with column insertion. Like the usual RS correspondence with column insertion, this algorithm can take words as input. Unlike the usual RS algorithm, the output is a set of weighted pairs of semistandard and standard Young tableaux \((P,Q)\) with the same shape. The weights are rational functions of the indeterminate \(q\).

-

If \(q\in[0,1]\), the algorithm can be considered as a randomised RS algorithm, with 0 and 1 being two interesting cases. When \(q\to0\), it reduces to the usual RS algorithm; while when \(q\to1\), with proper scaling it should scale to the directed random polymer model in (O’Connell 2012). When the input word \(w\) is a random walk:

-

\begin{align*}\mathbb P(w=v)=\prod_{i=1}^na_{v_i},\qquad\sum_ja_j=1\end{align*}

-

the shape of the output evolves as a Markov chain with a kernel related to \(q\)-Whittaker functions, which are Macdonald functions at \(t=0\) up to a factor.

- -
-
- - diff --git a/site/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html b/site/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html deleted file mode 100644 index b546aca..0000000 --- a/site/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html +++ /dev/null @@ -1,33 +0,0 @@ - - - - - Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms - - - - - -
- - -
- -
-
-

Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms

-

Posted on 2014-04-01

-

In this paper a symmetry property analogous to the well known symmetry property of the normal Robinson-Schensted algorithm has been shown for the \(q\)-weighted Robinson-Schensted algorithm. The proof uses a generalisation of the growth diagram approach introduced by Fomin. This approach, which uses “growth graphs”, can also be applied to a wider class of insertion algorithms which have a branching structure.

-
Growth graph of q-RS for 1423
-
-

Above is the growth graph of the \(q\)-weighted Robinson-Schensted algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).

- -
-
- - diff --git a/site/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html b/site/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html deleted file mode 100644 index 1f72a96..0000000 --- a/site/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html +++ /dev/null @@ -1,32 +0,0 @@ - - - - - AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc - - - - - -
- - -
- -
-
-

AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc

-

Posted on 2015-01-20

-

The super Catalan numbers are defined as $$ T(m,n) = \frac{(2m)!\,(2n)!}{2\,m!\,n!\,(m+n)!}. $$

-

This paper has two main results. First a combinatorial interpretation of the super Catalan numbers is given: $$ T(m,n) = P(m,n) - N(m,n) $$ where \(P(m,n)\) enumerates the number of 2-Motzkin paths whose \(m\)-th step begins at an even level (called \(m\)-positive paths) and \(N(m,n)\) those with \(m\)-th step beginning at an odd level (\(m\)-negative paths). The proof uses a recursive argument on the number of \(m\)-positive and -negative paths, based on a recursion of the super Catalan numbers appearing in [I. M. Gessel, J. Symbolic Comput. 14 (1992), no. 2-3, 179–194; MR1187230]: $$ 4T(m,n) = T(m+1, n) + T(m, n+1). $$ This result gives an expression for the super Catalan numbers in terms of numbers counting the so-called ballot paths. The latter are sometimes also referred to as the generalised Catalan numbers forming the entries of the Catalan triangle.
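A quick numeric sanity check of the definition and the Gessel recursion quoted above (my own illustration; exact rational arithmetic is used since \(T(0,0) = 1/2\)):

    from fractions import Fraction
    from math import factorial

    def T(m, n):
        return Fraction(factorial(2 * m) * factorial(2 * n),
                        2 * factorial(m) * factorial(n) * factorial(m + n))

    assert all(4 * T(m, n) == T(m + 1, n) + T(m, n + 1)
               for m in range(6) for n in range(6))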

-

Based on the first result, the second result is a combinatorial interpretation of the super Catalan numbers \(T(2,n)\) in terms of counting certain Dyck paths. This is equivalent to a theorem, which represents \(T(2,n)\) as a count of certain pairs of Dyck paths, in [I. M. Gessel and G. Xin, J. Integer Seq. 8 (2005), no. 2, Article 05.2.3, 13 pp.; MR2134162], and the equivalence is explained at the end of the paper by a bijection between the Dyck paths and the pairs of Dyck paths. The proof of the theorem itself is also done by constructing two bijections between Dyck paths satisfying certain conditions. All three bijections are formulated by locating, removing and adding steps.

-

Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3275875, its copyright owned by the AMS.

- -
-
- - diff --git a/site/posts/2015-04-01-unitary-double-products.html b/site/posts/2015-04-01-unitary-double-products.html deleted file mode 100644 index 086503c..0000000 --- a/site/posts/2015-04-01-unitary-double-products.html +++ /dev/null @@ -1,29 +0,0 @@ - - - - - Unitary causal quantum stochastic double products as universal interactions I - - - - - -
- - -
- -
-
-

Unitary causal quantum stochastic double products as universal interactions I

-

Posted on 2015-04-01

-

In this paper with Robin we show the explicit formulae for a family of unitary triangular and rectangular double product integrals which can be described as second quantisations.

- -
-
- - diff --git a/site/posts/2015-04-02-juggling-skill-tree.html b/site/posts/2015-04-02-juggling-skill-tree.html deleted file mode 100644 index 0b98acf..0000000 --- a/site/posts/2015-04-02-juggling-skill-tree.html +++ /dev/null @@ -1,32 +0,0 @@ - - - - - jst - - - - - -
- - -
- -
-
-

jst

-

Posted on 2015-04-02

-

jst = juggling skill tree

-

If you have ever played a computer role playing game, you may have noticed that the protagonist sometimes has a skill “tree” (most of the time it is actually a directed acyclic graph), where certain skills lead to others. For example, here is the skill tree of the sorceress in Diablo II.

-

Now suppose our hero embarks on a quest for learning all the possible juggling patterns. Everyone would agree she should start with cascade, the simplest nontrivial 3-ball pattern, but what afterwards? A few other accessible patterns for beginners are juggler’s tennis, two in one and even reverse cascade, but what to learn after that? The encyclopaedic Library of Juggling serves as a good guide, as it records more than 160 patterns, some of which are very aesthetically appealing. On this website almost all the patterns have a “prerequisite” section, indicating what one should learn beforehand. I have therefore written a script using Python, BeautifulSoup and pygraphviz to generate a jst (graded by difficulty, shown in the leftmost column) from the Library of Juggling (click the image for the full size):
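The script itself is not reproduced here, but the idea is roughly the following sketch (the tag and class selectors are hypothetical and would need adjusting to the actual HTML of the Library of Juggling):

    import requests
    from bs4 import BeautifulSoup

    def prerequisite_edges(url):
        """Return (prerequisite, pattern) edges found on one pattern page."""
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        pattern = soup.find("h1").get_text(strip=True)
        section = soup.find("div", class_="prerequisites")  # hypothetical class
        links = section.find_all("a") if section else []
        return [(a.get_text(strip=True), pattern) for a in links]

    # Collecting the edges over all pattern pages and feeding them to
    # pygraphviz (AGraph(directed=True), add_edges_from, draw) yields the DAG.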

-

The juggling skill tree

- -
-
- - diff --git a/site/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html b/site/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html deleted file mode 100644 index 3b426fa..0000000 --- a/site/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html +++ /dev/null @@ -1,49 +0,0 @@ - - - - - AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore - - - - - -
- - -
- -
-
-

AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore

-

Posted on 2015-05-30

-

This paper is about the existence of pattern-avoiding infinite binary words, where the patterns are squares, cubes and \(3^+\)-powers. There are mainly two kinds of results, positive (existence of an infinite binary word avoiding a certain pattern) and negative (non-existence of such a word). Each positive result is proved by the construction of a word with finitely many squares and cubes which are listed explicitly. First a synchronising (also known as comma-free) uniform morphism \(g\colon \Sigma_3^* \to \Sigma_2^*\) is constructed. Then an argument is given to show that the length of squares in the code \(g(w)\) for a squarefree \(w\) is bounded, hence all the squares can be obtained by examining all \(g(s)\) for \(s\) of bounded lengths. The argument resembles that of the proof of, e.g., Theorem 1, Lemma 2, Theorem 3 and Lemma 4 in [N. Rampersad, J. O. Shallit and M. Wang, Theoret. Comput. Sci. 339 (2005), no. 1, 19–34; MR2142071]. The negative results are proved by traversing all possible finite words satisfying the conditions.

-

Let \(L(n_2, n_3, S)\) be the maximum length of a word with \(n_2\) distinct squares and \(n_3\) distinct cubes, such that the periods of the squares can take values only in \(S\), where \(n_2, n_3 \in \Bbb N \cup \{\infty, \omega\}\) and \(S \subset \Bbb N_+\). Here \(n_k = 0\) corresponds to \(k\)-free, \(n_k = \infty\) means no restriction on the number of distinct \(k\)-powers, and \(n_k = \omega\) means \(k^+\)-free.
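To fix the terminology: a square is a factor of the form \(xx\), and its period is \(|x|\). A small illustrative utility (my own, not from the paper) that lists the squares of a binary word together with their periods:

    def squares(w):
        found = set()
        n = len(w)
        for p in range(1, n // 2 + 1):            # candidate period
            for i in range(n - 2 * p + 1):
                if w[i:i + p] == w[i + p:i + 2 * p]:
                    found.add((w[i:i + 2 * p], p))
        return found

    assert ("0101", 2) in squares("0101")                  # a square of even period
    assert all(p % 2 == 1 for _, p in squares("010011"))   # only odd periods here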

-

Below is the summary of the positive and negative results:

-
1. (Negative) \(L(\infty, \omega, 2 \Bbb N) < \infty\): \(\nexists\) an infinite \(3^+\)-free binary word avoiding all squares of odd periods (Proposition 1).
2. (Negative) \(L(\infty, 0, 2 \Bbb N + 1) \le 23\): \(\nexists\) an infinite 3-free binary word avoiding squares of even periods. The longest one has length \(\le 23\) (Proposition 2).
3. (Positive) \(L(\infty, \omega, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even periods (Theorem 1).
4. (Positive) \(L(\infty, \omega, \{1, 3\}) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word containing only squares of period 1 or 3 (Theorem 2).
5. (Negative) \(L(6, 1, 2 \Bbb N + 1) = 57\): \(\nexists\) an infinite binary word avoiding squares of even period containing \(< 7\) squares and \(< 2\) cubes. The longest one containing 6 squares and 1 cube has length 57 (Proposition 6).
6. (Positive) \(L(7, 1, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even period with 1 cube and 7 squares (Theorem 3).
7. (Positive) \(L(4, 2, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even period and containing 2 cubes and 4 squares (Theorem 4).
-

Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3313467, its copyright owned by the AMS.

- -
-
- - diff --git a/site/posts/2015-07-01-causal-quantum-product-levy-area.html b/site/posts/2015-07-01-causal-quantum-product-levy-area.html deleted file mode 100644 index 3fdaa72..0000000 --- a/site/posts/2015-07-01-causal-quantum-product-levy-area.html +++ /dev/null @@ -1,30 +0,0 @@ - - - - - On a causal quantum double product integral related to Lévy stochastic area. - - - - - -
- - -
- -
-
-

On a causal quantum double product integral related to Lévy stochastic area.

-

Posted on 2015-07-01

-

In this paper with Robin we study the family of causal double product integrals \[ \prod_{a < x < y < b}\left(1 + i{\lambda \over 2}(dP_x dQ_y - dQ_x dP_y) + i {\mu \over 2}(dP_x dP_y + dQ_x dQ_y)\right) \]

-

where \(P\) and \(Q\) are the mutually noncommuting momentum and position Brownian motions of quantum stochastic calculus. The evaluation is motivated heuristically by approximating the continuous double product by a discrete product in which infinitesimals are replaced by finite increments. The latter is in turn approximated by the second quantisation of a discrete double product of rotation-like operators in different planes due to a result in (Hudson-Pei2015). The main problem solved in this paper is the explicit evaluation of the continuum limit \(W\) of the latter, and showing that \(W\) is a unitary operator. The kernel of \(W\) is written in terms of Bessel functions, and the evaluation is achieved by working on a lattice path model and enumerating linear extensions of related partial orderings, where the enumeration turns out to be heavily related to Dyck paths and generalisations of Catalan numbers.
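As an aside (my own illustration, not from the paper): the simplest instance of the kind of lattice-path enumeration mentioned above is counting Dyck paths of semilength \(n\) by dynamic programming, which recovers the Catalan numbers:

    def dyck(n):
        # heights[h] = number of path prefixes currently at height h
        heights = [1] + [0] * n
        for _ in range(2 * n):
            new = [0] * (n + 1)
            for h, c in enumerate(heights):
                if c:
                    if h + 1 <= n:
                        new[h + 1] += c  # up-step
                    if h - 1 >= 0:
                        new[h - 1] += c  # down-step
            heights = new
        return heights[0]

    assert [dyck(n) for n in range(6)] == [1, 1, 2, 5, 14, 42]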

- -
-
- - diff --git a/site/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html b/site/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html deleted file mode 100644 index a0d5a7c..0000000 --- a/site/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html +++ /dev/null @@ -1,41 +0,0 @@ - - - - - AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu - - - - - -
- - -
- -
-
-

AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu

-

Posted on 2015-07-15

-

A Macdonald superpolynomial (introduced in [O. Blondeau-Fournier et al., Lett. Math. Phys. 101 (2012), no. 1, 27–47; MR2935476; J. Comb. 3 (2012), no. 3, 495–561; MR3029444]) in \(N\) Grassmannian variables indexed by a superpartition \(\Lambda\) is said to be stable if \({m (m + 1) \over 2} \ge |\Lambda|\) and \(N \ge |\Lambda| - {m (m - 3) \over 2}\), where \(m\) is the fermionic degree. A stable Macdonald superpolynomial (corresponding to a bisymmetric polynomial) is also called a double Macdonald polynomial (dMp). The main result of this paper is the factorisation of a dMp into plethysms of two classical Macdonald polynomials (Theorem 5). Based on this result, this paper

-
1. shows that the dMp has a unique decomposition into bisymmetric monomials;
2. calculates the norm of the dMp;
3. calculates the kernel of the Cauchy-Littlewood-type identity of the dMp;
4. shows the specialisation of the aforementioned factorisation to the Jack, Hall-Littlewood and Schur cases. One of the three Schur specialisations, denoted as \(s_{\lambda, \mu}\), also appears in (7) and (9) below;
5. defines the \(\omega\)-automorphism in this setting, which was used to prove an identity involving products of four Littlewood-Richardson coefficients;
6. shows an explicit evaluation of the dMp motivated by the most general evaluation of the usual Macdonald polynomials;
7. relates dMps with the representation theory of the hyperoctahedral group \(B_n\) via the double Kostka coefficients (which are defined as the entries of the transition matrix from the bisymmetric Schur functions \(s_{\lambda, \mu}\) to the modified dMps);
8. shows that the double Kostka coefficients have the positivity and the symmetry property, and can be written as sums of products of the usual Kostka coefficients;
9. defines an operator \(\nabla^B\) as an analogue of the nabla operator \(\nabla\) introduced in [F. Bergeron and A. M. Garsia, in Algebraic methods and \(q\)-special functions (Montréal, QC, 1996), 1–52, CRM Proc. Lecture Notes, 22, Amer. Math. Soc., Providence, RI, 1999; MR1726826]. The action of \(\nabla^B\) on the bisymmetric Schur function \(s_{\lambda, \mu}\) yields the dimension formula \((h + 1)^r\) for the corresponding representation of \(B_n\), where \(h\) and \(r\) are the Coxeter number and the rank of \(B_n\), in the same way that the action of \(\nabla\) on the \(n\)th elementary symmetric function leads to the same formula for the group of type \(A_n\).
-

Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3306078, its copyright owned by the AMS.

- -
-
- - diff --git a/site/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html b/site/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html deleted file mode 100644 index e7c9a7e..0000000 --- a/site/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html +++ /dev/null @@ -1,38 +0,0 @@ - - - - - A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer - - - - - -
- - -
- -
-
-

A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer

-

Posted on 2016-10-13

-

(Latest update: 2017-01-12) In Matveev-Petrov 2016 a \(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was introduced. In this article we give reformulations of this algorithm in terms of Noumi-Yamada description, growth diagrams and local moves. We show that the algorithm is symmetric, namely the output tableaux pair are swapped in a sense of distribution when the input matrix is transposed. We also formulate a \(q\)-polymer model based on the \(q\)RSK and prove the corresponding Burke property, which we use to show a strong law of large numbers for the partition function given stationary boundary conditions and \(q\)-geometric weights. We use the \(q\)-local moves to define a generalisation of the \(q\)RSK taking a Young diagram-shape of array as the input. We write down the joint distribution of partition functions in the space-like direction of the \(q\)-polymer in \(q\)-geometric environment, formulate a \(q\)-version of the multilayer polynuclear growth model (\(q\)PNG) and write down the joint distribution of the \(q\)-polymer partition functions at a fixed time.

-

This article is available at arXiv. It seems to me that one difference between arXiv and Github is that on arXiv each preprint has only a few versions. On Github many projects have a “dev” branch hosting continuous updates, whereas the master branch is where the stable releases live.

-

Here is a “dev” version of the article, which I shall push to arXiv when it stabilises. Below is the changelog.

-
• 2017-01-12: Typos and grammar, arXiv v2.
• 2016-12-20: Added remarks on the geometric \(q\)-pushTASEP. Added remarks on the converse of the Burke property. Added natural language description of the \(q\)RSK. Fixed typos.
• 2016-11-13: Fixed some typos in the proof of Theorem 3.
• 2016-11-07: Fixed some typos. The \(q\)-Burke property is now stated in a more symmetric way, so is the law of large numbers Theorem 2.
• 2016-10-20: Fixed a few typos. Updated some references. Added a reference: a set of notes titled “RSK via local transformations”. It is written by Sam Hopkins in 2014 as an expository article based on MIT combinatorics preseminar presentations of Alex Postnikov. It contains some idea (applying local moves to a general Young-diagram shaped array in the order that matches any growth sequence of the underlying Young diagram) which I thought I was the first one to write down.
- -
-
- - diff --git a/site/posts/2017-04-25-open_research_toywiki.html b/site/posts/2017-04-25-open_research_toywiki.html deleted file mode 100644 index 0fed793..0000000 --- a/site/posts/2017-04-25-open_research_toywiki.html +++ /dev/null @@ -1,33 +0,0 @@ - - - - - Open mathematical research and launching toywiki - - - - - -
- - -
- -
-
-

Open mathematical research and launching toywiki

-

Posted on 2017-04-25

-

As an experimental project, I am launching toywiki.

-

It hosts a collection of my research notes.

-

It takes some ideas from the open source culture and applies them to mathematical research:

1. It uses a very permissive license (CC-BY-SA). For example anyone can fork the project and make their own version if they have a different vision and want to build upon the project.
2. All edits will be done with maximum transparency, and discussions of any of the notes should also be as public as possible (e.g. Github issues).
3. Anyone can suggest changes by opening issues and submitting pull requests.

-

Here are the links: toywiki and github repo.

-

Feedback is welcome by email.

- -
-
- - diff --git a/site/posts/2017-08-07-mathematical_bazaar.html b/site/posts/2017-08-07-mathematical_bazaar.html deleted file mode 100644 index 0e5f6b0..0000000 --- a/site/posts/2017-08-07-mathematical_bazaar.html +++ /dev/null @@ -1,80 +0,0 @@ - - - - - The Mathematical Bazaar - - - - - -
- - -
- -
-
-

The Mathematical Bazaar

-

Posted on 2017-08-07

-

In this essay I describe some problems in academia of mathematics and propose an open source model, which I call open research in mathematics.

-

This essay is a work in progress - comments and criticisms are welcome! [1]

-

Before I start I should point out that

-
1. Open research is not open access. In fact the latter is a prerequisite to the former.
2. I am not proposing to replace the current academic model with the open model - I know academia works well for many people and I am happy for them, but I think an open research community is long overdue since the wide adoption of the World Wide Web more than two decades ago. In fact, I fail to see why an open model can not run in tandem with the academia, just like open source and closed source software development coexist today.
-

problems of academia

-

Open source projects are characterised by publicly available source codes as well as open invitations for public collaborations, whereas closed source projects do not make source codes accessible to the public. How about mathematical academia then, is it open source or closed source? The answer is neither.

-

Compared to some other scientific disciplines, mathematics does not require expensive equipment or resources to replicate results; compared to programming in the conventional software industry, mathematical findings are not meant to be commercial, as credits and reputation rather than money are the direct incentives (even though the former are commonly used to trade for the latter). It is also a custom and common belief that mathematical derivations and theorems shouldn't be patented. Because of this, mathematical research is an open source activity in the sense that proofs of new results are all available in papers, and thanks to open access, e.g. the arXiv preprint repository, most of the new mathematical knowledge is accessible for free.

-

Then why, you may ask, do I claim that maths research is not open sourced? Well, this is because 1. mathematical arguments are not easily replicable and 2. mathematical research projects are mostly not open for public participation.

-

Compared to computer programs, mathematical arguments are not written in an unambiguous language, and they are terse and not written in maximum verbosity (this is especially true in research papers, as journals encourage limiting the length of submissions), so the understanding of a proof depends on whether the reader is equipped with the right background knowledge, and the completeness of a proof is highly subjective. More generally speaking, computer programs are mostly portable because all machines with the correct configurations can understand and execute a piece of program, whereas humans are subject to their environment, upbringing, resources etc. to have a brain ready to comprehend a proof that interests them. (These barriers are softer than the expensive equipment and resources in other scientific fields mentioned before, because it is all about having access to the right information.)

-

On the other hand, as far as the pursuit of reputation and prestige (which can be used to trade for the scarce resource of research positions and grant money) goes, there is often little practical motivation for career mathematicians to explain their results to the public carefully. And so the weird reality of the mathematical academia is that it is not an uncommon practice to keep trade secrets in order to protect one's territory and maintain a monopoly. This is doable because as long as a paper passes the opaque and sometimes political peer review process and is accepted by a journal, it is considered work done, accepted by the whole academic community and adds to the reputation of the author(s). Just like in the software industry, trade secrets and monopoly hinder the development of research as a whole, as well as demoralise outsiders who are interested in participating in related research.

-

Apart from trade secrets and territoriality, another reason for the nonexistence of an open research community is an elitist tradition in the mathematical academia, which goes as follows:

-
• Whoever is not good at mathematics or does not possess a degree in maths is not eligible to do research, or else they run high risks of being labelled a crackpot.
• Mistakes made by established mathematicians are more tolerable than those made by less established ones.
• Good mathematical writings should be deep, and expositions of non-original results are viewed as inferior work and do not add to (and in some cases may even damage) one's reputation.
-

All these customs potentially discourage public participations in mathematical research, and I do not see them easily go away unless an open source community gains momentum.

-

To solve the above problems, I propose an open source model of mathematical research, which has high levels of openness and transparency and also has some added benefits listed in the last section of this essay. This model tries to achieve two major goals:

-
• Open and public discussions and collaborations of mathematical research projects online.
• Open review to validate results, where author name, reviewer name, comments and responses are all publicly available online.
-

To this end, a Github model is fitting. Let me first describe how open source collaboration works on Github.

-

open source collaborations on Github

-

On Github, every project is publicly available in a repository (we do not consider private repos). The owner can update the project by "committing" changes, which include a message of what has been changed, the author of the changes and a timestamp. Each project has an issue tracker, which is basically a discussion forum about the project, where anyone can open an issue (start a discussion), and the owner of the project as well as the original poster of the issue can close it if it is resolved, e.g. bug fixed, feature added, or out of the scope of the project. Closing the issue is like ending the discussion, except that the thread is still open to more posts for anyone interested. People can react to each issue post, e.g. upvote, downvote, celebration, and importantly, all the reactions are public too, so you can find out who upvoted or downvoted your post.
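To make the "public by default" point concrete: since everything on Github is public, the whole discussion history is even scriptable. For instance, one could list a project's discussion threads via the REST API (the repository name below is a placeholder, and the endpoint is as of this writing):

    import requests

    issues = requests.get(
        "https://api.github.com/repos/someuser/someproject/issues",  # placeholder repo
        params={"state": "all"},
    ).json()
    for issue in issues[:5]:
        print(issue["number"], issue["title"], issue["user"]["login"])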

-

When one is interested in contributing code to a project, they fork it, i.e. make a copy of the project, and make the changes they like in the fork. Once they are happy with the changes, they submit a pull request to the original project. The owner of the original project may accept or reject the request, and they can comment on the code in the pull request, asking for clarification, pointing out problematic part of the code etc and the author of the pull request can respond to the comments. Anyone, not just the owner can participate in this review process, turning it into a public discussion. In fact, a pull request is a special issue thread. Once the owner is happy with the pull request, they accept it and the changes are merged into the original project. The author of the changes will show up in the commit history of the original project, so they get the credits.


As an alternative to forking, if one is interested in a project but has a different vision, or the maintainer has stopped working on it, they can clone it and make their own version. This is a more independent kind of fork, because there is no longer an intention to contribute back to the original project.


Moreover, on Github there is no way to send private messages, which forces people to interact publicly. If, say, you want someone to see and reply to your comment in an issue post or pull request, you simply mention them by @someone.


open research in mathematics


All this points to a promising direction for open research. A maths project may have a wiki / collection of notes, the paper being written, computer programs implementing the results, etc. The issue tracker can serve as a discussion forum about the project as well as a platform for open review (bugs are analogous to mistakes, enhancements are possible ways of improving the main results, etc.), and anyone can make their own version of the project and (optionally) contribute back by making pull requests, which will also be openly reviewed. One may want to add an extra "review this project" functionality, so that people can comment on the original project like they do in a pull request. This may or may not be necessary, as anyone can make comments or point out mistakes in the issue tracker.


One may doubt this model over concerns about credit, because work in progress is available to anyone. Well, since all the contributions are trackable in the project commit history and in the public discussions in issues and pull request reviews, there is in fact less room for cheating than in the current model in academia, where scooping can happen without any witnesses. What we need is a platform with a good amount of trust, like arXiv, so that the open research community honours (and cannot ignore) the commit history, and the chance of mis-attribution is reduced to a minimum.


Compared to the academic model, open research also has the following advantages:

  • Anyone in the world with Internet access will have a chance to participate in research, whether or not they are affiliated with a university, have the financial means to attend conferences, or happen to be colleagues of one of the handful of experts in a specific field.
  • The problem of replicating / understanding maths results will be solved, as people help each other out. This will also remove the burden of answering queries about one's research. For example, say one has a project "Understanding the fancy results in [paper name]": they write up some initial notes but get stuck understanding certain arguments. In this case they can simply post the questions on the issue tracker, and anyone who knows the answer, or just has a speculation, can participate in the discussion. In the end the problem may be resolved without bothering the authors of the paper, who may be too busy to answer.
  • Similarly, the burden of peer review can also be shifted from a few appointed reviewers to the crowd.
  1. Please send your comments to my email address - I am still looking for ways to add comment functionality to this website.

diff --git a/site/posts/2018-04-10-update-open-research.html b/site/posts/2018-04-10-update-open-research.html
deleted file mode 100644
index b9b8d98..0000000
--- a/site/posts/2018-04-10-update-open-research.html
+++ /dev/null
@@ -1,77 +0,0 @@

Updates on open research


Posted on 2018-04-29


It has been 9 months since I last wrote about open (maths) research. Since then, two things have happened that prompted me to write an update.


As always, I discuss open research only in mathematics, not because I think it should not be applied to other disciplines, but simply because I have neither the experience nor sufficient interest in non-mathematical subjects.


First, I read about Richard Stallman, the founder of the free software movement, in his biography by Sam Williams and in his own collection of essays, Free software, free society, from which I learned a bit more about the context and philosophy of free software and its relation to that of open source software. For anyone interested in open research, I highly recommend having a look at these two books. I am also reading Levy’s Hackers, which documents the development of the hacker culture predating Stallman. I can see the connection of ideas from the hacker ethic to the free software philosophy and on to the open source philosophy. My guess is that the software world is fortunate to have had pioneers who advocated for various kinds of freedom and openness from the beginning, whereas in academia, which has a much longer history, credit protection has always been a bigger concern.


Also, a month ago I attended a workshop called Open research: rethinking scientific collaboration. That was the first time I met a group of people (mostly physicists) who also want open research to happen, and we had some stimulating discussions. Many thanks to the organisers at Perimeter Institute for organising the event, and special thanks to Matteo Smerlak and Ashley Milsted for the invitation and hosting.


From both of these I feel like I should write an updated post on open research.


Freedom and community


Ideals matter. Stallman’s struggles stemmed from the frustration of a denied request for source code (a frustration I have shared in academia, except with source code replaced by maths knowledge), and revolved around two things that underlie the free software movement: freedom and community. That is, the freedom to use, modify and share a work, and by sharing, to help the community.


Likewise, as for open research, apart from the utilitarian view that open research is more efficient and makes credit theft harder, we should not ignore the ethical aspect: open research is right and fair. In particular, I think freedom and community can also serve as principles in open research. One way to make this argument more concrete is to describe what I feel are the two central problems: NDAs (non-disclosure agreements) and reproducibility.


NDAs. It is assumed that when establishing a research collaboration, or just having a discussion, all those involved own the joint work in progress, and no one has the freedom to disclose any information, e.g. intermediate results, without getting permission from all collaborators. In effect this amounts to signing an NDA. NDAs are harmful because they restrict people’s freedom to share information that can benefit their own or others’ research. Considering that, in contrast to the private sector, the primary goal of academia is knowledge and not profit, NDAs in research are unacceptable.


Reproducibility. Research papers as written down are not necessarily reproducible, even when they appear in peer-reviewed journals. This is because the peer-review process is opaque and the proofs in the papers may not be clear to everyone. To make things worse, there are no open channels to discuss results in these papers, and one may have to rely on interacting with the small circle of the informed. One example is folk theorems. Another is trade secrets required to decipher published works.


I should clarify that freedom works both ways. One should have the freedom to disclose maths knowledge, but they should also be free to withhold any information that does not hamper the reproducibility of published works (e.g. results in ongoing research yet to be published), even though it may not be nice to do so when such information can help others with their research.


Similar to the solution offered by the free software movement, we need a community that promotes and respects free flow of maths knowledge, in the spirit of the four essential freedoms, a community that rejects NDAs and upholds reproducibility.


Here are some ideas on how to tackle these two problems and build the community:

  1. Free licensing. It solves the NDA problem - free licenses permit redistribution and modification of works, so if you adopt them in your joint work, then you have the freedom to modify and distribute the work; it also helps with reproducibility - if a paper is not clear, anyone can write their own version and publish it. Bonus points with the use of copyleft licenses like Creative Commons Share-Alike or the GNU Free Documentation License.
  2. A forum for discussions of mathematics. It helps solve the reproducibility problem - public interaction may help quickly clarify problems. By the way, Math Overflow is not a forum.
  3. An infrastructure of mathematical knowledge. Like the GNU system, a mathematics encyclopedia under a copyleft license maintained in the Github-style rather than Wikipedia-style by a “Free Mathematics Foundation”, and drawing contributions from the public (inside or outside of academia). To begin with, crowd-source (again, Github-style) the proofs of say 1000 foundational theorems covered in the curriculum of a bachelor’s degree. Perhaps start by taking contributions from people with some credentials (e.g. a bachelor’s degree in maths) and then expand contribution permission to the public, or take advantage of existing corpora under free licenses, like Wikipedia.
  4. Citing with care: if a work is considered authoritative but you couldn’t reproduce the results, whereas another paper which tries to explain or discuss similar results makes the first paper understandable to you, give both papers due attribution (something like: see [1], but I couldn’t reproduce the proof in [1], and the proofs in [2] helped clarify it). No one should be offended if you say you cannot reproduce something - there may be causes on both sides - whereas citing [2] is fairer and helps readers with a similar background.

Tools for open research


The open research workshop revolved around how to lead academia towards a more open culture. There were discussions on open research tools, improving credit attribution, the peer-review process and the path to adoption.


During the workshop many efforts for open research were mentioned, and afterwards I was made aware of more of them, like the following:

  • OSF, an online research platform. It has a clean and simple interface with commenting, wiki, citation generation, DOI generation, tags, license generation, etc. Like Github, it supports private and public repositories (defaulting to private) and version control, with the ability to fork or bookmark a project.
  • SciPost, physics journals whose peer review reports and responses are public (peer-witnessed refereeing) and which allow comments (post-publication evaluation). Like arXiv, it requires some academic credential (PhD or above) to register.
  • Knowen, a platform to organise knowledge in directed acyclic graphs. It could be useful for building the infrastructure of mathematical knowledge.
  • Fermat’s Library, the journal club website that crowd-annotates one notable paper per week, released a Chrome extension, Librarian, that overlays a commenting interface on arXiv. As an example, Ian Goodfellow did an AMA (ask me anything) on his GAN paper.
  • The Polymath project, the famous massive collaborative mathematical project. Not exactly new, the Polymath project is the only open maths research project that has gained some traction and recognition. However, it does not have many active projects (currently only one).
  • The Stacks Project. I was made aware of this project by Yiting. Its data is hosted on Github, it accepts contributions via pull requests, and it is licensed under the GNU Free Documentation License, ticking many boxes of the free and open source model.

An anecdote from the workshop


In a conversation during the workshop, one of the participants called open science “normal science”, because reproducibility, open access, collaboration, and fair attribution are all what science is supposed to be, and practices like treating the readers as buyers rather than users should be called “bad science” rather than “closed science”.


To which an organiser replied: maybe we should rename the workshop “Not-bad science”.

diff --git a/site/posts/2018-06-03-automatic_differentiation.html b/site/posts/2018-06-03-automatic_differentiation.html
deleted file mode 100644
index 8c2b97a..0000000
--- a/site/posts/2018-06-03-automatic_differentiation.html
+++ /dev/null
@@ -1,76 +0,0 @@

Automatic differentiation


Posted on 2018-06-03


This post is meant as documentation of my understanding of autodiff. I benefited a lot from the Toronto CSC321 slides and the autodidact project, which is a pedagogical implementation of Autograd. That said, any mistakes in this note are mine (especially since some of the knowledge is obtained from interpreting slides!), and if you do spot any, I would be grateful if you could let me know.


Automatic differentiation (AD) is a way to compute derivatives. It does so by traversing a computational graph and applying the chain rule.


There are two modes, forward mode AD and reverse mode AD, which are roughly symmetric to each other, and understanding one of them results in little to no difficulty in understanding the other.


In the language of neural networks, one can say that forward mode AD is used when one wants to compute the derivatives of functions at all layers with respect to input layer weights, whereas reverse mode AD is used to compute the derivatives of output functions with respect to weights at all layers. Therefore reverse mode AD (rmAD) is the one to use for gradient descent, and it is the one we focus on in this post.


Basically rmAD requires the computation to be sufficiently decomposed, so that in the computational graph, each node as a function of its parent nodes is an elementary function that the AD engine has knowledge about.


For example, the Sigmoid activation \(a' = \sigma(w a + b)\) is quite simple, but it should be decomposed into simpler computations:

  • \(a' = 1 / t_1\)
  • \(t_1 = 1 + t_2\)
  • \(t_2 = \exp(t_3)\)
  • \(t_3 = - t_4\)
  • \(t_4 = t_5 + b\)
  • \(t_5 = w a\)

Thus the function \(a'(a)\) is decomposed into elementary operations like addition, subtraction, multiplication, reciprocal, exponentiation, logarithm, etc., and the rmAD engine stores the Jacobians of these elementary operations.
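To make the decomposition concrete, here is a minimal sketch in Python of the forward pass of this decomposition (the function and variable names are mine, not autodidact's API), recording the intermediate values in the order a computational graph would produce them:

    import math

    def sigmoid_forward_trace(w, a, b):
        """Evaluate sigmoid(w * a + b) as a chain of elementary operations,
        returning all the intermediate values (the trace that rmAD stores)."""
        t5 = w * a           # multiplication
        t4 = t5 + b          # addition
        t3 = -t4             # negation
        t2 = math.exp(t3)    # exponentiation
        t1 = 1 + t2          # addition
        a_out = 1 / t1       # reciprocal
        return t5, t4, t3, t2, t1, a_out

    print(sigmoid_forward_trace(0.5, 2.0, -1.0))  # last value is sigmoid(0) = 0.5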


Since in neural networks we want to find derivatives of a single loss function \(L(x; \theta)\), we can omit \(L\) when writing derivatives and denote, say \(\bar \theta_k := \partial_{\theta_k} L\).


In implementations of rmAD, one can represent the Jacobian as a transformation \(j: (x \to y) \to (y, \bar y, x) \to \bar x\). \(j\) is called the Vector Jacobian Product (VJP). For example, \(j(\exp)(y, \bar y, x) = y \bar y\) since given \(y = \exp(x)\),


\(\partial_x L = \partial_x y \cdot \partial_y L = \partial_x \exp(x) \cdot \partial_y L = y \bar y\)


As another example, \(j(+)(y, \bar y, x_1, x_2) = (\bar y, \bar y)\) since given \(y = x_1 + x_2\), \(\bar{x_1} = \bar{x_2} = \bar y\).
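In code, these two VJPs might look as follows (a toy sketch with my own names; autodidact's actual definitions live in its VJP tables):

    import math

    # each VJP takes (y, ybar, inputs...) and returns the bar of each input
    vjp_exp = lambda y, ybar, x: y * ybar           # y = exp(x), so xbar = y * ybar
    vjp_add = lambda y, ybar, x1, x2: (ybar, ybar)  # y = x1 + x2: both bars are ybar

    x = 1.0
    y = math.exp(x)
    print(vjp_exp(y, 1.0, x))  # exp'(1) = e, approximately 2.718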


Similarly,

  1. \(j(/)(y, \bar y, x_1, x_2) = (\bar y / x_2, - \bar y x_1 / x_2^2)\)
  2. \(j(\log)(y, \bar y, x) = \bar y / x\)
  3. \(j((A, \beta) \mapsto A \beta)(y, \bar y, A, \beta) = (\bar y \otimes \beta, A^T \bar y)\)
  4. etc.

In the third one, the function is a matrix \(A\) multiplied on the right by a column vector \(\beta\), and \(\bar y \otimes \beta\) is the tensor product, which is a fancy way of writing \(\bar y \beta^T\). See numpy_vjps.py for the implementation in autodidact.
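Here is a sketch of these three VJPs with numpy (again with my own names; see numpy_vjps.py for autodidact's actual versions):

    import numpy as np

    def vjp_divide(y, ybar, x1, x2):      # y = x1 / x2
        return ybar / x2, -ybar * x1 / x2 ** 2

    def vjp_log(y, ybar, x):              # y = log(x)
        return ybar / x

    def vjp_matvec(y, ybar, A, beta):     # y = A @ beta
        # np.outer(ybar, beta) is exactly the tensor product ybar beta^T
        return np.outer(ybar, beta), A.T @ ybar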


So, given a node, say \(y = y(x_1, x_2, ..., x_n)\), and given the values of \(y\), \(x_{1 : n}\) and \(\bar y\), rmAD computes the values of \(\bar x_{1 : n}\) using the Jacobians.


This is the gist of rmAD. It stores the values of each node in a forward pass, and computes the derivatives of each node exactly once in a backward pass.
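As an illustration, here is a toy backward pass under some simplifying assumptions: scalar nodes only, and the graph given explicitly as a list in evaluation order (the data layout is my own; a real engine like Autograd builds the graph by tracing the forward computation):

    import math

    # each VJP returns one bar per parent
    VJPS = {
        'mul':   lambda y, ybar, x1, x2: (ybar * x2, ybar * x1),
        'add':   lambda y, ybar, x1, x2: (ybar, ybar),
        'neg':   lambda y, ybar, x: (-ybar,),
        'exp':   lambda y, ybar, x: (y * ybar,),
        'recip': lambda y, ybar, x: (-ybar * y * y,),  # y = 1/x, dy/dx = -y^2
    }

    def backward(nodes, values):
        """nodes: list of (name, op, parent names) in evaluation order; leaves
        have op None and the last node is the scalar loss. values: the stored
        forward values. Returns the bars, visiting each node exactly once."""
        bars = {name: 0.0 for name, _, _ in nodes}
        bars[nodes[-1][0]] = 1.0                 # seed with dL/dL = 1
        for name, op, parents in reversed(nodes):
            if op is None:                       # leaf: nothing to propagate
                continue
            xs = [values[p] for p in parents]
            for p, pbar in zip(parents, VJPS[op](values[name], bars[name], *xs)):
                bars[p] += pbar                  # accumulate over all children
        return bars

    # the sigmoid example from above: out = 1 / (1 + exp(-(w * a + b)))
    w, a, b = 0.5, 2.0, -1.0
    values = {'w': w, 'a': a, 'b': b, 'one': 1.0}
    values['t5'] = w * a
    values['t4'] = values['t5'] + b
    values['t3'] = -values['t4']
    values['t2'] = math.exp(values['t3'])
    values['t1'] = values['one'] + values['t2']
    values['out'] = 1.0 / values['t1']

    nodes = [('w', None, []), ('a', None, []), ('b', None, []), ('one', None, []),
             ('t5', 'mul', ['w', 'a']), ('t4', 'add', ['t5', 'b']),
             ('t3', 'neg', ['t4']), ('t2', 'exp', ['t3']),
             ('t1', 'add', ['one', 't2']), ('out', 'recip', ['t1'])]

    bars = backward(nodes, values)
    s = values['out']
    print(bars['w'], s * (1 - s) * a)  # both are 0.5, the analytic gradient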


It is a nice exercise to derive the backpropagation in fully connected feedforward neural networks (e.g. the one for MNIST in Neural Networks and Deep Learning) using rmAD.


AD is an approach lying between the extremes of numerical approximation (e.g. finite difference) and symbolic evaluation. It uses exact formulas (the VJPs) at each elementary operation, like symbolic evaluation, while evaluating each VJP numerically rather than lumping all the VJPs into an unwieldy symbolic formula.
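A quick numerical illustration of the contrast with finite differences (a sketch; the step size is arbitrary):

    import math

    def finite_diff(f, x, eps=1e-6):
        # numerical approximation: exact only in the limit, and subject to
        # truncation and floating point round-off errors
        return (f(x + eps) - f(x - eps)) / (2 * eps)

    f = lambda x: 1 / (1 + math.exp(-x))  # sigmoid
    x = 0.3
    exact = f(x) * (1 - f(x))             # what rmAD computes, via exact VJPs
    print(exact - finite_diff(f, x))      # tiny but nonzero discrepancy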


Things to look further into: the higher-order functional currying form \(j: (x \to y) \to (y, \bar y, x) \to \bar x\) begs for a functional programming implementation.
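As a tiny hint of what that might look like in Python (a sketch, with my own names), \(j\) is just a higher-order function that returns a function awaiting \((y, \bar y, x)\):

    def j(op):
        # curried VJP lookup, mirroring the type (x -> y) -> (y, ybar, x) -> xbar
        table = {'log': lambda y, ybar, x: ybar / x}
        return table[op]

    print(j('log')(0.0, 1.0, 1.0))  # d/dx log(x) at x = 1 is 1.0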

-- cgit v1.2.3