Generally, statistics and linear algebra can be employed in some way for each of these questions. However, arriving at satisfactory answers often requires a domain-specific approach. If that's the case, how do you narrow down the kind of math you need to learn?

**Define Your System**
There is no shortage of resources (e.g. [scikit-learn](http://scikit-learn.org/stable/) for data analysis, [keras](https://keras.io/) for deep learning) that will help you jump into writing code to model your systems. In doing so, try to answer the following questions about the pipeline you need to build:

1. What are the inputs/outputs of your system?
2. How should you prepare your data to fit your system?
3. How can you construct features or curate data to help your model generalize?
4. How do you define a reasonable objective for your problem?

You'd be surprised: defining your system can be hard! Afterwards, the engineering required for pipeline-building is also non-trivial. In other words, building machine learning products requires a significant amount of heavy lifting that doesn't require a deep mathematical background.

**Resources**
- [Best Practices for ML Engineering](https://developers.google.com/machine-learning/guides/rules-of-ml/) by Martin Zinkevich, Research Scientist at Google

**Learning Math as You Need It**
Diving headfirst into a machine learning workflow, you might find that there are some steps you get stuck at, especially while debugging. When you're stuck, do you know what to look up? How reasonable are your weights? Why isn't your model converging with a particular loss definition? What's the right way to measure success? At this point, it may be helpful to make assumptions about the data, constrain your optimization differently, or try different algorithms.

Often, you'll find that there's mathematical intuition baked into the modeling/debugging process (e.g. selecting loss functions or evaluation metrics) that could be instrumental to making informed engineering decisions. These are your opportunities to learn! (For a small, concrete example, see the sketch after the resource list below.)

Rachel Thomas from [fast.ai](http://www.fast.ai/) is a proponent of this "on-demand" method. While educating students, she found that it was more important for her deep learning students to get far enough to become excited about the material. Afterwards, their math education involved filling in the holes, on demand.

**Resources**
- Course: [Computational Linear Algebra](http://www.fast.ai/2017/07/17/num-lin-alg/) by fast.ai
- YouTube: [3blue1brown](https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw): Essence of [Linear Algebra](https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) and [Calculus](https://www.youtube.com/watch?v=WUvTyaaNkzM&list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr)
- Textbook: Linear Algebra Done Right by Axler
- Textbook: [Elements of Statistical Learning](https://web.stanford.edu/~hastie/ElemStatLearn/) by Hastie, Tibshirani, and Friedman
- Course: [Stanford's CS229 (Machine Learning) Course Notes](http://cs229.stanford.edu/syllabus.html#opt)
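As promised above, here's a deliberately tiny sketch of the kind of intuition at stake, using scikit-learn on synthetic, imbalanced data. The dataset and model are stand-ins, not a recommended setup; the point is only that the metric you choose changes the story your model tells:

```python
# Synthetic, imbalanced binary classification: ~95% of labels are 0.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
proba = model.predict_proba(X_te)

# With 95% negatives, plain accuracy flatters even a trivial model;
# F1 and log loss are usually more honest measures of success here.
print("accuracy:", accuracy_score(y_te, pred))
print("f1:      ", f1_score(y_te, pred))
print("log loss:", log_loss(y_te, proba))
```

If 95% of your labels are negative, a model can score 95% accuracy while being useless. Noticing that, and picking a better objective, is exactly the math-flavored engineering decision described above.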
## Math for Machine Learning Research

I now want to characterize the type of mathematical mindset that is useful for research-oriented work in machine learning. The cynical view of machine learning research points to plug-and-play systems where more compute is thrown at models to squeeze out higher performance. In some circles, [researchers remain skeptical](https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf) that empirical methods lacking in mathematical rigor (e.g. certain deep learning methods) can carry us to the holy grail of human-level intelligence.

It's concerning that the research world might be building on existing systems and assumptions that don't extend our fundamental understanding of the field. Researchers need to contribute primitives: new, foundational building blocks that can be used to derive entirely new insights and approaches to goals in the field. For instance, this might mean rethinking building blocks like Convolutional Neural Networks for image classification, as Geoff Hinton, the ["Godfather of Deep Learning"](https://en.wikipedia.org/wiki/Geoffrey_Hinton), does in his recent [Capsule Networks paper](https://arxiv.org/pdf/1710.09829v1.pdf).
To make the next leaps in machine learning, we need to ask fundamental questions. This requires a deep mathematical maturity, which Michael Nielsen, author of *Neural Networks and Deep Learning*, described to me as "playful exploration." This process involves thousands of hours of being "stuck," asking questions, and flipping problems over in pursuit of new perspectives. "Playful exploration" allows scientists to ask deep, insightful questions beyond the combination of straightforward ideas and architectures.

To state the obvious: in ML research, it is still impossible to learn *everything*! Properly engaging in "playful exploration" requires that you follow your own interests, rather than worrying about the hottest new result.

ML research is an incredibly rich field of study with pressing problems in fairness, interpretability, and accessibility. As is true across all scientific disciplines, fundamental thinking is not an on-demand process; it takes patience to think with the breadth of high-level mathematical frameworks required for critical problem solving.

**Resources**
- Blog: [Do SWEs need mathematics?](https://www.maa.org/external_archive/devlin/devlin_10_00.html) by Keith Devlin
- Reddit Thread: [Confessions of an AI Researcher](https://www.reddit.com/r/MachineLearning/comments/73n9pm/d_confession_as_an_ai_researcher_seeking_advice/)
- Blog: [How to Read Mathematics](http://www.people.vcu.edu/~dcranston/490/handouts/math-read.html) by Shai Simonson and Fernando Gouvea
- Papers: recent [NIPS](https://papers.nips.cc/book/advances-in-neural-information-processing-systems-30-2017) and [ICML](http://proceedings.mlr.press/v70/) conference papers
- Essay: [A Mathematician's Lament](https://www.maa.org/external_archive/devlin/LockhartsLament.pdf) by Paul Lockhart¹

**Democratizing Machine Learning Research**
I hope that I haven't painted "research math" as too esoteric, because ideas formulated using math should be presented in intuitive forms! Sadly, many machine learning papers are still [rife with complex and inconsistent terminology](https://arxiv.org/abs/1807.03341), leaving key intuition difficult to discern. As a student, you can do yourself and the field a great service by translating dense papers into consumable chunks of intuition, via blog posts, tweets, etc. You might even look to [distill.pub](http://distill.pub/) as an example of a publication focused on offering clear explanations of machine learning research. In other words, treat the demystification of technical ideas as a means towards "playful exploration": your learning (and machine learning Twitter) will thank you for it!

## Takeaways

In conclusion, I hope that I've provided a starting point for you to think about your math education for machine learning.

- Different problems require different levels of intuition, and I would encourage you to figure out what your objectives are in the first place.
- If you're hoping to build products, seek out peers and study groups to work through problems with, and motivate your learning by diving into the end goal.
- In the research world, broad mathematical foundations can give you the tools to push the field forward by contributing new, fundamental building blocks.
- In general, math (especially in research paper form) can be intimidating, but getting stuck is a huge part of the learning process.

Good luck!

**Notes**
1. A rather pointed criticism of math education that details "playful exploration." But I suspect that Lockhart would disagree with the thesis of this post: that math should be used for anything *but* fun! ↩

---
If you learn primarily by coding up your own projects, here are some ideas to get started (a minimal first step is sketched after the list):

- Build your own wallet. It can be a web, mobile, or desktop app.
- Create your own ERC-20 token and deploy it on a testnet.
- Modify Crypto Kitties (dogs, tanks, zombies…) and deploy it on a testnet.
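Whichever project you pick, step zero looks the same: talk to a node. Here's a hedged sketch of that step in Python using web3.py; the endpoint URL is a placeholder, and the snake_case method names assume a recent web3.py release:

```python
# Hypothetical first step: connect to an Ethereum testnet node over
# JSON-RPC and read some chain state with web3.py. In practice you'd
# point at your own node or a hosted provider; the URL below is fake.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://testnet-node.example.com"))  # placeholder URL

if w3.is_connected():
    print("latest block:", w3.eth.block_number)
    # Balance of the zero address, in wei
    print("balance:", w3.eth.get_balance("0x" + "00" * 20))
else:
    print("could not reach the node")
```

From there, a wallet is key management plus transaction signing, and a token deployment is this same connection plus compiled contract bytecode.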
## Looking Forward

In its current state, yes, blockchain development is messy. No, there aren't clean frameworks and tools analogous to those that exist for modern web development. But why not see the blockchain's nascent state as an opportunity to impact a paradigm-shifting technology?

On the blockchain, you don't need to deploy any centralized servers, which means there's no single point of failure. If your whole stack is decentralized, there is no trusted third party involved, making it censorship-resistant, and your database is publicly verifiable. Because this new paradigm makes it possible to share data publicly, decentralized databases offer a decisive advantage. This is the future we're building towards on the blockchain: one where information and power are distributed systematically by cutting out the middleman.

[Comment on Hacker News](https://news.ycombinator.com/item?id=16107597).

---

*Thanks to Ben Yu (Stream) and Brandon Millman (0x) for lending their time to be interviewed, and thanks to Niharika Bedekar, Craig Cannon, and Claire Shu for reading drafts of this post.*

---

# How to Get into VR

This is the second edition of [Paths](https://ycombinator.wpengine.com/category/paths/), a series outlining emerging technologies with clear steps on how to get started in each field.

This series was designed with makers and aspiring entrepreneurs in mind. We talked to college students interested in engineering, business, and technology to figure out what resources would be most helpful to them. Then, we reached out to experts from academia, industry, or some combination of the two.

We're excited about the potential for this series to evolve, and we'd love to hear your feedback at Macro@YCombinator.com. What would you like to learn about next?

Today, we're going to talk about VR.

---

Science fiction writers and futurists dreamt up virtual reality (VR) decades ago, and hackers have been attempting to build it ever since. Today, the technology is rapidly advancing, promising to shift modern computing into a new paradigm, much as smartphones did a decade ago.

Some people are skeptical about whether VR will stick around, because the technology is still very much plagued by issues like high cost, unwieldiness, and [simulator sickness](https://en.wikipedia.org/wiki/Simulator_sickness).
However, many are excited about VR's potential to become the most intimate human-computer interface that has ever existed.

In this post, we'll explore why it's an exciting time to get into VR now, both for consumers and developers. Then, we'll discuss how a wide range of interdisciplinary fields have pushed the technology forward. Lastly, we'll identify concrete ways in which you can get started.

**Why Now?**
*"VR is not a new technology. It just became accessible." – Jeremy Bailenson (Stanford VHIL)*

VR technology has existed for many years, in many forms, from stereoscopes¹ to flight simulators. Academics and researchers in fields like engineering and physics have been working to make the technology feasible. In the last few decades, head-mounted displays (HMDs) have emerged as the standard for delivering VR experiences.

During the late 90s/early 2000s, there were several attempts to bring VR to massive audiences. Sega announced (but never released) a headset. Nintendo launched the Virtual Boy, a video game console that included a monochrome HMD. Nevertheless, these technologies were held back by a lack of visual fidelity and insufficient processing power.

Only recently have developments in VR become more visible to the public eye. CPUs/GPUs have reached a point where they can provide high-fidelity, immersive experiences at reasonable prices. Smartphones have enabled mobile VR as a cheaper, more accessible option that doesn't require you to be tethered to a high-end computer. As more people have the opportunity to try VR, it's becoming increasingly clear how the technology might reach consumers in a big way.

**Virtual Reality and Augmented Reality (AR) are Siblings**
VR entails simulated worlds; AR entails information overlaid on the real world. We can see the difference more clearly if we take surgery as an example. VR can be used to simulate training for surgeons, while AR can be used to superimpose instructions and diagnostics on a live view during real-time surgeries.

Some people believe that virtual reality is a stepping stone to augmented reality. While larger technological leaps are necessary to achieve high-fidelity AR, the strides made in VR development can help push us there.

In this post, we avoid discussing whether VR or AR is better, more promising, or more applicable. Other people have speculated on this, so instead we will discuss their joint potential and respective engineering challenges. At this point in time, there is a wonderful opportunity to start developing for both platforms.

**VR is a Concrete Engineering Problem**
There is no such thing as VR… only tracking, rendering, and display. Tracking is the process of recording the user's location and orientation in 3D space. Rendering is the process of constructing the appropriate image for a user. Display refers to the fidelity with which the hardware can produce the rendered image.

We need to solve each of these problems well enough that a user doesn't feel sick or uncomfortable. People experience simulator sickness when cues for self-motion from the visual and vestibular systems don't match (the same reason for car/sea-sickness).

This is a hard problem. For reference, traditional PC games render at a resolution of 1920×1080 at a refresh rate of 60Hz. The Oculus Rift runs at 2160×1200 at 90Hz (over two displays, one for each eye).² In other words, current VR solutions need to effectively render at 1080p, for two eyes, at a much higher refresh rate than PC games. *At the same time*, the processor needs to track the user's location and provide that information to the headset with as little latency as possible.
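To see why this is hard, it helps to run the numbers. The figures below are the ones quoted above; framing the comparison as raw pixel throughput is my own simplification, ignoring latency budgets and scene complexity:

```python
# Raw pixel throughput: a traditional 1080p/60Hz game vs. the
# Oculus Rift's 2160x1200 @ 90Hz (total across both eyes).
pc_rate = 1920 * 1080 * 60    # ~124 million pixels/s
rift_rate = 2160 * 1200 * 90  # ~233 million pixels/s

print(f"PC game: {pc_rate / 1e6:.0f}M pixels/s")
print(f"Rift:    {rift_rate / 1e6:.0f}M pixels/s")
print(f"ratio:   {rift_rate / pc_rate:.2f}x")  # ~1.9x, before any latency budget
```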
Even if those specs are achieved, it's less than ideal! VR displays have yet to cover the full human field of view, and we can certainly improve rendering quality to reach the fidelity of modern retina displays. Altogether, this suggests that we need 8K rendering for VR!

For now, we have the interesting engineering problem of exploiting limitations in the human visual system to optimize bandwidth and compute power. (e.g. Our peripheral vision is worse than our central vision, so why not try [foveated rendering](http://www.roadtovr.com/nvidia-perceptually-based-foveated-rendering-research/)?)

**VR isn't Just for Gamers**
Yes, it's fun to [shoot robots in VR](https://www.epicgames.com/roborecall/).

But VR will also enable [immersive concerts](https://www.yahoo.com/news/virtual-orchestra-hits-high-notes-london-174308063.html), [reinvented museums](http://www.space.com/34129-destination-mars-now-open-kennedy.html), and [live, court-side sporting events](http://ftw.usatoday.com/2016/09/2016-nba-finals-virtual-reality-cleveland-cavaliers-golden-state-warriors-lebron-james-steph-curry-video). With VR, [videoconferencing will improve](https://www.wsj.com/articles/virtual-reality-takes-on-the-videoconference-1474250761), with better eye contact and the inclusion of nuanced, non-verbal cues. The cost of [training will plummet](http://www.spar3d.com/news/related-new-technologies/vr-training-construction-industry/) without the need for human trainers in industries like construction and manufacturing. At the same time, the efficacy of repeatable, hands-on training will increase. Academics will conduct [social psychology research](https://vhil.stanford.edu/pubs/2002/immersive-virtual-environment-technology-as-a-methodological-tool-for-social-psychology/) with more reproducibility, more diverse sample sizes, and day-to-day realism, without the need for human confederates. VR will provide a scalable way to introduce true [experiential learning](http://finance.yahoo.com/news/lucile-packard-children-hospital-stanford-150000674.html) into education.

Personally, I'm extremely excited about the impact of [VR on healthcare and medicine](https://www.slideshare.net/waltergreenleaf/vr-to-transform-medicine). The benefits of VR in training and education will also apply to clinicians.
In terms of patient care, VR can be used to [manage pain](http://www.japantimes.co.jp/news/2016/10/09/business/tech/virtual-reality-helps-treat-phantom-pain-letting-missing-injured-limbs-move/), [combat addiction](http://www.foxnews.com/health/2016/02/29/virtual-reality-heroin-cave-aimed-at-helping-addicts-kick-habit.html), and treat [mental health](http://www.techrepublic.com/article/how-virtual-reality-is-transforming-dementia-care-in-australia/) issues.

**No Part of the VR Stack is Mature**
*"The field is a molten landscape." – Morgan Sinko (NullSpace VR)*

There is no standard. No best practices. Everyone is trying something different.

Here are a few technical domains that touch VR, and specific problems within each domain:

- **Human-Computer Interaction**: How do we implement [non-diegetic UI](http://www.indiedb.com/features/diegesis-user-interfaces-and-virtual-reality)?
- **Optics**: How can we fit a tiny projector for your eye on something with the form factor of prescription glasses?
- **Electronics**: How can we optimize devices that we put on our faces for battery life, heat, and size?
- **Hardware**: How can we build haptics for better tracking and feedback on our bodies?
- **Computer Vision**: How can we bridge the real world and VR with 3D reconstruction and scene understanding?
- **Artificial Intelligence/Natural Language Processing**: How do we create realistic agents to interact with us in VR?

Even in non-technical domains, there exist many unanswered questions:

- **Psychology**: What are the effects of VR on addiction? What are the effects of avatars on identity?
- **Sociology**: What will it mean to have [transformed social interactions (TSI)](https://en.wikipedia.org/wiki/Transformed_social_interaction) (think automatic eye contact)?

**We're Close to Early AR, but Far from Mature AR**
In many ways, it is more approachable to start prototyping in AR than in VR. With mobile phones, information provided by GPS and cameras can augment our experiences of the world around us (think Pokemon GO).

However, AR faces many challenges that don't exist in VR. By the nature of the experience, AR technologies benefit from being untethered, so you can take advantage of the space around you. What does this mean for computing resources if we can't carry a high-end gaming computer on our backs?

Other challenges in AR lie in hardware and HCI: how do we create see-through displays with a large field of view? How do we create a form factor that *people would actually wear in public?* (Hint: [remember Google Glass](https://en.wikipedia.org/wiki/Google_Glass#Criticism)?)

**You Can Start Building Content, *Right Now***
Download a game engine, like Unity or Unreal Engine, and start hacking.
If you've developed games before, you'll notice that the process is very familiar, except that your headset is rigged to correspond with the in-game camera.

More generally, these game engines are engineered to be intuitive and easy to learn. They only require basic scripting and utilize interfaces with shallow learning curves (e.g. [drag-and-drop visual scripting](https://docs.unrealengine.com/latest/INT/Engine/Blueprints/)).

Here are a few useful resources for getting started:

- [FusedVR's Tutorials and Live Streams](http://fusedvr.com/): Phenomenal walkthroughs for content creation, from modeling to game engine implementations.
- [Udacity's VR Developer Nanodegree](https://www.udacity.com/course/vr-developer-nanodegree--nd017): Thorough program that covers development, design, and optimization of VR applications.
- [Unity](https://unity3d.com/learn/tutorials) and [Unreal](https://docs.unrealengine.com/latest/INT/Videos/) tutorials.
- Toolkits:
  - VR: [VRTK](https://vrtoolkit.readme.io/), A-Frame [Update 05/15/17]
  - AR: [Vuforia](https://www.vuforia.com/), [HoloToolkit](https://github.com/Microsoft/HoloToolkit) (HoloLens),
    AR.js [Update 05/15/17]
- Useful Threads:
  - [Reddit: How to start making VR games](https://www.reddit.com/r/learnVRdev/comments/65cvxo/how_to_start_making_vr_games_for_beginners_what/)
  - [Quora: Where can I start? What are the best VR learning materials?](https://www.quora.com/I-want-to-be-a-virtual-reality-developer-From-where-can-I-start-What-are-the-best-learning-materials)
  - ["Virtual Reality" by Steven LaValle](http://vr.cs.uiuc.edu/): "full-stack" VR textbook [Update 05/15/17]

**An Understanding of the Graphics Pipeline Will Help You Appreciate VR's Constraints and Possibilities**
Fundamentally, VR is a cool application of stereo rendering in front of your eyes with head tracking (see the sketch after the resource list below). With a sound understanding of 3D geometries and how they are rendered, you'll better understand the constraints and possibilities of VR.

Here are some good online resources:

- [Berkeley's CS184: Computer Graphics (2012 Archive)](http://inst.eecs.berkeley.edu/~cs184/fa12/onlinelectures.html)
- [Scratchapixel](http://www.scratchapixel.com/index.php): Learn everything from the math and physics behind graphics to advanced, modern techniques
- [Song Ho Ahn's OpenGL Notes](http://www.songho.ca/opengl/index.html): Introduction to tutorials and concepts in OpenGL
- [Shadertoy](https://www.shadertoy.com/): Code samples for shader programming [Update 05/15/17]
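To demystify "stereo rendering with head tracking," here's the promised sketch in Python/NumPy. Everything here (the function name, the fixed 64mm IPD, the identity head pose) is an assumption for illustration, not any engine's actual API; the core idea is just rendering the scene twice, from two cameras offset by the interpupillary distance:

```python
import numpy as np

IPD = 0.064  # average interpupillary distance in meters (assumed value)

def eye_view_matrix(head_pose: np.ndarray, eye: str) -> np.ndarray:
    """Offset the tracked head pose (a 4x4 transform) left/right by half the IPD."""
    offset = np.eye(4)
    offset[0, 3] = -IPD / 2 if eye == "left" else IPD / 2
    return head_pose @ offset

head = np.eye(4)  # head pose from the tracker; identity for this demo
left = eye_view_matrix(head, "left")
right = eye_view_matrix(head, "right")
print(left[0, 3], right[0, 3])  # -0.032, 0.032: two cameras, one per eye
```

Real engines and SDKs manage these matrices (plus per-eye projection and lens distortion) for you; the takeaway is simply that VR rendering is ordinary 3D rendering done twice, very fast.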
**The Fields of Vision and Imaging are Pushing the Frontier of VR Forward**
It could be massively rewarding to invest in an understanding of computer vision, optics, imaging, and related topics.

- **Computer Vision**: How can we track and understand depth?
  - [Stanford's CS231n Course Notes](http://cs231n.github.io/): Phenomenally clear resource on modern CV techniques, by Andrej Karpathy (now a research scientist at OpenAI)
  - [Computer Vision: Models, Learning, and Inference](http://www.computervisionmodels.com/): Slides, exercises, and code samples
- **Computational Imaging/Photography**: How does light enter a camera and form an image? Analogously, how can we focus virtual images on our retinas?
  - [Computational Photography (Raskar, Tumblin)](http://web.media.mit.edu/~raskar/photo/): Textbook from MIT and Northwestern professors
  - [Udacity's Computational Photography Course](https://www.udacity.com/course/computational-photography--ud955)

**On the learning process: "Approach VR with a rookie mindset."** – Aashna Mago (RabbitHole VR)
Yes, the hype is real, which is all the more reason to take a step back from the noise. Try to embody the perspective of a beginner: be willing to learn and absorb. Don't start something because you feel like you have to. Now is a great time to learn, experiment, fail, and become part of an amazing community. If you believe that you're late to anything, I haven't done a good job in this blog post!

**Bonus Points: Read Science Fiction!**
The development of VR has been surprisingly tied to science fiction. Authors in the field have envisioned the futures that engineers set out to build.³

Fun fact: Neal Stephenson (author of Snow Crash) advises Magic Leap (an AR company) as their Chief Futurist!

Must Reads:

- [Ready Player One](https://www.amazon.com/Ready-Player-One-Ernest-Cline/dp/0307887448) by Ernest Cline
- [Snow Crash](https://www.amazon.com/Snow-Crash-Neal-Stephenson/dp/0553380958) by Neal Stephenson

My Personal Favorites:

- [Neuromancer](https://www.amazon.com/Neuromancer-William-Gibson/dp/0441569595) by William Gibson
- [Rainbows End](https://www.amazon.com/Rainbows-End-Vernor-Vinge/dp/0812536363) by Vernor Vinge

**TL;DR: Build Things. Talk to People.**
Get your hands on a headset, and start building in areas where your skills overlap with your interests. VR is a new medium, so creating compelling, shippable content is more of an unsolved problem than hacking on iPhone apps over a weekend. You'll run into challenges, but in those challenges, you'll find far-reaching opportunities to fix problems that many people in the field experience.

In addition to building, talk to a wide variety of people. The process of building a beautiful experience requires the work of not only engineers, but artists, designers, and storytellers as well (game developers will understand this). Ask for feedback in online forums. Join clubs. Start clubs! [Work at a VR company](https://medium.com/@chrismtan/11-unconventional-ways-to-get-a-job-in-vr-ecee321836f2). The world of VR is still very small, and that's really special for someone looking to make a big dent in the field.

Here are some online communities that might be worth looking into:

- [Facebook VR Group](https://www.facebook.com/groups/virtualrealitys/)
- [Women in VR](https://www.facebook.com/groups/womeninvr/)
- Subreddits ([/r/virtualreality](https://www.reddit.com/r/virtualreality/), [/r/vive](https://www.reddit.com/r/Vive/), [/r/oculus](http://reddit.com/r/oculus))
- [HackerNews Blog Post About VR](https://news.ycombinator.com/item?id=13392885)

**"If you believe that VR is coming…"**
"…that it's going to be transformative, that it's going to be ubiquitous, and very significant for the way that society interacts with one another, then it's exciting to be at the very early stages." – Jay Borenstein (Stanford CS)

"The way that people react to VR is so amazing and so visceral.
It renews your faith in it, I think, and you know that it's something special." – Aashna Mago ([RabbitHole VR](http://www.rabbitholevr.org/))

---

**Notes & Refs**
1. Stereoscopes are devices that show different images to the left and right eye. When viewed simultaneously, these images appear in 3D. ↩
2. Specs from: https://www3.oculus.com/en-us/blog/powering-the-rift/ ↩
3. See The Verge's virtual reality feature: http://www.theverge.com/a/virtual-reality ↩

Thanks to:
Jeremy Bailenson (Founder @[Stanford VHIL](https://vhil.stanford.edu/))
Jordan Cazamias (Engineer @[Magic Leap](https://www.magicleap.com/))
Aashna Mago ([RabbitHole VR](http://www.rabbitholevr.org/))
Vasanth Mohan (Founder @[FusedVR](http://fusedvr.com/))
Morgan Sinko (NullSpace VR)
Gordon Wetzstein (Professor @[Stanford EE](https://stanford.edu/~gordonwz/), Lead @[Stanford Computational Imaging Lab](http://www.computationalimaging.org/))
Chris Tan (CEO @[ConstructVR](https://www.constructvr.io/))
…for lending their time to be interviewed.

And Jeremy Bailenson, Neel Bedekar, Niharika Bedekar, Craig Cannon, Jordan Cazamias, Darren Handoko, Jason Zheng, Aashna Mago, Kat Mañalac, and Vasanth Mohan …for reading drafts of this post.

---
# How To Get Into Natural Language Processing

We're excited to introduce a new series we're calling [Paths](https://ycombinator.wpengine.com/category/paths/). Each post will outline an emerging technology and give you clear steps on how to get started in that field.

This series was designed with makers and aspiring entrepreneurs in mind. We talked to college students interested in engineering, business, and technology to figure out what resources would be most helpful to them. Then, we reached out to experts from academia, industry, or some combination of the two.

We're excited about the potential for this series to evolve, and we'd love to hear your feedback at Macro@YCombinator.com. What would you like to learn about next?

Today, we're going to talk about NLP.

---

We don't often think about how easy it is for humans to understand language. In everyday conversation, we convey meaning without considering how our brains translate so much unstructured data into useful information. For machines, however, understanding human speech and language is very hard.

**What is NLP?**
Natural language processing, or NLP, is a field concerned with enabling machines to understand human language.

*"The goal of this new field is to get computers to perform useful tasks involving human language, tasks like enabling human-machine communication, improving human-human communication, or simply doing useful processing of text or speech."* (Jurafsky, Manning 2011)

Beginning as a field rooted in linguistics, NLP evolved during the mid-twentieth century thanks to advances in statistical analysis and, in the last few years, has erupted again as a result of novel techniques in artificial intelligence.
Today, the field has become incredibly multidisciplinary, bringing together symbolic paradigms (think pattern-matching based on a set of rules) and stochastic paradigms (which draw from statistics and probability).

**Why Should I Care?**
NLP is changing the way we interact with our devices, and the field is evolving incredibly rapidly. It can be applied to many different fields by people of incredibly diverse backgrounds.

Here's a look, by industry, at some ways NLP is being used today:

- Medicine: summarizing physicians' notes for billing; interoperability (moving differently-formatted medical records across providers)
- Law: improved, more relevant lookup/research for legal documents
- Finance/Banking: actionable insights based on sentiment in world news or social media

That said, there remain many hard problems to solve in NLP, so it's an exciting time to get involved.

**What Are Examples of NLP?**
Personal assistants (Apple/Siri, Amazon/Alexa), automated language translation (Microsoft/Skype Translator, Google/Translate), question answering (Google/Search), and text summarization are all examples of NLP in real-world products.

**Why is NLP Hard?**
Language is highly ambiguous: it relies on subtle cues and contexts to convey meaning.

Take this simple example: "I love flying planes."

Do I enjoy participating in the act of piloting an aircraft? Or am I expressing an appreciation for man-made vehicles engaged in movement through the air on wings?

A single sentence can carry different meanings. Over thousands of years, languages have evolved to become shorter and less explicit. For humans, this is very efficient. We have developed the ability to communicate with one another by relying on common sense, the context of our conversations, and knowledge about how the world works. The verbal message we deliver contains as little information as possible to convey meaning.

Today's computers struggle immensely with resolving this ambiguity. As a result, they fight the uphill battle of interpreting meaning without a full understanding of context, e.g. common sense and culture.
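You can poke at this ambiguity yourself. Here's a small, illustrative sketch using NLTK, the Python toolkit linked in the resources further down; exact resource names and tagger output can vary slightly across NLTK versions:

```python
# The "I love flying planes" example, run through NLTK's off-the-shelf
# tokenizer and part-of-speech tagger (two one-time model downloads).
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("I love flying planes")
print(nltk.pos_tag(tokens))
# Roughly: [('I', 'PRP'), ('love', 'VBP'), ('flying', 'VBG'), ('planes', 'NNS')]
# The tagger commits to one tag for "flying"; nothing in the output
# says whether it modifies "planes" or takes "planes" as its object.
```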
**Why Now?**
A key driver of NLP's recent rise is the Web, which introduced tremendous amounts of spoken and written material. Modern computers, with faster multi-core CPUs/GPUs, can take advantage of these large datasets using the more advanced machine learning methods developed in the last decade. As a result, we are witnessing a ripe environment for applied NLP.

*"There exists a lot of infrastructure and tools that are available that weren't as accessible before. Think about it like the boom of frameworks and tools for web development. An analog of that is now accessible for NLP."* – Jimoh Ovbiagele, ROSS Intelligence

A more subtle reason for recent progress in NLP is our comfort with, and trust in, computing devices.

*"10 years ago, many people were afraid that [devices] were going to make decisions based solely on data and without a human's perspective. Now, more than ever, people are willing to trust a 100% autonomous AI to send an email."* – Sinan Ozdemir, Kylie.ai

**I'm a Maker, And I'm Intrigued. What Can I Do?**
Certain fundamental skills will be useful for academic or applied work in NLP. As a baseline, foundations in college-level algebra and probability (e.g. random variables, distributions, topic models) will be necessary to understand frequently used methods. In addition, knowledge of linguistics (e.g. an understanding of semantics, pragmatics, and symbolic representations of language) can provide useful intuition for why computational methods work in the first place.

In addition to developing mathematical and linguistic tools, take courses that push you to…

*"… understand how to represent systems in ways that can be turned into something more automated or computational. I spent a lot of time in my undergrad looking at a bunch of mathematical models to get a sense of the important aspects of the system. It's a way of communicating an abstract idea to myself."* – Jacob Rosen, Legit Patents

Finally, it can be incredibly valuable to get your hands on some data (e.g. Twitter or Reddit posts) to build an intuition for resolving ambiguity in text. What does this unfiltered, unstructured text look like? Why is the data formatted this way, on this specific platform? Before modeling anything, seek to understand the data. Then, work on building your statistical models and optimizing your system's infrastructure.
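To make that first modeling step concrete, here's a minimal bag-of-words sentiment classifier with scikit-learn, in the spirit of the movie-review project linked below. The four hand-labeled "documents" are toy stand-ins for real Twitter/Reddit data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "I loved this movie, great acting",
    "wonderful story and a moving ending",
    "what a waste of time, terrible plot",
    "boring, predictable, and far too long",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# CountVectorizer turns raw text into word-count features;
# LogisticRegression learns weights over those counts.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(docs, labels)

# On this toy model, a clearly negative review should come out as [0].
print(model.predict(["such a boring, terrible waste of time"]))
```

A baseline this simple also surfaces the ambiguity problem immediately: word counts alone can't distinguish "not bad at all" from "bad".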
**For More Tools and Resources to Get Started, Check Out:**
- [Stanford NLP Lectures by Dan Jurafsky and Chris Manning](https://www.youtube.com/playlist?list=PL6397E4B26D00A269)
- [HackerNews: "How Can I Get into NLP?"](https://news.ycombinator.com/item?id=12916498)
- [Intro to the popular Natural Language Toolkit (NLTK) in Python](http://www.nltk.org/book_1ed/)
- [Project: Detect sentiment in movie reviews](https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words)

**Do I Need a PhD to Work on NLP?**
*"Having a PhD is not 100% necessary. Data science in general is such a new idea to a lot of people in the world, and the science part isn't 100% there yet.*

*Broadly speaking, we can break down roles into two categories: analysts and builders.*

*Analysts have a more theoretical/statistical background. Therefore, [PhDs working in NLP] tend to approach problems from a mathematical standpoint.*

*Builders work on pipelines that will handle all of the text until something is more usable to prototype.*

*There is always a balance between these two mindsets, especially when building products that need to go to market."* – Sinan Ozdemir, Kylie.ai

**Okay. But What Would it Mean if I Did Get a PhD?**
*"It used to be the case that many mathematicians only became famous half a century later, when someone figured out a practical use for their work. Today, academic work is being utilized much quicker, sometimes within only a few years. The rapid influx of academic work will lead to a rapid outflux of production ready software."* – Sinan Ozdemir, Kylie.ai

In other words, there is tremendous value in pursuing deep work to push academia forward, and this kind of work is having tangible impact on real-world applications sooner and sooner. Additionally, returning to industry with the intuition of an analyst provides a valuable perspective for shipping user-facing products.

**What Are Some of The Biggest Challenges Working in NLP?**
Many practical challenges prevent us from taking full advantage of the theoretical frameworks and computational tools that have been developed for NLP.

To work on real problems, we need representative, relevant datasets. How can we solve the most pressing healthcare problems when we can't access secure patient records? How can we understand how social networks respond to global news without infringing on Facebook's privacy policy?

At the moment, there are two potential workarounds:

*"1) Collaborate. Work with doctors or hospitals on localized data sets. 2) Find data sets that are close to what you want. Use Reddit's publicly-available dataset, instead of Facebook."* – Dan Jurafsky, NLP Group @ Stanford

Another challenge, especially in industry, relates to metrics and analytics. What is the right way to measure performance? How do we build robust feedback mechanisms to quantitatively measure the performance of an NLP system?

Consider the challenge of quantitatively evaluating a chatbot's effectiveness:

*"We can guess all day long about how our system will be used, but the key will be observing and improving. Once we have that data, it will allow us to improve the logic for entity extraction and intent matching."* – Taylor Halliday, Mesh Studio

**Where is The Field Going?**
New advances in artificial intelligence and deep learning have completely changed the way we think about NLP. With deep learning, systems handle inputs and outputs that are purely text:

*"Consider summarization snippets on Google search results. At the moment, algorithms still use statistical models to find frequent pieces and then paste them together. With deep learning, we use complex neural networks to map text into higher-dimensional representations, and re-generate a sequence of words. All of this work has been done in the last 3-4 years."* – Dan Jurafsky, NLP Group @ Stanford

**Sounds Exciting!**
We're at a unique point in history where natural language interfaces are beginning to dominate the ways we interact with our machines. With widely available datasets and open-source frameworks, working on NLP problems has never been more accessible.

Perhaps most exciting is that NLP can be tackled from so many different angles. Academic work has become increasingly relevant to real-world products. Diverse backgrounds and interdisciplinary approaches are advantages, because context from other fields (linguistics, psychology… even healthcare or law) can be invaluable for solving problems from those fields.

Language is perhaps the most effective and intuitive tool we have to interface with each other. With NLP, we're working on extending this interface to machines.

---

*Update on 1/30/17*

**Additions From the HN Community**

- [Watson API demo: What's possible?](https://alchemy-language-demo.mybluemix.net/) – [garysieling](https://news.ycombinator.com/user?id=garysieling)
- Advice from PhD on NLP, machine learning, and data monetization – [danso](https://news.ycombinator.com/user?id=danso)
- SyntaxNet in Context: Understanding Google's New TensorFlow NLP Model – [danso](https://news.ycombinator.com/user?id=danso)
- Announcing SyntaxNet: The World's Most Accurate NLP Parser – [danso](https://news.ycombinator.com/user?id=danso)
- Curated Deep Learning for NLP Resources – [andrewtbham](https://news.ycombinator.com/user?id=andrewtbham)