index.json
[{"authors":["grae"],"categories":null,"content":"I\u0026rsquo;m working to scale one-to-one learning.\nIn 1984 Benjamin Bloom described the \u0026ldquo;Two Sigma Problem\u0026rdquo;, noting that students tutored with one-to-one techniques performed two standard deviations better than students in a traditional class.\nHe also dismissed large-scale one-to-one learning as \u0026ldquo;too costly\u0026rdquo; and not \u0026ldquo;realistic\u0026rdquo;.\nI believe Bloom was right about the effectiveness of one-to-one learning, but wrong about scalability. I\u0026rsquo;m building tools to prove that scalable, accessible one-to-one learning is possible today.\n","date":1594332001,"expirydate":-62135596800,"kind":"taxonomy","lang":"en","lastmod":1594332001,"objectID":"6a1d6b23894bdc7f9b1279451bccda6d","permalink":"/author/grae-drake/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/grae-drake/","section":"authors","summary":"I\u0026rsquo;m working to scale one-to-one learning.\nIn 1984 Benjamin Bloom described the \u0026ldquo;Two Sigma Problem\u0026rdquo;, noting that students tutored with one-to-one techniques performed two standard deviations better than students in a traditional class.","tags":null,"title":"Grae Drake","type":"authors"},{"authors":[],"categories":[],"content":"Today Martin Weller wrote that, while the open source movement is a wildly successful model for producing software, we \u0026ldquo;haven\u0026rsquo;t really cracked a community based production model for learning content\u0026rdquo;. 
David Wiley followed up to say that, given our lack of progress in this area since the 90\u0026rsquo;s, he believes there\u0026rsquo;s \u0026ldquo;a good argument to be made that a community based production model for learning content isn’t actually possible.\u0026rdquo;\nNot only is this possible, it\u0026rsquo;s already taken over.\nMartin and David are right that we haven\u0026rsquo;t seen a successful translation of open source software practices to producing courses and textbooks. But they\u0026rsquo;re dead wrong about community produced learning content. A Google search proves it.\nIn fact, let\u0026rsquo;s open a new tab and google \u0026ldquo;python list comprehension\u0026rdquo;. Google may give you different results, but I get:\n A free tutorial on Programiz\n The open source Python Documentation\n A free tutorial on Real Python\n A free tutorial \u0026amp; course on Python for Beginners\n A Medium post from some internet person explaining list comprehensions.\n A chapter from Python 3 Patterns, Recipes and Idioms, a collaboratively written book published on a free platform (readthedocs.org)\n A tutorial on Data Camp, a for-profit company paying a large community of data scientists to produce content, much of which is freely available.\n A tutorial on Python Course, a multi-language, donation-supported website.\n An article on Geeks for Geeks, a \u0026ldquo;computer science portal for geeks\u0026rdquo; that allows users to draft and publish articles.\n A Medium post from Towards Data Science.\nTen out of ten results for this search are community-produced learning resources.\nNot only can communities produce learning content, but they have produced it. Google searches drop us into a vibrant ecosystem of learning content designed, produced, and consumed by the community.\nYet somehow, as educators, it\u0026rsquo;s easy for us to miss this ecosystem. 
We don\u0026rsquo;t realize it\u0026rsquo;s there at all or, when we do see it, we don\u0026rsquo;t consider it \u0026ldquo;learning content\u0026rdquo;. But internet users do see it and treat it that way every day.\nI suspect it\u0026rsquo;s because we\u0026rsquo;re trapped by the concepts of pre-internet learning. Learning used to be what happened in classrooms. Today literally no one is in a classroom. Classroom experiences used to be collected into a \u0026ldquo;course\u0026rdquo;. Today\u0026rsquo;s learners bounce around the internet whenever they need to learn something to solve a concrete problem, not to achieve a grade or certification. Learning resources used to be books and slides. Today\u0026rsquo;s learning resources are blog posts, documentation, content marketing, playful interactive explanations, and emphatically not textbooks or slide decks. Why would they be?\nHal Plotkin and David followed up on Twitter to clarify that the problem is expecting people to work for free in the first place. That textbooks and courses and so on are public goods and, as such, should be publicly funded rather than produced for free by the community. I agree. There is a role, unfortunately unfilled right now, for publicly funding learning resources that are public goods. I want to be clear about that.\nBut it\u0026rsquo;s wrong to say that there is no community-produced learning content, much less that there can be no community-produced learning content. To say that is to overlook a revolution in learning that has already happened, which affects millions of people every day, and which is chipping away at the old order. 
People are working for free and we see their work everywhere.\n","date":1595355212,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1595355212,"objectID":"3316b3e9534370728e101bd9c56d2a4d","permalink":"/post/on_the_vibrant_ecosystem_of_community_based_learning_content/","publishdate":"2020-07-21T11:13:32-07:00","relpermalink":"/post/on_the_vibrant_ecosystem_of_community_based_learning_content/","section":"post","summary":"Today Martin Weller wrote that, while the open source movement is a wildly successful model for producing software, we \u0026ldquo;haven\u0026rsquo;t really cracked a community based production model for learning content\u0026rdquo;. David Wiley followed up to say that, given our lack of progress in this area since the 90\u0026rsquo;s, he believes there\u0026rsquo;s \u0026ldquo;a good argument to be made that a community based production model for learning content isn’t actually possible.","tags":[],"title":"The vibrant ecosystem of community-produced learning content","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"I want you to do better work.\nI joke that my hobby is talking people out of going to law school. It\u0026rsquo;s a joke because it gets a chuckle, but it\u0026rsquo;s not really a joke because I do actually take every chance I get to talk people out of going to law school. If you\u0026rsquo;re planning a typical legal career then there is better work to be done.\n Source: XKCD Moving from law to technology was the best career move I ever made. Yes, I took a hit on salary leaving a big law firm for a scrappy startup, but not as much as you might think and not when you look at per-hour comp. Sure, biglaw salaries look nice but they lose their shine when you consider the realities of billing 120 hours a week. 
In fact, considering my work week at Thinkful was ~50 hours while as an associate I typically worked 90-100 hours, I averaged more per hour than I would have with a biglaw salary.\nBeyond comp, technology workers have opportunities for crazy autonomy. The same is true for mastery and, if you join the right company, purpose. Basically the perfect trifecta for doing good work.\nI have the luxury of working on my own projects now and I want to pay it forward by helping the next generation of ambitious people do good work. To that end I\u0026rsquo;m committed to working 1-on-1 with people who reach out for help and demonstrate their own commitment to doing good work.\nShould you reach out? Yeah, maybe you should. Some of the things I\u0026rsquo;m happy to help with:\n Understanding possible career paths in tech \u0026amp; setting goals\n Learning to code\n Landing your first job in a new field\n Transitioning from individual contributor to first-time manager\n Scaling an ops-heavy startup\nIf you\u0026rsquo;re committed to accomplishing one of those goals then I\u0026rsquo;d like to work with you and you should reach out. Shoot me an email ([email protected]) or schedule a chat.\n","date":1594332001,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1594332001,"objectID":"bb5037d26f1694792f03c2963efa0a25","permalink":"/post/do_good_work/","publishdate":"2020-07-09T15:00:01-07:00","relpermalink":"/post/do_good_work/","section":"post","summary":"I want you to do better work.\nI joke that my hobby is talking people out of going to law school. It\u0026rsquo;s a joke because it gets a chuckle, but it\u0026rsquo;s not really a joke because I do actually take every chance I get to talk people out of going to law school.","tags":[],"title":"Do better work","type":"post"},{"authors":["Grae Drake"],"categories":["Euler"],"content":"When I first learned to program I didn\u0026rsquo;t explore too much. I played it safe. 
I took things I knew how to do and I applied those to each new problem I found, no matter how well suited the solution actually was to the problem.\nBut, like any good technologist, I\u0026rsquo;m lazy. So if you give me a problem that (1) I know how to solve but (2) involves me doing a lot of repetitive work and (3) hints at a lazy solution, then, well, I might be lazy enough to actually learn something new. Problem 8 was like that for me. It pushed me to understand and use Python slicing better when I was starting out. It helped me not have to type so much.\nAs usual, spend some time with the problem if you haven\u0026rsquo;t already.\n Just look at this lazy bastard.\nDiogenes, detail from School of Athens by Raphael WET Code This problem gives us a test case and tells us that the biggest product from 4 adjacent digits of our number is 5832. Let\u0026rsquo;s hammer out a quick solution to run against that test case.\nCopying and pasting the 1000 digit number from the problem into Python as a string and assigning it to the variable n we can do:\nbiggest = 0 for i in range(997): product = int(n[i]) * int(n[i + 1]) * int(n[i + 2]) * int(n[i + 3]) biggest = max(biggest, product) print(biggest) This\u0026hellip; works. And, if you can tolerate it, you can change the range to 988, extend this up to n[i + 12] and it\u0026rsquo;ll solve the actual problem. But if you\u0026rsquo;re like me, and I know I am, you\u0026rsquo;re too lazy for that.\nDRY-ing Our Code Copying and pasting code is almost always a bad idea. Imagine maintaining our solution above. You come back to this code to make a change and find yourself making the same change 13 different times. God forbid you mess up a change or, gasp, miss one of the changes you needed to make.\nThis is the \u0026ldquo;Don\u0026rsquo;t Repeat Yourself\u0026rdquo; principle in action. 
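A quick aside on slicing before the refactor (my sketch, not part of the original post): the refactor leans on the fact that n[i:i + 4] grabs the four characters at indices i through i + 3, with the end index excluded.

```python
# String slicing: s[start:stop] returns the characters at indices start..stop-1.
n = "73167176"     # the first few digits of the problem's 1000-digit number
print(n[0:4])      # "7316" -- the first 4-character window
print(n[2:2 + 4])  # "1671" -- four characters starting at index 2
print(n[6:6 + 4])  # "76"   -- slices past the end truncate instead of raising
```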
Let\u0026rsquo;s tweak our solution with slicing and a loop to remove repetition.\nbiggest = 0 for i in range(997): substring = n[i:i + 4] product = 1 for digit in substring: product *= int(digit) biggest = max(biggest, product) print(biggest) There we go! Now it\u0026rsquo;s much easier to modify this code to solve the actual problem instead of just the test case. But if you do, you\u0026rsquo;ll notice that you still have to make changes in multiple places.\nScience, not Magic The solution above has two magic numbers: the 997 in the range and the 4 in our substring slice. These \u0026ldquo;magic numbers\u0026rdquo;, or unexplained numbers directly in our code, are problematic for at least two reasons.\nFirst, why is 997 there? What does it represent? Why that number instead of another? If we have the context of the problem fresh in our head it might be obvious that it\u0026rsquo;s the number of substrings we\u0026rsquo;re going to sample. But what if you come back to this code in a month? What if someone else needs to use it?\nSecond, these magic numbers introduce the same kind of repetition we want to avoid by keeping our code DRY. If you change the length of the substring from 4 (our test) to 13 (the problem) you now have to change both magic numbers. And what if we made our input number n bigger? We\u0026rsquo;d have to remember to change all those magic numbers too or our solution would no longer be correct.\nLet\u0026rsquo;s rewrite our solution without the magic numbers:\nbiggest = 0 substring_length = 4 for i in range(len(n) - substring_length + 1): substring = n[i:i + substring_length] product = 1 for digit in substring: product *= int(digit) biggest = max(biggest, product) print(biggest) This version will automatically handle any changes to the length of n. It lets us modify the length of the substring we\u0026rsquo;re looking at in one single place. 
And we use a nice semantic name for the number so our code is easier to read and understand.\nScrap That, Let\u0026rsquo;s Be Complex and Efficient We\u0026rsquo;ve nicely removed the repetition from our code. The solution runs in a fraction of a second. We could stop here, and in real-life work we probably should. But there is a way to speed up the code. It\u0026rsquo;s useful to look at this solution now and stash it in our toolbox for later.\nThere are 997 4-digit substrings of a 1,000 digit string, and in our solutions so far we\u0026rsquo;ve been calculating each product from scratch. It turns out that repeats a lot of work and we can get away with a lazier algorithm.\nConsider the first six digits of our n (731671) and the first three 4-digit substrings:\n 7316__ _3167_ __1671 Each substring is only a little different from the one before it. If we think of a 4-digit-wide \u0026ldquo;window\u0026rdquo; sliding from left to right we see each substring is made by chopping off the leftmost digit of the last substring and adding a new digit to the right.\nThe digit products of each substring are similarly related. To find the product of each new substring we take the previous substring product, divide it by the digit that\u0026rsquo;s disappearing, then multiply it by our new digit. This algorithm works no matter how \u0026ldquo;wide\u0026rdquo; the window is: if we have the product of the last substring we can calculate the next one with only two operations (a division and a multiplication).\nWell, almost. We don\u0026rsquo;t want to divide by zero so we have to be careful when zero is involved. We\u0026rsquo;ll keep track of that.\nLet\u0026rsquo;s code that up:\nsubstring_length = 4 # Keep track of whether our window includes a zero. zero_count = n[0:substring_length].count('0') # Initialize the value of our substring product. 
previous_product = 1 for digit in n[0:substring_length]: if digit != '0': previous_product *= int(digit) biggest = 0 if zero_count \u0026gt; 0 else previous_product # Slide our window across. for j in range(len(n) - substring_length): left = int(n[j]) right = int(n[j + substring_length]) # Update our count of zeros in the window. zero_count = zero_count - (left == 0) + (right == 0) # Be careful of zeros: track the product of the window's nonzero digits. next_product = previous_product if left != 0: next_product = next_product // left if right != 0: next_product = next_product * right if zero_count == 0: biggest = max(biggest, next_product) previous_product = next_product print(biggest) Phew. That was a lot of work to save a small amount of compute. This is the kind of optimization that\u0026rsquo;s good to talk about during a coding interview but where the extra complexity and reduced readability aren\u0026rsquo;t worth the performance improvement. At least, not for this problem. But hey we\u0026rsquo;re doing PE for this kind of fun, eh?\n","date":1594144218,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1594144218,"objectID":"ef4d04bfaef79435741b8d1bb5e0d3cf","permalink":"/post/euler_problem_8/","publishdate":"2020-07-07T10:50:18-07:00","relpermalink":"/post/euler_problem_8/","section":"post","summary":"When I first learned to program I didn\u0026rsquo;t explore too much. I played it safe. I took things I knew how to do and I applied those to each new problem I found, no matter how well suited the solution actually was to the problem.","tags":[],"title":"Project Euler Problem 8: Largest Product in a Series","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"I got my start in computing early because my dad was disabled. He had a degenerative nerve disease and at some point, maybe when I was five or six, my clumsy little kid hands got better at wrangling computer parts than his own traitorous hands. 
You see his side hustle in the eighties was buying OEM computer parts, building computers, and selling those machines to consumers. This was before closed-body devices made it the norm not to tinker with your hardware, before Newegg made it easy to buy OEM, and even before Gateway started selling fully-assembled PCs direct to consumer.\n This box contained pure, unadulterated excitement Dropping screws and struggling to pick them up wasn\u0026rsquo;t the only way his body betrayed him. I have snapshot memories of riding around our neighborhood on the back of his motorcycle, scared shitless and holding on to him as strong as I could. But those memories are all dusty and faded. I was still very young when he was no longer able to ride his bike. He started to use a wheelchair sometimes and then he was in the wheelchair all the time.\nThat disease took a lot from him, much more than the command of his own body. He had every right to be bitter, and he was bitter. He was mean. He took out his frustration and resentment on those around him and drove people away. As an able-bodied person I can\u0026rsquo;t know what he lost when he lost control of his body. But as an adult I can see how the self-reinforcing cycle of bitterness, anger, and resentment played out and I wonder if he didn\u0026rsquo;t lose more because of that. My mom and I moved away when I was 10.\nI didn\u0026rsquo;t inherit a neurological disease from him but recently I noticed I may have inherited some propensity to bitterness. I wouldn\u0026rsquo;t have said so a few years ago. But recently I\u0026rsquo;ve caught myself angry and complaining kind of a lot. Way more, on reflection, than I realized at the time. Taking a sharp look at myself I don\u0026rsquo;t always like the person I am right now.\nI have things pretty great on almost every dimension. Yeah, I have tough things to deal with. And yeah, world affairs make it tough to be optimistic about much. 
But I could easily see this side of me harming my relationship with my family, with my kids. Giving me something to really cry about.\nNow that I\u0026rsquo;ve noticed things I think and hope it\u0026rsquo;s in my power to avoid the bitterness trap. To take a breath, deal with my shit, and be the person I want to be to the people I love. To that end, I\u0026rsquo;m declaring indefinite open season on my shit. Call me out on it, publicly, privately, help me to see when I\u0026rsquo;m being an asshole. Really. I\u0026rsquo;ll appreciate the help.\n","date":1594070290,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1594070290,"objectID":"6e2d08a2e6fd3e15b05aa304a23b090e","permalink":"/post/bitterness_trap/","publishdate":"2020-07-06T14:18:10-07:00","relpermalink":"/post/bitterness_trap/","section":"post","summary":"I got my start in computing early because my dad was disabled. He had a degenerative nerve disease and at some point, maybe when I was five or six, my clumsy little kid hands got better at wrangling computer parts than his own traitorous hands.","tags":[],"title":"The Bitterness Trap","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"While I was at Thinkful our instructional design and features evolved a lot. At the beginning things were simple. The curriculum was plain text (a Google doc we shared) that curated 3rd-party resources and \u0026ldquo;explained\u0026rdquo; (in a way that will make instructional designers cringe) the remaining topics. It was a great MVP in that things really were minimal and it (mostly) worked.\nOver time we layered on more features. We added a robust student community. Loads of open office hours that anyone could attend. Chat-based technical coaching. Instructional design basics like learning objectives, formative and summative assessments, backwards design. Things unarguably got better. 
But not as much as I expected and, critically, not for as many people as I expected.\nA while into building new features I realized that the features we were building often didn\u0026rsquo;t have the effect I expected them to have. Take office hours. All of a sudden students now had access to experts almost around the clock. And the highest-performing students were attending office hours. And yet we didn\u0026rsquo;t see a big change in student achievement rates. What gives?\nIt turns out the causation between office hours attendance and achievement was the reverse of what I expected: high performing students saw a chance for additional value and took it. They didn\u0026rsquo;t need office hours. They\u0026rsquo;d have been successful without them. But they were valuable and they saw that value and weren\u0026rsquo;t afraid to take it. Students at the margin, the students who would most benefit from these extra resources weren\u0026rsquo;t taking advantage of them.\nInternally we called this the \u0026ldquo;hand raiser problem\u0026rdquo;. Many seemingly great features would help the people who didn\u0026rsquo;t need help, the people in a class who always raised their hand, and wouldn\u0026rsquo;t help those who were quietly struggling and most in need of support.\n Not every feature implicated the hand raiser problem, but I can confidently say that 100% of us on the education, product, and engineering teams were, ourselves, handraisers. That\u0026rsquo;s why we worked at Thinkful in the first place. That made all these seemingly wonderful features, these shiny features that were obviously great because we would love to have them, super tempting. But those weren\u0026rsquo;t the features we needed to build. 
The features that actually had an impact were the ones that everyone valued (not just the hand raisers) or even the ones that hand raisers disliked but that boosted students at the margins.\nThere is a story here about product strategy and feature selection process and understanding your users that is interesting but not what I want to talk about today.\nWhat I want to talk about today is inequality.\nWe thought, when we added office hours, that we were making the world (or at least our product) a better place. Before there wasn\u0026rsquo;t a thing. Now there\u0026rsquo;s a thing. People who interact with the thing benefit. The thing doesn\u0026rsquo;t harm anyone; worst case it gets ignored. Multiply the number of people who use it by the utility each gets from using it and you have a measure of exactly how much it made the world a better place.\n This show has no business being so accurate But while the thing doesn\u0026rsquo;t harm people it had the potential to harm the community. In isolation, this feature amplifies the existing performance gap between students. Learning outcomes become more bimodal. Winners are clearer, better winners. Others lose ground by standing still.\nThis example is just a silly feature from one education company. Students weren\u0026rsquo;t competing against one another in any meaningful way. These games were not zero sum; making well-off students better-off didn\u0026rsquo;t hurt anyone at Thinkful.\nBut looking around at our society now I see hand-raiser problems fucking everywhere.\nLook at all these amazing useful webapps we\u0026rsquo;ve got now! Oh, unless accessibility issues make them painful for you to use.\nThank goodness we have tax-advantaged savings plans to make it easier to save for retirement! Yet, somehow, only 37% of Americans who can contribute to a 401(k), do.\nHealthcare is expensive and complicated, good thing for health insurance! (From our employer of course!) 
Except for the 51% of us who aren\u0026rsquo;t employed, don\u0026rsquo;t receive employment benefits, or otherwise fall through the cracks.\nMy Dilemma Do employer-sponsored health plans make our world a better place? I don\u0026rsquo;t think so. There seem to be much more efficient ways to organize and allocate healthcare resources. Other countries have figured this out and implemented better solutions. I\u0026rsquo;d love to go back in time and stop the people who made that a thing.\nDoes Khan Academy make the world a better place? I have to believe it does. I\u0026rsquo;m sure hand raisers disproportionately benefit from KA, but I\u0026rsquo;m also sure that the sheer number of students touched by KA is net positive. Here, KA, have an internet fist bump from me. 👊\nDid the 20,000 recorded lectures Berkeley posted to Youtube make the world a better place before Berkeley removed them because of ADA complaints? I\u0026hellip; I don\u0026rsquo;t know.\nThis is where it gets selfish. I want to massively scale 1-on-1 learning. I think that\u0026rsquo;s possible today in a way it\u0026rsquo;s never been before. But should I? I believe many people could benefit enormously from 1-on-1 mentoring relationships. Assuming for now that I\u0026rsquo;m successful in facilitating those, will I just end up making life better for professionals who already have it \u0026ldquo;great\u0026rdquo; by any rational measure? Open up new professional opportunities\u0026hellip; for those of us who least need them? If my work creates a ton of value, is a little bit of increased inequality an acceptable tradeoff? How much, and how do I measure that?\nI don\u0026rsquo;t have any answers to these questions yet. For now I\u0026rsquo;m going to keep building and hope it works out, but I\u0026rsquo;m keeping an eye on this. 
If you have any helpful ways to look at these issues drop me a line.\n","date":1592595837,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1592595837,"objectID":"4a29de97e417e1122b1cb2a9e9909d09","permalink":"/post/hand_raisers_and_inequality/","publishdate":"2020-06-19T12:43:57-07:00","relpermalink":"/post/hand_raisers_and_inequality/","section":"post","summary":"While I was at Thinkful our instructional design and features evolved a lot. At the beginning things were simple. The curriculum was plain text (a Google doc we shared) that curated 3rd-party resources and \u0026ldquo;explained\u0026rdquo; (in a way that will make instructional designers cringe) the remaining topics.","tags":["Education","Ethics","Inequality"],"title":"The Hand Raiser Problem and Inequality","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"Back to primes! So far we\u0026rsquo;ve been able to get away with being a little greedy with our compute when playing with primes. Now Euler is ratcheting up the difficulty and we\u0026rsquo;ll have to focus on efficiency.\nAs usual, if you haven\u0026rsquo;t spent time with Problem 7 yet, take a chance to play with it on your own and come back.\n Euclid teaching his students, detail from The School of Athens by Raphael Counting Primes Let\u0026rsquo;s start by looking at each integer, deciding whether it\u0026rsquo;s prime, and counting it if it is until we get to the 10,001st prime.\nOur first stab at an is_prime(n) function will be the simplest and we\u0026rsquo;ll iterate into more optimized (and complicated) versions after. Here\u0026rsquo;s the starting point:\ndef is_prime(n): if n \u0026lt; 2: return False for x in range(2, n): if n % x == 0: return False return True This checks every number less than $ n $ to see if it\u0026rsquo;s a factor of $ n $. It\u0026rsquo;s almost good enough to solve the problem in under a minute. 
My laptop chugs through the first 10,001 primes in 68 seconds using this version of the is_prime(n) function (full code later). But the rules only give us a minute and we can do better.\nCap the Search Space Looking back at our Problem 3 Solution we optimized our is_prime(n) function by capping the space we search for factors, checking only numbers up to $ \\sqrt{n} $. Check out that post if you want to dig deep into why / how that works.\nimport math def is_prime(n): if n \u0026lt; 2: return False for x in range(2, math.floor(math.sqrt(n)) + 1): if n % x == 0: return False return True This runs a lot faster. It finds the 10,001st prime in 0.29 seconds on my machine. But can we make it even better?\nSkip Through the Search Space Perhaps the most rediscovered result about prime numbers is the fact that every prime bigger than 3 is \u0026ldquo;next\u0026rdquo; to a multiple of 6. That is, for every prime number starting at 5 you can get a multiple of 6 by adding 1 or subtracting 1.\nFor example:\n 5 is prime, add 1 and get 6 13 is prime, subtract 1 and get (6 * 2) 1,361 is prime, add 1 and get (6 * 227) This works for every prime number bigger than 3.\nWe can use this property to skip potential factors we don\u0026rsquo;t need to check. When checking to see if a number $ n $ has factors we can get away with just looking for the prime factors, we don\u0026rsquo;t also need to know if it has any factors that are themselves composite. For example, we don\u0026rsquo;t need to know that 24 is divisible by 8. We can stop as soon as we see it\u0026rsquo;s divisible by 2. 
So we can skip every potential factor except for those which might be prime.\nIn code:\ndef is_prime(n): if n \u0026lt; 2: return False if n == 2 or n == 3: return True if n % 2 == 0 or n % 3 == 0: return False for x in range(6, math.floor(math.sqrt(n)) + 2, 6): if n % (x - 1) == 0 or n % (x + 1) == 0: return False return True This version takes advantage of Python\u0026rsquo;s \u0026ldquo;step\u0026rdquo; argument to range(). We\u0026rsquo;re looking at every multiple of 6 (below our limit) and checking whether the number before or after it divides our target.\nThis optimizes things a bit more and, indeed, finds the 10,001st prime in 0.17 seconds on my machine.\nPutting it Together Once we have an efficient is_prime() function the solution is a matter of counting primes with a while loop.\nseen = 0 n = 1 while seen \u0026lt; 10001: n += 1 if is_prime(n): seen += 1 print(n) Going Further There are ways to solve this problem even faster. You could use the Prime Number Theorem to approximate an upper bound for a Sieve of Eratosthenes and sieve out the answer. We\u0026rsquo;ll deal with those concepts in coming problems so for now I\u0026rsquo;ll leave that as an exercise for the reader.\n","date":1591574463,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1591574463,"objectID":"5347399ce9cecdc229c3d085eccca863","permalink":"/post/euler_problem_7/","publishdate":"2020-06-07T18:01:03-06:00","relpermalink":"/post/euler_problem_7/","section":"post","summary":"Back to primes! So far we\u0026rsquo;ve been able to get away with being a little greedy with our compute when playing with primes. Now Euler is ratcheting up the difficulty and we\u0026rsquo;ll have to focus on efficiency.","tags":["Euler"],"title":"Project Euler Problem 7: 10001st prime","type":"post"},{"authors":["Grae Drake"],"categories":["Euler"],"content":" Problem 6 has a brute force solution and an elegant formula solution that calculates the answer directly. 
But the brute force solution is good enough and the formula is obscure enough that I wouldn\u0026rsquo;t have found it without googling so we\u0026rsquo;ll focus on the brute force solution.\nAs always, spend some time with the problem if you haven\u0026rsquo;t yet.\n Spent way too long on google images searching squares Brute Forcing it All the numbers involved in this problem are small enough to quickly calculate:\nlimit = 100 integers = range(1, limit + 1) sum_of_squares = sum([x ** 2 for x in integers]) square_of_sum = sum(integers) ** 2 print(square_of_sum - sum_of_squares) You could get clever and jam all that into one line, but it\u0026rsquo;d be less readable \u0026amp; maintainable:\nprint(sum(range(1, 101)) ** 2 - sum([x ** 2 for x in range(1, 101)])) I feel the temptation to get clever like that a lot. I\u0026rsquo;ve learned future me is usually better off if I avoid that temptation.\nThe Efficient Solution It\u0026rsquo;s not my jam and I didn\u0026rsquo;t figure it out on my own (I just used the brute force solution) so I\u0026rsquo;ll avoid deriving the formula (you can read about it in depth here if you like), but for the series of squares:\n$$ 1^2{,\\ } 2^2{,\\ } 3^2{,\\ } 4^2{,\\ } 5^2\u0026hellip; $$\nthere\u0026rsquo;s a formula to calculate the sum of the first n terms:\n$$ \\sum_{i=1}^{n} i^2 = \\frac{n^3}{3} + \\frac{n^2}{2} + \\frac{n}{6} $$\nAnd of course we know the sum of 1 to 100 is a good old arithmetic progression:\n$$ \\sum_{a_{1}}^{a_{n}} = \\frac{n(a_{1} + a_{n})}{2} $$\nWe can combine those as:\n$$ Answer = \\left( \\frac{100(1 + 100)}{2} \\right)^2 - \\left( \\frac{100^3}{3} + \\frac{100^2}{2} + \\frac{100}{6} \\right) $$\nIn code:\nlimit = 100 square_of_sum = (limit * (1 + limit) / 2) ** 2 sum_of_squares = (limit ** 3) / 3 + (limit ** 2) / 2 + limit / 6 print(square_of_sum - sum_of_squares) Look ma, no 
iteration.\n","date":1591485014,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1591485014,"objectID":"88e26ca388865bdf0dfcb29d005ee403","permalink":"/post/euler_problem_6/","publishdate":"2020-06-06T17:10:14-06:00","relpermalink":"/post/euler_problem_6/","section":"post","summary":"Problem 6 has a brute force solution and an elegant formula solution that calculates the answer directly. But the brute force solution is good enough and the formula is obscure enough that I wouldn\u0026rsquo;t have found it without googling so we\u0026rsquo;ll focus on the brute force solution.","tags":[],"title":"Project Euler Problem 6: Sum Square Difference","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":" Problem 5 is a lot of fun (well, \u0026ldquo;fun\u0026rdquo;) because (1) there\u0026rsquo;s a very simple program requiring no math that calculates the answer, but (2) that program would need impossible amounts of compute to actually run, and (3) you can figure the answer with pen and paper super fast if you think about the math a bit. That\u0026rsquo;s what normal people consider fun, right?\nSince this problem leans much more on number theory than on programming we\u0026rsquo;ll use it as an excuse to talk about math more than usual. Our first real step into number theory was Problem 3 ( blog post) with prime factors. Here we\u0026rsquo;ll talk about least common multiples and prime factorization. This will be useful: as we get deeper into Project Euler we\u0026rsquo;ll get much deeper into number theory.\nAs always, spend some time with Problem 5 on your own if you haven\u0026rsquo;t already.\n An Ulam Spiral visualizing prime factorization. Source: Wikipedia Brute Force? Each Project Euler problem has basically two parts: an example and a problem statement. In this case the example is:\n 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.\n These examples are crazy useful. 
They aren\u0026rsquo;t just there to define the problem: they let us work through the problem with simpler, smaller inputs. Testing and tinkering with small examples can yield insights, concepts, and strategies that help solve the problem with tougher inputs. They\u0026rsquo;re hints, if you look at them hard enough.\nNow, as a programmer your instinct might be to throw piles of compute at this problem. The example number is pretty small (just 2,520). Maybe we can just light up a few nested loops to chew through this?\nfound = False n = 1 factors = range(1, 11) while not found: n += 1 found = True for x in factors: if n % x != 0: found = False print(n) That program gives 2520 with no appreciable delay. Let\u0026rsquo;s crank our range up to 20 and try it out!\n\u0026hellip;\nYeah so I pulled the plug after it took more than a minute to iterate all the way up to 19034074 without finding the answer. Clearly the number we\u0026rsquo;re looking for is too big to find with this brute force approach. Looks like we\u0026rsquo;re going to have to think about the math a bit.\nBreak Out the Pen and Paper If we can\u0026rsquo;t brute force our way, there must be a more direct solution. This is where Project Euler\u0026rsquo;s hints are crazy useful.\nOne way to calculate a number that\u0026rsquo;s divisible by a set of factors (a \u0026ldquo;common multiple\u0026rdquo;) is to just multiply the factors all together. In our example case that\u0026rsquo;s:\n$$ 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 $$\nThat\u0026rsquo;s cumbersome to write out, so instead I\u0026rsquo;ll use $ ! $, which is shorthand for the factorial operation.\nWe see that $ 10! $ is $ 3{,}628{,}800 $. That\u0026rsquo;s MUCH bigger than $ 2{,}520 $. Exactly $ 1{,}440 $ times bigger in fact. What\u0026rsquo;s so special about $ 1{,}440 $? And why can you evenly divide it by 2 and 3 so many times?\nNow, you know that every integer is either prime or the product of prime factors. 
You may not know that\u0026rsquo;s called the \u0026ldquo;unique factorization theorem\u0026rdquo;, or, with more gravitas, the \u0026ldquo; fundamental theorem of arithmetic\u0026quot;.\nFinding the prime factors of a number is called prime factorization and it can take a long time with big numbers. But you can factorize all the numbers you need to solve this problem in your head because they\u0026rsquo;re all so small.\nTinkering with a calculator and without needing to write any code, we can see that the prime factors of $ 1{,}440 $ are:\n$$ [2{,\\ }2{,\\ }2{,\\ }2{,\\ }2{,\\ }3{,\\ }3{,\\ }5] $$\nYou could say that each of those is an \u0026ldquo;extra\u0026rdquo; factor and that $ 10! $ has a bunch of extra factors of $ 2 $ and $ 3 $ and $ 5 $ compared to the least common multiple. This is even clearer if we set the prime factorization of the least common multiple (\u0026ldquo;LCM\u0026rdquo;) and $ 10! $ next to each other:\n$$ LCM = 2{,}520 = 2^3 * 3^2 * 5^1 * 7^1 $$ $$ 10! = 3{,}628{,}800 = 2^8 * 3^4 * 5^2 * 7^1 $$\nProgress! Is there a way we can figure out (1) which prime numbers are factors of our answer and (2) how many of each prime is \u0026ldquo;enough\u0026rdquo;?\nWhich Prime Numbers? Looking at the example we see none of the prime factors are larger than 10. And that every prime number less than 10 is represented at least once.\nThis makes sense. The next largest prime number, 11, isn\u0026rsquo;t a factor of any of the numbers 1-10. And every number, including each prime number, less than 10 is a factor.\nSo we know that our answer has every prime number less than 20 as a factor at least once, and none of the prime numbers larger than 20 as a factor.\nHow many of each? Some of the primes from 2 to 19 have to be represented more than once. We know that because that number (9,699,690) would have been found with our brute force approach if it were correct. 
We also know that because a number like 4, which has two factors of 2, wouldn\u0026rsquo;t evenly divide it. And 8, which has three factors of 2, also wouldn\u0026rsquo;t evenly divide it.\nTurns out that we need \u0026ldquo;enough\u0026rdquo; of each prime to make every one of the factors (1 to 20). Looking at the answer to the example, it needs three factors of 2 because it needs to be divisible by 8. And it needs two factors of 3 because it needs to be divisible by 9. But only one factor of the larger primes.\nCoding a solution At this point it might actually be easier to break out a pen and tally the answer. You could do that by looking at each number from 1 to 20, calculating the prime factors for each number, keeping the largest count of each prime factor, and then multiplying out everything. But instead, let\u0026rsquo;s write some code to do that!\nfrom math import floor # A simple function to test whether a number is prime. def is_prime(n): for x in range(2, floor(n ** 0.5) + 1): if n % x == 0: return False return True # The number we'll check up to. Set to 10 to test the code then change to 20. limit = 20 # A list of primes we know show up at least once. primes = [x for x in range(2, limit + 1) if is_prime(x)] # A simple function to calculate the prime factors of a number. def prime_factors(n): result = {prime: 0 for prime in primes} for prime in primes: while n % prime == 0: result[prime] += 1 n = n // prime return result # Initialize factors. We know each prime below limit shows up at least once. factors = {prime: 1 for prime in primes} # Look at the prime counts for each number in our range and keep the highest. for x in range(2, limit + 1): for prime, count in prime_factors(x).items(): factors[prime] = max(count, factors[prime]) # Calculate and print the result. result = 1 for prime, count in factors.items(): result *= prime ** count print(result) There are ways to further optimize this solution. 
For example, you could sidestep the need for a prime_factors() function altogether with clever application of some logarithms and the observation that 3 * 3 * 3 is bigger than 20. But that feels a bit silly. Even after bumping our limit up to 100, this program calculates the 41-digit answer without noticeable delay.\n","date":1591201997,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1591201997,"objectID":"c4f7dedf972cf1dd362a3ac1e0e974db","permalink":"/post/euler_problem_5/","publishdate":"2020-06-03T10:33:17-06:00","relpermalink":"/post/euler_problem_5/","section":"post","summary":"Problem 5 is a lot of fun (well, \u0026ldquo;fun\u0026rdquo;) because (1) there\u0026rsquo;s a very simple program requiring no math that calculates the answer, but (2) that program would need impossible amounts of compute to actually run, and (3) you can figure the answer with pen and paper super fast if you think about the math a bit.","tags":["Euler"],"title":"Project Euler Problem 5: Smallest Multiple","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"Project Euler problem 4 feels like a step back in difficulty. The numbers involved aren\u0026rsquo;t too big so we don\u0026rsquo;t have to worry about resource constraints. The subproblems it breaks down into are fairly straightforward. If you haven\u0026rsquo;t yet, take some time with this problem on your own and continue on down below.\n Photo by Jingwei Ke on Unsplash Breaking the Problem Down We can break this problem down into several subproblems:\n Be able to check whether a number is a \u0026ldquo;palindrome\u0026rdquo; Look at the right set of numbers and check whether each is a palindrome Choose the right one from the final set Subproblem 1: The is_palindrome(n) Function Let\u0026rsquo;s start with the first subproblem and write an is_palindrome(n) function to check whether a number is a palindrome. 
This function uses arithmetic to create a new integer with the digits of n reversed:\ndef is_palindrome(n): backwards = 0 temp = n while temp \u0026gt; 0: backwards *= 10 backwards += temp % 10 temp = temp // 10 return backwards == n There are several ways you could define this is_palindrome(n) function. You might prefer to use strings rather than arithmetic:\ndef is_palindrome(n): return str(n) == str(n)[::-1] This one uses Python\u0026rsquo;s extended slicing, one of my favorite Python features. Slicing makes Python a joy to work with.\nSubproblem 2: Generating Products How can we generate the set of numbers we need to check? We\u0026rsquo;re interested in all the products of two 3-digit numbers. We can use a simple for loop to generate each 3-digit number. To generate all pairs of 3-digit numbers we can nest two for loops:\nproducts = [] for x in range(100, 1000): for y in range(100, 1000): products.append(x * y) The nested for loops above will generate every permutation (specifically, the \u0026ldquo;permutation with repetition\u0026rdquo; or Cartesian product) of three digit numbers, use those factors to calculate the product, and add that product to a list of all products.\nPython has a wonderful standard library. The itertools module ( docs) is particularly useful for Project Euler, given how often it throws combinatorics problems at us.\nHere is the same solution using itertools.product and a list comprehension:\nfrom itertools import product factor_pairs = product(range(100, 1000), repeat=2) products = [factors[0] * factors[1] for factors in factor_pairs] Speeding up with Combinations The approach above using products will make us duplicate some of our work. For example, $ 101 * 202 $ is the same as $ 202 * 101 $, and the approach above calculates that product ($ 20{,}402 $) multiple times. 
Wasted compute cycles, right?\nIt doesn\u0026rsquo;t matter much for this problem because the numbers are small enough to just power through, but it\u0026rsquo;s easy to imagine a situation where unnecessarily repeating work does cause problems.\nBecause the order of factors doesn\u0026rsquo;t matter, we can use a combination rather than a product to find all the products we want. We\u0026rsquo;ll skip straight to the itertools solution this time:\nfrom itertools import combinations_with_replacement factor_pairs = combinations_with_replacement(range(100, 1000), r=2) products = [factors[0] * factors[1] for factors in factor_pairs] This approach cuts the length of products about in half because it isn\u0026rsquo;t unnecessarily repeating calculations. That\u0026rsquo;s not critical for this problem, but being aware of these issues will help us down the road.\nSubproblem 3: Finding the Correct Product Now that we\u0026rsquo;ve generated every product we want and stored them in a products list and have a is_palindrome(n) function we can filter down to just palindromic numbers:\npalindromes = [product for product in products if is_palindrome(product)] From there we can sort and take the largest palindrome:\nprint(sorted(palindromes)[-1]) This approach uses Python\u0026rsquo;s built-in sorted() function ( docs) and looks at the end of the list for the biggest value.\nCan you Improve it? The solution described here generates all products of 3-digit numbers, even the small ones we know probably aren\u0026rsquo;t the answer. The nested for loops above count up through all the factors to generate products and then palindromes. It\u0026rsquo;s possible to write a solution that counts down and finds the answer with much less compute than the solution in this post. 
Can you figure out how?\n","date":1590543681,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1590543681,"objectID":"df83532305caf2399ad188366125d1ec","permalink":"/post/euler_problem_4/","publishdate":"2020-05-26T18:41:21-07:00","relpermalink":"/post/euler_problem_4/","section":"post","summary":"Project Euler problem 4 feels like a step back in difficulty. The numbers involved aren\u0026rsquo;t too big so we don\u0026rsquo;t have to worry about resource constraints. The subproblems it breaks down into are fairly straightforward.","tags":["Euler"],"title":"Project Euler Problem 4: Largest palindrome product","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":" Problem 3 is where Euler starts forcing us to consider resource limitations. Before, the most straightforward solution worked just fine, even if it used more resources than a less complex algorithm would. As we\u0026rsquo;ll see here, that kind of solution, while correct, doesn\u0026rsquo;t work for us because the program never finishes in a reasonable amount of time. If we\u0026rsquo;re going to solve this one we need to start thinking about algorithmic complexity.\nAs usual, this is a good chance to take some time with the problem yourself before continuing on.\n Project Euler really likes to play with prime numbers. Image: David Eppstein The Straightforward (Naive) Approach So far I\u0026rsquo;ve been super intentional about saying \u0026ldquo;straightforward\u0026rdquo; when talking about the first solution that comes to mind. A lot of people call this the \u0026ldquo;naive\u0026rdquo; solution. I understand why people say \u0026ldquo;naive\u0026rdquo; but I usually avoid that word because I don\u0026rsquo;t like the negative connotations it carries. As we\u0026rsquo;ve seen, sometimes the \u0026ldquo;naive\u0026rdquo; or \u0026ldquo;straightforward\u0026rdquo; solution really is the best solution: it really depends on your context and the tradeoffs you\u0026rsquo;re making. 
I\u0026rsquo;ve seen people internalize the principle that the straightforward solution must be a \u0026ldquo;bad\u0026rdquo; solution and refuse to consider the tradeoffs involved. In the real world that mindset will just kill your productivity and lead you to produce work that isn\u0026rsquo;t appropriate for the context you\u0026rsquo;re working in.\nThat said, there are situations where it\u0026rsquo;s easier to call a solution \u0026ldquo;naive\u0026rdquo; and this problem gives us a good example. My first attempt at solving this problem went like this:\n Ok, let\u0026rsquo;s just iterate over every number between 1 and 600,851,475,143, test whether it\u0026rsquo;s prime, then if so test whether it\u0026rsquo;s a factor of 600,851,475,143. Finally take the biggest one and there\u0026rsquo;s your answer.\n In code that would look something like this. Ignore the magic is_prime() function for now, we\u0026rsquo;ll get to that further down:\nfactors = [] for x in range(2, 600851475143): if is_prime(x): if 600851475143 % x == 0: factors.append(x) print(factors[-1]) Now, it\u0026rsquo;s important to note that this solution is correct. We can test it on the example the problem gives us (13195) and we get the right answer (29). And we get it fast: running it on my machine there\u0026rsquo;s no noticeable delay. But when I plug in 600851475143 the program just seems to freeze. What gives? Even after waiting five minutes the program is just sitting there. What\u0026rsquo;s happening?\nTinkering and timing Let\u0026rsquo;s keep the logic of the program the same and play around with inputs to see if we get any hints about what the problem is. 
Remember I\u0026rsquo;m still using a magic is_prime() function that we\u0026rsquo;ll get to below.\nHere we refactor our program into a function to make it easier to call repeatedly on different inputs:\ndef largest_prime_factor(n): factors = [] for x in range(2, n): if is_prime(x): if n % x == 0: factors.append(x) return factors[-1] Now we can easily run that code with multiple inputs:\n\u0026gt;\u0026gt;\u0026gt; largest_prime_factor(13195) 29 \u0026gt;\u0026gt;\u0026gt; largest_prime_factor(21952) 7 \u0026gt;\u0026gt;\u0026gt; largest_prime_factor(98989) 8999 On my laptop, those first two examples seemed to run instantaneously, but the third one had a noticeable delay. Let\u0026rsquo;s measure how long the program takes to run every time we run it. We\u0026rsquo;ll use a simple way to time our code. There are, of course, much more accurate ways to measure performance that are well suited for real-world profiling, but this is good enough for us right now.\nimport time def timed(n): start = time.perf_counter() result = largest_prime_factor(n) end = time.perf_counter() duration = end - start print('Largest prime factor of {} is {}. Execution: {} seconds'.format( n, result, duration )) Now we can see how fast the code is running:\n\u0026gt;\u0026gt;\u0026gt; timed(13195) \u0026quot;Largest prime factor of 13195 is 29. Execution: 0.018383000000000038 seconds\u0026quot; \u0026gt;\u0026gt;\u0026gt; timed(21952) \u0026quot;Largest prime factor of 21952 is 7. Execution: 0.026741000000015447 seconds\u0026quot; \u0026gt;\u0026gt;\u0026gt; timed(98989) \u0026quot;Largest prime factor of 98989 is 8999. Execution: 0.13390699999999356 seconds\u0026quot; Of course, all these numbers are specific to my computer and setup. They\u0026rsquo;ll fluctuate a little each time I run the code, and they\u0026rsquo;ll also change depending on what else my computer is doing and several other factors. So if you run this you\u0026rsquo;ll get different numbers. That\u0026rsquo;s ok. 
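As an aside, the standard library\u0026rsquo;s timeit module is built for exactly this kind of quick measurement; it picks a high-resolution clock and handles repetition for you. A minimal sketch (the snippet and repeat count here are arbitrary examples, not part of the solution):

```python
import timeit

# Run an arbitrary snippet 1,000 times and report the total duration in seconds.
duration = timeit.timeit("sum(range(1000))", number=1000)
print(duration)
```
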
The point is that I can easily see how execution time increases as I increase the size of the input.\nThat last part is important and bears repeating. Execution time increases as I increase the size of the input. Bigger inputs take more time to process. Thinking about the for loop in our largest_prime_factor function it makes a lot of sense: the more we loop the longer our program takes to run.\nLet\u0026rsquo;s start adding zeros to our input and see what happens:\n\u0026gt;\u0026gt;\u0026gt; timed(10000) \u0026quot;Largest prime factor of 10000 is 5. Execution: 0.01425699999998642 seconds\u0026quot; \u0026gt;\u0026gt;\u0026gt; timed(100000) \u0026quot;Largest prime factor of 100000 is 5. Execution: 0.13481099999998492 seconds\u0026quot; \u0026gt;\u0026gt;\u0026gt; timed(1000000) \u0026quot;Largest prime factor of 1000000 is 5. Execution: 2.938510000000008 seconds\u0026quot; \u0026gt;\u0026gt;\u0026gt; timed(10000000) \u0026quot;Largest prime factor of 10000000 is 5. Execution: 74.51557199999999 seconds\u0026quot; Uuuugh, that last one was a pain to wait for. And the rules of Project Euler say that solutions should run in a minute or less so that\u0026rsquo;s no good. Every time we add a zero to our input the code takes at least 10 times longer to run. The input we need to solve for, 600851475143, has 4 more digits than the number that took 74 seconds to solve. Back of the napkin, running our program on it will take something like $ 74 * 10^4 = 740{,}000 $ seconds to run. That\u0026rsquo;s 8 or 9 days. No good.\nCorrect isn\u0026rsquo;t Good Enough If we\u0026rsquo;re actually going to solve this we need a better \u0026ldquo;solution\u0026rdquo;. It isn\u0026rsquo;t good enough that our solution is correct. The solution above is correct. But we don\u0026rsquo;t have the resources to actually run it on the input we need to so it\u0026rsquo;s as good as useless. 
In our context, the naive solution really is naive: it doesn\u0026rsquo;t take into account critical resource limitations.\nComputer cycles are cheap. But they aren\u0026rsquo;t free. Give or take, my 2017 Macbook Air can run a simple python loop about $ 10{,}000{,}000 $ times in a second. That\u0026rsquo;s:\n $ \\approx 1 * 10^7 $ loops in a second $ \\approx 1 * 10^{14} $ loops in a year $ \\approx 1 * 10^{16} $ loops in a lifetime (~80 years) Viewed through the lens of a human life, burning a million \u0026ldquo;Macbook loops\u0026rdquo; of time here or there doesn\u0026rsquo;t matter. But my computing resources are limited. As the exponent on a program\u0026rsquo;s computational needs increases that starts to matter more and more and more.\nLet\u0026rsquo;s see if we can refactor our correct code into an actual, pragmatic solution.\nCorrect and Efficient Ok, so life is too short for programs that loop 600851475143 times. How can we find the factors of a number without needing so many loops?\nThe key insight for this problem is to see that factors come in pairs. Take an easy number like $ 2{,}500 $. We can see right away that $ 2 $ is a factor:\n$$ 2{,}500 / 2 = 1{,}250 $$\nHey look at that, we got another factor for free! If we see that $ 2 $ is a factor we can deduce that $ 1,250 $ must also be a factor. Let\u0026rsquo;s do this a few more times and see if any patterns stand out:\n$$ 2{,}500 / 5 = 500 $$ $$ 2{,}500 / 10 = 250 $$ $$ 2{,}500 / 20 = 125 $$ $$ 2{,}500 / 25 = 100 $$ $$ 2{,}500 / 50 = 50 $$\nAs one factor gets larger its \u0026ldquo;pair\u0026rdquo; factor gets smaller and smaller. Until they meet. Where do they meet? At the square root: $ \\sqrt{2{,}500} $ is $ 50 $. That means every factor of $ n $ larger than $ \\sqrt{n} $ is paired with a factor smaller than $ \\sqrt{n} $. And so: we can find every factor of $ n $ just by looking at the integers up to $ \\sqrt{n} $!\nWe can use this to improve our solution. 
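Before wiring it into the solution, the pairing idea can be sketched on its own. This toy function (factor_pairs is a name I\u0026rsquo;m making up for illustration, not part of the solution) collects every factor pair of a number while only looping up to its square root:

```python
import math

def factor_pairs(n):
    # Check only integers up to sqrt(n); each factor x found there
    # gives us its partner n // x for free.
    pairs = []
    for x in range(1, math.floor(math.sqrt(n)) + 1):
        if n % x == 0:
            pairs.append((x, n // x))
    return pairs

print(factor_pairs(2500))
```

Running it on 2500 shows eight pairs, ending with (50, 50), the point where the pairs meet at the square root.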
Instead of looping every number below 600851475143 to find factors we can just loop over every integer below its square root: about 775146. That\u0026rsquo;s\u0026hellip; a lot fewer loops. In code:\nimport math def largest_prime_factor(n): factors = [] for x in range(2, math.floor(math.sqrt(n)) + 1): if is_prime(x): if n % x == 0: pair = n // x factors.append(x) if is_prime(pair): factors.append(pair) return sorted(factors)[-1] Ok, let\u0026rsquo;s check against our test input to see if it works:\n\u0026gt;\u0026gt;\u0026gt; largest_prime_factor(13195) 29 Yep, still works. How fast?\n\u0026gt;\u0026gt;\u0026gt; timed(13195) \u0026quot;Largest prime factor of 13195 is 29. Execution: 0.0001859999999851425 seconds\u0026quot; Hey, that\u0026rsquo;s a lot faster than our naive solution. Is this good enough?\n\u0026gt;\u0026gt;\u0026gt; timed(600851475143) \u0026quot;Largest prime factor of 600851475143 is [censored]. Execution: 2.0993389999999863 seconds\u0026quot; Woohoo!\n Ah. That\u0026rsquo;s the good stuff. The is_prime() Function, Finally Ok so up till now I\u0026rsquo;ve asked you to just accept that we have a good is_prime() function. Let\u0026rsquo;s dig into that. A prime is a number that is divisible only by 1 and itself. That is: it has no other factors. And hey, we\u0026rsquo;ve already written code to find factors! Let\u0026rsquo;s repurpose that for our is_prime() function:\ndef is_prime(n): for x in range(2, n): if n % x == 0: return False return True If you squint at this function you\u0026rsquo;ll see it\u0026rsquo;s wolfing down compute and running more loops than we need just like our initial solution code was. Let\u0026rsquo;s use the same insight about only needing to search for factors up to the square root to improve things:\ndef is_prime(n): for x in range(2, math.floor(math.sqrt(n)) + 1): if n % x == 0: return False return True And there we go: we have an is_prime() function good enough to solve this problem. Are there ways to improve it? 
Yes, and we might look into that later if needed for harder problems, but in the spirit of pragmatism we\u0026rsquo;ll stick with this straightforward solution for as long as it does what we need it to do.\n","date":1590436220,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1590436220,"objectID":"04e1f4c7d9901f8eca4a18ad97ab917c","permalink":"/post/euler_problem_3/","publishdate":"2020-05-25T12:50:20-07:00","relpermalink":"/post/euler_problem_3/","section":"post","summary":"Problem 3 is where Euler starts forcing us to consider resource limitations. Before, the most straightforward solution worked just fine, even if it used more resources than a less complex algorithm would.","tags":["Euler"],"title":"Project Euler Problem 3: Largest Prime Factor","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"Our first, and certainly not our last, encounter with the Fibonacci Sequence on Project Euler. Before we dive into Problem 2 together take some time to chew on it yourself if you haven\u0026rsquo;t already. Did you find a solution? If so have you been able to improve or streamline your first one? If not can you clearly describe to yourself what you\u0026rsquo;re stuck on?\n Modern mathematicians just aren\u0026rsquo;t this imposing. Image: Wikipedia The Straightforward Approach The most straightforward way to solve this problem is to generate every Fibonacci number below four million, then look at each one, check whether it\u0026rsquo;s even and add it to our total if it is:\nfibs = [1, 1] while fibs[-1] + fibs[-2] \u0026lt; 4000000: fibs.append(fibs[-1] + fibs[-2]) evens = [] for n in fibs: if n % 2 == 0: evens.append(n) print(sum(evens)) Saving Some Memory The solution above works and runs plenty quick. We don\u0026rsquo;t need to simplify. But can we? You might notice that we\u0026rsquo;re creating a pretty big list of fibonacci numbers. Is there a way we can avoid using all that memory? 
What if, instead of a list, we just kept track of the two most recent numbers and checked for evenness at the same time we generate each new number?\nresult = 0 a = 1 b = 1 while a + b \u0026lt; 4000000: new = a + b a = b b = new if b % 2 == 0: result += b print(result) Saving Some Lines Mathematicians and programmers coming from other languages might get weirded out by an amazing Python feature called multiple assignment. It lets us do things like this:\nx = 'ham' y = 'eggs' x, y = y, x print(x) \u0026gt;\u0026gt;\u0026gt; 'eggs' print(y) \u0026gt;\u0026gt;\u0026gt; 'ham' Multiple assignment lets us swap two variables in a single line and do other fun things with assigning to more than one variable at a time. Check out how we can use multiple assignment to compress our solution above in two places:\nresult = 0 a, b = 1, 1 while a + b \u0026lt; 4000000: a, b = b, a + b if b % 2 == 0: result += b print(result) This is super useful and can make for much more concise code. Trey Hunner has a great tutorial on multiple assignment if you want to learn more.\nSkipping Odds In our solutions so far we\u0026rsquo;ve been calculating every fibonacci number and then checking if it\u0026rsquo;s even. What if we could just skip the fibonacci numbers we don\u0026rsquo;t want and just calculate the ones we need?\nLook at the sequence below. Do you see a pattern with even numbers?\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, \u0026hellip;\nEvery third term in the sample above is even. Can you see why it\u0026rsquo;s not just a coincidence and is true for the entire sequence?\nIs there a way to calculate only the 3rd, 6th, 9th, etc. items so we don\u0026rsquo;t waste precious CPU cycles on lame odd numbers like 89? 
Yes.\n He must have used more compute than absolutely necessary I won\u0026rsquo;t run through the steps here, but you can start with the fibonacci series definition:\n$$ F_{n} = {\\color{#FF007F}F_{n-2}} + \\color{#0072BB}F_{n-1} $$\nwhere $ F_{n} $ is the nth term in the sequence, and algebraically derive the equation:\n$$ F_{n} = {\\color{#FF007F}F_{n-6}} + {\\color{#0072BB}F_{n-3}} + {\\color{#177245}(3 * F_{n-3})} $$\nSo normally to get the next term we add the previous two terms. For the next even term we add the previous two even terms, and then add the previous even term three more times. In code:\n# Seed with first two even terms. a, b = 2, 8 result = 10 while a + 4 * b \u0026lt; 4000000: a, b = b, a + b + (3 * b) result += b print(result) It\u0026rsquo;s always nice when a more optimal algorithm also makes for less code. However, we\u0026rsquo;ve lost some clarity compared to our previous solutions. It\u0026rsquo;s not clear from the code itself where the hard-coded magic numbers 2, 8, and 10 are coming from. Magic numbers aren\u0026rsquo;t self-explanatory in the way named variables are and they can make a program harder to understand and maintain. That\u0026rsquo;s why I added a comment at the top: I didn\u0026rsquo;t think the code alone made it obvious enough to you what it was doing.\nA Golden Solution Let\u0026rsquo;s get real funky with it. Is there a way to calculate each term directly from the single term before it? For example, how could we look at $ 34 $ and calculate $ 55 $ without knowing or caring that the previous term was 21?\nThe fibonacci sequence, which our buddy Fibonacci described way back in 1202, has the lovely property that the ratio between successive terms converges on $ \\phi $, the golden ratio. So if you take $ F_n $ and multiply it by $ \\phi $ you get alllllmost $ F_{n+1} $. 
Using the example above you get:\n$$ 34 * \\phi = 55.0131556175\u0026hellip; $$\nThat\u0026rsquo;s super close to the right answer: $ 55 $.\nMultiply by $ \\phi $ again and you get $ \\approx{89} $, and multiply by $ \\phi $ one last time to get $ \\approx{144} $. Each time we multiply by $ \\phi $ we step to the next fibonacci number. We can take three steps at once by multiplying by $ \\phi^3 $. More formally:\n$$ F_{n+3} \\approx F_{n} * \\phi^3 $$\nOk this trick gets us an approximate answer, but how do we turn that into an exact answer? It turns out the approximation is so good and the error so small that you can just round the result to the nearest integer. That\u0026rsquo;s it:\n$$ F_{n+3} = \\left\\lfloor F_{n} * \\phi^3 \\right\\rceil $$\nLet\u0026rsquo;s code that up:\n# Define phi because it isn't predefined in the Python standard library. phi = (1 + 5 ** 0.5) / 2 a = result = 2 while a * phi ** 3 \u0026lt; 4000000: a = round(a * phi ** 3) result += a print(result) Now ain\u0026rsquo;t that a shiny solution.\nAnalytic Approximation (not Solution) This is a nice solution but it still relies on a while loop to calculate items one by one. I can almost, but not quite, get to a direct calculation. The approximation of $ \\phi $, which was small enough between terms to round away in the solution above, compounds in this stab at an analytic solution and so only gives an approximation.\nFor the specific problem inputs this gives a result about 5% off from the true answer. Here\u0026rsquo;s my approximation.\nIf we ignore rounding for now, we can write out the sequence of terms we generate above like this:\n$$ 2,\\: 2\\phi^3,\\: 2\\phi^6,\\: 2\\phi^9,\\: 2\\phi^{12},\\: \u0026hellip; $$\nIf you squint real hard, you can see that\u0026rsquo;s a geometric series. 
It\u0026rsquo;s easier to see if we replace $ \\phi^3 $ with the symbol $ r $:\n$$ 2r^0,\\: 2r^1,\\: 2r^2,\\: 2r^3,\\: 2r^4,\\: \u0026hellip; $$\nwhere $ r = \\phi^3 \\approx{4.2360679775} $\nSince this is a geometric series we can use the formula for the sum of the first $ n $ terms of a geometric series:\n$$ Geometric{\\ }Sum = \\frac{a(1-r^n)}{1 - r} $$\nWhere $ a $ is the start term (in our case: $ 2 $), $ r $ is the ratio between terms (in our case $ \\phi^3 $ or about $ 4.2360679775 $), and $ n $ is the number of terms.\nAll we\u0026rsquo;re missing now is $ n $. We can get that by taking the log base $ r $ of our limit and rounding up (spoiler: it\u0026rsquo;s $ 11 $). Let\u0026rsquo;s code it up:\nimport math phi = (1 + 5 ** 0.5) / 2 r = phi ** 3 a = 2 n = math.ceil(math.log(4000000, r)) print((a * (1 - r ** n)) / (1 - r)) Unfortunately the approximations, which were small enough to ignore last time, are now compounding. This attempt overshoots the right answer by about 5%.\nI don\u0026rsquo;t know whether there\u0026rsquo;s a way to improve the accuracy of this approach or if there\u0026rsquo;s a way to tweak it to sidestep the approximation issues. If you see something I\u0026rsquo;m overlooking please reach out and let me know!\n","date":1590358406,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1590358406,"objectID":"928cad04aed7dee55594b4680e713a15","permalink":"/post/euler_problem_2/","publishdate":"2020-05-24T15:13:26-07:00","relpermalink":"/post/euler_problem_2/","section":"post","summary":"Our first, and certainly not our last, encounter with the Fibonacci Sequence on Project Euler. Before we dive into Problem 2 together take some time to chew on it yourself if you haven\u0026rsquo;t already.","tags":["Euler"],"title":"Project Euler Problem 2: Even Fibonacci numbers","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"This is a lovely problem to start with. 
It has a straightforward brute-force loop solution as well as a nice analytic solution where you can calculate the solution directly without the need for much programming. And it\u0026rsquo;s fizzbuzz! What a great way to dive in.\nWho is this for? A quick note: this series of posts is meant for people who already have a development environment set up and are familiar with the very basics of Python. If you\u0026rsquo;ve never written a line of Python before I recommend Data Camp\u0026rsquo;s Python introduction or another free Python 3 tutorial that covers the basics of variables, basic types (ints, floats, strings, booleans), operations, lists, dictionaries, and functions. If you\u0026rsquo;re an experienced programmer but new to Python check out the \u0026ldquo;official\u0026rdquo; Python Tutorial. And if you\u0026rsquo;re comfy writing Python but don\u0026rsquo;t have a good local development environment set up yet check out Problem 0: Getting Started [TODO].\nAlso, this is meant for people who enjoy solving or reading about Project Euler problems. PE has this to say about sharing solutions:\n We hope that you enjoyed solving this problem. Please do not deprive others of going through the same process by publishing your solution outside of Project Euler. Members found to be spoiling problems beyond #100 will have their accounts locked (see note).\nNote: The rule about sharing solutions outside of Project Euler does not apply to the first 100 problems, as long as any discussion clearly aims to instruct methods, not just provide answers, and does not directly threaten to undermine the enjoyment of solving later problems. Problems 1 to 100 provide a wealth of helpful introductory teaching material and if you are able to respect our requirements, then we give permission for them to be discussed elsewhere.\n I take PE\u0026rsquo;s requirements about sharing information seriously and hope you will too. 
I love that so much of the programming world happens in the open with public repos and such, but you won\u0026rsquo;t find my work on problems 101+ here, on GitHub, or anywhere else public.\nThe Problem Ok, let\u0026rsquo;s talk about threes and fives. Before we analyze it together, take a few minutes alone with Problem 1. Set up a Project Euler account if you haven\u0026rsquo;t already. Consider how you might solve it. Take a stab at writing a solution before moving on. I\u0026rsquo;ll wait.\n\u0026hellip;\nIterative Solutions Great, you\u0026rsquo;re back. Let\u0026rsquo;s dive in. The most straightforward way to solve this problem is to look at every number below one thousand, test whether it\u0026rsquo;s divisible by 3 or 5, and add it to our running total if it is:\nresult = 0 integers = range(1, 1000) for x in integers: if x % 3 == 0 or x % 5 == 0: result += x print(result) In this solution we initialize our result to 0, use a for loop to iterate over every integer from 1 through 999, test whether it\u0026rsquo;s divisible by 3 or 5 using the modulo operator (%) and add it to our result variable if it is. Bam.\nUsing List Comprehensions List comprehensions can be an elegant and \u0026ldquo;pythonic\u0026rdquo; way to solve problems. Here\u0026rsquo;s the same iterative solution above using a list comprehension rather than a loop.\nprint(sum([x for x in range(1, 1000) if x % 3 == 0 or x % 5 == 0])) Now isn\u0026rsquo;t that short and sweet. I love list comprehensions. They can get ugly fast, and it\u0026rsquo;s possible to overuse them, but comprehensions excel in cases like this where you can express a few lines of procedural code as a single thought.\nA More Functional Approach Python isn\u0026rsquo;t known for being a naturally functional language, but it\u0026rsquo;s certainly possible to use it that way. 
You can use filter() in place of the comprehension above:\nprint(sum(filter(lambda x: x % 3 == 0 or x % 5 == 0, range(1, 1000)))) Looking at this approach next to the comprehension above it\u0026rsquo;s easy to see why the comprehension is more idiomatic: it just reads easier.\nAnd of course it\u0026rsquo;s no fun to talk about functional programming without contorting reduce() into a solution:\nfrom functools import reduce print(reduce( lambda x, y: x + y if y % 3 == 0 or y % 5 == 0 else x, range(1, 1000), 0 )) With Python 3 reduce() is no longer a built-in function and instead needs to be imported from the functools library. That makes me sad. I can understand why (a comprehension is almost always more practical), but speaking as an apostle of Reduce, God of Parentheses, I have to say I miss having it at my fingertips.\nAnalytic solution The iterative solutions above all rely on generating a list of integers, checking them one by one, and adding them up. That works just fine when you\u0026rsquo;re only counting to a thousand and you have modern computing resources at your fingertips. But things aren\u0026rsquo;t always so easy. What if we were solving this for all integers up to a quadrillion? What if we were working in an environment with limited memory or compute? What if we needed to run this solution gajillions of times every second?\nYou might have heard a famous anecdote about Gauss and how as a young kid he summed the numbers from 1 to 100 in just a few seconds. He didn\u0026rsquo;t actually add all the numbers together; he recognized a pattern in the arithmetic progression 1, 2, 3, \u0026hellip; 99, 100 and used a formula instead. Specifically, you can \u0026ldquo;fold\u0026rdquo; the sequence and match pairs like 1 + 100, 2 + 99, 3 + 98, \u0026hellip; 49 + 52, 50 + 51. There are exactly 50 such pairs and each pair equals 101, so the total sum is 101 * 50 = 5,050.\nThis approach works with any arithmetic progression. 
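Here is a quick sanity check on the folding trick (my own illustration, not from the original post). Brute force agrees with young Gauss:

```python
# Gauss's folding trick: 50 pairs, each summing to 101:
# (1 + 100), (2 + 99), ..., (50 + 51).
pairs = 50
pair_sum = 1 + 100

print(pairs * pair_sum)    # 5050
print(sum(range(1, 101)))  # 5050, adding one by one gives the same answer
```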
We could sum all multiples of 5 up through a hundred by \u0026ldquo;folding\u0026rdquo; the sequence 5, 10, 15, \u0026hellip; 95, 100 and getting 10 pairs that equal 105 for a total sum of 1,050.\nIn general, the formula to calculate the sum of an arithmetic progression is: number_of_terms * (first_term + last_term) / 2.\nHow does this apply to our fizzbuzz problem? The integers we\u0026rsquo;re summing don\u0026rsquo;t form a nice arithmetic progression. And we can\u0026rsquo;t just add the sum of multiples of three to the sum of multiples of five, because that would double-count numbers like 15 and 30. And 45. And 60. See a pattern?\nThe trick is in recognizing that we\u0026rsquo;re dealing with three arithmetic progressions. If we want the sum of all numbers that are a multiple of three or a multiple of five, we can find that by adding sum(multiples_of_three) to sum(multiples_of_five) and then subtracting sum(multiples_of_fifteen).\ndef simple_arithmetic_series(step, limit): first = step count = (limit - 1) // step last = step * count return count * (first + last) // 2 multiples_of_3 = simple_arithmetic_series(3, 1000) multiples_of_5 = simple_arithmetic_series(5, 1000) multiples_of_15 = simple_arithmetic_series(15, 1000) print(multiples_of_3 + multiples_of_5 - multiples_of_15) Evil optimization The analytic solution involves a lot more typing and it isn\u0026rsquo;t as simple to read and understand, so why is it better?\nWell, it isn\u0026rsquo;t better. There are tradeoffs between the analytic solution and the iterative solution. Neither is inherently better. Deciding which approach is better for you depends on your context.\nIn the context of this specific problem, the iterative solution works just fine. It\u0026rsquo;s faster to conceive and implement, it\u0026rsquo;s easier to understand, and less code is less chance for things to break. 
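To make the tradeoff concrete, here is a hedged timing sketch (my own addition; the function names are mine and exact timings will vary by machine) comparing the two approaches at a bigger limit:

```python
# Hedged sketch, not from the post: iterative vs. analytic at a larger limit.
# The absolute numbers depend on your machine; the gap is what matters.
import timeit


def iterative(limit):
    # Check every integer below limit one by one.
    return sum(x for x in range(1, limit) if x % 3 == 0 or x % 5 == 0)


def analytic(limit):
    # Same inclusion-exclusion idea as the analytic solution above.
    def series(step):
        count = (limit - 1) // step
        return count * (step + step * count) // 2

    return series(3) + series(5) - series(15)


limit = 10 ** 7
assert iterative(limit) == analytic(limit)
print("iterative:", timeit.timeit(lambda: iterative(limit), number=1))
print("analytic: ", timeit.timeit(lambda: analytic(limit), number=1))
```

On my reading, the iterative version takes time proportional to limit while the analytic version does a constant amount of arithmetic, which is why the gap keeps widening as limit grows.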
With our crazy strong computers and the small numbers involved the analytic solution is just a bunch of optimization that was (maybe) fun to do but doesn\u0026rsquo;t buy us anything valuable.\nThis is an example of premature optimization, and as Donald Knuth famously said, \u0026ldquo;premature optimization is the root of all evil\u0026quot;. Spending time optimizing things we don\u0026rsquo;t need to isn\u0026rsquo;t just a waste of time better spent elsewhere, it also makes our programs worse: harder to understand and maintain.\nThat said, it\u0026rsquo;s easy to think of cases where the tradeoffs might play out the other way. What if we were working with much larger numbers? Try plugging in bigger numbers to the iterative and analytic solutions and watch where the difference in performance starts to get noticeable. My laptop struggles to run the iterative solution for limits above about a hundred million. Even the world\u0026rsquo;s biggest supercomputer, Summit, doesn\u0026rsquo;t have enough memory to store a list of all integers up to a quadrillion. But my dinky laptop can calculate simple_arithmetic_series(3, 1000000000000000) with no noticeable delay. With the right algorithm, my laptop can do something that\u0026rsquo;s literally impossible for the world\u0026rsquo;s best supercomputer.\nThose numbers seem ridiculous but spoiler alert: Project Euler is going to start throwing hefty numbers at us pretty quick. As we dig deeper, performance, resources, and computational complexity are going to get critically important.\n","date":1590189158,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1590189158,"objectID":"22eb9349b31e386e9982d84abf3b51b2","permalink":"/post/euler_problem_1/","publishdate":"2020-05-22T16:12:38-07:00","relpermalink":"/post/euler_problem_1/","section":"post","summary":"This is a lovely problem to start with. 
It has a straightforward brute-force loop solution as well as a nice analytic solution where you can calculate the solution directly without the need for much programming.","tags":["Euler"],"title":"Project Euler Problem 1: Multiples of 3 and 5","type":"post"},{"authors":["Grae Drake"],"categories":[],"content":"In 2012 I was a corporate lawyer at a big fancy firm. If you\u0026rsquo;d seen teenage me or college me you\u0026rsquo;d be understandably confused. Grae somehow ended up a lawyer? A corporate lawyer at a fancy firm? What? And if you know me today you might be similarly confused.\nMy life as a corporate lawyer was short in part because of Project Euler.\nThat winter Kelly, a law school classmate, shared an article with a group of us about this crazy new company called Dev Bootcamp. It trained people to be programmers. And actually got them jobs! Good ones!\n DBC launched the coding bootcamp industry. By then we were starting our second year of practice. It wasn\u0026rsquo;t a great time. Graduating into the great recession meant a lot of us couldn\u0026rsquo;t land a firm job. And those who did get hired were dealing with the tough reality of law firm life. A bunch of us had the same thought when we learned about coding bootcamps: \u0026ldquo;Holy shit, I wish this was a thing four years ago when I made the naive decision to go to law school.\u0026rdquo;\n Aww, poor white collar professional. Photo credit: Ethan Sykes on Unsplash Most of the people who would have done a bootcamp instead of law school couldn\u0026rsquo;t stomach eating the sunk cost to change careers. I was lucky: Stassia saw how unsatisfied I was. She said \u0026ldquo;If you aren\u0026rsquo;t happy you should quit; we\u0026rsquo;ll figure it out.\u0026rdquo; She\u0026rsquo;s done a lot of amazing things for me over the years, but giving me permission \u0026amp; encouragement to abandon a lucrative career is near the top of the list. 
She was still several years of training away from starting her own and mine was the only income. It was risky, but with her support I left biglaw.\nTuition at DBC was I think like $12k. That\u0026rsquo;s a tiny fraction of the cost of my JD. Still, with my student debt and monthly loan payments we didn\u0026rsquo;t have the reserves or cashflow for that. So what could I do?\nCasting about the web for low cost options I ran into Hacker School (now Recurse Center). Seemed like a cool opportunity. Price was right. But I wasn\u0026rsquo;t qualified yet. I knew I didn\u0026rsquo;t have anywhere near enough programming experience to apply, but how much was enough? I checked their FAQ and that\u0026rsquo;s where I found it:\n How much programming experience do I need for Hacker School?\nIf you\u0026hellip; solve Project Euler problems for fun\u0026hellip; you\u0026rsquo;re almost certainly a good fit for Hacker School.\n Huh. Project Euler. What\u0026rsquo;s that? Click\nI got hooked right away. Hooked bad. Being newly unemployed I spent my spring at The Wormhole drinking coffee and teaching myself Python by solving Project Euler problems. They scratched an itch I forgot I had. They were frustrating and impossible and clever and solving one felt amazing. That silly green checkmark became the best part of my day.\n You\u0026rsquo;re good enough, you\u0026rsquo;re smart enough, and doggone it, people like you! By the time I interviewed at Thinkful I\u0026rsquo;d solved about 80 problems, averaging a solution every day or two, though some took much longer.\nProject Euler had a big impact on my work and my life. It made me excited to write code. It gave me the motivation to keep beating my head against the wall while I was learning. It lent me credibility during my interviews and helped me join Thinkful, where I successfully pivoted into tech and got paid to help others do the same. 
It drilled me, without me realizing, in data structures and algorithmic complexity before I knew what those were. And it reminded me how much I love math.\nSo I want to talk about it here. I\u0026rsquo;ve held off on sharing my solutions or analysis because, well, Project Euler asked me not to. But now they let people talk about the first hundred problems. So let\u0026rsquo;s dive in and see how many of those we can cover.\n ","date":1590130483,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1590130483,"objectID":"70c703a5c9d5571e992ac495f5ef0820","permalink":"/post/heart_euler/","publishdate":"2020-05-21T23:54:43-07:00","relpermalink":"/post/heart_euler/","section":"post","summary":"How I found Project Euler and pivoted into tech","tags":["Euler"],"title":"❤️ Project Euler","type":"post"}]