[
        {
          "id": "books-i-have-read",
          "title": "Books I have read",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "",
          "url": "/books-i-have-read/",
          "content": "This is a live blog, meaning I will be updating this blog every time I complete reading the book. Books here are divided first by year and then by their type/category. Most of the books are accompanied by related blogs with the extractions from my experience.\n\n2022\n\nSelf help\n\n\n  \n    Ikigai (Hector Garcia and Francesc Miralles)\n\n    Ikigai [But Not a review] \n  \n\n\nTechnical\n\n\n  Testing Rails (Josh Steiner &amp; Thoughtbot)\n\n\n2021\n\nSelf help\n\n\n  \n    The 4 Hour Work Week (Tim Ferris)\n\n    The 4 Hour Work Week [But Not a review] \n  \n  \n    The Subtle Art of Not Giving a Fuck (Mark Manson)\n\n    The Subtle Art of Not Giving a Fuck [But Not a review] \n  \n\n\n2020\n\nBusiness\n\n\n  \n    Zero to One (Peter Thiel)\n\n    Zero to One [But Not a review] \n  \n  \n    Rework (Jason Fried and David Heinemeier Hansson aka DHH)\n\n    Rework [But Not a review] \n  \n\n\nFinancial Literacy\n\n\n  Rich Dad Poor Dad (Robert Kiyosaki)\n\n\nSelf help\n\n\n  \n    Atomic Habits (James Clear)\n\n    Top Quotes and important points from Atomic Habits\n  \n  \n    How to Win Friends and Influence People (Dale Carnegie)\n\n    Valuable lessons from How to Win Friends And Influence People\n  \n\n\nImage Credits: Cover Image by Jaredd Craig from Unsplash"
        },
        {
          "id": "404",
          "title": "404",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/404",
          "content": "404\n        \n        \n          \n          \n          \n          \n          \n        \n      \n\n      \n      \n        \n          Page Not Found\n        \n        \n          The page you're looking for doesn't exist or has been moved.\n        \n        \n          Let's get you back on track.\n        \n      \n\n      \n      \n        \n          \n            \n            \n          \n          Go Home\n        \n        \n          \n            \n            \n          \n          Browse Articles"
        },
        {
          "id": "500",
          "title": "500",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/500",
          "content": "500\n        \n        \n          \n          \n          \n          \n          \n        \n      \n\n      \n      \n        \n          Internal Server Error\n        \n        \n          Something went wrong on our end. We're working to fix it.\n        \n        \n          Please try again in a few moments, or return to the homepage.\n        \n      \n\n      \n      \n        \n          \n            \n            \n          \n          Go Home\n        \n        \n          \n            \n            \n          \n          Try Again"
        },
        {
          "id": "about",
          "title": "About",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/about/",
          "content": "About Prabin Poudel\n  \n  \n    \n    \n    \n  \n\n\n    \n    \n    \n      \n        I'm a Ruby on Rails consultant, owner of a SaaS product, open source contributor, and speaker working mostly with agencies, SaaS founders, and technical teams around the world.\n      \n      \n      \n        I don't just write code; I solve problems, ship features, and deliver on time. My approach is straightforward: take deadlines seriously, keep promises, and always think about how I can help succeed the project or companies I am working with. Whether it's building a custom Rails applications, SaaS platforms, or contributing to open source, I bring the same level of commitment and technical depth.\n      \n      \n      \n        I co-founded and led Truemark Technology, a software development agency, from 2017 until 2025. That experience taught me how to build products, lead teams, and deliver at scale. Now I work with clients like CPA Connect, Hamilton Dev Co., and Open Secrets Pro, delivering production-ready Rails applications.\n        \n        I'm also an active contributor to projects like Bullet Train and maintain my own SaaS product Zero Config Rails, helping other developers ship faster.\n      \n    \n  \n\n  \n  \n    \n      \n      \n\n    \n  \n\n  \n  \n    \n      \n  \n    What I Do\n  \n  \n    \n    \n    \n  \n\n\n    \n    \n    \n      \n        \n          Consulting &amp; Development\n        \n        \n          I work with agencies, SaaS founders, and technical teams to build production-ready Rails applications. Deep expertise in Ruby on Rails, from API design to automated tests, multi-tenancy and enterprise-scale development. I focus on delivering on time, every time.\n        \n      \n      \n      \n        \n          Building &amp; Leading Teams\n        \n        \n          I co-founded and led Truemark Technology, a software development agency, from 2017 until 2025. 
We built custom software solutions, e-commerce platforms, and provided remote development teams for clients worldwide. That experience taught me how to build products, lead teams, and deliver at scale—skills I bring to every consulting engagement.\n        \n      \n      \n      \n        \n          Side Hustle\n        \n        \n          I maintain my own SaaS product Zero Config Rails, which automates Rails project setup and helps developers create fully configured Rails applications in minutes. This side project lets me experiment with new ideas, solve real problems I face daily, and build something that helps other developers ship faster. It's the perfect blend of product development, customer feedback, and continuous iteration.\n        \n      \n      \n      \n        \n          Open Source Contributions\n        \n        \n          I contribute to projects that help other developers. You'll find my code in Bullet Train (the Rails SaaS framework powering 1000+ applications). Contributing to open source is my way of giving back to the community that has given me so much.\n        \n      \n      \n      \n        \n          Speaking &amp; Knowledge Sharing\n        \n        \n          I share what I've learned through talks at meetups and conferences. Topics range from coding standards to building reusable UIs with ViewComponent. 
You can find my talks on Speaker Deck.\n        \n      \n    \n  \n\n  \n  \n    \n  \n  \n  \n  \n  \n    \n    \n      \n        \n      \n    \n  \n  \n  \n  \n    \n      Want to work together?\n    \n      \n        I&#39;m always open to discussing new projects, speaking opportunities, or just connecting with fellow developers.\n      \n    \n    \n      Work with Me\n      \n        \n      \n    \n  \n\n\n  \n\n  \n  \n    \n      \n  \n    Beyond Code\n  \n  \n    \n    \n    \n  \n\n\n    \n    \n    \n      When I'm not writing code, you'll find me doing things that keep me balanced and inspired:\n    \n\n    \n      \n  \n      \n\n  \n\n  \n    Trekking\n  \n\n  \n    I love exploring the mountains of Nepal. There&#39;s something about being in nature, pushing physical limits, and disconnecting from screens that helps me think clearer and come back to code with fresh perspective.\n  \n\n\n\n      \n  \n      \n\n  \n\n  \n    Luna\n  \n\n  \n    I have a husky named Luna who keeps me active and reminds me to take breaks. Dogs are great at forcing you to step away from the computer and get some fresh air, exactly what you need during long coding sessions.\n  \n\n\n\n      \n  \n      \n\n  \n\n  \n    Meditation\n  \n\n  \n    Meditation helps me calm my mind and stay focused. It&#39;s become an essential part of my routine, especially when dealing with tight deadlines. A clear mind writes better code.\n  \n\n\n\n      \n  \n      \n\n  \n\n  \n    Fish Keeping\n  \n\n  \n    There&#39;s something peaceful about watching fish swim, it&#39;s like a living screensaver. It has become one of my daily rituals before I go to sleep.\n  \n\n\n    \n\n    \n    \n  \n    \n      Travelling the World\n    \n      \n        Exploring new places, cultures, and experiences around the globe. 
Each journey brings fresh perspectives and inspiration back to my work.\n      \n  \n\n    \n        \n          \n            \n\n            \n            \n            \n            \n              \n                India\n              \n              \n                2019 • 2022\n              \n            \n          \n        \n        \n          \n            \n\n            \n            \n            \n            \n              \n                Thailand\n              \n              \n                2019 • 2025\n              \n            \n          \n        \n        \n          \n            \n\n            \n            \n            \n            \n              \n                China\n              \n              \n                2019\n              \n            \n          \n        \n        \n          \n            \n\n            \n            \n            \n            \n              \n                Denmark\n              \n              \n                2019\n              \n            \n          \n        \n        \n          \n            \n\n            \n            \n            \n            \n              \n                Finland\n              \n              \n                2019\n              \n            \n          \n        \n        \n          \n            \n\n            \n            \n            \n            \n              \n                Italy\n              \n              \n                2019\n              \n            \n          \n        \n        \n          \n            \n\n            \n            \n            \n            \n              \n                Vietnam\n              \n              \n                2025"
        },
        {
          "id": "contact",
          "title": "Work with Me",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/contact/",
          "content": "Let&#39;s Work Together\n  \n  \n    \n    \n    \n  \n\n\n    \n    \n      Want to discuss your next project with me? Just send me the details using the form below.\n    \n  \n\n  \n    \n  \n\n  \n  \n    \n      Don't fill this out if you're human: \n    \n  \n\n  \n  \n    \n      Name\n    \n    \n  \n\n  \n  \n    \n      Email address (will remain private)\n    \n    \n  \n\n  \n  \n    \n      Message\n    \n    \n  \n\n  \n  \n    \n      How'd you hear about my website?\n    \n    \n  \n\n  \n  \n    \n      Send message"
        },
        {
          "id": "",
          "title": "Home",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/",
          "content": "👋 Hi, I am Prabin Poudel.\n        \n        \n          Ruby on Rails Consultant\n        \n\n        \n          Also a SaaS founder, open source contributor, and speaker.\n        \n\n        \n          Focused on the technical side. Built for agencies, SaaS founders, and technical founders who need reliable, expert Rails development.\n        \n\n        \n          \n            \n              \n            \n            \n              Deadline-focused: Your timelines are my commitments. I deliver on time, every time.\n            \n          \n\n          \n            \n              \n            \n            \n              Promise keeper: When I commit to something, it gets done. No excuses, no surprises.\n            \n          \n\n          \n            \n              \n            \n            \n              Technical depth: Deep Rails expertise for complex, production-grade applications.\n            \n          \n\n          \n            \n              \n            \n            \n              Built for technical teams: Agencies, SaaS founders, and technical founders who value expertise and reliability.\n            \n          \n        \n      \n\n      \n      \n        \n          \n          \n\n        \n      \n    \n  \n\n\n\n\n  \n    \n    \n  \n\n\n\n\n  \n    \n    \n  \n\n\n\n  \n    \n      \n        \n        \n          \n            \"\n          \n          \n            \n              I had a bad experience with freelancing in the past and Prabin completely changed that with his work.\n            \n            \n              \n                RP\n              \n              \n                Ronni Poulsen\n                Owner, Stay Connected (Flexonet)\n              \n            \n          \n        \n\n        \n        \n          \n            \n            \n\n          \n          \n            \n              JH\n            \n            \n              Justin Hamilton\n           
   Owner, Hamilton Dev Co.\n            \n          \n        \n      \n    \n  \n\n\n\n  \n    \n      \n  \n    Work\n  \n  \n    \n    \n    \n  \n\n\n    \n\n    \n    \n      \n        I don't just write code. I solve problems, ship features, and deliver on time. From custom Rails applications to SaaS platforms, from open source to side projects, here's what I've built.\n      \n    \n\n    \n    \n      \n          \n  \n    \n      Client\n    \n      \n        Data &amp; Reporting\n      \n  \n  \n  \n    \n      Open Secrets Pro\n    \n  \n  \n  \n    Built data and reporting features for a platform that makes government transparency accessible. Delivered production-ready features that help users understand government data and spending.\n  \n  \n    \n      Agency Work - Hamilton Dev Co.\n    \n\n\n          \n  \n    \n      SaaS\n    \n        \n          \n        \n  \n  \n  \n    \n      Zero Config Rails\n    \n  \n  \n  \n    My SaaS product that automates Rails project setup. Create new Rails applications in less than 30 minutes by choosing gems from a Web UI, answering configuration questions, and getting a fully configured app ready to build; all without touching a single file. The philosophy: zero configuration, maximum productivity.\n  \n  \n    \n      SaaS Product\n    \n\n\n          \n  \n    \n      Open Source\n    \n        \n          \n        \n  \n  \n  \n    \n      Bullet Train\n    \n  \n  \n  \n    The open source Ruby on Rails SaaS framework. Contributing to a project that&#39;s powering 1000+ production applications. 
This is the framework that helps teams ship faster and I&#39;m playing a small part in making it better.\n  \n  \n    \n      62+ stars • 387+ users • Active contributor\n    \n\n\n          \n  \n    \n      Client\n    \n      \n        Customer Experience\n      \n  \n  \n  \n    \n      Soono\n    \n  \n  \n  \n    Built and shipped features for a customer experience management platform that helps businesses capture real-time feedback and turn negative experiences into positive ones. Contributed to system enhancements that improved performance and developer experience, enabling businesses to take control of their customer experience instead of worrying about negative Yelp and Google reviews.\n  \n  \n\n\n      \n      \n      \n      \n        \n          View All Work →\n        \n      \n    \n  \n\n\n\n  \n    \n      \n        Testimonials\n      \n      \n        \n        \n        \n      \n    \n    \n    \n      \n      \n        \n          \n        \n        \n          \n            \n  \n    I had a bad experience with freelancing previously and was quite afraid when I hired Prabin to work on our telecom app. But he is such a professional that he right away made me feel that he was the right choice for the job. I needed a full stack developer who could work on redesigning existing system and also create new applications as we moved forward, and Prabin has done a brilliant job in that regard. He has excellent knowledge on Ruby on Rails and React. He always comes up with suggestions and is very approachable and flexible. I will definitely hire him again in the future and recommend him to anyone wanting to hire him!\n  \n  \n  \n      \n\n    \n    \n      \n        Ronni Poulsen\n      \n      \n        Owner, Stay Connected (Flexonet)\n      \n        \n          Upwork\n        \n    \n  \n\n\n          \n      \n\n      \n      \n        \n            \n              \n  \n    Prabin is an excellent Rails/Bullet Train developer. 
He was instrumental in helping us build out multiple new features from the ground up, as well as getting our test suite to passing. Would happily work with him again and recommend without hesitation.\n  \n  \n  \n      \n\n    \n    \n      \n        Zack Gilbert\n      \n      \n        Owner, CPA Connect\n      \n    \n  \n\n\n            \n            \n              \n  \n    Prabin is really easy to work with, and has a good understanding of react-native and javascript. I hope to work with him again one day.\n  \n  \n  \n      \n\n    \n    \n      \n        Anton Hughes\n      \n      \n        Owner, Price Insight\n      \n        \n          Upwork\n        \n    \n  \n\n\n            \n            \n              \n  \n    I hired Prabin to build our mobile app and he did a great job! He was able to get it built and released to both Apple and Google Play app stores with great quality. Prabin has been the ideal partner; flexible, reliable, and very responsive. He also has great time management skills and always comes to the table with ideas and suggestions to help make the project run smoother. Highly recommended!\n  \n  \n  \n      \n\n    \n    \n      \n        Reneldy Senat\n      \n      \n        Owner, Idoleyes Interactive\n      \n    \n  \n\n\n            \n        \n      \n    \n  \n\n\n\n\n  \n    \n    \n  \n\n\n\n\n  \n    \n    \n    \n  \n\n\n\n    \n      \n        \n          Featured Articles\n        \n        \n          \n          \n          \n        \n      \n      \n      \n          \n    \n      \n\n    \n  \n  \n    \n        July 25, 2021\n        •\n      13 min read\n    \n    \n    \n      \n        Search Engine with Rails\n      \n    \n    \n      \n        Yes! You can create a full functioning search engine with Rails. In this tutorial you will be learning how to create a search engine with Rails by using elasticsearch. 
You will learn to configure e...\n      \n    \n      \n          \n            Ruby on rails\n          \n      \n  \n\n\n          \n    \n      \n\n    \n  \n  \n    \n        September 18, 2022\n        •\n      1 min read\n    \n    \n    \n      \n        Books I have read\n      \n    \n    \n      \n        This is a live blog, meaning I will update it every time I finish a book. Books here are divided first by year and then by their type/category. Most of the books are accompanied by ...\n      \n    \n  \n\n\n          \n    \n      \n\n    \n  \n  \n    \n        April 23, 2021\n        •\n      10 min read\n    \n    \n    \n      \n        Build Twitter Bot with Ruby\n      \n    \n    \n      \n        Did you know? We can also build a bot with Ruby. Today we will be building a Twitter bot that retweets a set of hashtags. We will be using the twitter gem, which uses the Twitter API under the hood.\n      \n    \n      \n          \n            Bot\n          \n          \n            Tutorial\n          \n          \n            Ruby\n          \n      \n  \n\n\n      \n    \n  \n\n\n    \n      \n        \n          Latest Posts\n        \n        \n          \n          \n          \n        \n      \n      \n      \n          \n    \n      \n\n    \n  \n  \n    \n        March 20, 2026\n        •\n      25 min read\n    \n    \n    \n      \n        Build a RAG App for Documentation Q&amp;A using Rails\n      \n    \n    \n      \n        Learn how to build your first RAG app with Rails, Ruby LLM, and Ollama so you can ask questions and get answers from your own blog content using a local LLM model.\n      \n    \n      \n          \n            Ruby on rails\n          \n          \n            Ollama\n          \n          \n            Rag\n          \n          \n            Ruby llm\n          \n          \n            Ai\n          \n      \n  \n\n\n          \n    \n      \n\n    \n  \n  \n    \n        November 12, 2024\n        •\n      
12 min read\n    \n    \n    \n      \n        Configure Minitest with Gitlab CI and Rails\n      \n    \n    \n      \n        Running tests in CI is a very important step to make sure there are no breaking changes in the new code. Today we will look at configuring Gitlab CI to run tests for Ruby on Rails applications writ...\n      \n    \n      \n          \n            Ruby on rails\n          \n          \n            Minitest\n          \n          \n            Gitlab ci\n          \n      \n  \n\n\n          \n    \n      \n\n    \n  \n  \n    \n        November 12, 2024\n        •\n      14 min read\n    \n    \n    \n      \n        Setup RSpec Tests in Rails with Gitlab CI\n      \n    \n    \n      \n        Running tests in CI is a very important step to make sure there are no breaking changes in the new code. Today we will look at configuring Gitlab CI to run RSpec tests for Ruby on Rails applications.\n      \n    \n      \n          \n            Ruby on rails\n          \n          \n            Rspec\n          \n          \n            Gitlab ci"
        },
        {
          "id": "work",
          "title": "My Work",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/work/",
          "content": "Work\n  \n  \n    \n    \n    \n  \n\n\n    \n\n    \n    \n      \n        \n        \n\n      \n    \n\n    \n    \n      \n        I don't just write code. I solve problems, ship features, and deliver on time. From custom Rails applications to SaaS platforms, from open source to side projects, here's what I've built.\n      \n    \n\n    \n    \n      \n        \n  \n    Building for Clients\n  \n  \n    \n    \n    \n  \n\n\n      \n      \n      \n\n        \n        \n          \n            \n              \n                \n                  Via Hamilton Dev Co.\n                \n                Agency Work\n              \n              \n                Worked on multiple client projects through Hamilton Dev Co., delivering production-ready features and systems:\n              \n              \n                  \n                    \n                        \n                          Open Secrets Pro\n                        \n                    \n                    Built data and reporting features for a platform that makes government transparency accessible. Delivered production-ready features that help users understand government data and spending.\n                  \n                  \n                    \n                        Beneke Wire\n                    \n                    Developed an intranet site to manage daily operations for a manufacturing company. 
The kind of internal tool that makes teams more efficient.\n                  \n              \n            \n          \n        \n\n        \n          \n            \n              \n                \n                  \n                    \n                      CPA Connect\n                    \n                  \n                    SaaS\n                \n                \n                  Built new features and played a key role in getting the project test suites fully passing, which significantly improved the code quality and development workflow for the project. The project is a SaaS platform that helps CPA firms manage their clients and their tax returns.\n                \n              \n            \n          \n\n          \n            \n              \n                \n                  \n                    \n                      Soono\n                    \n                  \n                    Customer Experience\n                \n                \n                  Built and shipped features for a customer experience management platform that helps businesses capture real-time feedback and turn negative experiences into positive ones. Contributed to system enhancements that improved performance and developer experience, enabling businesses to take control of their customer experience instead of worrying about negative Yelp and Google reviews.\n                \n              \n            \n          \n          \n            \n              \n                \n                  \n                    \n                      Stay Connected\n                    \n                  \n                    Telecom\n                \n                \n                  Took on a full system redesign and new application development for a telecom company. The client had previous bad experiences with freelancers, but we delivered a complete redesign of their existing system while building new applications as they scaled. 
Deep Rails expertise, always coming through with suggestions and staying flexible to their needs.\n                \n              \n            \n          \n      \n    \n\n    \n    \n      \n  \n  \n  \n  \n  \n    \n    \n      \n        \n      \n    \n  \n  \n  \n  \n    \n      Have a project in mind?\n    \n      \n        Let&#39;s discuss how I can help you build it.\n      \n    \n    \n      Work with Me\n      \n        \n      \n    \n  \n\n\n    \n\n    \n    \n      \n        \n  \n    Contributing to the Community\n  \n  \n    \n    \n    \n  \n\n\n      \n      \n      \n        \n          I believe in giving back. When I see something that could be better, I contribute. When I build something useful, I open source it. Here's where my code is helping other developers:\n        \n      \n\n      \n            \n  \n    \n      Open Source\n    \n        \n          \n        \n  \n  \n  \n    \n      Bullet Train\n    \n  \n  \n  \n    The open source Ruby on Rails SaaS framework. Contributing to a project that&#39;s powering 1000+ production applications. This is the framework that helps teams ship faster and I&#39;m playing a small part in making it better.\n  \n  \n    \n      62+ stars • 387+ users • Active contributor\n    \n\n\n            \n              \n                  \n                    \n                  \n                \n                  \n                    Spree\n                  \n                \n              \n              \n                The leading open-source e-commerce platform for Rails. Contributing to a codebase that powers thousands of online stores. 
This is production-grade Rails at scale.\n              \n                \n                  Industry standard • Thousands of stores\n                \n            \n            \n              \n                  \n                    \n                  \n                \n                  \n                    Boring Generators\n                  \n                \n              \n              \n                Rails generators that eliminate the boring, repetitive setup work. 276+ developers are using these generators to ship faster. Because why write boilerplate when you can generate it?\n              \n                \n                  276+ stars • Active maintenance\n                \n            \n            \n              \n                  \n                    \n                  \n                \n                  \n                    Zero Config Rails Generators\n                  \n                \n              \n              \n                Part of the Zero Config Rails ecosystem—generators that eliminate configuration overhead. Because the best code is the code you don&#39;t have to write.\n              \n                \n                  Zero Config Rails • Rapid development\n                \n            \n      \n    \n\n    \n    \n      \n        \n  \n    What Clients Say\n  \n  \n    \n    \n    \n  \n\n\n      \n      \n      \n        \n          Here's what people who've worked with me on Rails and SaaS projects have to say:\n        \n      \n\n      \n        \n        \n          \n            \n          \n        \n\n        \n        \n          \n            \n              \n                \n  \n    I’m pleased to recommend Prabin, who worked with me as a Rails contractor for several months. 
He consistently delivered high-quality work and was a reliable addition to our team, building new features and helping get our test suite fully passing, which significantly improved our code quality and development workflow. Beyond his technical skills, Prabin was a clear and proactive communicator who consistently met deadlines. I would gladly work with him again and recommend him without hesitation to anyone looking for a skilled Rails &amp; Bullet Train developer.\n  \n  \n  \n      \n\n    \n    \n      \n        Zack Gilbert\n      \n      \n        Owner, CPA Connect\n      \n    \n  \n\n\n              \n          \n        \n      \n    \n\n    \n    \n      \n        \n  \n    Building My Own\n  \n  \n    \n    \n    \n  \n\n\n      \n      \n      \n        \n          When I'm not shipping for clients or contributing to open source, I'm building tools that solve real problems. Here's what I've created:\n        \n      \n\n      \n          \n            \n              \n                \n                  Zero Config Rails\n                \n              \n            \n            \n              My SaaS product that automates Rails project setup. Create new Rails applications in less than 30 minutes by choosing gems from a Web UI, answering configuration questions, and getting a fully configured app ready to build; all without touching a single file. The philosophy: zero configuration, maximum productivity.\n            \n          \n          \n            \n                \n                  \n                \n              \n                \n                  rails.new\n                \n              \n            \n            \n              From a new Mac to a fully configured Rails development environment in 11 minutes. 
Built in collaboration with Andrew Culver from Bullet Train.\n            \n          \n          \n            \n                \n                  \n                \n              \n                \n                  Ruby Twitter Bot\n                \n              \n            \n            \n              An open-source Twitter bot framework built with Ruby. Because sometimes you need to automate social media interactions, and you want to do it with clean, maintainable code.\n            \n          \n      \n    \n  \n\n  \n  \n    \n      \n        Talks\n      \n      \n        \n        \n        \n      \n    \n    \n    \n      \n        Sharing knowledge with the community through meetups and conferences. Here are the talks I've given:\n      \n    \n\n    \n        \n          \n            \n                \n            \n            \n              Programming\n            \n          \n          \n            \n              Coding Standards\n            \n          \n          \n            A comprehensive guide to establishing and maintaining coding standards in your development team. Covers static code analyzers, git hooks, and CI tools to enforce consistency across projects.\n          \n          \n            January 17, 2023\n            •\n            \n              View on Speaker Deck\n            \n          \n        \n        \n          \n            \n                \n            \n            \n              Rails\n            \n          \n          \n            \n              Thinking in Components: Building Reusable UIs in Rails with ViewComponent\n            \n          \n          \n            Learn how to build encapsulated, reusable view components in Rails using ViewComponent. 
Discover the benefits of component-based architecture and how it improves maintainability, testability, and developer experience.\n          \n          \n            November 9, 2025\n            •\n            \n              View on Speaker Deck"
        },
        {
          "id": "",
          "title": "Prabin Poudel - Rails Freelancer",
          "collection": {
            "label": "data",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "",
          "content": ""
        },
        {
          "id": "articles-build-a-rag-app-for-documentation-qa-using-rails",
          "title": "Build a RAG App for Documentation Q&A using Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, ollama, rag, ruby llm, ai",
          "url": "/articles/build-a-rag-app-for-documentation-qa-using-rails/",
          "content": "I had wanted to build something in the AI space for about a year now but didn’t know where to start. I chose a RAG (Retrieval-Augmented Generation) app because it felt approachable: you ask a question and get an answer grounded only in your blog content, so I could learn embeddings, vector search, and prompting without dealing with training or fine-tuning.\n\nThe full app is open sourced at [blog-ai-chat-app](https://github.com/coolprobn/blog-ai-chat-app). You can clone it and run it locally to see it in action.\n\nThis post walks you through how it was built so you can do the same or adapt it to your own blog.\n\nWithout further ado, let’s jump right in.\n\n## Tested and working in\n\n- Ruby 3.4+\n- Rails 8.1\n- PostgreSQL with pgvector extension\n- Ollama (qwen2.5:7b-instruct, nomic-embed-text)\n- RubyLLM 1.12\n\n## RAG ... What is RAG?\n\n**RAG** stands for *Retrieval-Augmented Generation*. In short:\n\n1. You store your content (e.g. blog posts) as **chunks** and turn each chunk into a **vector** (embedding) using an embedding model.\n2. When the user asks a question, you turn the question into a vector and **search** for the most similar chunks (e.g. with cosine similarity or a vector index).\n3. You pass those chunks as **context** to an LLM and ask it to answer **only** from that context.\n\nSo the model doesn’t rely on its training alone, it “retrieves” relevant bits of your content and then \"generates\" an answer from them. 
That keeps answers grounded in your blog and reduces hallucination.\n\n## Project setup\n\nIn this project we use:\n\n- **Rails** for the app and database.\n- **[Ruby LLM](https://github.com/crmne/ruby_llm)** for embeddings, tool-calling, and LLM-generated answers.\n- **Ollama** for local models (no API keys, free to run).\n- **pgvector** (via the [Neighbor](https://github.com/ankane/neighbor) gem) for vector search in PostgreSQL.\n\n### Generate a new Rails app\n\nCreate a new Rails app with PostgreSQL (required for pgvector later) and TailwindCSS:\n\n```bash\nrails new blog_ai_chat_app --database=postgresql --css=tailwind\ncd blog_ai_chat_app\nbin/rails db:create\n```\n\nUse whatever other options you prefer (e.g. `--skip-test`). The important parts are PostgreSQL for vector search and Tailwind for the styling.\n\n### Setup Ruby LLM\n\nAdd the gem and run the generator to create the base for RubyLLM:\n\n```ruby\n# Gemfile\ngem \"ruby_llm\"\n```\n\n```bash\nbundle install\nbin/rails generate ruby_llm:install\n```\n\nThe Ruby LLM installer generates migrations and models for you. You will see new tables such as `chats`, `messages`, `tool_calls`, and `models`, along with the corresponding models such as `Chat` and `Message`. Run migrations after the generator finishes:\n\n```bash\nbin/rails db:migrate\n```\n\nThe app uses these models later for the RAG flow (user question → tool call to search the blog → LLM-generated answer).
So don’t skip these steps of running the generator and applying migrations.\n\n### Setup pgvector\n\nWe need vector support in PostgreSQL for storing embeddings and doing similarity search.\n\nInstall pgvector on your machine (macOS example):\n\n```bash\nbrew install pgvector\n```\n\nIn the Rails app, we use the [Neighbor](https://github.com/ankane/neighbor) gem for the `vector` type. Add the following to the Gemfile:\n\n```ruby\n# Gemfile\ngem \"neighbor\"\n```\n\nThen run `bundle install` and the generator:\n\n```bash\nbundle install\nbin/rails generate neighbor:vector\nbin/rails db:migrate\n```\n\nThe generator adds a migration that enables the `vector` extension. We are still missing the `has_neighbors` declaration the Neighbor gem needs in the model; we will add that after generating the relevant model in an upcoming section. Let's configure Ollama before we dive into that.\n\n**Note**:\npgvector via Homebrew only works with postgresql@17 and postgresql@18. If you have an older version of PostgreSQL, you need to install it from source by following the instructions at [Install pgvector from source](https://github.com/pgvector/pgvector?tab=readme-ov-file#linux-and-mac)\n\n## Configure Ollama\n\nI chose **Ollama** because it’s free and runs locally. The tradeoff is latency: with a 7B model on my machine (M1 Mac), I often wait around a minute for a response. I was fine with that because I wanted to learn more about local models and avoid API keys.\n\n1. **Install Ollama**\n  \n    You can visit [ollama.com](https://ollama.com) for installation instructions. If you are using macOS, you can install it with brew:\n\n    ```bash\n      brew install ollama\n    ```\n    \n    You can then start it with:\n\n    ```bash\n      ollama serve\n    ```\n\n2. 
**Pull the model** (used to generate answers from retrieved context)\n\n    While Ollama is running in one tab, open a new tab in your terminal and run the following command:\n\n   ```bash\n   ollama pull qwen2.5:7b-instruct\n   ```\n\n3. **Pull the embedding model** (used for RAG retrieval):\n\n   ```bash\n   ollama pull nomic-embed-text\n   ```\n\nThen point Ruby LLM at Ollama and set the default model and embedding model:\n\n```ruby\n# config/initializers/ruby_llm.rb\nrequire \"ruby_llm\"\n\nRubyLLM.configure do |config|\n  config.ollama_api_base = \"http://localhost:11434/v1\"\n  config.default_model = \"qwen2.5:7b-instruct\"\n  config.default_embedding_model = \"nomic-embed-text\"\n  config.use_new_acts_as = true\nend\n```\n\nNo API keys needed. If you prefer another provider (e.g. OpenAI or Gemini), you can set the corresponding API key and change `default_model` / `default_embedding_model` in the same file.\n\n## Blog ingestion\n\nFor the RAG app to work, we need to fetch the posts from a blog, split each post into chunks, compute embeddings, and store them in the database so we can search later.\n\nIn this guide, I have used my [personal blog](https://prabinpoudel.com.np/articles), but you can just change the URL and the rake task should work for your blog as well.\n\n### Add tables to store blog chunks and articles\n\nWe need two tables: **article_chunks** (for RAG: chunk text, embedding, and source metadata) and **articles** (for listing and displaying full post HTML).\n\n1. Model and migration for article_chunks\n\n    Run the following command to add a model and migration for Article Chunk:\n\n    ```bash\n    bin/rails generate model ArticleChunk content:text embedding:vector source_url:string source_title:string\n    ```\n\n2. 
Model and migration for articles\n\n    Run the following command to add a model and migration for Article:\n\n    ```bash\n    bin/rails generate model Article source_url:string:uniq title:string content:text\n    ```\n\n    Add validations to the Article model so we don't store invalid data:\n\n    ```ruby\n    # app/models/article.rb\n    class Article = MAX_ARTICLES\n\n      # Next page: try page_num + 1; stop if this page had no new article links\n      has_next =\n        links.any? { |u| u.match?(%r{/articles/page/#{page_num + 1}/?}) }\n      break unless has_next\n\n      page_num += 1\n    end\n\n    article_urls = article_urls.first(MAX_ARTICLES)\n    puts \"Found #{article_urls.size} article(s) to ingest\"\n\n    article_urls.each do |full_url|\n      puts \"Fetching #{full_url}\"\n\n      page = HTTParty.get(full_url)\n      next unless page.success?\n\n      page_doc = Nokogiri.HTML(page.body)\n      article_node = page_doc.css(\"article\").first\n      next unless article_node\n\n      source_title =\n        article_node.css(\"h1, h2\").first&.text&.strip.presence || full_url\n\n      # Store full article HTML for display (code blocks, headings, etc.)\n      full_html = article_node.inner_html\n      fragment = Nokogiri::HTML.fragment(full_html)\n      fragment\n        .css(\"a[href]\")\n        .each do |a|\n          href = a[\"href\"].to_s.strip\n          a[\"href\"] = URI.join(full_url, href).to_s if href.start_with?(\"/\")\n        end\n      Article.find_or_initialize_by(source_url: full_url).update!(\n        title: source_title,\n        content: fragment.to_html\n      )\n\n      # Extract text but preserve hyperlinks as \"link text (url)\" so references are stored\n      content = []\n      article_node.traverse do |node|\n        if node.is_a?(Nokogiri::XML::Text)\n          # Skip text inside  so we don't duplicate link text\n          content \n  \n    \n    \n      \n        Blog posts\n        \n          \n            \n             
 \n                \n              \n            \n          \n        \n          No posts ingested yet.\n        \n      \n    \n\n    \n    \n      \n        \n          \n            \n              \n              \" target=\"_blank\" rel=\"noopener\" class=\"mt-3 inline-flex items-center text-sm font-medium text-[#00638a] hover:underline\">\n                View original post\n                \n              \n            \n            \n              \n            \n          \n        \n          \n            \n              Content not found\n              Re-run rails blog:ingest to fetch this post.\n            \n          \n        \n      \n        \n          \n            \n              \n            \n            Select a post from the left\n            or use the search bar above to ask a question.\n          \n        \n      \n    \n  \n\n```\n\nAdd a helper that sanitizes the stored HTML so you can safely render `@selected_article_content`:\n\n```ruby\n# app/helpers/application_helper.rb\nmodule ApplicationHelper\n  # Sanitizes stored article HTML (from Article) for safe display. Keeps structure, code blocks, links.\n  def sanitize_article_html(html)\n    return \"\" if html.blank?\n\n    sanitize(\n      html,\n      tags: %w[\n        p\n        div\n        span\n        br\n        h1\n        h2\n        h3\n        h4\n        h5\n        h6\n        a\n        strong\n        em\n        b\n        i\n        code\n        pre\n        ul\n        ol\n        li\n        blockquote\n        hr\n        img\n        table\n        thead\n        tbody\n        tr\n        th\n        td\n      ],\n      attributes: {\n        \"a\" => %w[href target rel class],\n        \"img\" => %w[src alt width height class],\n        \"code\" => [ \"class\" ],\n        \"pre\" => [ \"class\" ],\n        \"div\" => [ \"class\" ],\n        \"span\" => [ \"class\" ]\n      }\n    )\n  end\nend\n```\n\nWe have already come up a long way. 
We have the UI to list and view blogs in our app now. Let's quickly take a look at how the UI looks in the browser, fire up the server with:\n\n```bash\nbin/dev\n```\n\nThen visit http://localhost:3000 and you should see a UI like this:\n\n![RAG app blog list](../../images/articles/build-a-rag-app-for-documentation-qa-using-rails/blog-list.webp)\n\nNot that good but hey, it's also not that bad. You can fix the CSS part of the app yourself, you can reference the Github repo as well for this. We won't be going into that here.\n\n## RAG app: ask a question, get an answer\n\nWe want a search box at the top of the app where users can type a question and open a dropdown overlay to see the answer. The flow: search the chunks, pass context to the LLM, and show the generated reply grounded to your blog content.\n\n### Files to generate\n\n- **Controller:** handle the question, run the RAG flow, return the answer.\n- **View:** a search box and an area for the answer. We use Turbo Streams so the answer updates in place without a full reload.\n- **Tool:** Ruby LLM–powered class that runs the blog search and feeds context to the LLM for the answer\n\n### Controller and routes for the search\n\n**Routes:**\n\nAdd following to the routes file:\n\n```ruby\n# config/routes.rb\nRails.application.routes.draw do\n  # ....\n  post \"search\", to: \"search#create\", as: :search\n  # ....\nend\n```\n\n**Controller:** create a chat, call the blog search “tool” then ask the LLM with the retrieved context:\n\nCreate a file `app/controllers/search_controller.rb` and add the following:\n\n```ruby\n# app/controllers/search_controller.rb\nclass SearchController  e\n    Rails.logger.error(\n      \"[Search] #{e.message}\\n#{e.backtrace.first(5).join(\"\\n\")}\"\n    )\n    @error_message = \"Search failed. 
Please try again.\"\n\n    respond_to { |format| format.turbo_stream }\n  end\n\n  private\n\n  def chat_attrs\n    {\n      model: \"qwen2.5:7b-instruct\",\n      provider: :ollama,\n      assume_model_exists: true\n    }\n  end\nend\n```\n\nDo you notice \"system_prompt\" that we are passing to with_instructions in the rag_chat? We will talk about that next.\n\n### Basic prompt for answering questions\n\nThe system prompt tells the LLM to (1) use the tool to get blog context, (2) read only that context, and (3) answer only from it.\n\nAdd the following to the search_controller just below the chat_attrs:\n\n```ruby\n# app/controllers/search_controller.rb\ndef system_prompt\n  \n\n\n  \n  \n    \n      search-overlay#onSubmitStart turbo:submit-end->search-overlay#onSubmitEnd\" } do %>\n        \n        \n          \n        \n      \n      \n      search-overlay#keepOpen\">\n        \n          Type a question and press Enter to search.\n        \n        \n          \n          Searching…\n        \n        \n      \n    \n  \n\n  \n    \n    \n      \n        Blog posts\n        \n          \n            \n              \n                \n              \n            \n          \n        \n          No posts ingested yet.\n        \n      \n    \n\n    \n    \n      \n        \n          \n            \n              \n              \" target=\"_blank\" rel=\"noopener\" class=\"mt-3 inline-flex items-center text-sm font-medium text-[#00638a] hover:underline\">\n                View original post\n                \n              \n            \n            \n              \n            \n          \n        \n          \n            \n              Content not found\n              Re-run rails blog:ingest to fetch this post.\n            \n          \n        \n      \n        \n          \n            \n              \n            \n            Select a post from the left\n            or use the search bar above to ask a question.\n          \n        \n      
\n    \n  \n\n  \n  search-overlay#close\">\n\n```\n\nWe now need to handle the response/answer when question is asked and controller processes it.\n\nAdd following files to handle the answer together with loading and error handling:\n\n```html\n\n\n  \n    \n  \n\n  \n    \n  \n\n\n\n\n\n```\n\n```html\n\nType a question and press Enter to search.\n```\n\n```html\n\n\n  \n\n\n  \n    Sources\n    \n      \n        \" target=\"_blank\" rel=\"noopener\" class=\"text-sm text-[#00638a] hover:underline\">\n      \n    \n  \n\n```\n\n```html\n\n p-6 text-center text-gray-500\">\n  \n  Searching…\n\n```\n\n```html\n\n\">\n  \n\n```\n\nFinally we also need a Stimulus controller to handle the overlay open/close and interaction bit while loading the answer. Create a new file `app/javascript/controllers/search_overlay_controller.js` and add the following:\n\n```javascript\n// app/javascript/controllers/search_overlay_controller.js\nimport { Controller } from \"@hotwired/stimulus\"\n\nexport default class extends Controller {\n  static targets = [\"form\", \"input\", \"panel\", \"result\", \"loading\", \"error\", \"submitBtn\", \"backdrop\"]\n\n  connect() {\n    this.closeOnEscape = this.closeOnEscape.bind(this)\n  }\n\n  onSubmitStart() {\n    this.showPanel()\n    // Use getElementById so we target current DOM nodes (Turbo Stream replace can leave targets stale)\n    const loadingEl = document.getElementById(\"search-loading\")\n    const resultEl = document.getElementById(\"search-result\")\n    const errorEl = document.getElementById(\"search-error\")\n    if (errorEl) errorEl.classList.add(\"hidden\")\n    if (resultEl) resultEl.classList.add(\"hidden\")\n    if (loadingEl) loadingEl.classList.remove(\"hidden\")\n  }\n\n  onSubmitEnd() {\n    const loadingEl = document.getElementById(\"search-loading\")\n    const resultEl = document.getElementById(\"search-result\")\n    if (loadingEl) loadingEl.classList.add(\"hidden\")\n    if (resultEl) 
resultEl.classList.remove(\"hidden\")\n  }\n\n  showPanel() {\n    this.panelTarget.classList.remove(\"hidden\")\n    if (this.hasBackdropTarget) this.backdropTarget.classList.remove(\"hidden\")\n    document.addEventListener(\"keydown\", this.closeOnEscape)\n  }\n\n  close() {\n    this.panelTarget.classList.add(\"hidden\")\n    if (this.hasBackdropTarget) this.backdropTarget.classList.add(\"hidden\")\n    document.removeEventListener(\"keydown\", this.closeOnEscape)\n  }\n\n  keepOpen(e) {\n    e.stopPropagation()\n  }\n\n  closeOnEscape(e) {\n    if (e.key === \"Escape\") this.close()\n  }\n}\n```\n\nSince we are using importmap, we don't need to import the JS controller; it is handled automatically by Rails.\n\n## First RAG app response\n\nWith the frontend and backend wired up, you can try the RAG app:\n\n1. Run `bin/dev` from the root of your project\n2. Go to `http://localhost:3000`\n3. In the search bar ask a question e.g. \"setup rspec tests in rails with gitlab ci\" and hit enter.\n    \n    You need to be very specific when asking questions or it might say \"I couldn't find anything about this in the blog\". We will tune the system prompt in upcoming sections so it can handle more generic search queries as well.\n4. Keep an eye on the terminal, it should process the request in about a minute using ollama\n\nThat’s your first RAG app result: question in, answer out, grounded in your blog. Here is what it might look like:\n\n![RAG search and answer](../../images/articles/build-a-rag-app-for-documentation-qa-using-rails/rag-search-answer.webp)\n\nBut wait, it only shows answers. How do I know where those answers came from? Where is the link to the blog? That's what we will implement in the next section.\n\n## Displaying sources of the answer\n\nUsers should know which posts the answer came from and be able to open them.\n\nWe already store `source_url` and `source_title` on each `ArticleChunk` and set them in the ingest task. 
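As a small illustration of the deduplication involved, unique sources can be collected from a set of retrieved chunks like this (plain Ruby sketch; the `Struct` stands in for the `ArticleChunk` model and the URLs are made up):

```ruby
# Several retrieved chunks may come from the same post; the UI should list
# each post only once. Struct stands in for ArticleChunk here; in the app
# these would be ActiveRecord rows.
Chunk = Struct.new(:content, :source_url, :source_title)

def sources_for(chunks)
  chunks
    .map { |chunk| { title: chunk.source_title, url: chunk.source_url } }
    .uniq { |source| source[:url] }
end

retrieved = [
  Chunk.new("chunk one", "https://example.com/post-a", "Post A"),
  Chunk.new("chunk two", "https://example.com/post-a", "Post A"),
  Chunk.new("chunk three", "https://example.com/post-b", "Post B")
]

sources_for(retrieved)
# Two entries: Post A (listed once) and Post B
```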
The **BlogSearch** tool should return not only the context string but also a list of `{ title, url }` for the UI.\n\nReplace the BlogSearch tool with the following code to handle source URLs:\n\n```ruby\n# app/tools/blog_search.rb\n\nclass BlogSearch  to break the line.\n    - Then give the detailed explanation. Use  to break the line between each paragraph.\n    - For lists: use Markdown bullets (- ) or numbers (1. 2. 3. ), with  to break the line before the list and between list items if they are long.\n    - Put code or commands in backticks. Use  to break the line before and after code blocks if needed.\n    - End with a link when relevant: \"For more details, see [Post title](url).\" Use only URLs from the context ([Source: ... | URL: ...]); never link to external sites.\n    - Use two  (a blank line) between sections so the response is easy to read. Do not output \"Summary:\" or \"Answer:\" - just the content.\n  PROMPT\nend\n```\n\nTry searching again, you will see a new \"SOURCES\" section at the bottom of the answer overlay with a list of sources where answer was extracted from. It is still probably showing irreleavnt links as well, it will go away when we implement the Hybrid search and Re-ranking; coming up next.\n\n## Bonus: Hybrid search (vector + keyword) for better output\n\nVector search alone can miss exact terms (e.g. “Minitest” or “GitLab CI”). 
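One dependency-free way to combine a vector-search ranking with a keyword-search ranking is reciprocal rank fusion (RRF). This sketch uses made-up chunk ids and is not the app's exact combination logic, just the general idea:

```ruby
# Reciprocal Rank Fusion: each result earns 1 / (k + rank) from every ranking
# it appears in, so ids ranked well by BOTH searches float to the top.
# In the app, these ids would be ArticleChunk ids from the two queries.
def rrf_merge(vector_ids, keyword_ids, k: 60)
  scores = Hash.new(0.0)
  [vector_ids, keyword_ids].each do |ranking|
    ranking.each_with_index { |id, rank| scores[id] += 1.0 / (k + rank + 1) }
  end
  scores.sort_by { |_id, score| -score }.map(&:first)
end

vector_hits  = [3, 1, 7] # best-first by embedding similarity
keyword_hits = [7, 3, 9] # best-first by full-text rank

rrf_merge(vector_hits, keyword_hits)
```

Ids that rank reasonably high in both lists beat ids that top only one list, which is why the combination tends to surface more relevant chunks than either search alone.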
Adding **keyword (full-text) search** and combining it with vector search often gives more relevant chunks.\n\nGenerate a new migration for adding **Full-text column** on `article_chunks`:\n\n```bash\nrails g migration AddFulltextToArticleChunks\n```\n\nReplace the file with the following:\n\n```ruby\n# db/migrate/xxxx_add_fulltext_to_article_chunks.rb\nclass AddFulltextToArticleChunks ] chunks to rerank\n  # @param keep_top [Integer] number of top chunks to return\n  # @return [Array] top keep_top chunks in relevance order\n  def call(query, chunks, keep_top)\n    return chunks.first(keep_top) if chunks.size  e\n    Rails.logger.warn(\n      \"[Reranker] Re-rank failed: #{e.message}. Returning top #{keep_top} chunks.\"\n    )\n    chunks.first(keep_top)\n  end\n\n  private\n\n  def build_prompt(query, chunks, keep_top)\n    passages =\n      chunks\n        .each_with_index\n        .map { |c, i| \"#{i + 1}. #{c.content.truncate(600)}\" }\n        .join(\"\\n\\n\")\n\n     post_limit\n      chunks = Reranker.new.call(q, chunks, post_limit)\n    else\n      chunks = chunks.first(post_limit)\n    end\n    # .... Other code ....\n  end\nend\n```\n\nThis is what the final result could look like with hybrid search, updated prompt and re-ranking:\n\n\n![Fine tuned search with specific prompt, hybrid search and re-ranking](../../images/articles/build-a-rag-app-for-documentation-qa-using-rails/fine-tuned-search.webp)\n\nAnd done, if you have followed through here then you have come a long way, congratulations!\n\n## 10. Conclusion\n\nThis was my first real AI app. I chose a RAG app because it seemed easier than other ideas: no training, no fine-tuning, just retrieval + prompting. In practice it still took me about two months to go from “what is RAG?” to a working app with sources, hybrid search, and a prompt I was happy with. 
The concepts (embeddings, vector search, tool use) were relatively easy; the hard part was not knowing where to start or what to build first.\n\nWhen I first sat down to plan how to learn and use more AI in my workflow, my notes literally said:\n\n> **THE MAIN QUESTION IS WHERE DO I START AND WHAT DO I BUILD FIRST ???**\n\nIf that’s you: starting with a RAG app for your own blog is a solid choice. You’ll learn:\n\n- **Prompting** — how to instruct the model to stay on your content and format answers.\n- **Ruby LLM** — chat, embeddings, tools, and how they plug into Rails.\n- **Models** — the difference between chat and embedding models, and what “tokens” mean when you’re passing context.\n\nThe [blog-ai-chat-app](https://github.com/coolprobn/blog-ai-chat-app) repo has the full code: migrations, ingest task, `BlogSearch` tool, reranker, and Turbo Stream UI. Clone it, run `bin/dev` and `bin/rails blog:ingest`, and tinker. If you have questions, reach out via [Twitter/X](https://x.com/coolprobn).\n\nAlso a disclaimer: I heavily used Cursor to build this app. If you see some AI slop in the code, that's Cursor, don't blame me :P\n\nThanks for reading. Happy tinkering and happy coding!\n\n## References\n\n- Cover image was generated using Gemini"
        },
        {
          "id": "articles-configure-minitest-with-gitlab-ci-and-rails",
          "title": "Configure Minitest with Gitlab CI and Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, minitest, gitlab ci",
          "url": "/articles/configure-minitest-with-gitlab-ci-and-rails/",
          "content": "At Zero Config Rails, I am constantly working on automating configurations and boring setups like “Configure Minitest with Gitlab CI”.\n\nIf you don't want to read the whole blog and just want the whole configuration automatically, you can do so using Zero Config Rails. Just hit the following command and you will be good to go:\n\n```bash\n$ gem install zcr-zen && zen add ci:gitlab_ci --app_test_framework=minitest\n```\n\nFor the detailed list of configurations, you can visit Gitlab CI Generator.\n\nNow without further ado, let’s jump right into setting up Gitlab CI for Minitest and run those tests in CI.\n\n## Assumptions\n\n* You have basic Gitlab CI configurations ready i.e. `.gitlab-ci.yml` exists in your project.\n    \n    If it doesn’t, you can refer to my other article Integrate Pronto with Gitlab for Rails App\n    \n* You are using PostgreSQL in your app, though with minimal changes it should work for any other databases.\n    \n* You are using import-maps, though I have added configurations for projects with esbuild as well, you can just uncomment them.\n\n## Tested and working in\n\n* Ruby 3.3.0\n    \n* Rails 7.2.1\n    \n* Minitest 5.25.1\n    \n* selenium-webdriver 4.24.0\n\n## Configure Gitlab CI Variables\n\nFirs of all, we need to add some configurations required by the CI to run tests. This should be done over at Gitlab.\n\n### Add variable for storing environment variables\n\nI normally use Figjam which is a maintained version of the popular Figaro gem for storing environment variables which uses `config/application.yml` but just the plain `.env` file using dotenv gem is also very popular. Anyway, just copy the content from whatever you are using and paste it inside the Value for this new variable.\n\nYou can visit the official documentation to learn about setting up variables for Gitlab CI. 
You have to go to your project's setting in Gitlab and configure these in CI/CD variables.\n\nCreate a new variable for storing content in your `config/application.yml`:\n\n1. Type: File\n    \n2. Flags\n    \n    Uncheck all checklists here i.e. Protect variable, Mask variable and Expand variable reference\n    \n3. Description\n    \n    You can add “Environment Variables“ but it's optional and you can skip this as \"Key\" (just below this) is already clear enough on what this variable is storing.\n    \n4. Key: `env`\n    \n    In \"Value\", add the copied content from your env file.\n\n    *NOTE*: Make sure to only copy what is under \"test\" block or \".env.test\", you don’t want to add production variables here!\n\n### Add variable for `MASTER_KEY`\n\nRails comes with `config/credentials.yml.enc` for storing secrets, we generally also use ENV variables for this but since Rails credentials is the default, we will also look at how to configure those.\n\nTo decrypt credentials file, you need MASTER\\_KEY. If you have generated multiple credentials file per environment then you might have multiple keys like master.key, staging.key, production.key, etc..\n\nCreate a new variable for storing content in the “.key” file that can decrypt secrets configured for the test environment; normally this will be inside the `config/master.key`:\n\n1. Type: Variable (Default)\n    \n2. Flags\n    \n    Uncheck all checklists here i.e. Protect variable, Mask variable and Expand variable reference\n    \n3. Description\n    \n    Optional. You can leave it blank.\n    \n4. Key: MASTER\\_KEY\n    \n    And in Value, add the content copied from the `master.key`\n\n## Add `database.yml.ci` file\n\nIt's not considered a good practice to use `config/database.yml` file for the CI so we will instead create a new file `config/database.yml.ci` and add configurations required to run tests inside.\n\nYou can visit the official documentation to learn about setting up variables. 
You have to go to your project's setting in Gitlab and configure these in CI/CD variables.\n\nAfter creating the file, add the following:\n\n```yml\ntest:\n  adapter: postgresql\n  encoding: unicode\n  host: postgres\n  database: ci_db\n  username: postgres\n  pool: \n```\n\nFor username it should be “postgres” which is the default user that gets created when postgres service/docker is created hence it doesn’t ask password or tries to authenticate user. You might get an error otherwise because no other users will have been created in postgres at this point: \\`Please check your database configuration to ensure the username/password are valid\\`.\n\nFor host make sure to use \"postgres\" instead of “localhost”. For MySQL, you will have to use \"mysql\" as said in the official documentation:\n\n> The service container for MySQL is accessible under the hostname mysql. To access your database service, connect to the host named mysql instead of a socket or localhost.\n\nSQLite doesn’t need any host configurations but other configurations will most probably vary.\n\n## Configure Capybara with Selenium\n\nWe will configure Selenium with Chrome to be used both in CI and Local with Headless mode (by default) while also allowing to run in the browser if needed for debugging.\n\nCreate a new file \"test/helpers/capybara.rb\" and add the following code:\n\n```ruby\nrequire \"selenium-webdriver\"\n\nCapybara.register_driver :selenium_chrome_custom do |app|\n  options = Selenium::WebDriver::Chrome::Options.new\n\n  options.add_argument(\"--headless=new\") unless ENV[\"SELENIUM_HEADFUL\"]\n\n  options.add_argument(\"--window-size=1400,1400\")\n  options.add_argument(\"--no-sandbox\")\n  options.add_argument(\"--disable-dev-shm-usage\")\n\n  remote_url = ENV[\"SELENIUM_REMOTE_URL\"]\n\n  if remote_url\n    Capybara::Selenium::Driver.new(\n      app,\n      browser: :remote,\n      url: remote_url,\n      options:\n    )\n  else\n    Capybara::Selenium::Driver.new(app, browser: 
:chrome, options:)\n  end\nend\n```\n\n### Explanation\n\nLet's look at what each part of the code block above is doing.\n\n#### Custom Selenium Chrome driver\n\n`Capybara.register_driver :selenium_chrome_custom`\n\nSince the existing Selenium drivers don't provide the custom options we want, we are creating a new driver `selenium_chrome_custom` which will handle the Remote/Local connection as well as Headless/Headful mode.\n\n#### Options\n\n* `--window-size=1400,1400`\n    \n    Set the window size to 1400x1400 pixels. This is a reasonable size without being too large, but you can set it to whatever you like. This mostly impacts the size of debugging screenshots, but some tests may fail if you ask Capybara to click on an element which is not currently visible on the page.\n    \n* `--no-sandbox`\n    \n    Disables Chrome’s sandbox functionality because it has an issue with Docker version 1.10.0 and later.\n    \n* `--disable-dev-shm-usage`\n    \n    The \"/dev/shm\" shared memory partition is too small in many VM environments, which causes Chrome to fail or crash, so we disable its use.\n    \n* `--headless=new`\n    \n    Enable Chrome’s headless mode, which will run Chrome without a UI.\n    \n    `SELENIUM_HEADFUL` will control this option. In development, you may want to run Chrome and see what's happening in the browser for debugging; you can do so by running tests with `SELENIUM_HEADFUL=true bundle exec rails test:system`.\n    \n    We will see a list of other commands to run system tests at the end of this explanation section in a bit.\n\nSome guides may suggest using the `--disable-gpu` flag, but this is no longer necessary on any operating system.\n\nThis explanation was shamelessly copied from Remote Selenium WebDriver servers with Rails, Capybara, RSpec, and Chrome🙈.\n\n#### Selenium remote URL\n\n`remote_url = ENV[\"SELENIUM_REMOTE_URL\"]`\n\nThe Remote option is required mostly for CI, but you can also test it out locally by running the Selenium Docker image, e.g. 
with `SELENIUM_REMOTE_URL=http://localhost:4444/wd/hub bundle exec rails test:system`\n\nThe Remote option is controlled by `SELENIUM_REMOTE_URL` which needs to be passed when running tests as seen above.\n\nAnother configuration related to the remote is the use of `browser: :remote` inside `Capybara::Selenium::Driver.new` which tells Capybara to run tests in a remote Chrome browser instead of a local one.\n\n### Add host configurations\n\nUpdate the `test/application_system_test_case.rb` file to include the following content so Gitlab CI can run tests in the remote browser.\n\n```ruby\n# other require declarations ...\nrequire \"helpers/capybara\"\n\nclass ApplicationSystemTestCase < ActionDispatch::SystemTestCase\n  driven_by :selenium_chrome_custom\n\n  def setup\n    # Make the test app listen to outside requests, required for the remote Selenium instance\n    Capybara.server_host = \"0.0.0.0\"\n\n    if ENV[\"SELENIUM_REMOTE_URL\"]\n      # Use the application container's IP instead of localhost so Capybara knows where to direct Selenium\n      ip = Socket.ip_address_list.detect(&:ipv4_private?).ip_address\n      Capybara.app_host = \"http://#{ip}:#{Capybara.server_port}\"\n    end\n\n    super\n  end\nend\n```\n\nThis piece of code was extracted from the official Rails Documentation.\n\n### Commands to run tests\n\nLastly, let's see various commands we can use to run system tests.\n\n* Run in headless mode (default): `bundle exec rails test:system`\n    \n* Run in headful mode: `SELENIUM_HEADFUL=true bundle exec rails test:system`\n    \n* Run in headless mode inside an external docker image locally: `SELENIUM_REMOTE_URL=http://localhost:4444/wd/hub bundle exec rails test:system`\n    \n\nFor CI, the default command `bundle exec rails test:system` will work. But `SELENIUM_REMOTE_URL` will be `http://selenium:4444/wd/hub` and it will be passed as an Environment Variable instead. We will look at how to do that next.\n\n## Update `.gitlab-ci.yml` to run all tests\n\nWe will be adding code to enable all the following tests and you can choose to pick up or ignore as per your requirements:\n\n* Unit and Integration tests (Model, Requests, Authorization, Services etc.) which don't require us to start a browser\n    \n* System Tests where we will start the Chrome browser and run tests inside it\n    \n\nUpdate your `.gitlab-ci.yml` with the configurations given below. 
Most of the configurations are accompanied by explanations; you can find the clean configuration without comments at the end of the blog in the section \"**Final** `.gitlab-ci.yml`\".\n\n```yml\n# change to the ruby version your application uses\nimage: ruby:3.3.0\n\nvariables:\n  MASTER_KEY: $MASTER_KEY\n\n# explanation in next section\ncache:\n  paths:\n    - vendor/\n    # uncomment till yarn.lock if you are using esbuild i.e. you have package.json in your project\n    # - node_modules/\n    # - yarn.lock # or package-lock.json\n\nstages:\n  - test\n\n# base configuration required for running tests\n.base_db:\n  # add-on docker images required for running tests\n  services:\n    - postgres:latest\n  variables:\n    # set Rails environment so we don't have to prefix each command with RAILS_ENV=test\n    RAILS_ENV: test\n    # Postgres runs in a separate docker image and requires authentication to connect. Disabling that here by using \"trust\" so it doesn't ask for authentication\n    POSTGRES_HOST_AUTH_METHOD: trust\n  before_script:\n    # use the same bundler version that was used in bundling the Gemfile\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    # install all gems to \"vendor\" folder which helps in caching of gem installation in between the execution of CI jobs\n    - bundle config set --local path \"vendor\"\n    # you can uncomment lines till `yarn install` if you are using esbuild\n    # - apt-get update -qq\n    # install \"nodejs\" required for yarn\n    # - apt-get install -y -qq nodejs\n    # - curl -o- -L https://yarnpkg.com/install.sh | bash\n    # Make yarn available in the current terminal\n    # - export PATH=\"$HOME/.yarn/bin:$HOME/.config/yarn/global/node_modules/.bin:$PATH\"\n    # - yarn install --pure-lockfile\n    - bundle install --jobs $(nproc)\n    - cp config/database.yml.ci config/database.yml\n    # config/application.yml can be different for you. 
If you are using the dotenv gem then this content will be `cat $env > .env`\n    - cat $env > config/application.yml\n    - bundle exec rails db:test:prepare\n\nunit_and_integration_tests:\n  # reuse all configurations defined in .base_db above\n  extends: .base_db\n  stage: test\n  # run this job only when merge requests are created, updated or merged\n  only:\n    - merge_requests\n  script:\n    - bundle exec rails test\n\nsystem_tests:\n  extends: .base_db\n  stage: test\n  services:\n    - name: selenium/standalone-chrome:latest\n      alias: selenium\n    # need to declare postgres again because \"services\" key will override the one defined in .base_db\n    - postgres:latest\n  variables:\n    RAILS_ENV: test\n    # Location of the selenium docker image. \"selenium\" is an alias, you can also use http://selenium-standalone-chrome:4444/wd/hub or selenium__standalone-chrome (commonly seen in other guides)\n    SELENIUM_REMOTE_URL: http://selenium:4444/wd/hub\n  only:\n    - merge_requests\n  script:\n    - bundle exec rails test:system\n  # store necessary files and folders in case of test failure for debugging the error\n  artifacts:\n    when: on_failure\n    paths:\n      - log/test.log\n      - tmp/screenshots/\n    expire_in: 1 week\n```\n\n### Explanation\n\nLet's look at some configurations where the explanation was missing and would be lengthy to add inline.\n\n#### cache\n\n```yml\ncache:\n  paths:\n    - vendor/\n    - node_modules/\n    - yarn.lock\n```\n\nThis tells Gitlab CI to cache the vendor folder where we are storing all our gems, **node\\_modules** where all JS packages are stored and **yarn.lock** which stores the information about installed packages with their versions.\n\nStoring all these folders and files speeds up the CI in subsequent runs. 
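\n\nAs an optional tweak (not required for the setup in this blog), Gitlab CI also lets you scope the cache per branch with a `key` so feature branches don't overwrite each other's cache; a minimal sketch:\n\n```yml\ncache:\n  # one cache per branch; $CI_COMMIT_REF_SLUG is a predefined Gitlab CI variable\n  key: $CI_COMMIT_REF_SLUG\n  paths:\n    - vendor/\n```\n\n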
`bundle install` and `yarn install` will only install new packages that are not already inside the cache.\n\n#### stages\n\n```yml\nstages:\n  - test\n```\n\nStages define when to run the jobs.\n\nIf you also have linting and continuous deployment configured then stages could look like this:\n\n```yml\nstages:\n  - lint\n  - test\n  - staging_deploy\n  - production_deploy\n```\n\nJobs are run in the same order as configured here, i.e. linting runs first, then tests, and lastly deployments.\n\n#### .base\\_db\n\nAll common configurations used by jobs that require database access are extracted here.\n\n`services` are add-on docker images and provide capabilities like database, redis, selenium drivers, etc.\n\n`variables` are environment variables used by Rails.\n\n`before_script` runs before the `script` so anything that needs to be pre-configured can be added here.\n\n#### unit\\_and\\_integration\\_tests\n\n`extends` will extend the configurations defined in `.base_db` and use those configurations for this job.\n\n`stage` tells at what stage this job runs, based on the `stages` defined just above this job configuration.\n\n`script` is the series of commands executed to run this job.\n\n#### system\\_tests\n\n`selenium/standalone-chrome:latest` configures the docker image for Selenium with the latest version of Chrome.\n\n`artifacts` is used to store necessary files and folders in case of test failure. This helps us in debugging failing tests when needed. 
We are storing test log files for this purpose.\n\n## Final `.gitlab-ci.yml`\n\nThis is how your `.gitlab-ci.yml` should look if you have followed everything in this blog:\n\n```yml\nimage: ruby:3.3.0\n\nvariables:\n  MASTER_KEY: $MASTER_KEY\n\ncache:\n  paths:\n    - vendor/\n\nstages:\n  - test\n\n.base_db:\n  services:\n    - postgres:latest\n  variables:\n    RAILS_ENV: test\n    POSTGRES_HOST_AUTH_METHOD: trust\n  before_script:\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    - bundle config set --local path 'vendor'\n    - bundle install --jobs $(nproc)\n    - cp config/database.yml.ci config/database.yml\n    - cat $env > config/application.yml\n    - bundle exec rails db:test:prepare\n\nunit_and_integration_tests:\n  extends: .base_db\n  stage: test\n  only:\n    - merge_requests\n  script:\n    - bundle exec rails test\n\nsystem_tests:\n  extends: .base_db\n  stage: test\n  services:\n    - postgres:latest\n    - name: selenium/standalone-chrome:latest\n      alias: selenium\n  variables:\n    RAILS_ENV: test\n    POSTGRES_HOST_AUTH_METHOD: trust\n    SELENIUM_REMOTE_URL: http://selenium:4444/wd/hub\n  only:\n    - merge_requests\n  script:\n    - bundle exec rails test:system\n  artifacts:\n    when: on_failure\n    paths:\n      - log/test.log\n      - tmp/screenshots/\n    expire_in: 1 week\n```\n\n## Conclusion\n\nPhew, that was a lot of configurations and explanations. And in the near future you will be able to automate all of this with just a single command from Zero Config Rails, stay tuned!\n\nWith this, your Rails app now has all types of tests running in Gitlab CI, so you can merge changes without worrying about them breaking the production application.\n\nThank you for reading. Happy coding!\n\n## References\n\n* Setup RSpec Tests in Rails with Gitlab CI\n\n## Image Credits\n\n* Cover Image by Tania C on Unsplash"
        },
        {
          "id": "articles-setup-and-run-rspec-tests-with-gitlab-ci",
          "title": "Setup RSpec Tests in Rails with Gitlab CI",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, rspec, gitlab ci",
          "url": "/articles/setup-and-run-rspec-tests-with-gitlab-ci/",
          "content": "At Truemark, we are constantly looking to improve the code quality in our projects. And one way to do that is through the integration of CI into our workflow. CI can help in automating code reviews for linting and standard practices as well as for running tests to check if a code change breaks any existing functionality.\n\nIf you don't want to read the whole blog and just want the whole configuration automatically, you can do so using Zero Config Rails. Just hit the following command and you will be good to go:\n\n```bash\n$ gem install zcr-zen && zen add ci:gitlab_ci --app_test_framework=rspec\n```\n\nFor the detailed list of configurations, you can visit Gitlab CI Generator.\n\nNow without further ado, let's look at adding configurations to Gitlab CI for running RSpec tests in our Rails application.\n\n## Assumption\n\n- You have basic Gitlab CI configurations ready i.e. `.gitlab-ci.yml` exists in your project.\n    If it doesn't, you can refer to my other article Integrate Pronto with Gitlab for Rails App.\n- You are using PostgreSQL in your app though with minimal changes it should also work for other databases like MySQL and SQLite (let me know in comments if it doesn't and I will help you!)\n- You are using RSpec as a test library but again with minimal changes it should work for MiniTest as well (let me know if it doesn't and I will help you!)\n\n## Tested and working in\n\n- Ruby 3.3.0\n- Rails 7.1.3\n- rspec-rails 6.1.1\n- selenium-webdriver 4.18.1\n\n## Configure Gitlab CI Variables\n\nFirst of all, we need to add some configurations and files required by the CI to run tests. This should be done over at Gitlab. 
Let's look at them one by one.\n\n### Add a variable for storing database.yml file content\n\nIt's not considered good practice to use the `config/database.yml` file for CI, so we will have to add a Gitlab CI variable and store the content required to set up the PostgreSQL database inside it.\n\nYou can visit the official documentation to learn how to set up variables. You have to go to your project's settings in Gitlab and configure these in CI/CD variables.\n\nCreate a new variable for this:\n\n1. Type: File\n2. Flags\n\n    Uncheck all checkboxes here, i.e. Protect variable, Mask variable and Expand variable reference\n3. Description\n\n\tYou can write \"Database YML\" but it's optional and you can skip this as \"Key\" (just below this) is already clear enough on what this variable is storing.\n4. Key: `database_yml`\n\nLastly, in \"Value\" add the following:\n\n```yml\ntest:\n  adapter: postgresql\n  encoding: unicode\n  host: postgres\n  database: test_ci_db\n  username: postgres\n  password: postgres\n  pool: \n```\n\nFor host make sure to use \"postgres\" instead of \"localhost\". For other services like MySQL and SQLite you will most probably have to use \"mysql\" or \"sqlite\" respectively as stated in the official documentation:\n\n> The service container for MySQL is accessible under the hostname mysql. To access your database service, connect to the host named mysql instead of a socket or localhost.\n\n### Add a variable for storing environment variables\n\nI normally use Figaro for storing environment variables, which uses `config/application.yml`, but just the plain `.env` file is also very popular. 
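\n\nFor illustration, the value of this variable might look something like this (the keys below are made up; use whatever keys your app actually reads):\n\n```yml\nSOME_API_KEY: dummy-test-key\nMAILER_SENDER: test@example.com\n```\n\n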
Anyway, just copy the content from whatever you are using and paste it inside the Value for this new variable.\n\nType, Flags and Description will be the same as described above for database.yml.\n\nYou can add `env` for Key.\n\nIn \"Value\", add the copied content from your env file.\n\n_NOTE_: Make sure to only copy what is under the \"test\" block or \".env.test\"; you don't want to add production variables here as that would lead to security issues.\n\n## Require selenium-webdriver\n\nWe need to require selenium-webdriver, which is needed to run tests in the browser, especially system tests.\n\nAdd the following to `spec/rails_helper.rb` if it's not already there:\n\n```ruby\nrequire \"selenium-webdriver\"\n```\n\nNote: Previously the \"webdrivers\" gem was required to automate the installation and updating of browser-specific drivers. But Selenium 4 now manages browser drivers itself, leading to webdrivers being deprecated. Quoting the webdrivers Github:\n\n> If you can update to the latest version of Selenium (4.11+), please do so and stop requiring this gem.\n\n## Configure Capybara with Selenium\n\nWe will configure Selenium with Chrome to be used both in CI and locally, in headless mode by default, while also allowing you to run tests in the browser if needed for debugging.\n\nCreate a new file \"spec/helpers/capybara.rb\" and add the following code:\n\n```ruby\nCapybara.register_driver :selenium_chrome_custom do |app|\n  options = Selenium::WebDriver::Chrome::Options.new\n\n  options.add_argument(\"--headless=new\") unless ENV[\"SELENIUM_HEADFUL\"]\n\n  options.add_argument(\"--window-size=1400,1400\")\n  options.add_argument(\"--no-sandbox\")\n  options.add_argument(\"--disable-dev-shm-usage\")\n\n  remote_url = ENV.fetch(\"SELENIUM_REMOTE_URL\", nil)\n\n  if remote_url\n    Capybara::Selenium::Driver.new(\n      app,\n      browser: :remote,\n      url: remote_url,\n      options:\n    )\n  else\n    Capybara::Selenium::Driver.new(app, browser: :chrome, options:)\n  
end\nend\n\nRSpec.configure do |config|\n  config.before(:each, type: :system, js: true) do\n    # Make the test app listen to outside requests, required for the remote Selenium instance\n    Capybara.server_host = \"0.0.0.0\"\n\n    if ENV.fetch(\"SELENIUM_REMOTE_URL\", nil)\n      # Use the application container's IP instead of localhost so Capybara knows where to direct Selenium\n      ip = Socket.ip_address_list.detect(&:ipv4_private?).ip_address\n      Capybara.app_host = \"http://#{ip}:#{Capybara.server_port}\"\n    end\n\n    driven_by :selenium_chrome_custom\n  end\nend\n```\n\n### Explanation\n\nLet's look at what each part of the code above is doing.\n\n#### Custom Selenium Chrome driver\n\n`Capybara.register_driver :selenium_chrome_custom`\n\nSince existing Selenium Drivers don't provide the custom options we want, we are creating a new driver `selenium_chrome_custom` which will handle Remote/Local connection as well as Headless/Headful mode.\n\n#### Options\n\n- `--window-size=1400,1400`\n\n  Set the window size to 1400x1400 pixels. This is a reasonable size without being too large, but you can set it to whatever you like. This mostly impacts the size of debugging screenshots, but some tests may fail if you ask Capybara to click on an element which is not currently visible on the page.\n- `--no-sandbox`\n\n  Disables Chrome’s sandbox functionality because it has an issue with Docker version 1.10.0 and later.\n- `--disable-dev-shm-usage`\n\n  The \"/dev/shm\" shared memory partition is too small on many VM environments which will cause Chrome to fail or crash so we are disabling it.\n- `--headless=new`\n\n  Enable Chrome’s headless mode which will run Chrome without a UI.\n\n  `SELENIUM_HEADFUL` will control this option. 
In development, you may want to run Chrome and see what's happening in the browser; you can do so by running tests with `SELENIUM_HEADFUL=true bundle exec rspec spec/system`.\n  \n  We will see a list of other commands to run system tests at the end of this explanation section.\n\nSome guides may suggest using the `--disable-gpu` flag, but this is no longer necessary on any operating system.\n\nThis explanation was shamelessly copied from Remote Selenium WebDriver servers with Rails, Capybara, RSpec, and Chrome 🙈.\n\n#### Selenium remote URL\n\n`remote_url = ENV.fetch(\"SELENIUM_REMOTE_URL\", nil)`\n\nThe Remote option is required mostly for CI but you can also test it out locally by running the Selenium Docker image e.g. with `SELENIUM_REMOTE_URL=http://localhost:4444/wd/hub bundle exec rspec spec/system`\n\nThe Remote option is controlled by `SELENIUM_REMOTE_URL` which needs to be passed when running tests as seen above.\n\nAnother configuration related to the remote is the use of `browser: :remote` inside `Capybara::Selenium::Driver.new` which tells Capybara to run tests in a remote Chrome browser instead of a local one.\n\n#### Capybara server and app host\n\n```ruby\nCapybara.server_host = \"0.0.0.0\"\n\nif ENV.fetch(\"SELENIUM_REMOTE_URL\", nil)\n  # Use the application container's IP instead of localhost so Capybara knows where to direct Selenium\n  ip = Socket.ip_address_list.detect(&:ipv4_private?).ip_address\n  Capybara.app_host = \"http://#{ip}:#{Capybara.server_port}\"\nend\n```\n\n`server_host` and `app_host` are required so the app under test is reachable from the remote browser and Capybara knows where to direct Selenium.\n\nThis piece of code was extracted from the official Rails Documentation.\n\n#### Commands to run tests\n\nLastly, let's see various commands we can use to run system tests.\n\n- Run in headless mode (default): `bundle exec rspec spec/system`\n- Run in headful mode: `SELENIUM_HEADFUL=true bundle exec rspec spec/system`\n- Run in headless mode inside an external docker image locally: `SELENIUM_REMOTE_URL=http://localhost:4444/wd/hub bundle exec rspec spec/system`\n\nFor CI, the default command `bundle exec rspec spec/system` will work. But `SELENIUM_REMOTE_URL` will be `http://selenium:4444/wd/hub` and it will be passed as an Environment Variable instead. We will look at how to do that next.\n\n## Update `.gitlab-ci.yml` to run all tests\n\nWe will be adding code to enable all the following tests and you can choose to pick up or ignore as per your requirements:\n\n- Unit and Integration tests (Model, Requests, Authorization, Services etc.) which don't require us to start a browser\n- System Tests where we will start the Chrome browser and run tests inside it\n\nUpdate your `.gitlab-ci.yml` with the configurations given below. Most of the configurations are accompanied by explanations; you can find the clean configuration without comments at the end of the blog in the section \"Final .gitlab-ci.yml\".\n\n```yml\n# change to the ruby version your application uses\nimage: ruby:3.3.0\n\n# explanation in next section\ncache:\n  paths:\n    - vendor/\n    - node_modules/\n    - yarn.lock\n\nstages:\n  - test\n\n# base configuration required for running tests\n.base_db:\n  # add-on docker images required for running tests\n  services:\n    - postgres:latest\n  variables:\n    # set Rails environment so we don't have to prefix each command with RAILS_ENV=test\n    RAILS_ENV: test\n    # Postgres runs in a separate docker image and requires authentication to connect. 
Disabling that here by using \"trust\" so it doesn't ask for authentication\n    POSTGRES_HOST_AUTH_METHOD: trust\n  before_script:\n    # use the same bundler version that was used in bundling the Gemfile\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    # install all gems to \"vendor\" folder which helps in caching of gem installation in between the execution of CI jobs\n    - bundle config set --local path 'vendor'\n    # install \"nodejs\" required for yarn and \"cmake\" required for pronto\n    - apt-get update -qq && apt-get install -y -qq nodejs cmake\n    # install gems in parallel, nproc returns the number of available processors\n    - bundle install --jobs $(nproc)\n    # install yarn\n    - curl -o- -L https://yarnpkg.com/install.sh | bash\n    # Make yarn available in the current terminal\n    - export PATH=\"$HOME/.yarn/bin:$HOME/.config/yarn/global/node_modules/.bin:$PATH\"\n    - yarn install\n    # copy all database configurations stored as the Gitlab CI variable to the file \"config/database.yml\"\n    - cat $database_yml > config/database.yml\n    # 👋 config/application.yml can be different for you. For example, 
if you are using \".env\" then this content will be `cat $env > .env`\n    - cat $env > config/application.yml\n    - bundle exec rails db:create\n    - bundle exec rails db:schema:load\n    # Required for integration and system tests\n    - bundle exec rails assets:precompile\n\nunit_and_integration_tests:\n  # reuse all configurations defined in .base_db above\n  extends: .base_db\n  stage: test\n  # run this job only when merge requests are created, updated or merged\n  only:\n    - merge_requests\n  script:\n    # run all tests except system tests\n    - bundle exec rspec --exclude-pattern \"spec/system/**/*.rb\"\n\nsystem_tests:\n  extends: .base_db\n  stage: test\n  services:\n    # need to declare postgres again because \"services\" key will override the one defined in .base_db\n    - postgres:latest\n    # Docker image for Selenium with Chrome so tests can run inside the browser\n    - name: selenium/standalone-chrome:122.0\n      alias: selenium\n  variables:\n    RAILS_ENV: test\n    POSTGRES_HOST_AUTH_METHOD: trust\n    # Location of the selenium docker image. 
\"selenium\" is an alias, you can also use http://selenium-standalone-chrome:4444/wd/hub or selenium__standalone-chrome (commonly seen in other guides)\n    SELENIUM_REMOTE_URL: http://selenium:4444/wd/hub\n  # store necessary files and folders in case of test failure for debugging the error\n  artifacts:\n    when: on_failure\n    paths:\n      - log/test.log\n    expire_in: 1 week\n  only:\n    - merge_requests\n  script:\n    - bundle exec rspec spec/system\n```\n\n### Explanation\n\nLet's look at some configurations where the explanation was missing and would be lengthy to add inline.\n\n#### cache\n\n```yml\ncache:\n  paths:\n    - vendor/\n    - node_modules/\n    - yarn.lock\n```\n\nThis tells Gitlab CI to cache the vendor folder where we are storing all our gems, node_modules where all JS packages are stored, and yarn.lock which stores the information about installed packages with their versions.\n\nStoring all these folders and files speeds up the CI in subsequent runs. `bundle install` and `yarn install` will only install new packages that are not already inside the cache.\n\n#### stages\n\n```yml\nstages:\n  - test\n```\n\nStages define when to run the jobs. For example, a stage that runs tests can come after a stage that runs linting on new changes.\n\nIf you also have linting and continuous deployment configured then stages could look like this:\n\n```yml\nstages:\n  - lint\n  - test\n  - staging_deploy\n  - production_deploy\n```\n\nJobs are run in the same order as configured here, i.e. linting runs first, then tests, and lastly deployments.\n\n#### .base_db\n\nThis configuration is used by all jobs that require database access. 
All common configurations for such jobs are extracted here.\n\n`services` are add-on docker images and provide capabilities like database, redis, selenium drivers, etc.\n\n`variables` are environment variables used by Rails.\n\n`before_script` runs before the `script` so anything that needs to be pre-configured can be added here.\n\n#### unit\\_and\\_integration_tests\n\n`extends` will extend the configurations defined in `.base_db` and use those configurations for this job.\n\n`stage` tells at what stage this job runs, based on the **`stages`** defined just above this job configuration.\n\n`script` is the series of commands executed for this job. We are running all tests except system tests by using the rspec option `--exclude-pattern \"spec/system/**/*.rb\"`\n\n#### system_tests\n\n`selenium/standalone-chrome:122.0` configures the docker image for Selenium with Chrome version 122.0; normally people use `selenium/standalone-chrome:latest` instead. But at the time of writing this blog, the latest version \"123.0\" had some issues and the Chrome browser was not starting; I had to spend 6+ hours debugging just to find that out 🫠\n\n`artifacts` is used to store necessary files and folders in case of test failure. This helps us in debugging failing tests when needed. 
We are storing test log files for this purpose.\n\n## Final `.gitlab-ci.yml`\n\nIf you also have Pronto or any other linter configured in CI then your final file could look like this:\n\n```yml\nimage: ruby:3.3.0\n\ncache:\n  paths:\n    - vendor/\n    - node_modules/\n    - yarn.lock\n\nstages:\n  - lint\n  - test\n\npronto:\n  before_script:\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    - apt-get update -qq && apt-get install -y -qq cmake\n    - bundle config set --local path 'vendor'\n    - bundle install --jobs $(nproc)\n  stage: lint\n  only:\n    - merge_requests\n  variables:\n    PRONTO_GITLAB_API_PRIVATE_TOKEN: $PRONTO_ACCESS_TOKEN\n  script:\n    - git fetch origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME\n    - bundle exec pronto run -f gitlab_mr -c origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME\n\n.base_db:\n  services:\n    - postgres:latest\n  variables:\n    RAILS_ENV: test\n    POSTGRES_HOST_AUTH_METHOD: trust\n  before_script:\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    - bundle config set --local path 'vendor'\n    - apt-get update -qq && apt-get install -y -qq nodejs cmake\n    - bundle install --jobs $(nproc)\n    - curl -o- -L https://yarnpkg.com/install.sh | bash\n    - export PATH=\"$HOME/.yarn/bin:$HOME/.config/yarn/global/node_modules/.bin:$PATH\"\n    - yarn install\n    - cat $database_yml > config/database.yml\n    - cat $env > config/application.yml\n    - bundle exec rails db:create\n    - bundle exec rails db:schema:load\n    - bundle exec rails assets:precompile\n\nunit_and_integration_tests:\n  extends: .base_db\n  stage: test\n  only:\n    - merge_requests\n  script:\n    - bundle exec rspec --exclude-pattern \"spec/system/**/*.rb\"\n\nsystem_tests:\n  extends: .base_db\n  stage: test\n  services:\n    - postgres:latest\n    - name: selenium/standalone-chrome:122.0\n      alias: selenium\n  variables:\n    
RAILS_ENV: test\n    POSTGRES_HOST_AUTH_METHOD: trust\n    SELENIUM_REMOTE_URL: http://selenium:4444/wd/hub\n  artifacts:\n    when: on_failure\n    paths:\n      - log/test.log\n    expire_in: 1 week\n  only:\n    - merge_requests\n  script:\n    - bundle exec rspec spec/system\n```\n\n## Conclusion\n\nPhew, that was a lot of configurations and explanations.\n\nThe reason I wrote this blog is that I faced various problems when following other blogs on the internet and couldn't fully understand what was happening inside the configuration file because there were no explanations. I hope I have explained everything the code is doing so you don't have to waste time researching these things again.\n\nWith this, your Rails app now has all types of tests running in Gitlab CI, so you can merge changes without worrying about them breaking the production application.\n\nThank you for reading. Happy coding!\n\n**References**\n\n- How services are linked to the Job (Gitlab)\n- Gitlab CI Config for System Tests with Minitest (Github Gist)\n- Remote Selenium WebDriver servers with Rails, Capybara, RSpec, and Chrome\n- System Testing (Official Rails Documentation)\n\n**Image Credits:**\n\n- Cover Image by Jens Freudenau on Unsplash"
        },
        {
          "id": "articles-setup-active-job-with-sidekiq-in-rails",
          "title": "Setup Active Job with Sidekiq in Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails",
          "url": "/articles/setup-active-job-with-sidekiq-in-rails/",
          "content": "Active Job is a framework for declaring jobs and making them run on a variety of queuing backends. These jobs can be everything from regularly scheduled clean-ups, to billing charges, to mailings. Anything that can be chopped up into small units of work and run in parallel.\n\nActive Job comes pre-installed in Rails.\n\nNow on to Sidekiq; it is one of the most widely used background job frameworks that you can implement in a Rails application. It is multi-threaded and utilizes Redis for its queuing storage.\n\nToday we will be working on setting up Active Job with Sidekiq while also following the official Rails documentation.\n\n## Skills required to follow the tutorial\n\nIntermediate:\n\n- Rails\n- Linux skills to work with commands in the server where your app has been deployed\n\n## You should have\n\n- Existing Rails app\n- Linux server already setup to run Rails app\n\n## Step 1: Install the Sidekiq Gem\n\nAdd the following to your Gemfile:\n\n```ruby\ngem 'sidekiq'\n```\n\nRun the following in command line:\n\n```shell\n$ bundle install\n```\n\nThat should install the latest version of the Sidekiq gem. Now let's lock the version by looking at \"Gemfile.lock\"; this was the version I had in my lock file, yours could be different.\n\n```ruby\ngem 'sidekiq', '~> 7.0.0'\n```\n\n## Step 2: Enable Sidekiq\n\nAdd the following to \"config/application.rb\":\n\n```ruby\n# Use sidekiq for active jobs\nconfig.active_job.queue_adapter = :sidekiq\n```\n\n## Step 3: Install Redis\n\nRedis is an open source, in-memory data store used by millions of developers as a database, cache, streaming engine, and message broker. 
It is an in-memory key-value store known for its flexibility and performance.\n\nSidekiq is backed by Redis as a job management store to process thousands of jobs per second.\n\nOfficial Redis Documentation has got you covered for the installation of Redis in any OS.\n\n### On MacOS via Homebrew\n\nIf you are on MacOS, you can follow instructions at Install Redis on MacOS (Official Documentation).\n\n### Linux\n\nYou can either choose to install from source or from the APT repository. \n\nI had installed Redis from source and these were the instructions I needed to run from the command line:\n\n```shell\n$ wget http://download.redis.io/redis-stable.tar.gz\n$ tar xvzf redis-stable.tar.gz\n$ cd redis-stable\n$ make\n```\n\nCopy the executables to your local /usr/local/bin folder so you can run the redis-server and redis-cli commands from any directory:\n\n```shell\nsudo cp src/redis-server /usr/local/bin/\nsudo cp src/redis-cli /usr/local/bin/\n```\n\n#### Automatically start on machine restart\n\n_NOTE:_ You can also find these instructions, explained more thoroughly, in the official Redis documentation at Getting Started > Installing Redis\n\n1. Create a directory in which to store your Redis config files and your data:\n\n    ```shell\n    sudo mkdir /etc/redis\n    sudo mkdir /var/redis\n    ```\n\n2. Copy the init script that you'll find in the Redis distribution under the utils directory into /etc/init.d. We suggest calling it with the name of the port where you are running this instance of Redis. For example:\n\n    ```shell\n    sudo cp redis-stable/utils/redis_init_script /etc/init.d/redis_6379\n    ```\n\n3. Edit the init script:\n\n    ```shell\n    sudo nano /etc/init.d/redis_6379\n    ```\n\n    Make sure to modify REDISPORT according to the port you are using. Both the pid file path and the configuration file name depend on the port number. I had used the port \"6379\".\n\n4. 
Copy the template configuration file you'll find in the root directory of the Redis distribution into /etc/redis/ using the port number as the name, for instance:\n\n    ```shell\n    sudo cp redis-stable/redis.conf /etc/redis/6379.conf\n    ```\n\n5. Create a directory inside /var/redis that will work as the data and working directory for this Redis instance:\n\n    ```shell\n    sudo mkdir /var/redis/6379\n    ```\n\n6. Edit the configuration file with `sudo nano /etc/redis/6379.conf`, making sure to perform the following changes:\n\n    - Change the port accordingly. In our example it is not needed as the default port is already 6379. It is under the section ## NETWORK ##\n    - Set daemonize to yes (by default it is set to no). It is under the section ## GENERAL ##.\n    - Set the pidfile to /var/run/redis_6379.pid (modify the port if needed). It is under the section ## GENERAL ##\n    - Set your preferred loglevel (notice by default). It is under the section ## GENERAL ##\n    - Set the logfile to /var/log/redis_6379.log (\"\" by default). It is under the section ## GENERAL ##\n    - Set the dir to /var/redis/6379 (very important step!). It is under the section ## SNAPSHOTTING ##\n\n7. Finally, add the new Redis init script to all the default runlevels using the following command:\n\n    ```shell\n    sudo update-rc.d redis_6379 defaults\n    ```\n\n8. You are done! Now you can try running your instance with:\n\n    ```shell\n    sudo /etc/init.d/redis_6379 start\n    ```\n\n9. 
Test if it is working correctly\n\n  - Try pinging your instance with redis-cli using the command `redis-cli ping`; you should see PONG\n  - Do a test save with `redis-cli save` and check that the dump file is correctly stored into /var/redis/6379/ (you should find a file called dump.rdb).\n  - Check that your Redis instance is correctly logging to the log file with `tail -f /var/log/redis_6379.log`; you can exit with `CTRL + c`\n  - If you are on a new machine/server where you can try things without taking anything else down, make sure that everything is still working after a reboot (Redis should start automatically on machine restart)\n\n### Windows\n\nIf you are on Windows, you can follow the instructions at Install Redis on Windows (Official Documentation).\n\n## Step 4: Enable Web UI to Monitor Jobs\n\nAdd the following to your \"config/routes.rb\":\n\n```ruby\n# config/routes.rb\n\nrequire 'sidekiq/web'\n\nMyapp::Application.routes.draw do\n  # mount Sidekiq::Web in your Rails app\n  mount Sidekiq::Web => \"/sidekiq\"\nend\n```\n\n## Step 5: Accessing the Sidekiq Web UI\n\n1. Run the Rails server: `rails s`\n2. 
Go to \"http://localhost:3000/sidekiq\"\n\nYou should now see a UI similar to this:\n\n![Home Page of Sidekiq UI in Rails App](../../images/articles/setup-active-job-with-sidekiq-in-rails/sidekiq-web-ui.webp)\n\n## Step 6: Add Basic Authentication to Web UI for Preventing Unauthorized Access\n\nBy default there is no authentication, and anyone who knows your API URL will be able to access the Sidekiq UI.\n\nYou can allow any authenticated `User` to access the Sidekiq UI by adding the following configuration to your \"config/routes.rb\":\n\n```ruby\n# config/routes.rb\n\nauthenticate :user do\n  mount Sidekiq::Web => '/sidekiq'\nend\n```\n\n_NOTE:_ This configuration is for the Devise gem; if you are using any other authentication gem then this configuration might be different.\n\nYou can view more options for authenticating users to restrict the Sidekiq UI at Authentication (Sidekiq Gem Github).\n\n## Example Job\n\nNow let's look at what a job can look like in the Rails app after setting up Sidekiq.\n\n### Application Job\n\nYour \"app/jobs/application_job.rb\" should look similar to the following:\n\n```ruby\nclass ApplicationJob < ActiveJob::Base\nend\n```\n\n### Custom Job\n\nIn apps like Slack we can add a status and set its expiration date and time. Your job could look like the following for such a feature:\n\n```ruby\nclass ExpireUserStatusJob < ApplicationJob\n  queue_as :default\n\n  def perform(user_status_id)\n    # remove the status once it has expired (adjust to your app's needs)\n    UserStatus.find(user_status_id).destroy\n  end\nend\n```\n\nThe model holding the status can then validate the expiry fields and schedule the job:\n\n```ruby\nclass UserStatus < ApplicationRecord\n  validates :expiry_date, presence: true, if: -> { expiry_time }\n  validates :expiry_time, presence: true, if: -> { expiry_date }\n  validates :description, presence: true, if: -> { expiry_date || expiry_time }\n\n  def schedule_expiration\n    return if expiry_date.blank? || expired?\n\n    expiry_date_time = combine_date_time(expiry_date, expiry_time)\n\n    ExpireUserStatusJob.set(wait_until: expiry_date_time).perform_later(id)\n  end\n\n  private\n\n  def expired?\n    return false if expiry_date.blank?\n\n    combine_date_time(expiry_date, expiry_time) < Time.current\n  end\nend\n```\n\n## Bonus: Setup Sidekiq in Ubuntu Server\n\nWe will run Sidekiq as a systemd service so that it starts automatically on boot and can be managed with systemctl:\n\n1. Create a new service file at \"/lib/systemd/system/sidekiq.service\"\n2. Copy the content from the Example Sidekiq systemd configurations to this file.\n3. Update the file:\n  - Change the WorkingDirectory to the folder where your app is deployed e.g. `WorkingDirectory=/home/deploy/my-app/current`\n  - Update the ExecStart path. 
This specifies the path and command to start Sidekiq. E.g. my current ExecStart path with rbenv is: `ExecStart=/home/deploy/.rbenv/shims/bundle exec sidekiq -e production`. \n\n  **_NOTE:_** If you are using a different Ruby version manager, then \"/home/deploy/.rbenv/shims/bundle\" will be different.\n\n  - Add `ExecReload=/usr/bin/kill -TSTP $MAINPID` below `ExecStart=....`\n  - Uncomment `# User=deploy` and update deploy to be the user you are using in the server; mine was deploy, so the line became `User=deploy`\n  - Uncomment `# Group=sudo` to make it `Group=sudo`\n  - Uncomment `# UMask=0002` to make it `UMask=0002`\n  - Update `WantedBy=multi-user.target` to `WantedBy=default.target`\n  - Save the file\n\n### Run Sidekiq\n\nMake systemd pick up the new service file with `sudo systemctl daemon-reload`, then enable the Sidekiq service to start on boot with `sudo systemctl enable sidekiq`\n\nYou can start the Sidekiq service with `sudo systemctl start sidekiq`\n\nHere are some useful commands for managing the Sidekiq service:\n\n\"sudo systemctl {start,stop,status,restart} sidekiq\" e.g. `sudo systemctl status sidekiq`\n\nYou can also use \"sudo service sidekiq {start,stop,status,restart}\" to perform commands e.g. `sudo service sidekiq status`\n\n### Sidekiq with Capistrano\n\nIf you are using Capistrano for deployment, you can refer to the section \"Bonus 2: Sidekiq for background jobs\" to configure Sidekiq with Capistrano at Deploy API only Rails App with Capistrano.\n\n## Conclusion\n\nCongratulations!!! You have successfully set up and deployed Sidekiq if you followed the blog all the way to the end.\n\nIf you have any queries or confusion, please let me know in the comments below and I will help you clear those up to the best of my ability.\n\nThanks for reading. Happy tinkering and happy coding!\n\n## Image Credits\n\n- Cover Image by Scott Blake on Unsplash\n\n## References\n\n- Active Job Basics [Rails Documentation]\n- How To Add Sidekiq and Redis to a Ruby on Rails Application [Digital Ocean]"
        },
        {
          "id": "articles-deploy-api-only-rails-app-with-capistrano",
          "title": "Deploy API only Rails App with Capistrano",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails",
          "url": "/articles/deploy-api-only-rails-app-with-capistrano/",
          "content": "Capistrano is a deployment automation tool built on Ruby, Rake, and SSH. It allows you to deploy your app to a remote server in a single command after initial configurations are done.\n\nDeployment with capistrano is as easy as `cap production deploy`. But to be able to hit this command, a lot of configurations need to be added first.\n\nToday we will be looking into setting up capistrano in our API only Rails application for making it easier to deploy to any remote server.\n\n## Skills required to follow the tutorial\n\nIntermediate:\n\n- Rails\n- Linux skills to work with commands in server where your app has been deployed\n\n## You should have\n\n- Existing Rails app\n- Linux server already setup to run Rails app\n\n_NOTE_:\n\nI am using rbenv for ruby so all configurations will be based on that, you can replace it as required for your version manager as required e.g. for rvm.\n\n## Step 1: Install Capistrano Gems\n\nAdd the following to your Gemfile under development group\n\n```ruby\ngroup :development do\n  gem \"capistrano\"\n  gem \"capistrano-rails\"\n  gem 'capistrano-rbenv'\nend\n```\n\nRun the following from command line:\n\n```shell\n$ bundle install\n```\n\nThat should install the latest version of capistrano gems, now let's lock those versions by looking at Gemfile.lock, these versions were what I had in my lock file, yours could be different.\n\n```ruby\ngroup :development do\n  gem \"capistrano\", \"~> 3.17\", require: false\n  gem \"capistrano-rails\", \"~> 1.6\", require: false\n  gem 'capistrano-rbenv', '~> 2.2', require: false\nend\n```\n\n## Step 2: Generate default configuration files\n\nRun the generator to create a basic set of configuration files:\n\n```shell\n$ bundle exec cap install\n```\n\n## Step 3: Require correct plugins in Capfile\n\nUncomment the following plugins in Capfile located at the root of the project\n\n```ruby\n  # require \"capistrano/rvm\"\n  require 'capistrano/rbenv'\n  # require 
\"capistrano/chruby\"\n  require 'capistrano/bundler'\n  require \"capistrano/rails/assets\"\n  require 'capistrano/rails/migrations'\n  # require 'capistrano/passenger'\n```\n\n## Step 4: Update Deploy file\n\nUpdate the deploy file at `config/deploy.rb` with values as required:\n\n1. application: Set to the application name or project name\n2. repo_url: Set to the URL where your code is being stored, you can normally get this with `git remote -v` and copy the URL of origin \n3. linked_files: Set to a list of files that should be shared and persisted over all releases, for e.g. config/database.yml, config/master.key, config/application.yml, etc. These files will not be deleted/reset in every release (other files will!!)\n4. linked_dirs: Set to a list of folders that should be shared and persisted over all releases, for e.g. log, tmp/pids, node_modules, etc. These folders will not be deleted/reset in every release (other folders will!!)\n5. keep_releases: Set to a number of previous releases you want to keep in the server after each release, I normally keep this to 3.\n6. conditionally_migrate: Skip migration if files in \"db/migrate\" were not modified. 
I normally set this to \"true\" but default is \"false\"\n\nYour deploy file could look something like below:\n\n```ruby\n# frozen_string_literal: true\n\n# config valid for current version and patch releases of Capistrano\nlock '~> 3.17.0'\n\nset :application, 'contract-template-editor-api'\nset :repo_url, 'git@github.com:truemark/contract-template-editor/api.git'\n\n# Default value for :linked_files is []\nset :linked_files, %w[config/application.yml config/database.yml config/master.key]\n\n# Default value for linked_dirs is []\nset :linked_dirs, %w[log tmp/pids tmp/cache tmp/sockets vendor/bundle .bundle public/system public/uploads node_modules]\n\n# Default value for keep_releases is 5\nset :keep_releases, 3\n\n# Skip migration if files in db/migrate were not modified\n# Defaults to false\nset :conditionally_migrate, true\n```\n\n## Step 5: Update environment specific deploy files\n\nI have normally worked on projects that has staging and production environment and configuration files for these two environment is provided default by Capistrano; each environment specific file is located under `config/deploy/{environment}.rb`.\n\nIn these files, there should be configurations that can change based on the environment of the Rails application.\n\nConfigurations for each environment will be the same with different value based on the environment e.g. server ip address will be different in production and staging. \n\nWe will only be looking at `config/deploy/production.rb` in this tutorial.\n\nUpdate the deploy file at `config/deploy/production.rb` with values as required:\n\n1. server: IP address of the server where you will deploy the app. It is a good idea to sore this value inside env file or rails credentials and take it from there.\n2. user: name of the user in the server, normally it will be \"deploy\" but could be username e.g. prabin\n3. roles: list of accessible roles for this user\n4. 
deploy_to: path of the folder to deploy your app to; you can get the path by logging in to the server, going to the folder you want your app to be deployed to, and entering the command `pwd`. If you haven't created the folder yet, you can create one, e.g. contract-template-api, and set the path of that folder.\n5. branch: git branch that will be used to deploy the app, normally \"main\" or \"master\"\n6. stage: environment of the app, should be \"production\"\n7. rails_env: same as stage, should be \"production\"\n\nYour file could look something like below:\n\n```ruby\nserver ENV['deploy_server_ip'], user: 'deploy', roles: %w[app db web], primary: 'true'\nset :deploy_to, 'path to the folder where app should be deployed e.g. /home/deploy/contract-template-editor/api'\nset :branch, 'main'\nset :stage, :production\nset :rails_env, :production\n```\n\n## Step 6: Upload secret keys and files\n\nSecret keys and files should never be committed to git. These files are required for the app to function properly. Normally these files are already configured on the developer's machine, so they can be uploaded directly from the project to the server using SSH.\n\nCreate a new rake task at `lib/capistrano/tasks/config_files.rake` and add the following content:\n\n```ruby\nnamespace :config_files do\n  desc 'Upload yml files inside config folder'\n  task :upload do\n    on roles(:app) do\n      execute \"mkdir -p #{shared_path}/config\"\n\n      upload! StringIO.new(File.read('config/database.yml')), \"#{shared_path}/config/database.yml\"\n      upload! StringIO.new(File.read('config/application.yml')), \"#{shared_path}/config/application.yml\"\n      upload! 
StringIO.new(File.read('config/master.key')), \"#{shared_path}/config/master.key\"\n    end\n  end\nend\n```\n\nWe now need to tell Capistrano to run this code during deployment; add the following to the end of `config/deploy.rb`:\n\n```ruby\n# ================================================\n# ============ From Custom Rake Tasks ============\n# ================================================\n# ===== See Inside: lib/capistrano/tasks =========\n# ================================================\n\n# upload configuration files\nbefore 'deploy:starting', 'config_files:upload'\n```\n\n## Step 7: Create a database if deploying for the first time\n\nCreate a new rake task at `lib/capistrano/tasks/database.rake` and add the following content:\n\n```ruby\nnamespace :database do\n  desc 'Create the database'\n  task :create do\n    on roles(:app) do\n      within release_path do\n        with rails_env: fetch(:rails_env) do\n          execute :rake, 'db:create'\n        end\n      end\n    end\n  end\nend\n```\n\nWe now need to tell Capistrano to run this code during deployment; add the following to the end of the deploy file.\n\n```ruby\n# set this to false after deploying for the first time \nset :initial, true\n\n# run only if app is being deployed for the very first time, should update \"set :initial, true\" above to run this\nbefore 'deploy:migrate', 'database:create' if fetch(:initial)\n```\n\n## Step 8: Reload the Rails application after successful deploy\n\nCreate a new rake task at `lib/capistrano/tasks/application.rake` and add the following content:\n\n```ruby\nnamespace :application do\n  desc 'Reload application'\n  task :reload do\n    on roles(:app), in: :sequence, wait: 5 do\n      execute :touch, release_path.join('tmp/restart.txt')\n    end\n  end\nend\n```\n\nWe now need to tell Capistrano to run this code during deployment; add the following to the end of the deploy file.\n\n```ruby\n# reload application after 
successful deploy\nafter 'deploy:publishing', 'application:reload'\n```\n\n## Final file\n\nYour final \"config/deploy.rb\" file should look similar to this:\n\n```ruby\n# frozen_string_literal: true\n\n# config valid for current version and patch releases of Capistrano\nlock '~> 3.17.0'\n\nset :application, '{project name}'\nset :repo_url, '{remote git repository where project for the code is stored}'\n\n# Default value for :linked_files is []\nappend :linked_files, %w[config/application.yml config/database.yml config/master.key]\n\n# Default value for linked_dirs is []\nappend :linked_dirs, %w[log tmp/pids tmp/cache tmp/sockets vendor/bundle .bundle public/system public/uploads node_modules]\n\n# Default value for keep_releases is 5\nset :keep_releases, 3\n\n# Skip migration if files in db/migrate were not modified\n# Defaults to false\nset :conditionally_migrate, true\n\n# ================================================\n# ============ From Custom Rake Tasks ============\n# ================================================\n# ===== See Inside: lib/capistrano/tasks =========\n# ================================================\n\n# upload configuration files\nbefore 'deploy:starting', 'config_files:upload'\n\n# set this to false after deploying for the first time\nset :initial, true\n\n# run only if app is being deployed for the very first time, should update \"set :initial, true\" above to run this\nbefore 'deploy:migrate', 'database:create' if fetch(:initial)\n\n# reload application after successful deploy\nafter 'deploy:publishing', 'application:reload'\n```\n\n## Deploy the app\n\nFrom the command line you can now deploy the app to production using the following command:\n\n```shell\ncap production deploy\n```\n\nThe app should be deployed in around 5-10 minutes. If you encounter any error while deploying and need any help, please post a comment below and I will do my best to help you resolve it.\n\n## Bonus 1: Whenever for cron jobs\n\nThe whenever gem is used in Rails applications to schedule cron jobs e.g. 
send an email notification about monthly expenditure on the 1st of each month.\n\nAdd the following to the deploy file just below \"set :conditionally_migrate, true\":\n\n```ruby\n# Skip migration if files in db/migrate were not modified\n# Defaults to false\nset :conditionally_migrate, true\n\n# Set unique identifier for cron jobs\nset :whenever_identifier, -> { \"#{fetch(:application)}_#{fetch(:stage)}\" }\n```\n\nwhenever_identifier should be set to a unique identifier for cron jobs. This is required if you have deployed multiple applications that use the whenever gem to the same server (normal for staging servers), so that those applications don't clash with one another's cron jobs. It is optional if you only have one application in the server.\n\nCreate a new rake task at `lib/capistrano/tasks/whenever.rake` and add the following content, which will be responsible for updating the cron tasks configured inside \"config/schedule.rb\":\n\n```ruby\nnamespace :whenever do\n  desc 'Update cron job'\n  task :update_crontab do\n    on roles(:app) do\n      within current_path do\n        execute :bundle, :exec, \"whenever --update-crontab #{fetch :whenever_identifier} --set 'environment=#{fetch(:stage)}'\"\n      end\n    end\n  end\nend\n```\n\nWe now need to tell Capistrano to run this code during deployment; add the following to the deploy file just below \"before 'deploy:migrate', 'database:create' if fetch(:initial)\":\n\n```ruby\n# run only if app is being deployed for the very first time, should update \"set :initial, true\" above to run this\nbefore 'deploy:migrate', 'database:create' if fetch(:initial)\n\n# update cron job from whenever schedule file at \"config/schedule.rb\"\nafter 'deploy:finishing', 'whenever:update_crontab'\n```\n\n## Bonus 2: Sidekiq for background jobs\n\nThe Sidekiq gem is used in Rails applications to schedule background jobs so they can be performed at a later point without blocking the execution of other code.\n\n### Capistrano 
Configurations\n\nCreate a new rake task at `lib/capistrano/tasks/sidekiq.rake` and add the following content, which will be responsible for quieting and restarting Sidekiq during deployment:\n\n```ruby\nnamespace :sidekiq do\n  desc 'Quieten sidekiq'\n  task :quiet do\n    on roles(:app) do\n      puts capture(\"pgrep -f 'sidekiq' | xargs kill -TSTP\")\n    end\n  end\n\n  desc 'Restart Sidekiq'\n  task :restart do\n    on roles(:app) do\n      execute :sudo, :systemctl, :restart, :sidekiq\n      execute :sudo, :systemctl, 'daemon-reload'\n    end\n  end\nend\n```\n\nWe now need to tell Capistrano to run this code during deployment; add the following to the deploy file just below \"after 'deploy:publishing', 'application:reload'\":\n\n```ruby\n# reload application after successful deploy\nafter 'deploy:publishing', 'application:reload'\n\n# sidekiq related commands\nafter 'deploy:starting', 'sidekiq:quiet'\nafter 'deploy:reverted', 'sidekiq:restart'\nafter 'deploy:published', 'sidekiq:restart'\n```\n\nNow try to deploy the app to production and the server will ask for a password when running the Sidekiq commands. To fix that, we need to add some more configurations on the remote server.\n\n### Server Configurations\n\nWe are assuming that Sidekiq is already configured on your remote server. If you have not configured it yet, you can refer to the section \"Bonus: Setup Sidekiq in Ubuntu Server\" at Setup Active Job with Sidekiq in Rails.\n\nFor Capistrano to perform sudo actions without being asked for the password, the user used by Capistrano (normally the \"deploy\" user) should be in the sudo group, and the commands that need to be executed on the server with sudo access but without a password should be added to the \"/etc/sudoers\" file.\n\n1. 
Add deploy user to the sudo group\n    \n    You can add your deploy user to the sudo group with the following command:\n\n    ```shell\n    # add \"deploy\" user to sudo group\n    $ sudo usermod -aG sudo deploy\n\n    # verify if the user has been added to the sudo group\n    # result should include \"sudo\" for the deploy user\n    $ groups deploy\n    deploy : deploy sudo\n    ```\n\n2. Add the commands required for the Sidekiq restart and daemon-reload to be performed without a password\n\n    - Open the sudoers file for editing with `sudo EDITOR=nano visudo`; this will ensure that the content inside the file is validated before saving so you don't end up with an invalid file. Editing directly with `sudo nano /etc/sudoers` doesn't validate the content, so you should never do that.\n    - Add the following below the line `root    ALL=(ALL:ALL) ALL` under \"# User privilege specification\"\n    \n    ```text\n      deploy ALL=NOPASSWD: /bin/systemctl restart sidekiq\n      deploy ALL=NOPASSWD: /bin/systemctl daemon-reload\n    ```\n\n3. You can try running the above two commands now in the command line of the server and they should run without asking for a password:\n\n    ```shell\n    $ sudo systemctl restart sidekiq\n    # doesn't ask for the password and executes the command\n    ``` \n\nNow if you try to deploy again, Capistrano won't stop to ask for a password when running the Sidekiq commands.\n\n## Conclusion\n\nWe have come to the finish line; now you should be able to deploy your API-only Rails application to the server with one command.\n\nThanks for reading. Happy tinkering and happy coding!\n\n## Image Credits\n\n- Cover Image by NASA on Unsplash"
        },
        {
          "id": "articles-action-mailbox-with-sendgrid",
          "title": "Setup Action Mailbox with SendGrid",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, tutorial, web development",
          "url": "/articles/action-mailbox-with-sendgrid/",
          "content": "Rails 6 released with many awesome features and action mailbox was one of them that has come to make the life easier. From Official Action Mailbox Guide:\n\n> Action Mailbox routes incoming emails to controller-like mailboxes for processing in Rails. It ships with ingresses for Mailgun, Mandrill, Postmark, and SendGrid. You can also handle inbound mails directly via the built-in Exim, Postfix, and Qmail ingresses.\n\nBasically, action mailbox can be used to forward all incoming emails to your Rails app and process it further as required like storing attachments, creating records from the email body in you database and many more.\n\nAnd today, we will be implementing Action Mailbox with SendGrid.\n\n## Requirements\n\n- Setup Action Mailbox with SendGrid using the official Rails documentation\n- Update DNS records to forward emails received in the mailbox towards our Rails app\n- Test integration in development with built in UI provided by Rails\n- Test integration in development with NGROK to ensure seamless production release\n\n## Tested and working in\n\n- Ruby 3.0.0\n- Rails 7.0.2.4\n- Action Mailbox 7.0.2.4\n\n## You should have\n\n- Existing app built with Rails 7 or higher\n\nLet's start integrating Action Mailbox with SendGrid in our Rails app now.\n\n## Step 1: Setup action mailbox\n\nWe will be following instructions from the Official Rails Guide for Action Mailbox.\n\n- Install migrations needed for InboundEmail and ensure Active Storage is set up:\n\n```shell\n$ rails action_mailbox:install\n$ rails db:migrate\n```\n\n## Step 2: Ingress Configuration\n\nTell Action Mailbox to accept emails from SendGrid by adding the following to both \"development.rb\" and \"production.rb\"\n\n```ruby\n# config/environments/development.rb & config/environments/production.rb\nconfig.action_mailbox.ingress = :sendgrid\n```\n\n## Step 3: Generate Password for authenticating requests\n\nFirst of all, we should generate a strong password that Action 
Mailbox can use to authenticate requests to the SendGrid ingress.\n\nYou can add any strong password or let Rails generate one for you. You can log into the Rails console and generate a password there:\n\n```cmd\n> rails c\nirb > SecureRandom.alphanumeric\n# => \"Kk9YGvzdPN69bfiu\"\n```\n\nAfter that you can use `rails credentials:edit` in the command line to add the password to your application's encrypted credentials under `action_mailbox.ingress_password`, where Action Mailbox will automatically find it:\n\n```yml\naction_mailbox:\n  ingress_password: YOUR_STRONG_PASSWORD\n```\n\nIf you are using the **nano** editor you can edit credentials with the following command:\n\n```shell\n$ EDITOR=\"nano\" rails credentials:edit\n```\n\nAlternatively, you can also provide the password in the `RAILS_INBOUND_EMAIL_PASSWORD` environment variable.\n\nIf you are using the `figaro` gem you can add the following to your \"config/application.yml\":\n\n```yml\n# config/application.yml\n\nRAILS_INBOUND_EMAIL_PASSWORD: 'YOUR_STRONG_PASSWORD'\n```\n\n## Step 4: Setup a mailbox\n\nNow we should set up a mailbox that will process all incoming emails through our Rails app.\n\nYou can generate a new mailbox with:\n\n```shell\n$ bin/rails generate mailbox forwards\n```\n\nThis will create `forwards_mailbox` inside `app/mailboxes`:\n\n```ruby\n# app/mailboxes/forwards_mailbox.rb\nclass ForwardsMailbox < ApplicationMailbox\n  def process\n  end\nend\n```\n\n## Step 5: Setup routing\n\nNow we should tell the application mailbox how to route incoming emails to our `forwards_mailbox`. There are multiple options:\n\n- Accept all incoming emails\n\n  ```ruby\n  # app/mailboxes/application_mailbox.rb\n  class ApplicationMailbox < ActionMailbox::Base\n    routing all: :forwards\n  end\n  ```\n\n- Accept all emails from single domain\n\n  ```ruby\n  # app/mailboxes/application_mailbox.rb\n  class ApplicationMailbox < ActionMailbox::Base\n    routing /.*@email-domain.com/i => :forwards\n  end\n  ```\n\n- Accept email from multiple domains\n\n  ```ruby\n  # app/mailboxes/application_mailbox.rb\n  class ApplicationMailbox < ActionMailbox::Base\n    routing /.*@(email-domain1.com|email-domain2.com)/i => :forwards\n  end\n  ```\n\nThis regex matching is telling the application mailbox to forward all emails coming from `@email-domain.com` to our `forwards_mailbox`. For e.g. 
if we configure it to be `/.*@gmail.com/i` and our Rails app receives an email addressed to `john-doe@gmail.com` then it will be forwarded to our `forwards_mailbox` where we can further process it since this email matches the pattern `@gmail.com`.\n\n[[notice | Note]]\n|Your mailbox name should match the name you've given it in the routing params i.e. `forwards` will route to `forwards_mailbox`.\n\n## Step 6: Test in development\n\nAction Mailbox provides its own set of UIs to test inbound emails in the development environment. To access this, let's fire up the Rails server first:\n\n```shell\n$ rails s\n```\n\nVisit Action Mailbox Inbound Emails Localhost URL and click on `New inbound email by form`. Fill in all required details like From, To, Subject and Body. You can leave other fields blank.\n\nBefore clicking on `Deliver inbound email`, let's add `byebug` (or any other debugging breakpoint e.g. binding.pry) to our `process` method so we know Action Mailbox is actually forwarding our emails to the right place.\n\n```ruby\n# app/mailboxes/forwards_mailbox.rb\nclass ForwardsMailbox < ApplicationMailbox\n  def process\n    byebug\n  end\nend\n```\n\nNow, click on `Deliver inbound email`. Instead of stopping at the breakpoint, the app throws an error, because the form is submitting the attachments param as `[\"\"]` and the controller is trying to process it further with the following code:\n\n```ruby\nprivate\n  def new_mail\n    Mail.new(mail_params.except(:attachments).to_h).tap do |mail|\n      mail[:bcc]&.include_in_headers = true\n      mail_params[:attachments].to_a.each do |attachment|\n        mail.add_file(filename: attachment.original_filename, content: attachment.read)\n      end\n    end\n  end\n```\n\nHere, we are getting an error in the line `mail.add_file(filename: attachment.original_filename, content: attachment.read)` because \"attachment\" is an empty string, i.e. \"\", and not an object which has properties like \"original_filename\". 
Hence the error.\n\nAfter looking into the controller, my next stop for debugging the error was to look into the view because it shouldn't have sent the empty attachment in the first place.\n\nThe view was just using a normal file field tag:\n\n```html.erb\n<div>\n  <%= form.label :attachments %>\n  <%= form.file_field :attachments, multiple: true %>\n</div>\n```\n\nThere couldn't be any issue here, so I looked into the rendered HTML in the webpage and found out that there was a hidden tag for the attachment:\n\n```html\n<input type=\"hidden\" name=\"mail[attachments][]\" autocomplete=\"off\">\n```\n\nHence, the form is submitting an empty attachment to the controller.\n\nThis problem could be solved in the controller by filtering out attachments that are empty, and I was close to submitting a PR to Rails. But then I thought, if I am getting this issue, there are obviously other developers who have run into this, since this is an issue in Rails core and not in the code I have written.\n\nSearching further, I found this issue titled Action Mailbox Conductor throws NoMethodError when creating inbound email submitted to the Rails Core Github.\n\nAnd if there is an issue, there must also be a PR. YES, there was one already titled Cannot deliver new inbound email via form but it hadn't been merged yet. \n\nBut for this tutorial, and until the PR is merged, we need this to work in our app. 
So, I looked into how best to resolve it, searching for a solution that would work for all of us and not just me.\n\nScrolling further into the issue, I found a monkey patch very suitable for our use case.\n\n### Monkey Patching the issue \n\nAdd the following to your `config/application.rb`:\n\n```ruby\n# monkey patching to resolve the issue of action mailbox inbound email sending empty attachment\nconfig.to_prepare do\n  Rails::Conductor::ActionMailbox::InboundEmailsController.class_eval do\n    private\n\n    def new_mail\n      Mail.new(mail_params.except(:attachments).to_h).tap do |mail|\n        mail[:bcc]&.include_in_headers = true\n        mail_params[:attachments].to_a.compact_blank.each do |attachment|\n          mail.add_file(filename: attachment.original_filename, content: attachment.read)\n        end\n      end\n    end\n  end\nend\n```\n\nDon't forget to restart the server and reload the page. After that you can submit the form again.\n\nVoilà!! It works 🥳. \n\nNow, your server should be stuck at the debugging breakpoint.\n\nThat's it, we have now successfully set up Action Mailbox and tested it in development.\n\nNow let's test using NGROK so we know that our configuration will work seamlessly (pretty much) in our production environment.\n\n## Step 7: Setup NGROK\n\nLet's set up NGROK on our local machine:\n\n1. Download the application\n\n    You can download the application from this download link.\n\n    If you are on MacOS, I highly suggest downloading NGROK using Homebrew with the command `brew install ngrok/ngrok/ngrok`. It's easier than a manual download and doesn't normally cause any issues.\n\n2. Serve your app using NGROK URL\n\n    While keeping the Rails server running as it is, open a new tab in your command line.\n  \n    You can then run the command `ngrok http 3000`, which will give you a URL connecting your local Rails app running on port 3000 to the internet. 
You should look at the URL beside the \"Forwarding\" option; it will be something similar to `Forwarding https://e73a-27-34-12-7.in.ngrok.io -> http://localhost:3000`\n\n    When running NGROK, you should see a screen similar to the screenshot below:\n\n    ![Screenshot of active NGROK session in command line](../../images/articles/action-mailbox-with-sendgrid/ngrok-running-screen.webp)\n\n3. Access the Rails app with NGROK URL\n  \n    Open the URL you got before from NGROK e.g. `https://e73a-27-34-12-7.in.ngrok.io` in your browser and you should be able to see the Rails welcome screen or whatever the default page for your app is.\n\n    But, but, there is an error again 😭\n  \n    Hah, don't worry. I have got you covered.\n\n    You should be seeing an error UI similar to what is in the screenshot below:\n\n    ![Ngrok error page when trying to access rails app due to missing auth token](../../images/articles/action-mailbox-with-sendgrid/ngrok-error-due-to-missing-auth-token.webp)\n\n    This happens because of a missing \"auth token\", which you can get after signing up to NGROK for free.\n\n4. Sign up to NGROK\n  \n    You can sign up to NGROK using this signup link.\n\n5. Add NGROK auth-token to local configuration file\n  \n    After signing up, you are presented with a dashboard and you can copy the auth-token from setup-and-installation step number 2, called \"Connect your account\"\n\n    Or you can follow this link to your auth token page.\n\n    Copy the token given and run the following in your command line (replacing `YOUR_AUTHTOKEN` with your own token):\n\n    ```cmd\n    $ ngrok config add-authtoken YOUR_AUTHTOKEN\n    ```\n\n    Now restart your NGROK server and go to the new URL provided. \n    \n    [[notice | Note]]\n    |URL changes each time you restart the NGROK server unless you use the pro version and pay for a static URL.\n\n    What, Error? Again!!! 🤕\n\n6. 
Resolving blocked host in Rails app\n  \n    After accessing the NGROK URL, you should see an error page similar to the one below:\n\n    ![Blocked host error in Rails](../../images/articles/action-mailbox-with-sendgrid/blocked-host-error-in-rails.webp)\n\n    This happens because, in development, Rails blocks requests from any host that is not explicitly allowed.\n\n    Let's add the NGROK URL to \"config/environments/development.rb\"\n\n    ```ruby\n    # config/environments/development.rb\n\n    # allow NGROK subdomains (a leading dot permits all subdomains)\n    config.hosts << \".ngrok.io\"\n    ```\n\n    Restart the Rails server and reload the page; the app should now load through the NGROK URL.\n\n## Step 8: Authenticate your Domain in SendGrid\n\nYou can follow the official SendGrid tutorial to authenticate your domain in SendGrid.\n\nPart of the tutorial also goes through setting up MX records, which we will go into in detail here.\n\nYou can authenticate your domain by following this link to the Domain Authentication page \n\nYou will receive a list of CNAMEs and values, similar to what is listed below, in the process where `prabinpoudel.com.np` and `em1181` will be different:\n\n1. em1181.prabinpoudel.com.np\n2. s1._domainkey.prabinpoudel.com.np\n3. s2._domainkey.prabinpoudel.com.np\n\nWe will come back to this page after the next step, so don't close it yet.\n\nLet's go to our DNS provider's dashboard and configure these records first.\n\n### Setup DNS Records\n\nWe need to add the DNS records from SendGrid to our DNS provider so our email is actually processed by SendGrid and routed to our Rails app with the Inbound Parse Hook.\n\nI use CloudFlare, so I will be showing you the process to set up the MX record using the settings from CloudFlare as an example.\n\n1. Go to DNS tab from the left menu\n    ![Side Menu with menu item \"DNS\" active in Cloudflare](../../images/articles/action-mailbox-with-sendgrid/side-menu-in-cloudflare.webp)\n\n2. 
Click on \"Add Record\" and choose MX from the dropdown then add the following values to each field\n\n    - Name: \"@\"\n    - Mail Server: \"mx.sendgrid.net\"\n    - TTL: \"auto\"\n    - Priority: \"10\"\n\n    ![Adding MX Record from SendGrid to Cloudflare](../../images/articles/action-mailbox-with-sendgrid/adding-mx-records-from-sendgrid-to-cloudflare.webp)\n\n    You can also find the instruction for adding MX record in the tutorial to setup Inbound Parse Hook from SendGrid.\n\n3. Click on \"Add Record\" again and add all three CNAME records we got previously while authenticating the domain one by one\n\n    Copy values from authenticating the domain page add them to CloudFlare:\n    \n    - Type: CNAME\n    - Name: value from CNAME\n    - Target: value from VALUE\n    - Proxy Status: Turn the toggle button off (it will be on by default)\n    - TTL: Auto\n\n    ![Adding CNAME Records from SendGrid to Cloudflare](../../images/articles/action-mailbox-with-sendgrid/adding-cname-records-from-sendgrid-to-cloudflare.webp)\n\n4. Go back to domain authentication page and click on \"I've added these records\" and click on \"Verify\" button\n  \n    If everything was copied over correctly, you will see a page with the information \"It worked! Your authenticated domain for prabinpoudel.com.np was verified.\"\n\n    Else you will get errors and you will have to fix those before moving forward.\n\n## Step 9: Configure Inbound Parse in SendGrid\n\nWe will be following the official SendGrid doc for configuring inbound parse hook to forward inbound emails to `/rails/action_mailbox/sendgrid/inbound_emails` with the username \"actionmailbox\" and the password we generated just before this.\n\n1. From your SendGrid Dashboard click Settings, and then click Inbound Parse. You are now on the Inbound Parse page. Or you can click on this Inbound Parse Link to go there directly.\n2. Click \"Add Host & URL\"\n3. You can add/leave the subdomain part as required. 
I have left it blank because I don't have any subdomain just for receiving emails\n4. Under \"Domain\", choose the domain name that you just verified\n\n5. Under \"URL\", we will have to construct one and add it\n  \n    The format for the URL should be: `https://actionmailbox:password@rails_app_nginx_url/rails/action_mailbox/sendgrid/inbound_emails`, where `password` is the password we generated earlier.\n\n    For example, it will be `https://actionmailbox:my_strong_password@5829-2400-1a00-b050-3fb6-b0ce-5946-b9be.in.ngrok.io/rails/action_mailbox/sendgrid/inbound_emails` for my Rails app.\n\n    In production it can be a different URL, so you should replace `rails_app_nginx_url` with the URL from where your Rails application is accessible to the internet.\n\n    ![Configure Inbound Parse Hook Page in SendGrid](../../images/articles/action-mailbox-with-sendgrid/configure-inbound-parse-hook-in-sendgrid.webp)\n  \n6. Check \"POST the raw, full MIME message\" and click on Add\n\nNow we are ready to test our integration with live email using SendGrid and NGROK.\n\n## Step 10: Test if MX records are recognized by the internet\n\nBefore we test our integration with live email, we need to make sure that the MX records are recognized by the internet.\n\nIt may take some time for DNS records to propagate throughout the world, so email forwarding may not work for you yet. This can take up to 24 hours.\n\nYou can test whether the DNS records for your domain are working correctly and recognized from the website MX Toolbox\n\n1. Add your domain name e.g. prabinpoudel.com.np\n2. Click on \"MX Lookup\"\n\n    You should see the \"DNS Record Published\" status in the test result table\n\n    ![Test result for MX records in the website of MX Toolbox](../../images/articles/action-mailbox-with-sendgrid/test-mx-records-using-mx-toolbox-website.webp)\n\n\n## Step 11: Test incoming email with SendGrid and NGROK\n\nFinally, we are now at the last step. 
We will now send an email to our mail server; we should receive it in our local Rails app, and the server should stop at our debugging breakpoint.\n\nFrom your favorite email provider e.g. Gmail, send a test email to your domain e.g. for me, I will test it via sendgrid-test@prabinpoudel.com.np. \n\nIt takes some time for SendGrid to process the email and for our Rails app to receive it, at most around a minute.\n\nYou can check whether the email is being received by SendGrid or not from the Parse Webhook Statistics\n\n![Statistics of incoming emails received by SendGrid from the internet and forwarded to Rails App URL as per the configuration](../../images/articles/action-mailbox-with-sendgrid/sendgrid-inbound-parse-webhook-statistics.webp)\n\nTada!! 🎉\n\nYou should have received the email, and the Rails server should have stopped at the debugging breakpoint.\n\n## Conclusion\n\nCongratulations!!! You have come a long way and gone through a lot of steps to integrate Action Mailbox with SendGrid.\n\nYou can find a working app for this blog at Action Mailbox with SendGrid. You can view all the changes I made for configuring SendGrid with Action Mailbox in the PR: Setting up Action Mailbox with SendGrid\n\nNext, you can deploy the app to staging or production and add a new Inbound Parse URL in SendGrid to point to the URL of those applications.\n\nIf you have any confusion, suggestions or issues while implementing any of the steps in this article, please let me know in the comment section below and I will do my best to help you.\n\nThanks for reading. 
Happy coding and tinkering!\n\n## Similar Articles\n\nIf you are interested in seeing how this same process can be accomplished with other ingress options, you can check out the articles below:\n\n- Action Mailbox with Postfix\n- Deploy Action Mailbox To Postmark [External Link] from Cody Norman\n\n**References:** \n\n- Action Mailbox (Official Documentation)\n- Using Action Mailbox in Rails 6 to Receive Mail \n- Action Mailbox Conductor throws NoMethodError when creating inbound email\n- Cannot deliver new inbound email via form #44008\n- Setting Up The Inbound Parse Webhook\n- How to set up domain authentication for Twilio SendGrid\n\n**Image Credits:** Cover Image by erica steeves from Unsplash"
        },
        {
          "id": "articles-run-eslint-on-git-commit-with-husky-and-lint-staged",
          "title": "Run ESLint on git commit with Husky and Lint Staged",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "reactjs, git",
          "url": "/articles/run-eslint-on-git-commit-with-husky-and-lint-staged/",
"content": "How do you make sure that ESLint rules configured in your project are followed by all your team members and that code with issues is not pushed to the remote Git repository?\n\nThe answer to that question is: using the Husky package with Git hooks. Git hooks are one of the most popular ways to trigger and enforce different side effects like ESLint rules. Husky depends on Git hooks to trigger ESLint rules and make sure that all issues are resolved before you or anyone on your team can commit and push new changes to Git.\n\n## Assumptions\n\n- You have basic knowledge of ReactJS \n- You have worked with ESLint previously and have the required configuration file for ESLint in your project\n\n## What are Git Hooks?\n\nGit hooks are a set of scripts that Git executes before or after events such as: commit, push, and receive. Git hooks are a built-in feature - you don't need to download anything for them to work.\n\nWhen you initialize git in your project with `git init`, git hooks are also automatically added. You can find sample files for each event under the folder `your_project_path/.git/hooks`.\n\nTo view the list of sample files for various types of hooks, you can hit the following command:\n\n```cmd\n  $ ls your_project_path/.git/hooks\n```\n\nAt Truemark, we normally use it to enforce coding standards and code quality by running ESLint before \"git commit\".\n\n## What is Husky?\n\nOne important thing to note about Git hooks is that they are not version controlled, meaning whatever you add to the hooks folder exists only on your machine and is not committed to Git. \n\nSo, what happens when a new member of your team clones the repository? \nNothing, they will get sample files like I mentioned above.\n\nWhat?\n\nThen \"How do we as a team make sure that hooks are executed for everyone?\"\n\nThe answer to that is the **husky** package.\n\nThe Husky package helps you and your team manage and configure Git hooks in your projects. 
\n\nWith \"husky\" installed in your project; after you clone the repo, you just have to hit the command `npm run prepare` and all hooks are configured by husky in your machine.\n\nHusky makes git hooks much more manageable because you don't have to write scripts for hooks manually. You can just add the command you want to run e.g. run ESLint before commit inside the configuration file provided by Husky and everything else will be taken care by the package.\n\n## Install Husky\n\nExecute the following in the command line:\n\n```cmd\n  npm install husky -D\n```\n\nThis will add the husky package to your package.json under \"devDependencies\":\n\n```json\n  \"devDependencies\": {\n    // other dependencies here,\n    \"husky\": \"^7.0.4\"\n  }\n```\n\n## Enable Git Hooks in your Project with Husky\n\nYou can enable git hooks in your project by running the command provided by husky package. In your project root path run following commands:\n\n```cmd\n  npm set-script prepare \"husky install\"\n  npm run prepare\n```\n\nAfter running above commands, you should see the following inside package.json:\n\n```json\n  \"scripts\": {\n    // other scripts here,\n    \"prepare\": \"husky install\"\n  }\n```\n\nThis will also add required hooks in your project inside the folder `.git/hooks/`.\n\nIt will also add configuration files for Husky under the folder `.husky` inside your project root. 
This file is used to control all git hooks configured in your project, and this is also where you will be adding configurations for running ESLint before commit.\n\n## Enable ESLint as Pre-Commit Hook with Husky\n\nUpdate scripts under package.json and add the script to run ESLint:\n\n```json\n  \"scripts\": {\n      // other scripts here,\n      \"lint\": \"eslint .\"\n    }\n```\n\nAdd a pre-commit hook to run ESLint with Husky by running the following command:\n\n```cmd\n  npx husky add .husky/pre-commit \"npm run lint\"\n```\n\nYou should see the following code inside the `.husky/pre-commit` file now:\n\n```sh\n  #!/bin/sh\n  . \"$(dirname \"$0\")/_/husky.sh\"\n\n  npm run lint\n```\n\n## Run ESLint on git commit\n\nAfter you are done making changes to your code, try committing your code:\n\n```cmd\n  git add .\n  git commit -m \"your commit message\"\n```\n\nGit hooks will run ESLint before the commit and throw errors if there are any. If it didn't throw any errors, add new code with issues manually and see the hook in action 🙈\n\nThis is something similar to what you will see in case there are issues in your code:\n\n![Husky posts ESLint errors for code with issues](../../images/articles/run-eslint-on-git-commit-with-husky-and-lint-staged/husky-throwing-eslint-errors.webp)\n\nIf there are no errors, then your code will be committed to git and you can push to the remote repository.\n\n## What is lint-staged?\n\nWith Husky alone, ESLint is run on each and every file inside the project, and if you ask me whether that is a good idea, I will tell you that it's a very bad idea.\n \nWhy? Because running ESLint on code that was not changed as part of the feature can lead to various unforeseen bugs.\n \nFor big projects, it can take a lot of time to run ESLint on each and every file inside the project. 
Also, in old projects, it doesn't make sense to sit and fix all best-practice issues instead of shipping new features.\n\nSo, how do we run ESLint only on the code that we changed?\n\nThe answer is lint-staged. It is a package that helps in running pre-commit hooks only on files that have been changed in the current commit.\n\n## Install lint-staged\n\nRun the following command to install lint-staged in the project:\n\n```cmd\n  npm install lint-staged --save-dev\n```\n\nYou should see the following in your package.json:\n\n```json\n  \"devDependencies\": {\n    // other dependencies here,\n    \"lint-staged\": \"^12.3.8\",\n  }\n```\n\n## Run ESLint on \"git commit\" with Husky and lint-staged\n\nYou can configure lint-staged in a separate file or inside package.json itself; since there is only one command, I felt it was not worth having a separate file for the configuration.\n\nYou can view all supported options here.\n\nAdd the following to package.json just below scripts:\n\n```json\n  \"scripts\": {\n    \"build\": \"react-scripts build\",\n    \"eject\": \"react-scripts eject\",\n    // other scripts here,\n  },\n  \"lint-staged\": {\n    \"*.{js,jsx}\": \"eslint --fix\"\n  }\n```\n\nWe have added {js,jsx} so that only staged files with these extensions are run through lint. You can update this to support other extensions like ts, tsx for TypeScript.\n\nUpdate the pre-commit file to run lint-staged and remove other commands; your file should look like this:\n\n```sh\n  #!/bin/sh\n  . 
\"$(dirname \"$0\")/_/husky.sh\"\n\n  npx lint-staged\n```\n\nThis will run the lint-staged script, which will show ESLint issues only on staged files.\n\nTo test, you can now manually add new code with issues and see issues thrown only on changed files instead of on all files inside the project, as happened previously before configuring lint-staged.\n\n## Conclusion\n\nWith the Husky package configured in your project, you will never have to worry about having to comment on issues in merge requests which could already have been detected by ESLint on developers' local machines. This way, you and your team can focus on having meaningful discussions in merge requests, which leads to the overall growth of the project and the members of your team.\n\nThanks for reading. Happy coding!\n\n## Image Credits\n\n- Cover Image by WOLF Λ R T on Unsplash\n  \n\n## References\n\n- Git Hooks\n- Husky - Official Documentation\n- Lint Staged - Official Documentation"
        },
        {
          "id": "articles-run-rubocop-on-git-commit-with-overcommit-gem",
          "title": "Run RuboCop on git commit with Overcommit Gem",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, rubocop",
          "url": "/articles/run-rubocop-on-git-commit-with-overcommit-gem/",
"content": "How do you make sure that RuboCop rules configured in your project are followed by all your team members and that code with issues is not pushed to the Git repository?\n\nThe answer is using the Overcommit gem with Git hooks. Git hooks are one of the most popular ways to trigger and enforce different side effects like RuboCop rules. The Overcommit gem depends on Git hooks to trigger RuboCop rules and make sure that all issues are resolved before you or anyone on your team can commit and push new changes to Git.\n\n## Assumptions\n\n- You have basic knowledge of Rails \n- You have worked with RuboCop previously and have the required configuration files for RuboCop in your project\n\n_Shameless Plug_:\n\nIf you are new to RuboCop, you can read about it with setup instructions in another article of mine - Beginner's Guide to RuboCop in Rails\n\n## In this blog\n\nYou will be learning the following:\n\n- What are git hooks?\n- Why use git hooks?\n- What is overcommit gem?\n- Install overcommit gem\n- Enable git hooks in your project with overcommit gem\n- Enable RuboCop as pre-commit hook with overcommit gem\n- Update git hooks with overcommit gem\n- Run RuboCop on \"git commit\"\n\n## What are Git Hooks?\n\nGit hooks are a set of scripts that Git executes before or after events such as: commit, push, and receive. Git hooks are a built-in feature - you don't need to download anything for them to work.\n\nWhen you initialize git in your project with `git init`, git hooks are also automatically added. 
You can find sample files for each event under the folder `your_project_path/.git/hooks`.\n\nTo view the list of sample files for various types of hooks, you can hit the following command:\n\n```cmd\n  $ ls your_project_path/.git/hooks\n```\n\n## Why use Git Hooks?\n\nThere are various use cases for git hooks:\n\n- Check the commit message for spelling errors\n- Enforce a pattern for commit messages\n- Enforce project coding standards like RuboCop\n- Email/SMS team members of a new commit\n- Push the code to production\n\nAt Truemark, we normally use it to enforce coding standards and code quality by running RuboCop before \"git commit\".\n\n## What is Overcommit Gem?\n\nOne important thing to note about Git hooks is that they are not version controlled, meaning whatever you add to the hooks folder exists only on your machine. \n\nSo, what happens when a new member of your team clones the repository? \nNothing, they will get sample files like I mentioned above.\n\nWhat?\n\nThen \"How do we as a team make sure that hooks are executed for everyone?\"\n\nThe answer to that is the **overcommit** gem.\n\nThe Overcommit gem helps you and your team manage and configure Git hooks in your projects. \n\nWith the overcommit gem installed in your project, after you clone the repo you just have to hit the command `overcommit --install` and all hooks are configured by overcommit on your machine.\n\nThe Overcommit gem makes git hooks much more manageable because you don't have to write scripts for hooks manually. You can just add the command you want to run e.g. 
run RuboCop before commit, inside the configuration file provided by overcommit gem and everything else will be taken care by the gem.\n\n## Install Overcommit Gem\n\nAdd the following to your Gemfile:\n\n```rb\n  group :development, :test do\n    # run rubocop before commit with overcommit and much more\n    gem 'overcommit', '~> 0.58.0'\n  end\n```\n\n## Enable Git Hooks in your Project with Overcommit Gem\n\nYou can enable git hooks in your project by running the command provided by overcommit gem. From your project root path:\n\n```cmd\n  overcommit --install\n```\n\nThis will add required hooks in your project inside the folder `.git/hooks/`\n\nIt will also add configuration file `.overcommit.yml` inside your project root. This file is used to control all git hooks configured in your project, and this is also where you will be adding configurations for running rubocop before commit.\n\n## Enable RuboCop as Pre-Commit Hook with Overcommit\n\nYou can remove everything inside the configuration file `.overcommit.yml` and add the following inside:\n\n```yml\n# Use this file to configure the Overcommit hooks you wish to use. This will\n# extend the default configuration defined in:\n# https://github.com/sds/overcommit/blob/master/config/default.yml\n#\n# At the topmost level of this YAML file is a key representing type of hook\n# being run (e.g. pre-commit, commit-msg, etc.). 
Within each type you can\n# customize each hook, such as whether to only run it on certain files (via\n# `include`), whether to only display output if it fails (via `quiet`), etc.\n#\n# For a complete list of hooks, see:\n# https://github.com/sds/overcommit/tree/master/lib/overcommit/hook\n#\n# For a complete list of options that you can use to customize hooks, see:\n# https://github.com/sds/overcommit#configuration\n#\n# Uncomment the following lines to make the configuration take effect.\n\nPreCommit:\n  RuboCop:\n    enabled: true\n    on_warn: fail # Treat all warnings as failures\n    problem_on_unmodified_line: ignore # run RuboCop only on modified code\n\n```\n\nWhat's with all these configuration options?\n\n1. \"PreCommit\"\n     \n    Git hooks will run \"rubocop\" to check issues in the code when you try to commit your changes.\n\n2. `on_warn: fail` \n\n    All warnings will be treated as failures, and you will have to resolve those warnings first before your code is committed to the remote git repository.\n\n3. `problem_on_unmodified_line: ignore`\n\n    Tells overcommit to run RuboCop only on code that was changed in this commit.\n    \n    This is especially useful when you are adding overcommit to old projects and you don't want to sit fixing all the issues inside the project. This lets you fix issues that you introduced or issues you want to refactor/fix, while you can fix old issues in your own time (ohh, will that time ever come?).\n\n## Update Git Hooks with Overcommit\n\nNow that you have added configurations for running RuboCop on pre-commit, you will need to tell the overcommit gem to persist these changes to the \"pre-commit\" script inside the git hooks folder.\n\n1. Add changes inside overcommit.yml to git\n\n    ```cmd\n      git add .\n    ```\n\n2. 
Update scripts inside git hooks folder with overcommit gem\n\n     ```cmd\n       overcommit --sign\n     ```\n\n## Run RuboCop on git commit\n\nAfter you are done making changes to your code and done with the part of \"git add\", try committing your code:\n\n```cmd\n  git commit -m \"your commit message\"\n```\n\nGit hooks will run RuboCop before commit and throw errors if any. If it didn't throw any error, add new code with issues manually and see the hook in action 🙈\n\nThis is something similar to what you will see in case there are issues in your code:\n\n![Overcommit posts RuboCop errors for code with issues](../../images/articles/run-rubocop-on-git-commit-with-overcommit-gem/overcommit-throwing-rubocop-errors.webp)\n\nIf there are no errors then your code will be committed to git and you can push to the remote repository.\n\n![All checks passed when running hooks by Overcommit](../../images/articles/run-rubocop-on-git-commit-with-overcommit-gem/overcommit-all-hooks-passed.webp)\n\n## Conclusion\n\nWith overcommit gem configured in your project, you will never have to worry about having to comment on issues in merge requests which could already have been detected by RuboCop in local machine of developers. This way, you and your team can focus on having meaningful discussion in the merge requests which leads to overall growth of the project and members in your team.\n\nIf you use any other gem for managing git hooks, I would like to hear your opinions on what you think about the gem you are using in comparison to overcommit. Personally, I find overcommit gem much easier to use and with a lot of configuration options. (Oh boy! You haven't even tried other gems)\n\nHappy coding!\n\n## Image Credits\n\n- Cover Image by Kelly Sikkema on Unsplash\n\n## References\n\n- Git Hooks\n- Overcommit Gem - Official Documentation"
        },
        {
          "id": "articles-beginners-guide-to-rubocop-in-rails",
          "title": "Beginner's Guide to RuboCop in Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, rubocop",
          "url": "/articles/beginners-guide-to-rubocop-in-rails/",
"content": "RuboCop is a static code analyzer that analyzes code based on the best practices followed by Ruby developers around the world and defined in the community Ruby style guide.\n\nApart from analyzing the code, it also provides the feature of automatically formatting the code and fixing warnings inside our code.\n\nIf you are coming from a Javascript background, you may have heard about ESLint.\n\n> RuboCop is ESLint for Ruby\n\nApart from Ruby, RuboCop also provides gems for implementing rules on various extensions like Rails, Minitest, RSpec, etc.\n\n## Why RuboCop?\n\nThis begs the question: why do we actually need RuboCop? What's the use of having RuboCop in our projects?\n\nHere are some reasons why we would want to use RuboCop in our projects:\n\n1. Clean code\n\n   We all want to write clean code that adheres to best practices followed by developers around the world. Best practices come from experience; it may take years of working with the language to recognize the anti-patterns and good patterns to follow if we rely only on ourselves. \n\n   With RuboCop, we don't need to have that experience ourselves, because best practices have been bundled as rules and shipped to us inside the \"rubocop\" gem. RuboCop throws warnings whenever we violate rules configured for best practices, and after fixing these issues, our code is most of the time clean and easy to understand.\n\n2. Eases the code review process\n   \n   The main purpose of code review is to review the logic in the code, catch security vulnerabilities, or discuss the path we took to develop the feature. \n   \n   But hey, imagine a situation where we push code with a typo and the reviewer spots it, then comments on it for fixing, because obviously no one wants to ship code with a typo to production!\n\n   What's wrong with that? 
It takes significant time to review code, and with typos or discussions about best practices in merge requests, we as developers waste a lot of time on things that could easily have been solved with the help of RuboCop by configuring rules.\n\n   RuboCop makes sure that code with such issues never makes it to the merge/pull requests.\n\n3. Best practice is no one-size-fits-all\n\n   Normally, best practices reflect what we like or dislike about the code, or the patterns we follow when we write code, and they differ for each one of us. If we focus our energy on discussing these practices in the code for every feature, when will we ship features?\n\n   With RuboCop, we can discuss with the team what best practices the team should follow and disable or enable rules based on the conclusion, hence making everyone happy (well, you can never make everyone happy!).\n\n## Setup RuboCop in Rails\n\nIn this article, we will be installing the main 'rubocop' gem for implementing rules in Ruby code, along with the extension 'rubocop-rails' for Rails-specific code.\n\n### Add Gems to Gemfile\n\nAdd the following to the Gemfile inside the group `:development, :test`\n\n```rb\ngroup :development, :test do\n  # enforce rails best practice with rubocop\n  gem 'rubocop', '~> 1.18.0', require: false\n  gem 'rubocop-performance', '~> 1.11.0', require: false\n  gem 'rubocop-rails', '~> 2.11.0', require: false\nend\n```\n\n_NOTE_: Update gem versions based on what is latest at the time you are installing these gems in your project\n\nWe have added the following gems to our Gemfile:\n\n- rubocop: For Ruby code\n- rubocop-performance: For code performance related rules\n- rubocop-rails: For Rails specific rules\n\n### Install Gems in the Project\n\n1. Install rubocop globally\n\n   `gem install rubocop`\n\n   This will help us in running commands provided by the 'rubocop' gem like auto-formatting, running RuboCop in the project, etc.\n\n2. 
Install the new gems with `bundle install`\n\n### Add Configuration Files\n\nTo control (enable/disable) rules, we need to create configuration files for each extension. If there is no file, then RuboCop will apply its defaults. I like to have configuration files because they provide flexibility to the team.\n\nLet's create configuration files for RuboCop and its extensions:\n\n```cmd\n  $ cd /path/to/our/project\n  $ touch .rubocop.yml\n  $ touch .rubocop-performance.yml\n  $ touch .rubocop-rails.yml\n```\n\n### Add Rules to Configuration files\n\n> I also have a blog written specifically for configuration files of RuboCop; you can find it at RuboCop Configuration Files for Rails if you want more options.\n\nLet's update the configuration files and add rules for Ruby and the installed extensions.\n\n### Ruby\n\n```yml\n# .rubocop.yml\n\n# The behavior of RuboCop can be controlled via the .rubocop.yml\n# configuration file. It makes it possible to enable/disable\n# certain cops (checks) and to alter their behavior if they accept\n# any parameters. 
The file can be placed either in your home\n# directory or in some project directory.\n#\n# RuboCop will start looking for the configuration file in the directory\n# where the inspected file is and continue its way up to the root directory.\n#\n\ninherit_from:\n  - '.rubocop-performance.yml'\n  - '.rubocop-rails.yml'\n\nrequire:\n  - rubocop-performance\n  - rubocop-rails\n\nAllCops:\n  TargetRubyVersion: 2.7\n  TargetRailsVersion: 6.0\n  Exclude:\n    - '**/db/migrate/*'\n    - 'db/schema.rb'\n    - '**/Gemfile.lock'\n    - '**/Rakefile'\n    - '**/rails'\n    - '**/vendor/**/*'\n    - '**/spec_helper.rb'\n    - 'node_modules/**/*'\n    - 'bin/*'\n\n###########################################################\n###################### RuboCop ############################\n###########################################################\n\n# You can find all configuration options for rubocop here: https://docs.rubocop.org/rubocop/cops_bundler.html\n\n###########################################################\n####################### Gemspec ###########################\n###########################################################\n\nGemspec/DateAssignment: # (new in 1.10)\n  Enabled: true\n\n###########################################################\n######################## Layout ###########################\n###########################################################\n\nLayout/ClassStructure:\n  ExpectedOrder:\n    - module_inclusion\n    - constants\n    - association\n    - public_attribute_macros\n    - public_delegate\n    - macros\n    - initializer\n    - public_class_methods\n    - public_methods\n    - protected_attribute_macros\n    - protected_methods\n    - private_attribute_macros\n    - private_delegate\n    - private_methods\n\nLayout/EmptyLineAfterMultilineCondition:\n  Enabled: true\n\nLayout/EmptyLinesAroundAttributeAccessor:\n  Enabled: true\n\nLayout/FirstArrayElementIndentation:\n  EnforcedStyle: consistent\n\nLayout/FirstArrayElementLineBreak:\n  
Enabled: true\n\nLayout/FirstHashElementIndentation:\n  EnforcedStyle: consistent\n\nLayout/FirstHashElementLineBreak:\n  Enabled: true\n\nLayout/LineEndStringConcatenationIndentation: # (new in 1.18)\n  Enabled: true\n\nLayout/LineLength:\n  Max: 150\n  Exclude:\n    - '**/spec/**/*'\n\nLayout/MultilineArrayBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/MultilineOperationIndentation:\n  EnforcedStyle: indented\n\nLayout/MultilineHashBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/MultilineHashKeyLineBreaks:\n  Enabled: true\n\nLayout/MultilineMethodCallBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/MultilineMethodDefinitionBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/SpaceAroundMethodCallOperator:\n  Enabled: true\n\nLayout/SpaceBeforeBrackets: # (new in 1.7)\n  Enabled: true\n\nLayout/SpaceInLambdaLiteral:\n  EnforcedStyle: require_space\n\n\n###########################################################\n######################## Lint #############################\n###########################################################\n\nLint/AmbiguousAssignment: # (new in 1.7)\n  Enabled: true\n\nLint/AmbiguousBlockAssociation:\n  Exclude:\n    - '**/spec/**/*'\n\nLint/AssignmentInCondition:\n  AllowSafeAssignment: false\n\nLint/BinaryOperatorWithIdenticalOperands:\n  Enabled: true\n\nLint/DeprecatedConstants: # (new in 1.8)\n  Enabled: true\n\nLint/DeprecatedOpenSSLConstant:\n  Enabled: true\n\nLint/DuplicateBranch: # (new in 1.3)\n  Enabled: true\n\nLint/DuplicateElsifCondition:\n  Enabled: true\n\nLint/DuplicateRegexpCharacterClassElement: # (new in 1.1)\n  Enabled: true\n\nLint/DuplicateRequire:\n  Enabled: true\n\nLint/DuplicateRescueException:\n  Enabled: true\n\nLint/EmptyBlock: # (new in 1.1)\n  Enabled: true\n\nLint/EmptyClass: # (new in 1.3)\n  Enabled: true\n\nLint/EmptyConditionalBody:\n  Enabled: true\n\nLint/EmptyFile:\n  Enabled: true\n\nLint/EmptyInPattern: # (new in 1.16)\n  Enabled: true\n\nLint/FloatComparison:\n  Enabled: 
true\n\nLint/LambdaWithoutLiteralBlock: # (new in 1.8)\n  Enabled: true\n\nLint/MissingSuper:\n  Enabled: true\n\nLint/MixedRegexpCaptureTypes:\n  Enabled: true\n\nLint/NoReturnInBeginEndBlocks: # (new in 1.2)\n  Enabled: true\n\nLint/NumberConversion:\n  Enabled: true\n\nLint/NumberedParameterAssignment: # (new in 1.9)\n  Enabled: true\n\nLint/OrAssignmentToConstant: # (new in 1.9)\n  Enabled: true\n\nLint/RaiseException:\n  Enabled: true\n\nLint/RedundantDirGlobSort: # (new in 1.8)\n  Enabled: true\n\nLint/SelfAssignment:\n  Enabled: true\n\nLint/SymbolConversion: # (new in 1.9)\n  Enabled: true\n\nLint/ToEnumArguments: # (new in 1.1)\n  Enabled: true\n\nLint/TrailingCommaInAttributeDeclaration:\n  Enabled: true\n\nLint/TripleQuotes: # (new in 1.9)\n  Enabled: true\n\nLint/UnexpectedBlockArity: # (new in 1.5)\n  Enabled: true\n\nLint/UnmodifiedReduceAccumulator: # (new in 1.1)\n  Enabled: true\n\nLint/UnusedBlockArgument:\n  IgnoreEmptyBlocks: false\n\nLint/UnusedMethodArgument:\n  IgnoreEmptyMethods: false\n\nLint/UselessMethodDefinition:\n  Enabled: true\n\n###########################################################\n######################## Metric ###########################\n###########################################################\n\nMetrics/AbcSize:\n Max: 45\n\nMetrics/BlockLength:\n  CountComments: false\n  Max: 50\n  Exclude:\n    - '**/spec/**/*'\n    - '**/*.rake'\n    - '**/factories/**/*'\n    - '**/config/routes.rb'\n\nMetrics/ClassLength:\n  CountAsOne: ['array', 'hash']\n  Max: 150\n\nMetrics/CyclomaticComplexity:\n  Max: 10\n\nMetrics/MethodLength:\n  CountAsOne: ['array', 'hash']\n  Max: 30\n\nMetrics/ModuleLength:\n  CountAsOne: ['array', 'hash']\n  Max: 250\n  Exclude:\n    - '**/spec/**/*'\n\nMetrics/PerceivedComplexity:\n  Max: 10\n\n###########################################################\n######################## Naming 
###########################\n###########################################################\n\nNaming/InclusiveLanguage: # (new in 1.18)\n  Enabled: true\n\n###########################################################\n######################## Style ############################\n###########################################################\n\nStyle/AccessorGrouping:\n  Enabled: true\n\nStyle/ArgumentsForwarding: # (new in 1.1)\n  Enabled: true\n\nStyle/ArrayCoercion:\n  Enabled: true\n\nStyle/AutoResourceCleanup:\n  Enabled: true\n\nStyle/BisectedAttrAccessor:\n  Enabled: true\n\nStyle/CaseLikeIf:\n  Enabled: true\n\nStyle/ClassAndModuleChildren:\n  Enabled: false\n\nStyle/CollectionCompact: # (new in 1.2)\n  Enabled: true\n\nStyle/CollectionMethods:\n  Enabled: true\n\nStyle/CombinableLoops:\n  Enabled: true\n\nStyle/CommandLiteral:\n  EnforcedStyle: percent_x\n\nStyle/ConstantVisibility:\n  Enabled: true\n\nStyle/Documentation:\n  Enabled: false\n\nStyle/DocumentDynamicEvalDefinition: # (new in 1.1)\n  Enabled: true\n\nStyle/EndlessMethod: # (new in 1.8)\n  Enabled: true\n\nStyle/ExplicitBlockArgument:\n  Enabled: true\n\nStyle/GlobalStdStream:\n  Enabled: true\n\nStyle/HashConversion: # (new in 1.10)\n  Enabled: true\n\nStyle/HashEachMethods:\n  Enabled: true\n\nStyle/HashExcept: # (new in 1.7)\n  Enabled: true\n\nStyle/HashLikeCase:\n  Enabled: true\n\nStyle/HashTransformKeys:\n  Enabled: true\n\nStyle/HashTransformValues:\n  Enabled: true\n\nStyle/IfWithBooleanLiteralBranches: # (new in 1.9)\n  Enabled: true\n\nStyle/ImplicitRuntimeError:\n  Enabled: true\n\nStyle/InlineComment:\n  Enabled: true\n\nStyle/InPatternThen: # (new in 1.16)\n  Enabled: true\n\nStyle/IpAddresses:\n  Enabled: true\n\nStyle/KeywordParametersOrder:\n  Enabled: true\n\nStyle/MethodCallWithArgsParentheses:\n  Enabled: true\n\nStyle/MissingElse:\n  Enabled: true\n\nStyle/MultilineInPatternThen: # (new in 1.16)\n  Enabled: true\n\nStyle/MultilineMethodSignature:\n  Enabled: 
true\n\nStyle/NegatedIfElseCondition: # (new in 1.2)\n  Enabled: true\n\nStyle/NilLambda: # (new in 1.3)\n  Enabled: true\n\nStyle/OptionalBooleanParameter:\n  Enabled: true\n\nStyle/QuotedSymbols: # (new in 1.16)\n  Enabled: true\n\nStyle/RedundantArgument: # (new in 1.4)\n  Enabled: true\n\nStyle/RedundantAssignment:\n  Enabled: true\n\nStyle/RedundantBegin:\n  Enabled: true\n\nStyle/RedundantFetchBlock:\n  Enabled: true\n\nStyle/RedundantFileExtensionInRequire:\n  Enabled: true\n\nStyle/RedundantSelfAssignment:\n  Enabled: true\n\nStyle/SingleArgumentDig:\n  Enabled: true\n\nStyle/StringChars: # (new in 1.12)\n  Enabled: true\n\nStyle/StringConcatenation:\n  Enabled: true\n\nStyle/SwapValues: # (new in 1.1)\n  Enabled: true\n\n```\n\n### Rails\n\n```yml\n# .rubocop-rails.yml\n\n###########################################################\n#################### RuboCop Rails ########################\n###########################################################\n\n# You can find all configuration options for rubocop-rails here: https://docs.rubocop.org/rubocop-rails/cops_rails.html\n\nRails/ActiveRecordCallbacksOrder:\n  Enabled: true\n\nRails/AddColumnIndex: # (new in 2.11)\n  Enabled: true\n\nRails/AfterCommitOverride:\n  Enabled: true\n\nRails/AttributeDefaultBlockValue: # (new in 2.9)\n  Enabled: true\n\nRails/DefaultScope:\n  Enabled: true\n\nRails/EagerEvaluationLogMessage: # (new in 2.11)\n  Enabled: true\n\nRails/ExpandedDateRange: # (new in 2.11)\n  Enabled: true\n\nRails/FindById:\n  Enabled: true\n\nRails/I18nLocaleAssignment: # (new in 2.11)\n  Enabled: true\n\nRails/Inquiry:\n  Enabled: true\n\nRails/MailerName:\n  Enabled: true\n\nRails/MatchRoute:\n  Enabled: true\n\nRails/NegateInclude:\n  Enabled: true\n\nRails/OrderById:\n  Enabled: true\n\nRails/Pluck:\n  Enabled: true\n\nRails/PluckId:\n  Enabled: true\n\nRails/PluckInWhere:\n  Enabled: true\n\nRails/RenderInline:\n  Enabled: true\n\nRails/RenderPlainText:\n  Enabled: true\n\nRails/SaveBang:\n  
Enabled: true\n  AllowImplicitReturn: false\n\nRails/ShortI18n:\n  Enabled: true\n\nRails/SquishedSQLHeredocs: # (new in 2.8)\n  Enabled: true\n\nRails/TimeZoneAssignment: # (new in 2.10)\n  Enabled: true\n\nRails/UnusedIgnoredColumns: # (new in 2.11)\n  Enabled: true\n\nRails/WhereEquals: # (new in 2.9)\n  Enabled: true\n\nRails/WhereExists:\n  Enabled: true\n\nRails/WhereNot:\n  Enabled: true\n\n```\n\n### Performance\n\n```yml\n.rubocop-performance.yml\n\n###########################################################\n#################### RuboCop Performance ##################\n###########################################################\n\n# You can find all configuration options for rubocop-performance here: https://docs.rubocop.org/rubocop-performance/\n\nPerformance/AncestorsInclude: # (new in 1.7)\n  Enabled: true\n\nPerformance/BigDecimalWithNumericArgument: # (new in 1.7)\n  Enabled: true\n\nPerformance/BlockGivenWithExplicitBlock: # (new in 1.9)\n  Enabled: true\n\nPerformance/CollectionLiteralInLoop: # (new in 1.8)\n  Enabled: true\n\nPerformance/ConstantRegexp: # (new in 1.9)\n  Enabled: true\n\nPerformance/MapCompact: # (new in 1.11)\n  Enabled: true\n\nPerformance/MethodObjectAsBlock: # (new in 1.9)\n  Enabled: true\n\nPerformance/RedundantEqualityComparisonBlock: # (new in 1.10)\n  Enabled: true\n\nPerformance/RedundantSortBlock: # (new in 1.7)\n  Enabled: true\n\nPerformance/RedundantSplitRegexpArgument: # (new in 1.10)\n  Enabled: true\n\nPerformance/RedundantStringChars: # (new in 1.7)\n  Enabled: true\n\nPerformance/ReverseFirst: # (new in 1.7)\n  Enabled: true\n\nPerformance/SortReverse: # (new in 1.7)\n  Enabled: true\n\nPerformance/Squeeze: # (new in 1.7)\n  Enabled: true\n\nPerformance/StringInclude: # (new in 1.7)\n  Enabled: true\n\nPerformance/Sum: # (new in 1.8)\n  Enabled: true\n\n```\n\n## Run RuboCop\n\nWe have the option to run RuboCop on \n\n- Whole project\n- Files inside single folder\n- Only on single file\n\nAfter running commands 
of RuboCop from the command line, we will be presented with the offenses found in our code, which we can then fix manually or, in most cases, auto-correct.\n\n### Whole Project\n\n```cmd\n  $ cd /path/to/your/project\n  $ rubocop\n```\n\n### Files inside single folder\n\n```cmd\n  $ rubocop app\n```\n\n### Single file\n\n```cmd\n  $ rubocop app/models/user.rb\n```\n\n### Auto-fix warnings\n\nRuboCop also provides the feature of auto-correcting issues in our code.\n\nThere are a couple of things to keep in mind about auto-correct:\n\n- For some offenses, it is not possible to implement automatic correction.\n- Some automatic corrections that are possible have not been implemented yet.\n- Some automatic corrections might change (slightly) the semantics of the code, meaning they’d produce code that’s mostly equivalent to the original code, but not 100% equivalent. We call such auto-correct behavior \"unsafe\".\n\nWe can run auto-correction with one of the following commands:\n\n```cmd\n# safe corrections only\n$ rubocop -a\n# or\n$ rubocop --auto-correct\n\n# safe and unsafe corrections\n$ rubocop -A\n# or\n$ rubocop --auto-correct-all\n```\n\n## Other RuboCop Extensions\n\nRuboCop also has extensions for enforcing rules on other tools and frameworks, like:\n\n- rubocop-rspec: For RSpec, a popular framework for testing Rails code\n- rubocop-rake: A RuboCop plugin for Rake\n- rubocop-minitest: For Minitest, another popular library for testing Ruby and Rails code\n\n## Style Guides\n\nRuboCop is based on style guides, which help in maintaining best practices for each extension. 
If you are curious, you can view and read the guidelines from the links below:\n\n- Ruby\n- Rails\n- RSpec\n- Minitest\n\n## Conclusion\n\nRuboCop is very helpful in maintaining best practices, and it's one of the gems we include in every project setup here at Truemark.\n\nOne thing to remember with static code analyzers is that we have the flexibility to enable and disable rules, hence we should always discuss with the team what to enable, why to enable it, and what to disable.\n\nThis is the guide I wish I had when I was starting out as a Rails developer. I hope you find it useful!\n\nHappy coding!\n\n## Image Credits\n\n- Cover Image by Scott Webb on Unsplash\n\n## References\n\n- Official RuboCop Docs\n- RuboCop Configuration Files for Rails"
        },
        {
          "id": "articles-search-engine-with-rails",
          "title": "Search Engine with Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails",
          "url": "/articles/search-engine-with-rails/",
          "content": "I had always wondered \"How do I search through the relational database, in any table, in any column, and get the related result?\". After searching for a bit, I reached the conclusion **YOU DON'T**.\n\nSearching through more than one table, and on top of that more than one column, is very complex with a relational database. That's where Elasticsearch comes into play. Elasticsearch stores all records in documents and provides search functionality that is very fast.\n\n__DISCLAIMER__\n\nBefore we move into the implementation part of the tutorial, I want to make this clear:\n\n> This is not a tutorial for building a web search engine like Google. What you will be building is a search engine for a Rails app, where you can search for any string inside any table in the app. This will help you add app-wide search functionality.\n\nNow then, let's create a search engine for our Rails app.\n\n## Clone the example Rails app\n\nI have prepared an example app to make it easier to follow the tutorial and pushed it to GitHub. You can clone it from here.\n\nOr with the following command:\n\n```cmd\n  git clone git@github.com:coolprobn/rails-search-engine.git\n```\n\n## Configure Elasticsearch\n\n### Install Elasticsearch\n\nYou can install Elasticsearch by following the instructions on the official website.\n\n### Run Elasticsearch Server\n\nOnce you have installed Elasticsearch on your machine, you will also be presented with commands to run the server. You can also find the commands to run Elasticsearch for your OS here.\n\nThe following commands are for macOS, where I installed Elasticsearch with Homebrew:\n\n1. Run only once\n\n\t`elasticsearch`\n\n2. 
Run in background and on machine restart\n\n\t`brew services start elastic/tap/elasticsearch-full`\n\n_NOTE_: It can take some time to fully start the server.\n\n### Check if Elasticsearch is working\n\nYou can check if Elasticsearch is working by opening localhost on port **9200** in your browser:\n\n`http://localhost:9200/`\n\nYou should see content similar to this:\n\n```\n{\n  \"name\" : \"Prabins-MacBook-Pro.local\",\n  \"cluster_name\" : \"elasticsearch_cool\",\n  \"cluster_uuid\" : \"J2CAnnSoRI6p2zZGV3K8eg\",\n  \"version\" : {\n    \"number\" : \"7.13.3\",\n    \"build_flavor\" : \"default\",\n    \"build_type\" : \"tar\",\n    \"build_hash\" : \"5d21bea28db1e89ecc1f66311ebdec9dc3aa7d64\",\n    \"build_date\" : \"2021-07-02T12:06:10.804015202Z\",\n    \"build_snapshot\" : false,\n    \"lucene_version\" : \"8.8.2\",\n    \"minimum_wire_compatibility_version\" : \"6.8.0\",\n    \"minimum_index_compatibility_version\" : \"6.0.0-beta1\"\n  },\n  \"tagline\" : \"You Know, for Search\"\n}\n```\n\n## Install elasticsearch in the app\n\nAdd the following to your `Gemfile.rb`\n\n```ruby\n# Elasticsearch for powerful searching\ngem 'elasticsearch-model'\ngem 'elasticsearch-rails'\n```\n\nInstall gems with `bundle install`\n\n## Add Elasticsearch to Models\n\nAdd following to all the models. 
At present, there are three models inside the app: Author, Article, and Category.\n\nExample for the Author model:\n\n```ruby\nclass Author < ApplicationRecord\n  include Elasticsearch::Model\n  include Elasticsearch::Model::Callbacks\nend\n```\n\n### Create a rake task to index models\n\nTo index existing records, let's create a rake task, for example in `lib/tasks/elastic_search.rake`, with the following code:\n\n```ruby\nnamespace :elastic_search do\n  task :index_models, [:models] => :environment do |_, args|\n    # eager load first so that models are available for next step\n    Rails.application.eager_load!\n    \n    # include all models inside the app/models folder\n    default_models = ApplicationRecord.descendants.map(&:to_s)\n\n    argument_models = args[:models]&.split(',')&.map(&:strip)\n    models = argument_models || default_models\n\n    model_classes = models.map { |model| model.underscore.camelize.constantize }\n\n    model_classes.each do |model|\n      model.import force: true\n    end\n  end\nend\n```\n\nInside the rake task, the following is happening:\n\n1. `[:models]` allows the rake task to accept arguments; in this case the rake task accepts a string separated by \",\", i.e. each model name which needs to be indexed is separated by a comma.\n2. `Rails.application.eager_load!` loads all the classes inside the \"app\" folder, so that all model names are available to this rake task in the next step.\n3. `ApplicationRecord.descendants.map(&:to_s)` returns an array of the model names inside the folder `app/models`, meaning all models in the app will be indexed. If this is not what you desire, you can replace the code with an array of model names like `['Author', 'Article']`.\n4. If an argument is passed (a comma-separated string, as mentioned above), it is converted to an array using the `split` method, and unnecessary whitespace is removed with the `strip` method.\n5. In `model_classes`, each model name is converted to camel case to maintain consistency and avoid errors, and then each model name string is converted to a **constant** with the help of the `constantize` method.\n6. Finally, each model is looped through and indexed one by one with the `import` method provided by the elasticsearch gem.\n\n### Index models\n\n1. 
Index all models\n    \n    If you are running the rake task for the first time, it's better if you don't pass any argument since you want records from all models to be indexed.\n\n    Run the following command in that case:\n\n    `rails elastic_search:index_models`\n\n2. For new models\n   \n    If you add new models, you will normally want to only index that model, for that you can pass the names of new models when executing the rake task\n\n    `rails \"elastic_search:index_models[Comment\\, Tag]\"`\n\n    You need to escape comma (,) with `\\` otherwise it will be treated as second argument to rake task and only \"Comment\" will be passed to the argument model_names.\n\n## Search with Elasticsearch\n\nNow that all the records from required models are indexed, it's finally the time to search through them.\n\n### Search only one model\n\nYou can search only one model with `ModelName.search 'query'`. Go to rails console and fire the command:\n\n`Author.search 'Jane'`\n\n```cmd\n> response = Author.search('Jane').results\n> response.first.as_json\n=> {\"_index\"=>\"authors\", \"_type\"=>\"_doc\", \"_id\"=>\"2\", \"_score\"=>0.6931471, \"_source\"=>{\"id\"=>2, \"first_name\"=>\"Jane\", \"last_name\"=>\"Jones\", \"email\"=>\"jane@email.com\", \"nickname\"=>\"marvellous.jane\", \"created_at\"=>\"2021-07-25T15:18:18.192Z\", \"updated_at\"=>\"2021-07-25T15:18:18.192Z\"}}\n```\n\nFor the result part, you can also achieve the similar result by using the method \"records\" instead of \"results\". Difference between them is, \"results\" always returns Elasticsearch result while \"records\" convert Elasticsearch results to active record query.\n\nYou can read more about **records** here\n\n### Search in multiple models\n\nYou can search in multiple models with `Elasticsearch::Model.search('query', [ModelName1, ModelName2])` e.g. 
`Elasticsearch::Model.search('ruby', [Article, Category])`\n\n```cmd\n> Elasticsearch::Model.search('Ruby', [Article, Category]).results.as_json\n\t\n=> [{\"_index\"=>\"categories\", \"_type\"=>\"_doc\", \"_id\"=>\"1\", \"_score\"=>1.5697745, \"_source\"=>{\"id\"=>1, \"title\"=>\"ruby\", \"created_at\"=>\"2021-07-25T15:18:18.202Z\", \"updated_at\"=>\"2021-07-25T15:18:18.202Z\"}}, {\"_index\"=>\"articles\", \"_type\"=>\"_doc\", \"_id\"=>\"2\", \"_score\"=>1.2920684, \"_source\"=>{\"id\"=>2, \"title\"=>\"Build Twitter Bot with Ruby\", \"content\"=>\"Today, we will be building a bot for Twitter that will retweet all hashtags related to #ruby or #rails. We can also configure it to retweet any hashtags so you can use this tutorial to create bot that can retweet whatever hashtag you want. Yes, and we will be building this Twitter bot with Ruby.\\n\\nWe will be using Twitter gem (Github) to help us in getting up and running quickly with Twitter APIs.\\n\", \"published_on\"=>\"2021-04-23T05:00:00.000Z\", \"author_id\"=>1, \"created_at\"=>\"2021-07-25T15:18:18.236Z\", \"updated_at\"=>\"2021-07-25T15:18:18.236Z\"}}, {\"_index\"=>\"articles\", \"_type\"=>\"_doc\", \"_id\"=>\"4\", \"_score\"=>0.83619946, \"_source\"=>{\"id\"=>4, \"title\"=>\"Setup Factory Bot in Rails\", \"content\"=>\"Factory Bot is a library for setting up test data objects in Ruby. Today we will be setting up Factory Bot in Rails which uses RSpec for testing. If you are using different test suite, you can view all supported configurations in the official github repository of Factory Bot.\\n\", \"published_on\"=>\"2021-06-13T13:00:00.000Z\", \"author_id\"=>2, \"created_at\"=>\"2021-07-25T15:18:18.245Z\", \"updated_at\"=>\"2021-07-25T15:18:18.245Z\"}}]\n```\n\n## Converting Search Results to Active Record\n\nYou can convert search result to active record with `to_a` For e.g. 
`Author.search('john').records.to_a`\n\n```cmd\n> Author.search('john').records.to_a\nAuthor Load (0.4ms)  SELECT \"authors\".* FROM \"authors\" WHERE \"authors\".\"id\" = $1  [[\"id\", 1]]\n=> [#]\n```\n\nThis tutorial won't use this technique nor should this be used in actual implementation because it executes an extra query and adds more time to the request since each record should be converted to Active Record. In production application and especially for search app even 1 millisecond matters which is why in this tutorial, all search results will be converted to JSON and same records will be rendered inside the \"view\".\n\n## API for search engine\n\nAdding the search functionality to the app means everything that happened above in rails console should be replicated and added to the API.\n\n### Create a controller\n\nFrom command line, run the command to create the controller: `touch app/controllers/search_controller.rb`\n\nAdd the following to it:\n\n```\nclass SearchController App Search\n\n\n  \n    \n    \n    \n  \n\n\n\n  No results found for \n\n\n\n  \n\n  \n    \n      \n\n      \n        \n      \n    \n  \n\n```\n\n## Test the implementation\n\nFire up the rails server `rails s` and go to `localhost:3000/search`, you will see a view with search box in it like this:\n\n![Empty Search View](../../images/articles/search-engine-with-rails/empty-search-view.webp)\n\nType relevant text e.g. 
\"ruby\" and hit search.\n\nTada 🎉\n\nYou will see search results grouped by model name, with a link to each individual record's detail page, like this:\n\n![Search results are grouped by model name](../../images/articles/search-engine-with-rails/search-results-grouped-by-model-name.webp)\n\nIf there are no results, you will see \"No results found for [query]\" like this:\n\n![Results not found](../../images/articles/search-engine-with-rails/results-not-found.webp)\n\nSince there aren't any APIs and required views for other features, the links will not work at the moment.\n\n## Highlight matched text\n\nElasticsearch also provides the feature of highlighting the matched text, like what Google does in its search results. You can take the search feature to the next level by adding the highlighted text and rendering it in the view.\n\n### Update search query in the controller\n\nUpdate the code inside the else part of the \"search\" action in the controller with the following:\n\n```ruby\n@results = Elasticsearch::Model\n                   .search(params[:q], [], { body: highlight_fields })\n                   .results.as_json\n                   .group_by { |result| result['_index'] }\n```\n\n### Add private method for highlight fields\n\n```ruby\nprivate\n\ndef highlight_fields\n  {\n    highlight: {\n      pre_tags: ['<strong>'],\n      post_tags: ['</strong>'],\n      fields: {\n        first_name: {},\n        last_name: {},\n        nickname: {},\n        email: {},\n        title: {},\n        content: {}\n      }\n    }\n  }\nend\n```\n\nInside \"fields\" in the method \"highlight_fields\", you can add the column names of any model whose text you want to highlight. 
Current configuration includes highlighting for all 3 models available in the app.\n\nBy default, highlighted texts are wrapped around \"em\" tag and can easily be overridden by specifying `pre` and `post` tags; here \"em\" tag is overridden by \"strong\" tag because I felt that bold text catches more attention than italicized text. You can ignore these two tags and remove them completely if you think italicized texts work great.\n\nYour final controller will look like this:\n\n```ruby\nclass SearchController '],\n          post_tags: [''],\n          first_name: {},\n          last_name: {},\n          nickname: {},\n          email: {},\n          title: {},\n          content: {}\n        }\n      }\n    }\n  end\nend\n\n```\n\n### Update view to show highlighted text\n\nAdd following code just below the \"link_to\":\n\n```erb\n\n   \n\n```\n\nHighlighted result will be available inside the key \"highlight\" and text to highlight will be available inside the \"snippet\" key which is what we are using to render highlighted text.\n\nYour final view will look like this:\n\n```\nApp Search\n\n\n  \n    \n    \n    \n  \n\n\n\n  No results found for \n\n\n\n  \n\n  \n    \n      \n\n      \n        \n\n        \n           \n        \n      \n    \n  \n\n```\n\nYou can search again for the same query and you will see the highlighted text like this:\n\n![Highlighted search results](../../images/articles/search-engine-with-rails/highlighted-search-results.webp)\n\n## Improve the app further\n\nThere are many functionalities that I have skipped deliberately for making this tutorial small and more simpler, you can add the following functionalities if you want to play more with this app:\n\n1. Add API and required view to make the links to detail page work\n2. Show related articles when associated author or category is searched, for e.g. 
if a user searches for \"jane\", show articles by the author \"Jane\"; or for \"ruby\", show all articles in the \"Ruby\" category\n\n## Conclusion\n\nIn a real-world application, search functionality can be a lot more complex than what is shown here, but this is a start, and you can build on top of it as required.\n\nI had always wanted to explore the idea of app-wide search, and this blog is the result of my habit of exploring new technology every Sunday. It was fun to learn about Elasticsearch, research how others have implemented similar search features, and actually implement this in the sample app and in an existing client project.\n\nI hope you enjoyed this blog as much as I enjoyed building and writing it. Thank you for sticking with me to the very end.\n\nThe full code of this tutorial is available in the branch \"app-search\"; you can find it here.\n\nHappy tinkering and happy coding!\n\n## Image Credits\n\n- Cover Image by Jon Tyson on Unsplash"
        },
        {
          "id": "articles-live-stream-logs-to-browser-with-rails",
          "title": "Live Stream Logs to Browser with Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails",
          "url": "/articles/live-stream-logs-to-browser-with-rails/",
          "content": "Live streaming log files have fascinated me for a long time now. I first saw live streaming of logs when I deployed an app on Netlify for the first time. While deploying, Netlify displays the server log right in the browser so that, as a user, we know what's happening in the background.\n\nIf you are confused about what I am talking about, you can replicate that behavior by opening the log file with the command `tail -f` prepended to the file name, like this: `tail -f log/development.log`\n\nNow if you fire up the rails server and access any route, the changes will be appended to the file and shown in the shell where we have `log/development.log` open.\n\n## Backstory\n\nIn one of the projects I am working on, we have an invoicing module, and we can create invoices with one click from the browser. Invoicing can take a long time to complete, and the user has to wait without knowing what's going on in the background. That was when I began to wonder: what if we try the same thing as Netlify and show the logs to the user as they happen on our Rails app server? That would be so cool.\n\nThen I began my research and found this gem of a tutorial from Aaron Patterson himself.\n\nIt was a 9-year-old tutorial, but it had what we needed to start with. He streams some static content rather than actual file content, but it was the starting point for learning more about streaming in Rails. 
After a day of more research and trial and error, I got the live streaming for the log file to the browser from Rails app working.\n\n## Implementation\n\nLet's see step by step how I implemented live log streaming in the browser from Rails App.\n\n### Step 1: Create a new Rails app\n\n`rails new file-streaming-app`\n\n### Step 2: Generate a controller for streaming files manually\n\n`touch live_file_streams_controller.rb`\n\nAdd the following code inside\n\n```ruby\nclass LiveFileStreamsController official Rails documentation.\n\n### Step 5: View response in browser\n\n- Fire rails server `rails s`\n- Go to `localhost:3000/live_streams/log_file`\n- You will see \"hello world\" printed 5 times in the browser\n- Response is printed at the same time, even though we used sleep function in between `response.write`\n\n![hello world is printed 5 times in the browser](../../images/articles/live-stream-logs-to-browser-with-rails/print-hello-world-five-times.webp)\n\nLet's print them one by one next.\n\n### Step 6: Include `ActionController::Live` for live streaming response\n\n```ruby\nclass LiveStreamsController here.\n\nAs suggested in one of the comments in the discussion, let's add \"Last-Modified\" in `response.headers` with current time.\n\nLet's also add \"Content-Type\" to `response.headers` with \"text/event-stream\" so that our response are actually streamed and displayed one by one.\n\n```ruby\ndef log_file\n  response.headers['Content-Type'] = 'text/event-stream'\n\n  # hack due to new version of rack not supporting sse and sending all response at once: https://github.com/rack/rack/issues/1619#issuecomment-848460528\n  response.headers['Last-Modified'] = Time.now.httpdate\n\n  5.times {\n    response.stream.write \"hello world\\n\"\n\n    sleep 0.2\n  }\n\n  response.stream.close\nend\n```\n\nYou should be able to see \"hello world\" printed one by one like below:\n\n![Stream hello world 5 
times](../../images/articles/live-stream-logs-to-browser-with-rails/stream-hello-world-5-times.gif)\n\nWow! We live streamed something!\n\n### Step 8: Server side events\n\nFrom Aaron's blog:\n\n> If you’ve never heard of Server-Sent Events (from here on we will be calling them SSEs), it’s a feature of HTML5 that allows long polling, but is built in to the browser. Basically, the browser keeps a connection open to the server, and fires an event in JavaScript every time the server sends data.\n\nYou can read further about it here\n\n### Step 9: Create `file_streaming_app/sse.rb`\n\nTo emit events and format the response instead of inside controller, we will be creating a new class called `file_streaming_app/sse` inside `lib` folder.\n\nCreate the file with: `touch lib/file_streaming_app/sse.rb`\n\nAdd following to it:\n\n```ruby\nrequire 'json'\n\nmodule FileStreamingApp\n  class SSE\n    def initialize(io)\n      @io = io\n    end\n\n    def write(object)\n      @io.write \"#{JSON.dump(object)}\"\n    end\n\n    def close\n      @io.close\n    end\n  end\nend\n```\n\n### Step 10: Use \"SSE\" class inside the controller\n\n_NOTE_: Only copy changed lines (Don't override the controller)\n\n```ruby\nrequire 'file_streaming_app/sse'\n\nclass LiveStreamsController File watcher gem watches the files for different events (or changes) like create, update, delete. It was the best gem I could find for our purpose, I tried other gems like:\n\n- **rb-fsevent** doesn't fire the event when file is modified in background by rails, had to do `touch log/development.log` every time to run the code inside watcher. Also, it didn't support file path, instead we had to always provide folder path.\n- **ruby-filewatch** was working flawlessly but the project was not maintained actively\n- **listen** rails uses this gem to auto load files after change so we don't have to reload server after every change to file. 
This also acted in the same way as rb-fsevent\n\n```ruby\ngem 'filewatcher', '~> 1.1.1' # use the latest version here; 1.1.1 was the latest at the time of writing this tutorial\n```\n\nDon't forget to install the gem with `bundle install`\n\n### Step 12: Create `file_streaming_app/log_file.rb`\n\nTo get all lines inside the file in an array, we will create a new class called `file_streaming_app/log_file` inside the `lib` folder. This would normally have been a util, but to show only newly added lines we need an instance variable to store the last line position, hence we will create a new class.\n\nCreate the file with: `touch lib/file_streaming_app/log_file.rb`\n\nAdd the following code to it:\n\n```ruby\nmodule FileStreamingApp\n  class LogFile\n    def added_lines(file_path)\n      file_content = File.open(file_path).readlines\n\n      file_content.last(20)\n    end\n  end\nend\n```\n\n`File.open(file_path).readlines` returns an array of all lines inside the file.\n\nFor now, we will only print the last 20 lines of the file when it is modified, hence `added_lines` does what we want with `.last(20)`\n\n### Step 13: Stream file content when it is modified\n\nUpdate the controller with the following code:\n\n```ruby\ndef log_file\n  response.headers['Content-Type'] = 'text/event-stream'\n\n  # hack due to new version of rack not supporting sse and sending all response at once: https://github.com/rack/rack/issues/1619#issuecomment-848460528\n  response.headers['Last-Modified'] = Time.now.httpdate\n\n  sse = FileStreamingApp::SSE.new(response.stream)\n\n  log_file_path = Rails.root.join('log/development.log').to_s\n\n  file = FileStreamingApp::LogFile.new\n\n  # watch development.log file for changes\n  Filewatcher.new([log_file_path]).watch do |_file_path, event_type|\n    next unless event_type.to_s.eql?('updated')\n\n    file_lines = file.added_lines(log_file_path)\n\n    sse.write(file_lines)\n  end\nensure\n  sse.close\nend\n``` 
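If you want to see what `added_lines` returns before wiring everything together, you can exercise the `LogFile` class from Step 12 on its own in plain Ruby (a quick sketch using a temp file outside Rails; the class is repeated here so the snippet is self-contained):

```ruby
require 'tempfile'

# Same class as in Step 12, repeated so this snippet runs outside Rails.
module FileStreamingApp
  class LogFile
    def added_lines(file_path)
      file_content = File.open(file_path).readlines

      file_content.last(20)
    end
  end
end

# Write a small fake log file and read it back.
log = Tempfile.new('development.log')
log.write("line 1\nline 2\nline 3\n")
log.flush

file = FileStreamingApp::LogFile.new
# With only 3 lines in the file, all of them are returned
puts file.added_lines(log.path).inspect
```

Once the file grows beyond 20 lines, only the last 20 are returned, which matches what the controller streams.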
\n\nHere, we are using `Filewatcher` to watch for changes in the file given in `log_file_path`, i.e. we are watching changes inside `log/development.log` only.\n\nWe only want to stream the content of the file when something is added to it, so we are ignoring other event types with `next unless event_type.to_s.eql?('updated')`\n\nFinally, we are sending the array of lines inside the file to the browser with `sse.write(file_lines)`\n\n### Step 15: Update \"SSE\" to print array of file lines\n\nPreviously, we were just rendering a string, using JSON to dump that data and printing it to the browser. But now we have an array of lines from the file and we need to print them line by line in the browser.\n\nLet's update the SSE class with the following code to reflect the changes:\n\n```ruby\ndef write(file_lines)\n  file_lines.each do |line|\n    @io.write line\n  end\nend\n```\n\n### Step 16: View changes in file in the browser\n\nTo emit the event and print the content of the file to the browser, we first need a way to modify `development.log`.\n\n- Reload the browser tab where the streaming url is open\n- In a new tab, open the rails default view: `localhost:3000`\n- When this page loads, the log file will be modified and the streaming api will be called, which then renders the last 20 lines from the file to the browser\n\n![Live stream last 20 lines of log file](../../images/articles/live-stream-logs-to-browser-with-rails/stream-20-lines-of-log-file.gif)\n\nWe have now streamed the file content every time the file is modified. The next step for us will be to stream only the added lines.\n\n### Step 17: Parallel Requests\n\nBy default, in the Rails development environment, requests are not served in parallel and you may see the browser hang when trying to open two urls at the same time.\n\nTo resolve that, let's add a little hack from Stack Overflow.\n\nAdd the following to your `config/environments/development.rb`\n\n```ruby\nRails.application.configure do\n  # other 
configurations\n\n  config.middleware.delete Rack::Lock\nend\n```\n\n### Step 18: Stream only changed lines in the log file\n\nFor streaming only changed lines, \"LogFile\" will need to remember the position of the last line in the log file before the change and render only the lines after that position.\n\nLet's update the `LogFile` to make that possible.\n\n```ruby\nclass LogFile\n  def added_lines(file_path)\n    file_content = File.open(file_path).readlines\n    total_lines = file_content.length\n\n    @last_known_line_position ||= initial_line_position(total_lines)\n\n    start_position = @last_known_line_position\n\n    @last_known_line_position = total_lines\n\n    file_content[start_position, total_lines]\n  end\n\n  private\n\n  def initial_line_position(total_lines)\n    return 0 if total_lines.zero? || total_lines <= 20\n\n    total_lines - 20\n  end\nend\n```\n\nYou can find the complete code in the repository: Log File Live Streamer [Github] \n\nThank you for reading. Happy live streaming!\n\n## References\n\n- Is it live? [Aaron Patterson's Blog]\n\n## Image Credits\n\n- Cover Image by Nadjib BR on Unsplash"
        },
        {
          "id": "articles-setup-factory-bot-in-rails",
          "title": "Setup Factory Bot in Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, factory bot, testing",
          "url": "/articles/setup-factory-bot-in-rails/",
          "content": "Factory Bot is a library for setting up test data objects in Ruby. Today we will be setting up Factory Bot in a Rails app that uses RSpec for testing. If you are using a different test suite, you can view all supported configurations here.\n\nTo set up Factory Bot in Rails, follow the steps given below:\n\n1. Add **`factory_bot_rails`** to your **Gemfile** in the **:development, :test** group\n\n    ```ruby\n      group :development, :test do\n        gem 'factory_bot_rails'\n      end\n   ```\n\n2. Install the gem with `bundle install`\n3. Create a file `spec/support/factory_bot.rb` and add the following configuration inside\n\n    ```ruby\n      RSpec.configure do |config|\n        config.include FactoryBot::Syntax::Methods\n      end\n    ```\n\n4. Uncomment the following line from **rails_helper.rb** so all files inside `spec/support` are loaded automatically by RSpec\n\n    ```ruby\n      # Dir[Rails.root.join('spec', 'support', '**', '*.rb')].sort.each { |f| require f }\n\n      Dir[Rails.root.join('spec', 'support', '**', '*.rb')].sort.each { |f| require f }\n    ```\n5. Check the factory bot rails version inside **Gemfile.lock** and update the gem with that version in the `Gemfile`. It was `6.1.0` while writing this tutorial; yours may be different depending on the latest gem version.\n    \n    ```ruby\n      group :development, :test do\n        gem 'factory_bot_rails', '~> 6.1.0'\n      end\n   ```\n\n6. Run `bundle install` (Optional, since nothing will change inside `Gemfile.lock`)\n7. Add a **factories** folder inside the `spec` folder if it doesn't already exist. You can then create factories inside the `spec/factories` folder.\n8. Assuming you have a model **User**, you can create `factories/users.rb`\n9. If the attributes in the `users` table are first_name, last_name, email and mobile_number, your `users` factory will look something like this:\n\n    ```ruby\n      FactoryBot.define do\n        factory :user do\n          first_name { 'John' }\n          last_name  { 'Doe' }\n          email { 'john@email_provider.com' }\n          mobile_number { '7860945310' }\n        end\n      end \n    ```\n10. You can use the `user` factory inside your specs like this\n       \n      ```ruby\n        require 'rails_helper'\n\n        RSpec.describe User, type: :model do\n          let(:user) { build(:user) }\n        end\n      ```\n\n11. You can view various use cases for using factories in your tests in the official documentation.\n\n## Conclusion\n\nFactory Bot helps you reuse the same setup code across multiple test examples. This way you write less code, and as they say, \"less code is always better\".\n\nThank you for reading. Happy coding!\n\n## References\n\n- Factory Bot [Github]\n\n## Image Credits\n\n- Cover Image by Anchor Lee on Unsplash"
        },
        {
          "id": "articles-integrate-pronto-with-gitlab-ci-for-rails-app",
          "title": "Integrate Pronto with Gitlab CI for Rails App",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, rubocop, lint, automated code review, gitlab ci",
          "url": "/articles/integrate-pronto-with-gitlab-ci-for-rails-app/",
          "content": "At Truemark, we are constantly looking to improve the code quality in our projects. And one way to do that is through a regular code review process. The code review process can quickly get exhausting if team members have to spend the majority of their time enforcing best practices.\n\nEnter automated code review, which reviews source code for compliance with a predefined set of rules and best practices.\n\nIn Rails projects, to define rules and best practices, we use RuboCop, and for code reviews we will be using Pronto integrated with Gitlab CI.\n\n## What is Pronto?\n\nPronto is a gem which uses the RuboCop configuration file to perform analysis on the changes made in a given feature branch and adds comments to merge requests based on the configured best practices. Pronto can be integrated with popular version control system managers like Gitlab, Github and Bitbucket.\n\nIt also works on your local machine and is perfect if you want to find out quickly whether a branch introduces changes that conform to your style guide (rules configuration file), are DRY and don't introduce security holes.\n\n\n## Why Pronto?\n\nShort answer: to automate the code review process so your team doesn't have to manually comment and make sure that every team member is following the best practices.\n\nEvery developer has their own beliefs about best practices and styles, e.g. some want to use single quotes whereas others prefer double quotes. Some love using semicolons, others just think they're unnecessary. This can brew conflict in the team when teammates are reviewing merge requests, hence we let Pronto do this. \n\nBest practices and the style guide can first be set up by the team, and Pronto makes sure that every member is adhering to those rules.\n\n## Assumption\n\n- RuboCop has been configured in the app, i.e. **.rubocop.yml** exists in the project\n\n## Install Pronto\n\n1. 
Add the following to the development, test group\n\n    ```ruby\n      group :development, :test do\n        gem 'pronto'\n        gem 'pronto-rubocop', require: false\n        gem 'pronto-flay', require: false\n      end\n    ```\n\n2. From the command line, run `bundle install`\n\n   ```cmd\n    bundle install\n   ```\n\n## Setup Pronto\n\nCreate **.gitlab-ci.yml** in the project root and add the following:\n\n```yml\n# this should be the ruby version that your rails app is using, ours was using 3.3.0\nimage: ruby:3.3.0\n\nstages:\n  # You can name your stage anything you like, just need to be sensible\n  - lint\n\npronto:\n  before_script:\n    # Install bundler version from Gemfile.lock\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    # Install cmake required for rugged gem (Pronto depends on it)\n    - apt-get update -qq && apt-get install -y -qq cmake\n    - bundle install --jobs $(nproc)\n  stage: lint\n  only:\n    # run pronto only on merge requests (also runs when new changes are pushed to the merge request)\n    - merge_requests\n  variables:\n    PRONTO_GITLAB_API_PRIVATE_TOKEN: $PRONTO_ACCESS_TOKEN\n  script:\n    # Pronto fails with the error \"revspec 'origin/{target_branch}'\" because Gitlab fetches changes with git depth set to 20 by default. You can remove this line if you update the Gitlab CI setting to clone the full project.\n    - git fetch origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME\n    # Run pronto on the branch of the current merge request\n    - bundle exec pronto run -f gitlab_mr -c origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME\n```\n\n_NOTES_: \n\n1. `$PRONTO_ACCESS_TOKEN` should be configured in Gitlab, which is explained in the steps below. \n2. `$CI_MERGE_REQUEST_TARGET_BRANCH_NAME` is a predefined Gitlab CI variable which returns the target branch name of the current merge request. You can read more about the predefined Gitlab CI variables here.\n\n\n## Setup Personal Access Token\n\n1. 
In Gitlab, after login, go to personal access token\n2. Enter a name and an optional expiry date for the token.\n3. In scopes, choose only **api**; it is enough for our purpose as it gives full access to the project via the Gitlab API.\n4. Click on **Create token**\n5. Copy the token and keep it somewhere safe, we will need it in the next step.\n\nReference: Official documentation\n\n## Add Personal Access Token to the Project\n\nTo use the Personal Access Token (`$PRONTO_ACCESS_TOKEN`) in our Gitlab CI, we should set it up as a custom variable in the CI/CD settings.\n\n_NOTE_: Only project members with maintainer permissions can add or update project CI/CD variables, so make sure you have the correct access.\n\n1. Go to your project\n2. From the left menu, in Settings, choose CI/CD\n3. Expand the **Variables** Section\n4. Click on **Add Variable**\n5. In **Key**, add `PRONTO_ACCESS_TOKEN` as that is what we have configured in our .gitlab-ci.yml file. You can use any key name, just make sure to update it inside .gitlab-ci.yml.\n6. In **Value**, add the token generated in the previous step\n7. In the **Flags** section, uncheck **Protect variable**; checking this option would export the variable (`PRONTO_ACCESS_TOKEN`) only for protected branches like master/main, but we need this variable inside all merge request branches.\n8. Check **Mask variable** so our token value is not visible in the CI job logs.\n9. Click on **Add variable**\n\nReference: Official documentation\n\n## Run Pronto locally\n\n1. Run against master\n    `pronto run`\n\n2. Run against another branch\n    `pronto run -c branch-name`\n\n## Run Pronto with Gitlab CI\n\n1. Commit the changes made in the branch and push the code to Gitlab\n2. You should see Gitlab CI running automatically now and it should pass\n3. 
If Pronto finds any issues after analyzing the code changed in the merge request, it will post those issues as comments in that merge request.\n\n_NOTE_: Sometimes it throws **Rugged::ReferenceError** due to a missing git branch; retry the job in that case and it should work the second time.\n\n## View Job Log\n\nIf you are curious and want to see what is happening in the background, you can check the Gitlab CI Job log.\n\n1. From the left menu inside the project, hover over CI/CD and click on Jobs\n2. To view the log, click on the job id in the Job column, which starts with #, e.g. #1290157388\n\nDue to the installation of all gems (`bundle install`) in the project, it can take up to 3 minutes for the job to complete even when there are not many changes.\n\n## Bonus: Add Caching to Gitlab CI\n\nCaching helps make the CI run really fast. At Truemark, before caching was implemented every job took ~7 minutes, and after caching was implemented it now takes ~1 minute.\n\nLet's look at the configuration changes we need to make in \".gitlab-ci.yml\" for caching.\n\n```yml\nimage: ruby:3.3.0\n\n# add this\ncache:\n  paths:\n    - vendor/\n\nbefore_script:\n  # add this\n  - bundle config set --local path 'vendor'\n  # replace \"bundle install\" with 👇\n  - bundle install -j $(nproc)\n\n# other content will be the same\n```\n\nWith the above configuration, we are telling Gitlab CI to cache the \"vendor\" folder where all our gems will be stored. Then, when running \"bundle install\", we ask the CI to install gems into the vendor folder, from where the CI will reuse gems that haven't changed in version. 
This helps reduce the time the CI takes to install gems.\n\nYour final \".gitlab-ci.yml\" could look similar to this:\n\n```yml\nimage: ruby:3.3.0\n\ncache:\n  paths:\n    - vendor/\n\nstages:\n  - lint\n\npronto:\n  before_script:\n    - gem install bundler -v \"$(grep -A 1 \"BUNDLED WITH\" Gemfile.lock | tail -n 1)\" --no-document\n    - apt-get update -qq && apt-get install -y -qq cmake\n    - bundle config set --local path 'vendor'\n    - bundle install --jobs $(nproc)\n  stage: lint\n  only:\n    - merge_requests\n  variables:\n    PRONTO_GITLAB_API_PRIVATE_TOKEN: $PRONTO_ACCESS_TOKEN\n  script:\n    - git fetch origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME\n    - bundle exec pronto run -f gitlab_mr -c origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME\n```\n\n## Conclusion\n\nThe main reason for writing this article was that I couldn't find a decent article explaining exactly what to do to integrate Pronto with Gitlab. The integration guide in Pronto's official documentation was not clear enough about what exactly to do to integrate Pronto with Gitlab CI.\n\nWe were able to find some blogs, but the majority of them used Docker or covered Github integrations, so it took the team a while to figure this solution out. Now this blog should save your team's time when you are using Pronto with Gitlab CI in your projects. Good luck!\n\nThank you for reading. See you in the next blog.\n\n**References**\n\n- https://github.com/prontolabs/pronto\n- https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html\n- https://docs.gitlab.com/ee/ci/variables/#project-cicd-variables\n\n\n**Image Credits:** Cover Image by Pankaj Patel from Unsplash"
        },
        {
          "id": "articles-rubocop-configuration-files-for-rails",
          "title": "Rubocop Configuration Files for Rails",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "rubocop, ruby on rails",
          "url": "/articles/rubocop-configuration-files-for-rails/",
          "content": "> I spent a whole day configuring Rubocop in one of our Rails projects. I am here to save you a day's worth of time.\n\nRubocop is a linter for Ruby and Rails projects. It enforces best practices based on the guidelines outlined in the community Ruby Style Guide. Apart from reporting the problems discovered in your code, RuboCop can also automatically fix many of them for you.\n\n## Configuration for Rails\n\nThis guide assumes that you have already set up rubocop in your project and have a `.rubocop.yml`.\n\n### Step 1: Add rules to `.rubocop.yml`\n\n```yml\n# The behavior of RuboCop can be controlled via the .rubocop.yml\n# configuration file. It makes it possible to enable/disable\n# certain cops (checks) and to alter their behavior if they accept\n# any parameters. The file can be placed either in your home\n# directory or in some project directory.\n#\n# RuboCop will start looking for the configuration file in the directory\n# where the inspected file is and continue its way up to the root directory.\n#\n\ninherit_from:\n  - '.rubocop-rails.yml'\n  - '.rubocop-rspec.yml'\n\nrequire:\n  - rubocop-rails\n  - rubocop-rspec\n\nAllCops:\n  TargetRubyVersion: 2.7\n  TargetRailsVersion: 6.0\n  Exclude:\n    - '**/db/migrate/*'\n    - '**/Gemfile.lock'\n    - '**/Rakefile'\n    - '**/rails'\n    - '**/vendor/**/*'\n    - '**/spec_helper.rb'\n    - 'node_modules/**/*'\n    - 'bin/*'\n\n###########################################################\n###################### Rubocop ############################\n###########################################################\n\n# You can find all configuration options for rubocop here: https://docs.rubocop.org/rubocop/cops_bundler.html\n\n# ============== Layout =================\n\nLayout/ClassStructure:\n  ExpectedOrder:\n    - module_inclusion\n    - constants\n    - association\n    - public_attribute_macros\n    - public_delegate\n    - macros\n    - initializer\n    - public_class_methods\n    - 
public_methods\n    - protected_attribute_macros\n    - protected_methods\n    - private_attribute_macros\n    - private_delegate\n    - private_methods\n\nLayout/EmptyLineAfterMultilineCondition:\n  Enabled: true\n\nLayout/EmptyLinesAroundAttributeAccessor:\n  Enabled: true\n\nLayout/FirstArrayElementIndentation:\n  EnforcedStyle: consistent\n\nLayout/FirstArrayElementLineBreak:\n  Enabled: true\n\nLayout/FirstHashElementIndentation:\n  EnforcedStyle: consistent\n\nLayout/FirstHashElementLineBreak:\n  Enabled: true\n\nLayout/LineLength:\n  Max: 150\n  Exclude:\n    - '**/spec/**/*'\n\nLayout/MultilineArrayBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/MultilineOperationIndentation:\n  EnforcedStyle: indented\n\nLayout/MultilineHashBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/MultilineHashKeyLineBreaks:\n  Enabled: true\n\nLayout/MultilineMethodCallBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/MultilineMethodDefinitionBraceLayout:\n  EnforcedStyle: new_line\n\nLayout/SpaceAroundMethodCallOperator:\n  Enabled: true\n\nLayout/SpaceInLambdaLiteral:\n  EnforcedStyle: require_space\n\nLint/AmbiguousBlockAssociation:\n  Exclude:\n    - '**/spec/**/*'\n\nLint/AssignmentInCondition:\n  AllowSafeAssignment: false\n\nLint/BinaryOperatorWithIdenticalOperands:\n  Enabled: true\n\nLint/DeprecatedOpenSSLConstant:\n  Enabled: true\n\nLint/DuplicateElsifCondition:\n  Enabled: true\n\nLint/DuplicateRequire:\n  Enabled: true\n\nLint/DuplicateRescueException:\n  Enabled: true\n\nLint/EmptyConditionalBody:\n  Enabled: true\n\nLint/EmptyFile:\n  Enabled: true\n\nLint/FloatComparison:\n  Enabled: true\n\nLint/MissingSuper:\n  Enabled: true\n\nLint/MixedRegexpCaptureTypes:\n  Enabled: true\n\nLint/NumberConversion:\n  Enabled: true\n\nLint/RaiseException:\n  Enabled: true\n\nLint/SelfAssignment:\n  Enabled: true\n\nLint/TrailingCommaInAttributeDeclaration:\n  Enabled: true\n\nLint/UnusedBlockArgument:\n  IgnoreEmptyBlocks: false\n\nLint/UnusedMethodArgument:\n  
IgnoreEmptyMethods: false\n\nLint/UselessMethodDefinition:\n  Enabled: true\n\n# ============== Metric =================\n\nMetrics/AbcSize:\n  Max: 45\n\nMetrics/BlockLength:\n  CountComments: false\n  Max: 50\n  Exclude:\n    - '**/spec/**/*'\n    - '**/*.rake'\n    - '**/factories/**/*'\n    - '**/config/routes.rb'\n\nMetrics/ClassLength:\n  CountAsOne: ['array', 'hash']\n  Max: 150\n\nMetrics/CyclomaticComplexity:\n  Max: 10\n\nMetrics/MethodLength:\n  CountAsOne: ['array', 'hash']\n  Max: 30\n\nMetrics/ModuleLength:\n  CountAsOne: ['array', 'hash']\n  Max: 250\n  Exclude:\n    - '**/spec/**/*'\n\nMetrics/PerceivedComplexity:\n  Max: 10\n\n# ============== Naming =================\n\n# Most of the Naming configurations are enabled by default; we should enable or disable configurations depending on what the team needs\n\n### Example\n##\n#  Naming/VariableNumber:\n#    Enabled: false\n##\n###\n\n# ============== Style ================\n\nStyle/AccessorGrouping:\n  Enabled: true\n\nStyle/ArrayCoercion:\n  Enabled: true\n\nStyle/AutoResourceCleanup:\n  Enabled: true\n\nStyle/BisectedAttrAccessor:\n  Enabled: true\n\nStyle/CaseLikeIf:\n  Enabled: true\n\nStyle/ClassAndModuleChildren:\n  Enabled: false\n\nStyle/CollectionMethods:\n  Enabled: true\n\nStyle/CombinableLoops:\n  Enabled: true\n\nStyle/CommandLiteral:\n  EnforcedStyle: percent_x\n\nStyle/ConstantVisibility:\n  Enabled: true\n\nStyle/Documentation:\n  Enabled: false\n\nStyle/ExplicitBlockArgument:\n  Enabled: true\n\nStyle/GlobalStdStream:\n  Enabled: true\n\nStyle/HashEachMethods:\n  Enabled: true\n\nStyle/HashLikeCase:\n  Enabled: true\n\nStyle/HashTransformKeys:\n  Enabled: true\n\nStyle/HashTransformValues:\n  Enabled: true\n\nStyle/ImplicitRuntimeError:\n  Enabled: true\n\nStyle/InlineComment:\n  Enabled: true\n\nStyle/IpAddresses:\n  Enabled: true\n\nStyle/KeywordParametersOrder:\n  Enabled: true\n\nStyle/MethodCallWithArgsParentheses:\n  Enabled: true\n\nStyle/MissingElse:\n  Enabled: 
true\n\nStyle/MultilineMethodSignature:\n  Enabled: true\n\nStyle/OptionalBooleanParameter:\n  Enabled: true\n\nStyle/RedundantAssignment:\n  Enabled: true\n\nStyle/RedundantBegin:\n  Enabled: true\n\nStyle/RedundantFetchBlock:\n  Enabled: true\n\nStyle/RedundantFileExtensionInRequire:\n  Enabled: true\n\nStyle/RedundantSelfAssignment:\n  Enabled: true\n\nStyle/SingleArgumentDig:\n  Enabled: true\n\nStyle/StringConcatenation:\n  Enabled: true\n\n```\n\n### Step 2: Create `.rubocop-rails.yml`\n\nThe file above enforces best practices for Ruby code; now we will add configuration specific to Rails.\n\nCreate `.rubocop-rails.yml` if you haven't already and add the following inside it:\n\n```yml\n###########################################################\n#################### Rubocop Rails ########################\n###########################################################\n\n# You can find all configuration options for rubocop-rails here: https://docs.rubocop.org/rubocop-rails/cops_rails.html\n\nRails/ActiveRecordCallbacksOrder:\n  Enabled: true\n\nRails/AfterCommitOverride:\n  Enabled: true\n\nRails/DefaultScope:\n  Enabled: true\n\nRails/FindById:\n  Enabled: true\n\nRails/Inquiry:\n  Enabled: true\n\nRails/MailerName:\n  Enabled: true\n\nRails/MatchRoute:\n  Enabled: true\n\nRails/NegateInclude:\n  Enabled: true\n\nRails/OrderById:\n  Enabled: true\n\nRails/Pluck:\n  Enabled: true\n\nRails/PluckId:\n  Enabled: true\n\nRails/PluckInWhere:\n  Enabled: true\n\nRails/RenderInline:\n  Enabled: true\n\nRails/RenderPlainText:\n  Enabled: true\n\nRails/SaveBang:\n  Enabled: true\n  AllowImplicitReturn: false\n\nRails/ShortI18n:\n  Enabled: true\n\nRails/WhereExists:\n  Enabled: true\n\nRails/WhereNot:\n  Enabled: true\n\n```\n\n### Bonus: Configuration for RSpec\n\n_NOTE_: You should already have set up the rubocop-rspec gem.\n\nCreate `.rubocop-rspec.yml` in the project root and add the 
following:\n\n```yml\n###########################################################\n#################### Rubocop Rspec ########################\n###########################################################\n\n# You can find all configuration options for rubocop-rspec here: https://docs.rubocop.org/rubocop-rspec/cops.html\n\nRSpec/AnyInstance:\n  Enabled: false\n\nRSpec/BeforeAfterAll:\n  Enabled: false\n\nRSpec/ContextWording:\n  Enabled: false\n\nRSpec/DescribeClass:\n  Enabled: false\n\nRSpec/ExampleLength:\n  Enabled: false\n\nRSpec/ExpectInHook:\n  Enabled: false\n\nRSpec/FilePath:\n  Enabled: false\n\nRSpec/InstanceVariable:\n  Enabled: false\n\nRSpec/LetSetup:\n  Enabled: false\n\nRSpec/MessageChain:\n  Enabled: false\n\nRSpec/MessageSpies:\n  Enabled: false\n\nRSpec/MultipleExpectations:\n  Enabled: false\n\nRSpec/NamedSubject:\n  Enabled: false\n\nRSpec/NestedGroups:\n  Max: 7\n\nRSpec/SubjectStub:\n  Enabled: false\n\nRSpec/VerifiedDoubles:\n  Enabled: false\n\nRSpec/VoidExpect:\n  Enabled: false\n\n```\n\n## Conclusion\n\nYou should now have a fully working Rubocop configuration in your Rails application, in just a couple of minutes.\n\nPlease note that these Rubocop rules are not hard-and-fast; you should add and remove rules from the configuration files based on your team's decisions.\n\nThank you for reading, see you in the next blog.\n\n**Image Credits:** Cover Image by Tim Gouw on Unsplash"
        },
        {
          "id": "articles-build-twitter-bot-with-ruby",
          "title": "Build Twitter Bot with Ruby",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "bot, tutorial, ruby",
          "url": "/articles/build-twitter-bot-with-ruby/",
          "content": "Today, we will be building a bot for Twitter that will retweet all hashtags related to #ruby or #rails. We can also configure it to retweet any hashtag, so you can use this tutorial to create a bot that retweets whatever hashtag you want. Yes, and we will be building this Twitter bot with Ruby. \n\nWe will be using the Twitter gem (Github) to help us get up and running quickly with the Twitter APIs.\n\n## Background\n\nI am quite active on Twitter nowadays, and I have seen a lot of bots that retweet #100DaysOfCode. There were a lot of newbies trying out Javascript and getting help from the JS community. Dang, no one is using Ruby nowadays, I thought then. But that wasn't the case; yes, fewer people are using it, but it's still popular. So I made plans to create a bot that retweets all tweets with the hashtag #ruby or #rails. The purpose of this bot is to bring the Ruby and Rails community together, motivate newbies to use Ruby, and help each other.\n\n## Skills required to follow the tutorial\n\nIntermediate:\n\n- Ruby\n- Deployment skills if you are trying to deploy the bot to a remote server\n\n## You should have\n\n- A configured twitter app and the four keys and secrets from Twitter. If you haven't done so, you can follow this tutorial to set it up and come back here after you have the keys and secrets. 
\n\n## Steps\n\n### Step 1: Create a ruby file\n\nLet's first create a ruby file named **re_tweet_service.rb** where we will write the script that instructs Twitter to retweet the hashtags we want.\n\n```cmd\n# create a folder to save the bot service\n$ mkdir ruby-twitter-bot\n\n# create ruby file\n$ cd ruby-twitter-bot\n$ mkdir -p app/services/twitter\n$ cd app/services/twitter\n$ touch re_tweet_service.rb\n```\n\nWhat is happening?\n\n- We are following the folder structure of the Rails framework and saving our service inside the folder `app/services/twitter`\n- Then, inside the twitter folder, we created the file `re_tweet_service.rb` where we will add the code next.\n\n### Step 2: Require Twitter Gem in Ruby file\n\nInside `re_tweet_service.rb`, add the following code at the top:\n\n```ruby\nrequire 'rubygems'\nrequire 'bundler/setup'\n\nrequire 'twitter'\n```\n\nWhat is happening?\n\n- The first two lines let us use gems in our plain Ruby app; if you have used Rails before, the framework does this automatically and you may not have come across it.\n- With `require 'twitter'`, we instruct our Ruby app to use the twitter gem.\n\n### Step 3: Create application.yml to store twitter secrets and keys\n\nSince secrets and keys should not be added to git repositories, let's create a `config` folder in the project root and add an `application.yml` file to store the secrets and keys you get when configuring your twitter app.\n\n```cmd\n# create config folder\n$ mkdir config\n\n# create application.yml\n$ cd config\n$ touch application.yml\n```\n\nLet's add the following to the file:\n\n```yml\ndefaults: &defaults\n  CONSUMER_KEY: '' # API KEY\n  CONSUMER_SECRET: '' # API KEY SECRET\n  ACCESS_TOKEN: ''\n  ACCESS_TOKEN_SECRET: ''\n\nproduction:\n  <<: *defaults\n```\n\nThe heart of the service is the `re_tweet` method, which streams tweets and retweets the matching ones:\n\n```ruby\ndef re_tweet(rest_client, stream_client)\n  stream_client.filter(:track => HASHTAGS_TO_WATCH.join(',')) do |tweet|\n    puts \"\\nCaught the tweet -> #{tweet.text}\"\n\n    if should_re_tweet?(tweet)\n      rest_client.retweet tweet\n\n      puts \"[#{Time.now}] Retweeted 
successfully!\\n\"\n    end\n  end\nrescue StandardError => e\n  puts \"=========Error========\\n#{e.message}\"\n\n  puts \"[#{Time.now}] Waiting for 60 seconds ....\\n\"\n\n  sleep 60\nend\n\n```\n\nWhat's happening?\n\n```ruby\ndef perform\n  rest_client = configure_rest_client\n  stream_client = configure_stream_client\n\n  while true\n    puts 'Starting to Retweet 3, 2, 1 ... NOW!'\n\n    re_tweet(rest_client, stream_client)\n  end\nend\n```\n\n- We are using `while true` so that our service runs forever once we start it.\n\n```ruby\ndef should_re_tweet?(tweet)\n  tweet?(tweet) && !retweet?(tweet) && allowed_hashtag_count?(tweet) && !sensitive_tweet?(tweet) && allowed_hashtags?(tweet)\nend\n\ndef re_tweet(rest_client, stream_client)\n  stream_client.filter(:track => HASHTAGS_TO_WATCH.join(',')) do |tweet|\n    puts \"\\nCaught the tweet -> #{tweet.text}\"\n\n    if should_re_tweet?(tweet)\n      rest_client.retweet tweet\n\n      puts \"[#{Time.now}] Retweeted successfully!\\n\"\n    end\n  end\nrescue StandardError => e\n  puts \"=========Error========\\n#{e.message}\"\n\n  puts \"[#{Time.now}] Waiting for 60 seconds ....\\n\"\n\n  sleep 60\nend\n```\n\n- The stream client live streams the tweets that match the configured hashtags, hence we loop through each tweet with `stream_client.filter(:track => HASHTAGS_TO_WATCH.join(','))`\n- If there is any error, we rescue it so that the Twitter bot doesn't stop due to the error. 
We then make the bot sleep for 60 seconds. It's just a cooldown period and, you're right, it's not strictly necessary; you can remove it if you want.\n- The `should_re_tweet?` method calls a bunch of other methods that check various conditions, so the bot knows whether it should retweet the tweet received from the twitter stream client.\n\n```ruby\ndef tweet?(tweet)\n  tweet.is_a?(Twitter::Tweet)\nend\n```\n\n- Checks that the object received from the stream is actually a tweet; the stream can also emit other kinds of objects, and the bot skips anything that isn't a `Twitter::Tweet`.\n\n```ruby\ndef retweet?(tweet)\n  tweet.retweet?\nend\n```\n\n- Checks whether the received tweet is itself a retweet rather than an original tweet; the bot skips the tweet if this method returns true.\n\n```ruby\ndef sensitive_tweet?(tweet)\n  tweet.possibly_sensitive?\nend\n```\n\n- Checks whether the tweet is flagged as possibly sensitive, i.e. content that is not suitable for the bot to retweet.\n\n### Step 8: Execute 'perform' method to run the service\n\nLet's add the following code at the very end of the service:\n\n```ruby\nTwitter::ReTweetService.new.perform\n```\n\n### Step 9: Run the bot\n\nFrom the command line in the project root, execute the ruby file and your bot should start retweeting:\n\n```cmd\n$ ruby app/services/twitter/re_tweet_service.rb\n```\n\nYayy! Now you can just sit back and watch your bot wreak havoc on Twitter with retweets.\n\n## Final code\n\n```ruby\nrequire 'rubygems'\nrequire 'bundler/setup'\n\nrequire 'twitter'\nrequire 'figaro'\nrequire 'pry-byebug'\n\nFigaro.application = Figaro::Application.new(\n  environment: 'production',\n  path: File.expand_path('config/application.yml')\n)\n\nFigaro.load\n\nmodule Twitter\n  class ReTweetService\n    attr_reader :config\n\n    def initialize\n      @config = twitter_api_config\n    end\n\n    def perform\n      rest_client = configure_rest_client\n      stream_client = configure_stream_client\n\n      while true\n        puts 'Starting to Retweet 3, 2, 1 ... 
NOW!'\n\n        re_tweet(rest_client, stream_client)\n      end\n    end\n\n    private\n\n    MAXIMUM_HASHTAG_COUNT = 10\n    HASHTAGS_TO_WATCH = %w[#rails #ruby #RubyOnRails]\n\n    def twitter_api_config\n      {\n        consumer_key: ENV['CONSUMER_KEY'],\n        consumer_secret: ENV['CONSUMER_SECRET'],\n        access_token: ENV['ACCESS_TOKEN'],\n        access_token_secret: ENV['ACCESS_TOKEN_SECRET']\n      }\n    end\n\n    def configure_rest_client\n      puts 'Configuring Rest Client'\n\n      Twitter::REST::Client.new(config)\n    end\n\n    def configure_stream_client\n      puts 'Configuring Stream Client'\n\n      Twitter::Streaming::Client.new(config)\n    end\n\n    def hashtags(tweet)\n      tweet_hash = tweet.to_h\n      extended_tweet = tweet_hash[:extended_tweet]\n\n      (extended_tweet && extended_tweet[:entities][:hashtags]) || tweet_hash[:entities][:hashtags]\n    end\n\n    def tweet?(tweet)\n      tweet.is_a?(Twitter::Tweet)\n    end\n\n    def retweet?(tweet)\n      tweet.retweet?\n    end\n\n    def allowed_hashtags?(tweet)\n      includes_allowed_hashtags = false\n\n      hashtags(tweet).each do |hashtag|\n        if HASHTAGS_TO_WATCH.map(&:upcase).include?(\"##{hashtag[:text]&.upcase}\")\n          includes_allowed_hashtags = true\n\n          break\n        end\n      end\n\n      includes_allowed_hashtags\n    end\n\n    def allowed_hashtag_count?(tweet)\n      hashtags(tweet)&.count <= MAXIMUM_HASHTAG_COUNT\n    end\n\n    def sensitive_tweet?(tweet)\n      tweet.possibly_sensitive?\n    end\n\n    def should_re_tweet?(tweet)\n      tweet?(tweet) && !retweet?(tweet) && allowed_hashtag_count?(tweet) && !sensitive_tweet?(tweet) && allowed_hashtags?(tweet)\n    end\n\n    def re_tweet(rest_client, stream_client)\n      stream_client.filter(:track => HASHTAGS_TO_WATCH.join(',')) do |tweet|\n        puts \"\\nCaught the tweet -> #{tweet.text}\"\n\n        if should_re_tweet?(tweet)\n          rest_client.retweet tweet\n\n          puts \"[#{Time.now}] Retweeted successfully!\\n\"\n        end\n      end\n    rescue StandardError => e\n      puts \"=========Error========\\n#{e.message}\"\n\n      puts \"[#{Time.now}] Waiting for 60 seconds ....\\n\"\n\n      sleep 60\n    end\n  end\nend\n\nTwitter::ReTweetService.new.perform\n```\n\n## Bonus: Running the bot on a remote server\n\nWith this, you should 
be able to run the bot on your local machine. If you want to keep it running even when your machine is powered off, you will have to deploy the code to a remote server.\n\nI am assuming that you have good knowledge of working with remote servers and have already configured one.\n\nIf you have already configured the server, you can run the bot on it with the following steps:\n\n### Step 1: Push code to git\n\nPush the current folder to a git repo so that we can download it to the server and use it to run the bot.\n\n### Step 2: Clone the project\n\nInside the server, `git clone` the project.\n\n### Step 3: Move inside the project folder\n\n```cmd\n$ cd ruby-twitter-bot # assuming your project is named ruby-twitter-bot\n```\n\n### Step 4: Create a new shell\n\n```cmd\n$ screen -S twitter-bot\n```\n\n### Step 5: Run the ruby twitter bot\n\n```cmd\n$ ruby app/services/twitter/re_tweet_service.rb\n```\n\n### Step 6: Detach the shell and return to the original shell\n\n```cmd\n# press Ctrl + a, then d, to detach the screen session\n```\n\nWith that, you should now have a bot that will run forever unless your server goes down or your twitter app has reached the tweet limit.\n\n## Conclusion\n\nNow that you have learned how to create a Twitter bot with Ruby, go and show your power on Twitter. I hope you use the bot for the good of the community.\n\nYou can view the full code and folder structure at Ruby Twitter Bot (Github)\n\nThanks for reading, see you in the next blog.\n\n**References:** Run ruby script in the background (Stack Overflow)\n\n**Image Credits:** Cover Image by Rock'n Roll Monkey on Unsplash"
        },
        {
          "id": "articles-fix-issue-while-installing-ruby-with-rbenv-in-m1-mac",
          "title": "[Fix] Issue while installing ruby with rbenv in M1 Mac",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby, rbenv",
          "url": "/articles/fix-issue-while-installing-ruby-with-rbenv-in-m1-mac/",
          "content": "## Error Message\n\nLet's reproduce the error first:\n\n1. Install ruby with rbenv\n\n    ```cmd\n      $ rbenv install 2.7.2\n    ```\n\n    _NOTE_: In my M1 Mac, I tried installing various ruby versions with rbenv like 2.5.0, 2.6.0, 2.7.0, 2.7.1, 2.7.2  and always ran into this same issue.\n\n2. Install error\n\n    You will get the following error message \n\n      - BUILD FAILED (macOS 11.2.3 using ruby-build 20210309)\n      - Inspect or clean up the working tree at `/var/folders/.....`\n\n      \n        \n          \n        \n        Error Message\n      \n\n## Fix\n\nInstead of running the normal ruby install command with rbenv, let's prepend it with `RUBY_CFLAGS=\"-Wno-error=implicit-function-declaration\"`, which will supress all error and warnings and let the ruby installation complete.\n\n```cmd\n$ RUBY_CFLAGS=\"-Wno-error=implicit-function-declaration\" rbenv install 2.5.0\n```\n\n## Conclusion\n\nTada! See the magic? Ruby should install without any issue now.\n\nAre you using any another method to fix the issue? Let us know in the comments below. \n\nThank you for reading!\n\n**References**\n\n- [Github] Installation issues with Arm Mac (M1 Chip)\n\n**Image Credits:** \n\nCover Image by Joshua Fuller from Unsplash"
        },
        {
          "id": "articles-setup-gatsby-with-strapi-in-m1-mac",
          "title": "Setup Gatsby with Strapi in M1 Mac",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "gatsby, strapi, tutorial",
          "url": "/articles/setup-gatsby-with-strapi-in-m1-mac/",
          "content": "Setting up Gatsby with Strapi is straight forward, but while I was setting it up in M1 Mac, I encountered some errors maybe due to software support issues as M1 is relatively new in the market right now. It took some time for me to figure out solutions and I want to save your day. Let's setup the Gatsby blog with Strapi now!\n\nApart from error and solutions, every step I mention in this blog is from the tutorial in the official strapi website.\n\n## Step 1: Install node v14\n\nDuring the time I wrote this blog, v15 was the latest node version but not supported by strapi.\n\nIn command line type the following:\n\n```cmd\nnvm install 14.16.0\n```\n\n## Step 2: Install yarn\n\n```cmd\nnpm install --global yarn\n```\n\n## Step 3: Create strapi-blog folder\n\nCreate a folder to store the backend (strapi) and frontend (gatsby) part of the blog. Following command will create a folder and move inside it.\n\n```cmd\ntake strapi-blog\n\n# above command is short for\n\nmkdir strapi-blog\ncd strapi-blog\n```\n\n## Step 4: Setup strapi with template\n\n```cmd\nyarn create strapi-app backend --quickstart --template https://github.com/strapi/strapi-template-blog\n```\n\nThere you go, we encounter our first error:\n\n### Error\n\nERR! sharp Prebuilt libvips 8.10.5 binaries are not yet available for darwin-arm64v8\n\n#### Solution\n\nRef: Fix from github repo of sharp library\n\n##### Install vips with brew\n\n```cmd\nbrew install vips\n```\n\nIt took me around 14 minutes for it to install in my machine.\n\n##### Install sharp\n\n```cmd\nnpm i sharp\n```\n\n##### Remove backend folder\n\nWhen you run command to setup strapi with template, it creates backend folder to add all related files and folder for strapi to it. 
When you run the command again, strapi will complain that the backend folder should be empty, so let's empty it before strapi can even complain.\n\n```cmd\nrm -rf backend\n```\n\n##### Run the command to setup strapi project again\n\n```cmd\nyarn create strapi-app backend --quickstart --template https://github.com/strapi/strapi-template-blog\n```\n\nThe command should run without any issue now.\n\n## Step 5: Setup admin user for strapi dashboard\n\nAs soon as the strapi setup is complete, the dashboard will open in the browser and you will see the sign up page to set up the admin user.\n\nAdd the necessary details, and we won't need to deal with strapi anymore, apart from running the server.\n\n## Step 6: Setup Gatsby\n\n### Install gatsby cli\n\n```cmd\nyarn global add gatsby-cli\n```\n\n### Move out of backend folder\n\nWe need to set up the gatsby project in a separate folder, so if you are inside the backend folder, you will first have to move out of it:\n\n```cmd\ncd ..\n```\n\n### Create gatsby project\n\nNow that you are inside strapi-blog, run the following command to set up a new gatsby project:\n\n```cmd\ngatsby new frontend\n```\n\n#### Error\n\nwasm code commit Allocation failed - process out of memory\n\n##### Solution\n\n###### Switch to node v15\n\nThis issue is specific to node v14, so switch to v15. 
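You can also pin the version for this project with a `.nvmrc` file (a common nvm convention, not part of the original tutorial), so that a bare `nvm use` in this folder switches to the pinned version automatically:\n\n```cmd\n# write the desired node version to .nvmrc in the project root\necho \"15.0.0\" > .nvmrc\n```\n\n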
If you haven't installed it on your machine yet, you can do so with the following commands:\n\n```cmd\n# install node v15\nnvm install 15.0.0\n\n# use v15 locally\nnvm use 15.0.0\n```\n\nInstallation took around 10 minutes on my machine, and I could hear the Mac's fan running loudly (lol).\n\n###### Run the command to setup gatsby project again\n\n```cmd\ngatsby new frontend\n```\n\nNow the project should set up without any issue.\n\n## Step 7: Create .env file inside your gatsby project root\n\n```cmd\n# move to gatsby project\ncd frontend\n\n# create .env file inside the project root\nnano .env\n```\n\nAdd the following to the file:\n\n```env\nGATSBY_ROOT_URL=http://localhost:8000\nAPI_URL=http://localhost:1337\n```\n\n## Step 8: Setup strapi for gatsby\n\n### Install gatsby-source-strapi\n\n```cmd\nyarn add gatsby-source-strapi\n```\n\n### Replace the content of gatsby-config.js with the following\n\n```js\nrequire(\"dotenv\").config({\n path: `.env`,\n});\n \nmodule.exports = {\n plugins: [\n   \"gatsby-plugin-react-helmet\",\n   {\n     resolve: `gatsby-source-filesystem`,\n     options: {\n       name: `images`,\n       path: `${__dirname}/src/images`,\n     },\n   },\n   {\n     resolve: \"gatsby-source-strapi\",\n     options: {\n       apiURL: process.env.API_URL || \"http://localhost:1337\",\n       contentTypes: [\"article\", \"category\", \"writer\"],\n       singleTypes: [`homepage`, `global`],\n       queryLimit: 1000,\n     },\n   },\n   \"gatsby-transformer-sharp\",\n   \"gatsby-plugin-sharp\",\n   {\n     resolve: `gatsby-plugin-manifest`,\n     options: {\n       name: \"gatsby-starter-default\",\n       short_name: \"starter\",\n       start_url: \"/\",\n       background_color: \"#663399\",\n       theme_color: \"#663399\",\n       display: \"minimal-ui\",\n       icon: `src/images/gatsby-icon.png`\n     },\n   },\n   \"gatsby-plugin-offline\",\n ],\n};\n\n```\n\n## Step 9: Replace the content of src/components/seo.js with the following:\n\n```js\nimport React 
from \"react\";\nimport PropTypes from \"prop-types\";\nimport { Helmet } from \"react-helmet\";\nimport { useStaticQuery, graphql } from \"gatsby\";\n\nconst SEO = ({ seo = {} }) => {\n  const { strapiGlobal } = useStaticQuery(query);\n  const { defaultSeo, siteName, favicon } = strapiGlobal;\n\n  // Merge default and page-specific SEO values\n  const fullSeo = { ...defaultSeo, ...seo };\n\n  const getMetaTags = () => {\n    const tags = [];\n\n    if (fullSeo.metaTitle) {\n      tags.push(\n        {\n          property: \"og:title\",\n          content: fullSeo.metaTitle,\n        },\n        {\n          name: \"twitter:title\",\n          content: fullSeo.metaTitle,\n        }\n      );\n    }\n    if (fullSeo.metaDescription) {\n      tags.push(\n        {\n          name: \"description\",\n          content: fullSeo.metaDescription,\n        },\n        {\n          property: \"og:description\",\n          content: fullSeo.metaDescription,\n        },\n        {\n          name: \"twitter:description\",\n          content: fullSeo.metaDescription,\n        }\n      );\n    }\n    if (fullSeo.shareImage) {\n      const imageUrl =\n        (process.env.GATSBY_ROOT_URL || \"http://localhost:8000\") +\n        fullSeo.shareImage.publicURL;\n      tags.push(\n        {\n          name: \"image\",\n          content: imageUrl,\n        },\n        {\n          property: \"og:image\",\n          content: imageUrl,\n        },\n        {\n          name: \"twitter:image\",\n          content: imageUrl,\n        }\n      );\n    }\n    if (fullSeo.article) {\n      tags.push({\n        property: \"og:type\",\n        content: \"article\",\n      });\n    }\n    tags.push({ name: \"twitter:card\", content: \"summary_large_image\" });\n\n    return tags;\n  };\n\n  const metaTags = getMetaTags();\n\n  return (\n    \n  );\n};\n\nexport default SEO;\n\nSEO.propTypes = {\n  title: PropTypes.string,\n  description: PropTypes.string,\n  image: PropTypes.string,\n  article: 
PropTypes.bool,\n};\n\nSEO.defaultProps = {\n  title: null,\n  description: null,\n  image: null,\n  article: false,\n};\n\nconst query = graphql`\n  query {\n    strapiGlobal {\n      siteName\n      favicon {\n        publicURL\n      }\n      defaultSeo {\n        metaTitle\n        metaDescription\n        shareImage {\n          publicURL\n        }\n      }\n    }\n  }\n`;\n\n```\n\n## Step 10: Style the blog\n\nCreate **src/assets/css/main.css** file and add the following:\n\n```css\na {\n  text-decoration: none !important;\n}\n\nh1 {\n  font-family: Staatliches !important;\n  font-size: 120px !important;\n}\n\n#category {\n  font-family: Staatliches !important;\n  font-weight: 500 !important;\n}\n\n#title {\n  letter-spacing: 0.4px !important;\n  font-size: 22px !important;\n  font-size: 1.375rem !important;\n  line-height: 1.13636 !important;\n}\n\n#banner {\n  margin: 20px !important;\n  height: 800px !important;\n}\n\n#editor {\n  font-size: 16px !important;\n  font-size: 1rem !important;\n  line-height: 1.75 !important;\n}\n\n.uk-navbar-container {\n  background: #fff !important;\n  font-family: Staatliches !important;\n}\n\nimg:hover {\n  opacity: 1 !important;\n  transition: opacity 0.25s cubic-bezier(0.39, 0.575, 0.565, 1) !important;\n}\n\n```\n\n## Step 11: Remove useless components/pages\n\nIn command line, type the following:\n\n```cmd\nrm src/components/header.js src/components/layout.css  src/pages/page-2.js src/pages/using-typescript.tsx\n```\n\n## Step 12: Replace the content of pages/index.js with the following code\n\n```js\nimport React from \"react\";\nimport { graphql, useStaticQuery } from \"gatsby\";\nimport Layout from \"../components/layout\";\nimport \"../assets/css/main.css\";\n\nconst IndexPage = () => {\n  const data = useStaticQuery(query);\n\n  return (\n    \n      \n        \n          {data.strapiHomepage.hero.title}\n        \n      \n    \n  );\n};\n\nconst query = graphql`\n  query {\n    strapiHomepage {\n      hero {\n  
      title\n      }\n      seo {\n        metaTitle\n        metaDescription\n        shareImage {\n          publicURL\n        }\n      }\n    }\n  }\n`;\n\nexport default IndexPage;\n\n```\n\n## Step 13: Replace the content of components/layout.js with the following code\n\n```js\nimport React from \"react\";\nimport PropTypes from \"prop-types\";\nimport { StaticQuery, graphql } from \"gatsby\";\nimport Seo from \"./seo\";\n\nconst Layout = ({ children, seo }) => (\n   (\n      \n        \n        {children}\n      \n    )}\n  />\n);\n\nLayout.propTypes = {\n  children: PropTypes.node.isRequired,\n};\n\nexport default Layout;\n\n```\n\n## Step 14: Create a ./src/components/nav.js with the following code\n\nFrom code editor, create a new file and add the following:\n\n```js\nimport React from \"react\";\nimport { Link, StaticQuery, graphql } from \"gatsby\";\n\nconst Nav = () => (\n   (\n      \n        \n          \n            \n              \n                \n                  {data.strapiGlobal.siteName}\n                \n              \n            \n            \n              \n                Categories\n              \n              \n                \n                  {data.allStrapiCategory.edges.map((category, i) => (\n                    \n                      \n                        {category.node.name}\n                      \n                    \n                  ))}\n                \n              \n            \n          \n        \n      \n    )}\n  />\n);\n\nexport default Nav;\n\n```\n\n## Step 15: Import and use Nav component inside components/layout.js\n\nReplace the code inside **components/layout.js** with the following:\n\n```js\nimport React from \"react\";\nimport PropTypes from \"prop-types\";\nimport { StaticQuery, graphql } from \"gatsby\";\nimport Nav from \"./nav\";\nimport Seo from \"./seo\";\n\nconst Layout = ({ children, seo }) => (\n   (\n      \n        \n        \n        {children}\n      \n    )}\n  
/>\n);\n\nLayout.propTypes = {\n  children: PropTypes.node.isRequired,\n};\n\nexport default Layout;\n\n```\n\n## Step 16: Blog listing UI\n\n### Install gatsby-image\n\n```cmd\nnpm install gatsby-image\n```\n\n_NOTE_\n\nTutorial in official strapi site is using **gatsby-image** which has already been deprecated but we will not update in this blog.\n\nDeprecation Note\n\n### Create a new file components/card.js and add following code\n\n```js\nimport React from \"react\";\nimport { Link } from \"gatsby\";\nimport Img from \"gatsby-image\";\n \nconst Card = ({ article }) => {\n return (\n   \n     \n       \n         \n       \n       \n         \n           {article.node.category.name}\n         \n         \n           {article.node.title}\n         \n         \n           \n           \n             \n               {article.node.author.picture && (\n                 \n               )}\n             \n             \n               \n                 {article.node.author.name}\n               \n             \n           \n         \n       \n     \n   \n );\n};\n \nexport default Card;\n\n```\n\n### Create a new file components/articles.js and add following code\n\n```js\nimport React from \"react\";\nimport Card from \"./card\";\n \nconst Articles = ({ articles }) => {\n const leftArticlesCount = Math.ceil(articles.length / 5);\n const leftArticles = articles.slice(0, leftArticlesCount);\n const rightArticles = articles.slice(leftArticlesCount, articles.length);\n \n return (\n   \n     \n       \n         {leftArticles.map((article, i) => {\n           return (\n             \n           );\n         })}\n       \n       \n         \n           {rightArticles.map((article, i) => {\n             return (\n               \n             );\n           })}\n         \n       \n     \n   \n );\n};\n \nexport default Articles;\n\n```\n\n### Replace the code inside pages/index.js with the following\n\n```js\nimport React from \"react\";\nimport { graphql, 
useStaticQuery } from \"gatsby\";\nimport Layout from \"../components/layout\";\nimport ArticlesComponent from \"../components/articles\";\nimport \"../assets/css/main.css\";\n \nconst IndexPage = () => {\n const data = useStaticQuery(query);\n \n return (\n   \n     \n       \n         {data.strapiHomepage.hero.title}\n         \n       \n     \n   \n );\n};\n \nconst query = graphql`\n query {\n   strapiHomepage {\n     hero {\n       title\n     }\n     seo {\n       metaTitle\n       metaDescription\n       shareImage {\n         publicURL\n       }\n     }\n   }\n   allStrapiArticle(filter: { status: { eq: \"published\" } }) {\n     edges {\n       node {\n         strapiId\n         slug\n         title\n         category {\n           name\n         }\n         image {\n           childImageSharp {\n             fixed(width: 800, height: 500) {\n               src\n             }\n           }\n         }\n         author {\n           name\n           picture {\n             childImageSharp {\n               fixed(width: 30, height: 30) {\n                 src\n               }\n             }\n           }\n         }\n       }\n     }\n   }\n }\n`;\n \nexport default IndexPage;\n\n```\n\n### Start gatsby app\n\nTo see what we have been building up till now, start the gatsby app and view the blog:\n\n```js\ngatsby develop\n```\n\n## Step 17: Article Page\n\n### Install react-markdown and react-moment\n\n```cmd\nyarn add react-markdown react-moment moment\n```\n\n### Replace the content inside gatsby.node.js with following code\n\n```js\nexports.createPages = async ({ graphql, actions }) => {\n   const { createPage } = actions;\n   const result = await graphql(\n     `\n       {\n         articles: allStrapiArticle {\n           edges {\n             node {\n               strapiId\n               slug\n             }\n           }\n         }\n       }\n     `\n   );\n    if (result.errors) {\n     throw result.errors;\n   }\n    // Create blog articles 
pages.\n   const articles = result.data.articles.edges;\n    const ArticleTemplate = require.resolve(\"./src/templates/article.js\");\n    articles.forEach((article, index) => {\n     createPage({\n       path: `/article/${article.node.slug}`,\n       component: ArticleTemplate,\n       context: {\n         slug: article.node.slug,\n       },\n     });\n   });\n };\n  module.exports.onCreateNode = async ({ node, actions, createNodeId }) => {\n   const crypto = require(`crypto`);\n    if (node.internal.type === \"StrapiArticle\") {\n     const newNode = {\n       id: createNodeId(`StrapiArticleContent-${node.id}`),\n       parent: node.id,\n       children: [],\n       internal: {\n         content: node.content || \" \",\n         type: \"StrapiArticleContent\",\n         mediaType: \"text/markdown\",\n         contentDigest: crypto\n           .createHash(\"md5\")\n           .update(node.content || \" \")\n           .digest(\"hex\"),\n       },\n     };\n     actions.createNode(newNode);\n     actions.createParentChildLink({\n       parent: node,\n       child: newNode,\n     });\n   }\n };\n\n```\n\n### Create a file src/templates/article.js with the following code\n\n```js\nimport React from \"react\";\nimport { graphql } from \"gatsby\";\nimport Img from \"gatsby-image\";\nimport Moment from \"react-moment\";\nimport Layout from \"../components/layout\";\nimport Markdown from \"react-markdown\";\n \nexport const query = graphql`\n query ArticleQuery($slug: String!) 
{\n   strapiArticle(slug: { eq: $slug }, status: { eq: \"published\" }) {\n     strapiId\n     title\n     description\n     content\n     publishedAt\n     image {\n       publicURL\n       childImageSharp {\n         fixed {\n           src\n         }\n       }\n     }\n     author {\n       name\n       picture {\n         childImageSharp {\n           fixed(width: 30, height: 30) {\n             src\n           }\n         }\n       }\n     }\n   }\n }\n`;\n \nconst Article = ({ data }) => {\n const article = data.strapiArticle;\n const seo = {\n   metaTitle: article.title,\n   metaDescription: article.description,\n   shareImage: article.image,\n   article: true,\n };\n \n return (\n   \n     \n       \n         {article.title}\n       \n \n       \n         \n           \n \n           \n \n           \n             \n               {article.author.picture && (\n                 \n               )}\n             \n             \n               \n                 By {article.author.name}\n               \n               \n                 {article.published_at}\n               \n             \n           \n         \n       \n     \n   \n );\n};\n \nexport default Article;\n\n```\n\nSince we edited **gatsby-node.js**, we will need to restart the gatsby server to view the new changes. You should be able to view the blog detail page now.\n\nIn command line where gatsby server is running, do the following: \n\n```cmd\n# stop the server\ncontrol + c\n\n# run the gatsby server again\ngatsby develop\n```\n\n## Step 18: Blog Category Page\n\n### Create a file src/templates/category.js with the following code\n\n```js\nimport React from \"react\";\nimport { graphql } from \"gatsby\";\nimport ArticlesComponent from \"../components/articles\";\nimport Layout from \"../components/layout\";\n \nexport const query = graphql`\n query Category($slug: String!) 
{\n   articles: allStrapiArticle(\n     filter: { status: { eq: \"published\" }, category: { slug: { eq: $slug } } }\n   ) {\n     edges {\n       node {\n         slug\n         title\n         category {\n           name\n         }\n         image {\n           childImageSharp {\n             fixed(width: 660) {\n               src\n             }\n           }\n         }\n         author {\n           name\n           picture {\n             childImageSharp {\n               fixed(width: 30, height: 30) {\n                 ...GatsbyImageSharpFixed\n               }\n             }\n           }\n         }\n       }\n     }\n   }\n   category: strapiCategory(slug: { eq: $slug }) {\n     name\n   }\n }\n`;\n \nconst Category = ({ data }) => {\n const articles = data.articles.edges;\n const category = data.category.name;\n const seo = {\n   metaTitle: category,\n   metaDescription: `All ${category} articles`,\n };\n \n return (\n   \n     \n       \n         {category}\n         \n       \n     \n   \n );\n};\n \nexport default Category;\n\n```\n\n### Replace the content inside gatsby.node.js with the following code\n\n```js\nexports.createPages = async ({ graphql, actions }) => {\n   const { createPage } = actions;\n   const result = await graphql(\n     `\n       {\n         articles: allStrapiArticle {\n           edges {\n             node {\n               strapiId\n               slug\n             }\n           }\n         }\n         categories: allStrapiCategory {\n           edges {\n             node {\n               strapiId\n               slug\n             }\n           }\n         }\n       }\n     `\n   );\n    if (result.errors) {\n     throw result.errors;\n   }\n    // Create blog articles pages.\n   const articles = result.data.articles.edges;\n   const categories = result.data.categories.edges;\n    const ArticleTemplate = require.resolve(\"./src/templates/article.js\");\n    articles.forEach((article, index) => {\n     createPage({\n      
 path: `/article/${article.node.slug}`,\n       component: ArticleTemplate,\n       context: {\n         slug: article.node.slug,\n       },\n     });\n   });\n    const CategoryTemplate = require.resolve(\"./src/templates/category.js\");\n    categories.forEach((category, index) => {\n     createPage({\n       path: `/category/${category.node.slug}`,\n       component: CategoryTemplate,\n       context: {\n         slug: category.node.slug,\n       },\n     });\n   });\n };\n  module.exports.onCreateNode = async ({ node, actions, createNodeId }) => {\n   const crypto = require(`crypto`);\n    if (node.internal.type === \"StrapiArticle\") {\n     const newNode = {\n       id: createNodeId(`StrapiArticleContent-${node.id}`),\n       parent: node.id,\n       children: [],\n       internal: {\n         content: node.content || \" \",\n         type: \"StrapiArticleContent\",\n         mediaType: \"text/markdown\",\n         contentDigest: crypto\n           .createHash(\"md5\")\n           .update(node.content || \" \")\n           .digest(\"hex\"),\n       },\n     };\n     actions.createNode(newNode);\n     actions.createParentChildLink({\n       parent: node,\n       child: newNode,\n     });\n   }\n };\n\n```\n\n### Restart the gatsby server to view new changes\n\nIn command line where gatsby server is running, do the following: \n\n```cmd\n# stop the server\ncontrol + c\n\n# run the gatsby server again\ngatsby develop\n```\n\nYou should now be able to view blogs listed by specific category.\n\n## Conclusion\n\nCongratulations, you have successfully setup a gatsby blog with strapi. There was a small issue that I encountered after setting up the blog; in blog listing page, featured images of blog were not loading properly while they were loading correctly in the page where blogs are listed by category. Let me know how it goes for you!\n\nThanks for reading, if you have any confusion or suggestion, please comment below. 
See you soon in the next blog!\n\n**References**\n\n- Strapi with Gatsby - Official Blog"
        },
        {
          "id": "articles-interact-with-mysql-server-using-mysql2-gem-part-4-perform-transactions",
          "title": "Interact with Mysql Server using mysql2 gem [Part 4] - Perform Transactions",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, mysql, tutorial",
          "url": "/articles/interact-with-mysql-server-using-mysql2-gem-part-4-perform-transactions/",
          "content": "This is the fourth part of the series where we create service to interact with mysql server in rails using mysql2 gem.\n\n## Others in series\n\n- Interact with MySQL Server using mysql2 gem [Part 1] - Select Operations\n- Interact with MySQL Server using mysql2 gem [Part 2] - Insert and Update Operations\n- Interact with MySQL Server using mysql2 gem [Part 3] - Prepared Statements\n\n## Requirements\n\n- [x] Service to connect with external mysql server\n- [x] Perform basic query: select, insert and update\n- [x] Prepared statement\n- [ ] Perform transactions\n- [ ] Perform join query\n\nIn previous three articles, we created a service, added methods to help us perform select, insert and update operations and also added method to help us in performing prepared statements. Today we will be looking at performing transactions in mysql server using mysql2 gem.\n\n## In this blog\n\nWe will be learning the following in this blog:\n\n- Perform transactions\n\n## Transaction\n\nA transaction helps us in performing multiple queries to database. Though each query is performed one by one, the concept of transaction is either perform all queries or none at all which means even if one query fails, changes made by all other queries will be undone from the database.\n\nTransaction is very helpful when we have to make sure that all queries are performed successfully. The most famous example for this is money transfer via bank, i.e. when one person transfers amount to another persons' account, amount from first account should be decreased and amount from second account should be increased. This can't be failed as this affects one/both person severely. In this case transaction is used to ensure that decrease and increase of amount is made on both side or transfer is failed as a whole.\n\n### Performing Transaction\n\nHere is what we will do for supporting transactions in our service:\n\n1. 
Accept a `transaction_attributes_array` parameter in both the `insert` and `update` methods. `transaction_attributes_array` is an array of hashes, each including the name of the table for the query, its primary column and, finally, the attribute hash needed to perform the operation.\n2. Create a new method `prepare_transaction_queries` which takes `transaction_attributes_array` as a param and returns an array of prepared queries.\n3. In `insert` and `update`, push the existing/main query onto the transaction queries array so that it is performed in the same transaction.\n4. Add a method `perform_transaction` which performs the transaction for the given queries.\n5. `perform_transaction` will then call another method, `transaction`, which wraps all queries inside **BEGIN** and **COMMIT** and executes them one by one. This is the standard way of performing transactions in MySQL. We also rescue and execute **ROLLBACK** in case any query in the array fails to execute.\n\n#### Code\n\n```ruby\nINSERT_QUERY_TYPE = 'insert'.freeze\nUPDATE_QUERY_TYPE = 'update'.freeze\n\ndef insert(attributes, transaction_attributes_array = [])\n  query = prepare_query(attributes, INSERT_QUERY_TYPE)\n\n  transaction_queries = prepare_transaction_queries(transaction_attributes_array, INSERT_QUERY_TYPE)\n\n  transaction_queries.push(query)\n\n  perform_mysql_operation do\n    perform_transaction(INSERT_QUERY_TYPE, transaction_queries)\n\n    puts 'Record inserted!'\n  end\nend\n\ndef update(id, attributes, transaction_attributes_array = [])\n  query = prepare_query(attributes, UPDATE_QUERY_TYPE)\n\n  transaction_queries = prepare_transaction_queries(transaction_attributes_array, UPDATE_QUERY_TYPE)\n\n  transaction_queries.push(query)\n\n  perform_mysql_operation do\n    perform_transaction(UPDATE_QUERY_TYPE, transaction_queries, id)\n\n    puts 'Record Updated!'\n  end\nend\n\nprivate\n\ndef prepare_insert_query(keys, transaction_table = nil)\n  columns = keys.join(', ')\n  
substituted_columns = keys.map { '?' }.join(', ')\n  table_name = transaction_table || table\n\n  \"INSERT INTO #{table_name} (#{columns}) VALUES (#{substituted_columns})\"\nend\n\ndef prepare_update_query(keys, transaction_table = nil, transaction_primary_column = nil)\n  columns = keys.map { |key| \"#{key} = ?\" }.join(', ')\n  table_name = transaction_table || table\n  primary_column_name = transaction_primary_column || primary_column\n\n  \"UPDATE #{table_name} SET #{columns} WHERE #{primary_column_name} = ?\"\nend\n\ndef primary_column_hash(query_type, primary_column, attributes)\n  return {} if primary_column.nil? || query_type == INSERT_QUERY_TYPE\n\n  column_hash = {}\n  primary_column_symbol = primary_column.to_sym\n\n  column_hash[primary_column_symbol] = attributes[primary_column_symbol]\n\n  {\n    **column_hash,\n    primary_column_name: primary_column\n  }\nend\n\ndef prepared_query_by_type(query_type, keys, transaction_table = nil, transaction_primary_column = nil)\n  if query_type == INSERT_QUERY_TYPE\n    prepare_insert_query(keys, transaction_table)\n  else\n    prepare_update_query(keys, transaction_table, transaction_primary_column)\n  end\nend\n\ndef prepare_query(attributes, type, transaction_table = nil, transaction_primary_column = nil)\n  raise 'Attributes cannot be empty' if attributes.empty?\n\n  keys = attributes.keys\n  values = attributes.values\n\n  {\n    prepared_query: prepared_query_by_type(type, keys, transaction_table, transaction_primary_column),\n    values: values\n  }\nend\n\ndef params_for_prepare_query(query_type, transaction_attribute)\n  attributes = transaction_attribute[:attributes]\n  transaction_table = transaction_attribute[:table]\n  default_params = [attributes, query_type, transaction_table]\n\n  return default_params if query_type == INSERT_QUERY_TYPE\n\n  transaction_primary_column = transaction_attribute[:primary_column]\n\n  default_params.push(transaction_primary_column)\nend\n\ndef 
prepare_transaction_queries(attributes_array, type)\n  attributes_array.map do |transaction_attribute|\n    params = params_for_prepare_query(type, transaction_attribute)\n\n    {\n      **primary_column_hash(type, transaction_attribute[:primary_column], transaction_attribute[:attributes]),\n      **prepare_query(*params)\n    }\n  end\nend\n\ndef transaction\n  raise ArgumentError, 'No block was given' unless block_given?\n\n  begin\n    mysql_client.query('BEGIN')\n    yield\n    mysql_client.query('COMMIT')\n  rescue StandardError => e\n    mysql_client.query('ROLLBACK')\n\n    raise e\n  end\nend\n\ndef perform_insert_transaction(transaction_queries)\n  transaction_queries.each do |transaction_query|\n    statement = mysql_client.prepare(transaction_query[:prepared_query])\n    statement.execute(*transaction_query[:values])\n  end\nend\n\ndef perform_update_transaction(transaction_queries, main_table_id)\n  transaction_queries.each do |transaction_query|\n    values = transaction_query[:values]\n    primary_column_name = transaction_query[:primary_column_name]\n    record_id = primary_column_name && transaction_query[primary_column_name.to_sym] || main_table_id\n    values.push(record_id)\n\n    statement = mysql_client.prepare(transaction_query[:prepared_query])\n    statement.execute(*values)\n  end\nend\n\ndef perform_transaction(query_type, transaction_queries, main_table_id = nil)\n  transaction do\n    if query_type == INSERT_QUERY_TYPE\n      perform_insert_transaction(transaction_queries)\n    else\n      perform_update_transaction(transaction_queries, main_table_id)\n    end\n  end\nend\n```\n\n#### Explanation\n\nThere's a lot of refactoring going on here. Don't get overwhelmed just yet, we will go through each one of them. We had to refactor existing methods to support transactions. Let's now go through each methods and understand the refactor as well as transactions process.\n\n1. 
`insert`, `update`\n\n   - `insert` and `update` take an additional param `transaction_attributes_array`, an array of hashes with the information required for each query in the transaction. The following happens inside these methods:\n   - `transaction_attributes_array` is sent to `prepare_transaction_queries`, which converts each transaction query to a prepared query and returns an array of prepared transaction queries.\n   - We push the main query onto the array since all queries have to be performed in the same transaction.\n   - Finally, we perform the transaction by calling the `perform_transaction` method with all the transaction queries.\n\n2. `prepare_transaction_queries`\n\n   - `prepare_transaction_queries` takes params `attributes_array` and `type`. `transaction_attributes_array` is passed as `attributes_array`, while the nature of the query, i.e. insert or update, is passed as `type`.\n   - Each transaction attribute is iterated over one by one to build the query required for the transaction.\n\n3. `params_for_prepare_query`\n\n   - `params_for_prepare_query` takes params `query_type` and `transaction_attribute`. `transaction_attribute` is a hash with the `attributes`, `table` and `primary_column` required for preparing a single query.\n   - If `query_type` is **insert**, the params returned are `[attributes, query_type, transaction_table]`, where `attributes` is a hash of attributes of the transaction query and `transaction_table` is the name of the table to perform the query on.\n   - If `query_type` is **update**, we also push `primary_column` onto the `default_params`. `primary_column` helps us specify the record we need to update. You can view the `prepare_update_query` method to see how `primary_column` is used for that purpose.\n\n4. 
`primary_column_hash`\n\n   - `primary_column_hash` receives params `query_type`, `primary_column` and `attributes`\n   - The params are the same as described for `params_for_prepare_query` above\n   - An empty hash is returned if the query type is `insert`; otherwise the primary column attribute of the transaction query is returned together with the name of the primary column under `primary_column_name`\n   - This is required for pushing the value of the primary column onto the other attribute values while updating the record. You can view the `perform_update_transaction` method to see how `primary_column_name` is used and the primary column value is pushed onto the other attribute values.\n\n5. `prepare_query`\n\n   - `prepare_query` takes additional params `transaction_table` and `transaction_primary_column`, required for preparing transaction queries based on the query type.\n\n6. `prepared_query_by_type`\n\n   - The responsibility of `prepared_query_by_type` is to call either `prepare_insert_query` or `prepare_update_query` based on the `query_type` param, i.e. **insert** or **update**, and return the prepared query for the transaction\n\n7. `prepare_insert_query`\n\n   - To support transactions, `prepare_insert_query` takes an additional param `transaction_table`\n   - `transaction_table` is the name of the table the query needs to be performed on.\n\n8. `prepare_update_query`\n\n   - `prepare_update_query` takes two additional params, `transaction_table` and `transaction_primary_column`, to support transactions\n   - `transaction_primary_column` is the column name of the primary key of the table the transaction needs to be performed on.\n\n9. `perform_transaction`\n\n   - `perform_transaction` takes three params: `query_type`, `transaction_queries` and `main_table_id`\n   - `transaction_queries` is the array of queries for the transaction.\n   - `main_table_id` is the id of the record for the main table. 
You can see how it is used in `perform_update_transaction`.\n\n10. `transaction`\n\n    - `transaction` takes a block and performs the **transaction**.\n    - **BEGIN** tells MySQL to begin a transaction for performing multiple queries against the database.\n    - **yield** runs the given block; inside it, each query in the array is executed one by one in a loop.\n    - Finally, **COMMIT** tells MySQL to commit the transaction and persist all of the changes.\n    - We rescue and roll back all the performed queries with **ROLLBACK** in case an error occurs, i.e. if even one query fails, all the other queries count as failed and nothing is persisted to the database\n\n11. `perform_insert_transaction`\n\n    - `perform_insert_transaction` takes the param `transaction_queries`\n    - Each query inside the transaction is prepared and executed one by one in a loop\n\n12. `perform_update_transaction`\n\n    - `perform_update_transaction` takes an additional param `main_table_id` apart from `transaction_queries`\n    - `main_table_id` is the id of a record in the main table of our service.\n    - As with insert, we process each query in a loop.\n    - We store all the values of the operation inside `values`\n    - If the query is not the main one, i.e. 
a related transaction query, we extract the name of its primary column, stored under the key **`primary_column_name`**, into the variable `primary_column_name`\n    - If the query is a related transaction query, we look up the value of its primary column via **`primary_column_name`**; if it is the main query, we fall back to `main_table_id`. The result is stored in the variable `record_id`\n    - We then push the id of the record onto the existing values\n    - Finally, we prepare the query and execute it against the database.\n\nPractically:\n\n`transaction_attributes_array` contains\n\n```ruby\n# For insert transactions\n[\n  {\n    table: 'users',\n    attributes: {\n      first_name: 'John',\n      last_name: 'Doe'\n    },\n    primary_column: 'id',\n  },\n  {\n    table: 'users',\n    attributes: {\n      first_name: 'Jane',\n      last_name: 'Doe'\n    },\n    primary_column: 'id',\n  }\n]\n\n# For update transactions\n[\n  {\n    table: 'users',\n    attributes: {\n      id: 115,\n      first_name: 'John'\n    },\n    primary_column: 'id',\n  },\n  {\n    table: 'users',\n    attributes: {\n      id: 116,\n      last_name: 'Doe'\n    },\n    primary_column: 'id',\n  }\n]\n```\n\n- As discussed in the last article, `prepare_query` converts the primary table attributes to a prepared statement.\n- We send **`transaction_attributes_array`** to `prepare_transaction_queries` to receive the array of queries.\n- This is what we will receive back, depending on the nature of the operation we are performing, i.e. insert or update\n\n  ```ruby\n    # insert\n    [\n      {\n        :prepared_query=>\"INSERT INTO users (first_name, last_name) VALUES (?, ?)\",\n        :values=>[\"John\", \"Doe\"]\n      },\n      {\n        :prepared_query=>\"INSERT INTO users (first_name, last_name) VALUES (?, ?)\", :values=>[\"Jane\", \"Doe\"]\n      }\n    ]\n\n    # update\n    [\n      {\n        :id => 115,\n        :primary_column_name => \"id\",\n        :prepared_query => \"UPDATE users SET id = ?, first_name = ? 
WHERE id = ?\",\n        :values => [115, \"John\"]\n      },\n      {\n        :id => 116,\n        :primary_column_name => \"id\",\n        :prepared_query => \"UPDATE users SET id = ?, last_name = ? WHERE id = ?\",\n        :values => [116, \"Doe\"]\n      }\n    ]\n  ```\n\n- Then we will push main query to the transaction queries since we will have to perform all queries in one transactions and roll all back if error occurs.\n- `perform_transaction` method wraps all queries in one single transaction\n- Finally all queries in the array are executed one by one and inserted or updated to and in mysql database using mysql2 gem.\n\n## Final Code\n\nIf you have been following the tutorial from part 1, you will have following in your service file:\n\n```ruby\nrequire 'mysql2'\n\nmodule MySqlServer\n  module Database\n    class Connect\n      INSERT_QUERY_TYPE = 'insert'.freeze\n      UPDATE_QUERY_TYPE = 'update'.freeze\n\n      attr_reader :mysql_client, :table, :primary_column\n\n      def initialize(table, primary_column)\n        @table = table\n        @primary_column = primary_column\n      end\n\n      def fetch_all\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT ce_id, ce_peername from #{table}\")\n\n          puts result.entries\n        end\n      end\n\n      def fetch_one(id)\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table} WHERE #{primary_column}=#{id}\")\n\n          puts result.entries\n        end\n      end\n\n      def insert(attributes, transaction_attributes_array = [])\n        query = prepare_query(attributes, INSERT_QUERY_TYPE)\n\n        transaction_queries = prepare_transaction_queries(transaction_attributes_array, INSERT_QUERY_TYPE)\n\n        transaction_queries.push(query)\n\n        perform_mysql_operation do\n          perform_transaction(INSERT_QUERY_TYPE, transaction_queries)\n\n          puts 'Record inserted!'\n        end\n      end\n\n      def 
update(id, attributes, transaction_attributes_array = [])\n        query = prepare_query(attributes, UPDATE_QUERY_TYPE)\n\n        transaction_queries = prepare_transaction_queries(transaction_attributes_array, UPDATE_QUERY_TYPE)\n\n        transaction_queries.push(query)\n\n        perform_mysql_operation do\n          perform_transaction(UPDATE_QUERY_TYPE, transaction_queries, id)\n\n          puts 'Record Updated!'\n        end\n      end\n\n      private\n\n      def connect_to_db\n        host = ENV['MYSQL_SERVER_IP']\n        database = ENV['MYSQL_DB_NAME']\n        username = ENV['MYSQL_USERNAME']\n        password = ENV['MYSQL_PASSWORD']\n\n        Mysql2::Client.new(username: username, password: password, database: database, host: host)\n      end\n\n      def perform_mysql_operation\n        raise ArgumentError, 'No block was given' unless block_given?\n\n        begin\n          @mysql_client = connect_to_db\n\n          yield\n        rescue StandardError => e\n          raise e\n        ensure\n          mysql_client&.close\n        end\n      end\n\n      def prepare_insert_query(keys, transaction_table = nil)\n        columns = keys.join(', ')\n        substituted_columns = keys.map { '?' }.join(', ')\n        table_name = transaction_table || table\n\n        \"INSERT INTO #{table_name} (#{columns}) VALUES (#{substituted_columns})\"\n      end\n\n      def prepare_update_query(keys, transaction_table = nil, transaction_primary_column = nil)\n        columns = keys.map { |key| \"#{key} = ?\" }.join(', ')\n        table_name = transaction_table || table\n        primary_column_name = transaction_primary_column || primary_column\n\n        \"UPDATE #{table_name} SET #{columns} WHERE #{primary_column_name} = ?\"\n      end\n\n      def primary_column_hash(query_type, primary_column, attributes)\n        return {} if primary_column.nil? 
|| query_type == INSERT_QUERY_TYPE\n\n        column_hash = {}\n        primary_column_symbol = primary_column.to_sym\n\n        column_hash[primary_column_symbol] = attributes[primary_column_symbol]\n\n        {\n          **column_hash,\n          primary_column_name: primary_column\n        }\n      end\n\n      def prepared_query_by_type(query_type, keys, transaction_table = nil, transaction_primary_column = nil)\n        if query_type == INSERT_QUERY_TYPE\n          prepare_insert_query(keys, transaction_table)\n        else\n          prepare_update_query(keys, transaction_table, transaction_primary_column)\n        end\n      end\n\n      def prepare_query(attributes, type, transaction_table = nil, transaction_primary_column = nil)\n        raise 'Attributes cannot be empty' if attributes.empty?\n\n        keys = attributes.keys\n        values = attributes.values\n\n        {\n          prepared_query: prepared_query_by_type(type, keys, transaction_table, transaction_primary_column),\n          values: values\n        }\n      end\n\n      def params_for_prepare_query(query_type, transaction_attribute)\n        attributes = transaction_attribute[:attributes]\n        transaction_table = transaction_attribute[:table]\n        default_params = [attributes, query_type, transaction_table]\n\n        return default_params if query_type == INSERT_QUERY_TYPE\n\n        transaction_primary_column = transaction_attribute[:primary_column]\n\n        default_params.push(transaction_primary_column)\n      end\n\n      def prepare_transaction_queries(attributes_array, type)\n        attributes_array.map do |transaction_attribute|\n          params = params_for_prepare_query(type, transaction_attribute)\n\n          {\n            **primary_column_hash(type, transaction_attribute[:primary_column], transaction_attribute[:attributes]),\n            **prepare_query(*params)\n          }\n        end\n      end\n\n      def transaction\n        raise ArgumentError, 'No block 
was given' unless block_given?\n\n        begin\n          mysql_client.query('BEGIN')\n          yield\n          mysql_client.query('COMMIT')\n        rescue StandardError => e\n          mysql_client.query('ROLLBACK')\n\n          raise e\n        end\n      end\n\n      def perform_insert_transaction(transaction_queries)\n        transaction_queries.each do |transaction_query|\n          statement = mysql_client.prepare(transaction_query[:prepared_query])\n          statement.execute(*transaction_query[:values])\n        end\n      end\n\n      def perform_update_transaction(transaction_queries, main_table_id)\n        transaction_queries.each do |transaction_query|\n          values = transaction_query[:values]\n          primary_column_name = transaction_query[:primary_column_name]\n          record_id = primary_column_name && transaction_query[primary_column_name.to_sym] || main_table_id\n          values.push(record_id)\n\n          statement = mysql_client.prepare(transaction_query[:prepared_query])\n          statement.execute(*values)\n        end\n      end\n\n      def perform_transaction(query_type, transaction_queries, main_table_id = nil)\n        transaction do\n          if query_type == INSERT_QUERY_TYPE\n            perform_insert_transaction(transaction_queries)\n          else\n            perform_update_transaction(transaction_queries, main_table_id)\n          end\n        end\n      end\n    end\n  end\nend\n\n```\n\nAfter this, our service should be able to perform all basic, prepared operations and transactions in and to the external mysql server using mysql2 gem. Next week we will learn how to perform join operations using mysql2 gem. Yes we will be joining a lot of tables next week and next article will be the final one in the series. Thank you and stay tuned!\n\n**Image Credits:** Cover Image by Pierre Borthiry on Unsplash"
        },
        {
          "id": "articles-interact-with-mysql-server-using-mysql2-gem-part-3-prepared-statements",
          "title": "Interact with Mysql Server using mysql2 gem [Part 3] - Prepared Statement",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, mysql, tutorial",
          "url": "/articles/interact-with-mysql-server-using-mysql2-gem-part-3-prepared-statements/",
          "content": "This is the third part of the series where we create service to interact with mysql server in rails using mysql2 gem.\n\n## Others in series\n\n- Interact with MySQL Server using mysql2 gem [Part 1] - Select Operations\n- Interact with MySQL Server using mysql2 gem [Part 2] - Insert and Update Operations\n- Interact with MySQL Server using mysql2 gem [Part 4] - Perform Transactions\n\n## Requirements\n\n- [x] Service to connect with external mysql server\n- [x] Perform basic query: select, insert and update\n- [ ] Prepared statement\n- [ ] Perform transactions\n- [ ] Perform join query\n\nIn previous two articles, we created a service and added methods to help us perform select, insert and update operations. Today we will be looking at performing prepared statements to mysql server using mysql2 gem.\n\n## In this blog\n\nWe will be learning the following in this blog:\n\n- Perform prepared statement\n\n## Prepared Statement\n\nFrom wikipedia:\n\n> In database management systems (DBMS), a prepared statement or parameterized statement is a feature used to execute the same or similar database statements repeatedly with high efficiency. Typically used with SQL statements such as queries or updates, the prepared statement takes the form of a template into which certain constant values are substituted during each execution.\n\nWhat it means for our service is we will replace the actual value in insert and update query with question mark(?) and send the actual values only the second time. Let's refactor the code.\n\n### Prepared Insert Query\n\nHere is what we will do for supporting prepared statements in our insert operation:\n\n1. Remove the method `format_insert_query` because it is dumping all attributes and values in single query while we need to use placeholder (?) and perform operation in two phases; one, prepare the query and two, send values to create in database.\n2. 
Create `prepare_query` method which will format the query as needed and provide us the hash with query and values.\n3. Update `insert` method to perform prepared statement.\n\n#### Code\n\n```ruby\ndef insert(attributes)\n  query = prepare_query(attributes)\n\n  perform_mysql_operation do\n    statement = mysql_client.prepare(query[:prepared_query])\n    statement.execute(*query[:values])\n\n    puts 'Record inserted!'\n  end\nend\n\nprivate\n\ndef prepare_query(attributes)\n  raise 'Attributes cannot be empty' if attributes.empty?\n\n  keys = attributes.keys\n  columns = keys.join(', ')\n  substituted_columns = keys.map { '?' }.join(', ')\n\n  prepared_query = \"INSERT INTO #{table} (#{columns}) VALUES (#{substituted_columns})\"\n\n  values = attributes.values\n\n  {\n    prepared_query: prepared_query,\n    values: values\n  }\nend\n```\n\n#### Explanation\n\n`prepare_query` is taking `attributes` hash parameter from `insert` method and returning hash with prepared query and values to insert to database. Following is happening inside the method:\n\n- Get column names by formatting key part of attributes\n- Format column names and add comma (,)\n- Format column names and add placeholder (?) then add comma (,)\n- Prepare insert query\n- Collect only values of attributes hash\n- Return a new hash with prepared query and values\n\nFollowing is happening inside `insert` method:\n\n- Call `prepare_query` which returns hash with prepared query and values needed for insert operation\n- Prepare query with `prepare` method provided by mysql2 gem\n- Insert record to database with `execute` method\n\nPractically:\n\n- `{first_name: 'John', last_name: 'Doe'}` will be received as `attributes` parameter, which will be sent to `prepare_query` to get hash having formatted query and values\n- Inside `prepare_query`, `columns` will have `\"first_name, last_name\"`, `substituted_columns` will have `\"?, ?\"` i.e. the number of values that will be inserted. 
If `table` was `users`, `prepared_query` will be `\"INSERT INTO users (first_name, last_name) VALUES (?, ?)\"` and `values` will have `['John', 'Doe']`\n- After receiving the hash from `prepare_query`, the `insert` method will prepare the query with the `prepare` method and insert the record into the database with the `execute` method.\n\n### Prepared Update Query\n\nInsert and update queries have only one difference once prepared, so we want to reuse the same `prepare_query` method used in the insert operation. To do that we will update the code and do the following:\n\n1. Remove the method `format_update_query`.\n2. Update the `prepare_query` method to support both insert and update operations.\n3. In `prepare_query`, add a `type` param to differentiate between insert and update operations.\n4. Extract the prepared statement for the insert operation into a new method `prepare_insert_query`, and add `prepare_update_query` for formatting the update query.\n5. Depending on the `type` param, call the related method that formats the prepared query.\n6. Update the `update` method to perform the prepared statement.\n\n#### Code\n\n```ruby\ndef update(id, attributes)\n  query = prepare_query(attributes, 'update')\n  values = query[:values]\n  values.push(id)\n\n  perform_mysql_operation do\n    statement = mysql_client.prepare(query[:prepared_query])\n    statement.execute(*values)\n\n    puts 'Record Updated!'\n  end\nend\n\nprivate\n\ndef prepare_insert_query(keys)\n  columns = keys.join(', ')\n  substituted_columns = keys.map { '?' }.join(', ')\n\n  \"INSERT INTO #{table} (#{columns}) VALUES (#{substituted_columns})\"\nend\n\ndef prepare_update_query(keys)\n  columns = keys.map { |key| \"#{key} = ?\" }.join(', ')\n\n  \"UPDATE #{table} SET #{columns} WHERE #{primary_column} = ?\"\nend\n\ndef prepare_query(attributes, type)\n  raise 'Attributes cannot be empty' if attributes.empty?\n\n  keys = attributes.keys\n\n  prepared_query = type == 'insert' ? 
prepare_insert_query(keys) : prepare_update_query(keys)\n\n  values = attributes.values\n\n  {\n    prepared_query: prepared_query,\n    values: values\n  }\nend\n```\n\n#### Explanation\n\nThe only change from `insert` to `update` is that it also takes `id` as a parameter. `id` tells us which existing record we want to update in the database. It gets the prepared query and values for updating the database; the concept is the same as `insert`, with the difference that the `id` value is appended to the values returned in the `prepare_query` hash.\n\nPractically:\n\n- If we provide `id=1` and the same `attributes` as the insert query, `prepare_query` will return the query `\"UPDATE users SET first_name = ?, last_name = ? WHERE id = ?\"` and values `['John', 'Doe']`\n- Since we also have a placeholder for `id`, we need to add the id to the values, so the values will now contain `['John', 'Doe', 1]`\n- After this, as with the insert operation, the query is first prepared and then the values are updated in the database.\n\n## Final Code\n\nIf you have been following the tutorial from part 1, you will have the following in your service file:\n\n```ruby\nrequire 'mysql2'\n\nmodule MySqlServer\n  module Database\n    class Connect\n      attr_reader :mysql_client, :table, :primary_column\n\n      def initialize(table, primary_column)\n        @table = table\n        @primary_column = primary_column\n      end\n\n      def fetch_all\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table}\")\n\n          puts result.entries\n        end\n      end\n\n      def fetch_one(id)\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table} WHERE #{primary_column}=#{id}\")\n\n          puts result.entries\n        end\n      end\n\n      def insert(attributes)\n        query = prepare_query(attributes, 'insert')\n\n        perform_mysql_operation do\n          statement = mysql_client.prepare(query[:prepared_query])\n          
statement.execute(*query[:values])\n\n          puts 'Record inserted!'\n        end\n      end\n\n      def update(id, attributes)\n        query = prepare_query(attributes, 'update')\n        values = query[:values]\n        values.push(id)\n\n        perform_mysql_operation do\n          statement = mysql_client.prepare(query[:prepared_query])\n          statement.execute(*values)\n\n          puts 'Record Updated!'\n        end\n      end\n\n      private\n\n      def connect_to_db\n        host = ENV['MYSQL_SERVER_IP']\n        database = ENV['MYSQL_DB_NAME']\n        username = ENV['MYSQL_USERNAME']\n        password = ENV['MYSQL_PASSWORD']\n\n        Mysql2::Client.new(username: username, password: password, database: database, host: host)\n      end\n\n      def perform_mysql_operation\n        raise ArgumentError, 'No block was given' unless block_given?\n\n        begin\n          @mysql_client = connect_to_db\n\n          yield\n        rescue StandardError => e\n          raise e\n        ensure\n          mysql_client&.close\n        end\n      end\n\n      def prepare_insert_query(keys)\n        columns = keys.join(', ')\n        substituted_columns = keys.map { '?' }.join(', ')\n\n        \"INSERT INTO #{table} (#{columns}) VALUES (#{substituted_columns})\"\n      end\n\n      def prepare_update_query(keys)\n        columns = keys.map { |key| \"#{key} = ?\" }.join(', ')\n\n        \"UPDATE #{table} SET #{columns} WHERE #{primary_column} = ?\"\n      end\n\n      def prepare_query(attributes, type)\n        raise 'Attributes cannot be empty' if attributes.empty?\n\n        keys = attributes.keys\n\n        prepared_query = type == 'insert' ? 
prepare_insert_query(keys) : prepare_update_query(keys)\n\n        values = attributes.values\n\n        {\n          prepared_query: prepared_query,\n          values: values\n        }\n      end\n    end\n  end\nend\n```\n\nAfter this, our service should be able to perform all basic and prepared operations against the external MySQL server. Next week we will learn to perform transaction operations, i.e. we will perform multiple queries and roll back all of them if there is an error in even one. Thank you and stay tuned!\n\n**Image Credits:** Cover Image by Ian Battaglia on Unsplash"
        },
        {
          "id": "articles-interact-with-mysql-server-using-mysql2-gem-part-2-insert-and-update-operations",
          "title": "Interact with MySQL Server using mysql2 gem [Part 2] - Insert and Update Operations",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, mysql, tutorial",
          "url": "/articles/interact-with-mysql-server-using-mysql2-gem-part-2-insert-and-update-operations/",
          "content": "This is the second part of the series where we create service to interact with mysql server in rails using mysql2 gem.\n\n## Others in series\n\n- Interact with MySQL Server using mysql2 gem [Part 1] - Select Operations\n- Interact with MySQL Server using mysql2 gem [Part 3] - Prepared Statements\n- Interact with MySQL Server using mysql2 gem [Part 4] - Perform Transactions\n\n## Requirements\n\n- [x] Service to connect with external mysql server\n- [ ] Perform basic query: select, insert and update\n- [ ] Prepared statement\n- [ ] Perform transactions\n- [ ] Perform join query\n\nIn previous blog, we created a service and also added method to perform `select` operations. Today we will be adding additional methods to help us perform insert and update operations to mysql server using mysql2 gem.\n\n## In this blog\n\nWe will be learning the following in this blog:\n\n- Perform insert query\n- Perform update query\n\n## Perform Insert Query\n\nInsert query is used to create new record in the database.\n\n### Code\n\n```ruby\n\ndef insert(attributes)\n  query = format_insert_query(attributes)\n\n  perform_mysql_operation do\n    mysql_client.query(query)\n\n    puts 'Record inserted!'\n  end\nend\n\nprivate\n\ndef format_insert_query(attributes)\n  raise 'Attributes cannot be empty' if attributes.empty?\n\n  columns = attributes.keys.join(',')\n\n  values = attributes.values.collect! { |value| \"'#{value}'\" }.join(',')\n\n  \"INSERT INTO #{table} (#{columns}) VALUES (#{values})\"\nend\n```\n\n### Explanation\n\n`format_insert_query` is taking `attributes` hash parameter from `insert` method. 
The following happens inside the method:\n\n- Get the column names by formatting the key part of the attributes param\n- Get the values to insert by formatting the value part of the attributes param\n- Construct and return an insert query\n\nThe following happens inside the `insert` method:\n\n- Call `format_insert_query` to get a query that can directly be used for the insert operation\n- Insert into the database\n\nPractically:\n\n- `{first_name: 'John', last_name: 'Doe'}` will be received as the `attributes` parameter, which will be sent to `format_insert_query` to get the formatted query\n- Inside `format_insert_query`, `columns` will have the value `\"first_name,last_name\"` (the key part of the `attributes` hash) and `values` will have `\"'John','Doe'\"` (the value part of the `attributes` hash). Lastly, if `table` was `users`, it will return `\"INSERT INTO users (first_name,last_name) VALUES ('John','Doe')\"`\n- Now the `insert` method will send the query to the server and the new record will be inserted into the database.\n\n## Perform Update Query\n\nAn update query updates an existing record in the database.\n\n### Code\n\n```ruby\ndef update(id, attributes)\n  query = format_update_query(id, attributes)\n\n  perform_mysql_operation do\n    mysql_client.query(query)\n\n    puts 'Record Updated!'\n  end\nend\n\nprivate\n\ndef format_update_query(id, attributes)\n  raise 'Attributes cannot be empty' if attributes.empty?\n\n  formatted_attributes = attributes.map { |key, value| \"#{key}='#{value}'\" }.join(',')\n\n  \"UPDATE #{table} SET #{formatted_attributes} WHERE #{primary_column}=#{id}\"\nend\n```\n\n### Explanation\n\nThe only change from `insert` to `update` is that `update` also takes an `id` parameter. `id` tells us which existing record we want to update in the database. 
It gets the formatted query and updates the database; the concept is the same as `insert`, with the only change being the query itself.\n\n`format_update_query` differs slightly from `format_insert_query`; it converts `attributes` differently. Let's see that with the practical example below.\n\n- If we provide `id=1` and the same `attributes` as in the insert query, `format_update_query` will return `\"UPDATE users SET first_name='John',last_name='Doe' WHERE id=1\"`\n- Now the `update` method will send the query to the server and update the record with `id=1` in the database.\n\n## Final Code\n\nIf you have been following the tutorial from part 1, you will have the following in your service file:\n\n```ruby\nrequire 'mysql2'\n\nmodule MySqlServer\n  module Database\n    class Connect\n      attr_reader :mysql_client, :table, :primary_column\n\n      def initialize(table, primary_column)\n        @table = table\n        @primary_column = primary_column\n      end\n\n      def fetch_all\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table}\")\n\n          result.entries\n        end\n      end\n\n      def fetch_one(id)\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table} WHERE #{primary_column}=#{id}\")\n\n          result.entries\n        end\n      end\n\n      def insert(attributes)\n        query = format_insert_query(attributes)\n\n        perform_mysql_operation do\n          mysql_client.query(query)\n\n          puts 'Record inserted!'\n        end\n      end\n\n      def update(id, attributes)\n        query = format_update_query(id, attributes)\n\n        perform_mysql_operation do\n          mysql_client.query(query)\n\n          puts 'Record Updated!'\n        end\n      end\n\n      private\n\n      def connect_to_db\n        host = ENV['MYSQL_SERVER_IP']\n        database = ENV['MYSQL_DB_NAME']\n        username = ENV['MYSQL_USERNAME']\n        password = ENV['MYSQL_PASSWORD']\n\n        Mysql2::Client.new(username: username, password: password, database: database, host: host)\n      end\n\n      def perform_mysql_operation\n        raise ArgumentError, 'No block was given' unless block_given?\n\n        begin\n          @mysql_client = connect_to_db\n\n          yield\n        rescue StandardError => e\n          raise e\n        ensure\n          mysql_client&.close\n        end\n      end\n\n      def format_insert_query(attributes)\n        raise 'Attributes cannot be empty' if attributes.empty?\n\n        columns = attributes.keys.join(',')\n\n        values = attributes.values.collect! { |value| \"'#{value}'\" }.join(',')\n\n        \"INSERT INTO #{table} (#{columns}) VALUES (#{values})\"\n      end\n\n      def format_update_query(id, attributes)\n        raise 'Attributes cannot be empty' if attributes.empty?\n\n        formatted_attributes = attributes.map { |key, value| \"#{key}='#{value}'\" }.join(',')\n\n        \"UPDATE #{table} SET #{formatted_attributes} WHERE #{primary_column}=#{id}\"\n      end\n    end\n  end\nend\n```\n\nAfter this, our service should be able to perform basic queries on the external MySQL server. Next week we will learn how to run queries with prepared statements, which help us avoid SQL injection issues.\n\n**Image Credits:** Cover Image by Kelvin Ang on Unsplash"
        },
        {
          "id": "articles-interact-with-mysql-server-using-mysql2-gem-part-1-select-operations",
          "title": "Interact with MySQL Server using mysql2 gem [Part 1] - Performing select operations",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, mysql, tutorial",
          "url": "/articles/interact-with-mysql-server-using-mysql2-gem-part-1-select-operations/",
          "content": "Rails has made our lives easier. If we are talking in terms of querying the database, Active Record has got us covered. But what if we had to communicate with an external database?\n\nRecently, in one of the projects I worked on, I had to perform insert, update, select, and other queries against an external MariaDB server. I figured it would be much easier in the long term if I created a service that works like an ORM to perform the queries I wanted.\n\nThe service takes `params` as input, passed from the controller to the model and finally to our service. If you are not familiar with `params`, it is a hash of attributes used to create or update records in Rails.\n\n## Others in series\n\n- Interact with MySQL Server using mysql2 gem [Part 2] - Insert and Update Operations\n- Interact with MySQL Server using mysql2 gem [Part 3] - Prepared Statements\n- Interact with MySQL Server using mysql2 gem [Part 4] - Perform Transactions\n\n## Skills required to follow the tutorial\n\nIntermediate in:\n\n- Rails\n- SQL\n\n## Requirements\n\n- Service to connect with external mysql server\n- Perform basic query: select, insert and update\n- Prepared statement\n- Perform transactions\n- Perform join query\n\n## In this blog\n\nOur requirement list is very long, so we will split this blog into various parts. We will be looking at the following requirements in this one:\n\n- Service to connect with external mysql server\n- Perform basic query: select\n\n## Service to connect with external mysql server\n\nWe will be using the mysql2 gem for our purpose. 
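If the gem is not already in your bundle, add it to the Gemfile first and run `bundle install`; the version constraint below is only a suggestion, use whatever fits your app:\n\n```ruby\n# Gemfile\ngem 'mysql2', '~> 0.5'\n```\n\n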
Let's first create a service to connect with the external MySQL server.\n\nCreate a file **connect.rb** inside `lib/my_sql_server/database` and add the following to it.\n\n### Code\n\n```ruby\nrequire 'mysql2'\n\nmodule MySqlServer\n  module Database\n    class Connect\n      attr_reader :mysql_client\n\n      private\n\n      def connect_to_db\n        host = ENV['MYSQL_SERVER_IP']\n        username = ENV['MYSQL_USERNAME']\n        password = ENV['MYSQL_PASSWORD']\n        database = ENV['MYSQL_DB_NAME']\n\n        Mysql2::Client.new(username: username, password: password, database: database, host: host)\n      end\n\n      def perform_mysql_operation\n        raise ArgumentError, 'No block was given' unless block_given?\n\n        begin\n          @mysql_client = connect_to_db\n\n          yield\n        rescue StandardError => e\n          raise e\n        ensure\n          mysql_client&.close\n        end\n      end\n    end\n  end\nend\n```\n\n### Explanation\n\nHere, we create a service with a private method `connect_to_db` that connects to our external MySQL database. We use the following environment variables from application.yml:\n\n- host: IP address of the external mysql server\n- username: User of the database\n- password: Database password\n- database: Database name\n\nIn `perform_mysql_operation`, for security reasons, we make sure the connection to the external database is closed once all query operations are completed.\n\n## Perform basic query: select\n\n### Select query\n\nA select query lets us fetch rows from our database.\n\n#### Select all\n\n##### Code\n\n```ruby\nclass Connect\n  attr_reader :mysql_client, :table\n\n  def initialize(table)\n    @table = table\n  end\n\n  def fetch_all\n    perform_mysql_operation do\n      result = mysql_client.query(\"SELECT * from #{table}\")\n\n      result.entries\n    end\n  end\nend\n```\n\n##### Explanation\n\nWe initialize the `table` variable; this is the name of the table we want to perform queries on. 
We add it to the initializer so we can use the service with any table we want; it lets our code stay dynamic and flexible.\n\nThe `fetch_all` method executes a query to fetch all records from the external MySQL server. Inside the method, we use `perform_mysql_operation`, which accepts a block of our code, catches errors, and ensures the connection is closed after the query is completed.\n\nWe save the result to `result`, which is an instance of a mysql2 result class. To get the actual rows, we use the `entries` method.\n\n#### Select one\n\n##### Code\n\n```ruby\nclass Connect\n  attr_reader :mysql_client, :table, :primary_column\n\n  def initialize(table, primary_column)\n    @table = table\n    @primary_column = primary_column\n  end\n\n  def fetch_one(id)\n    perform_mysql_operation do\n      result = mysql_client.query(\"SELECT * from #{table} WHERE #{primary_column}=#{id}\")\n\n      result.entries\n    end\n  end\nend\n```\n\n##### Explanation\n\nWe have added `primary_column` to our initializer; this is the column name of the primary key in the table. Although we normally use `id` as the primary key, that won't always be the case. The primary key can have any name in a real project, so we handle that with `primary_column`.\n\n`fetch_one` fetches a single record from the table. We pass `id` as the param, which should be the id of the record we want to fetch. 
We use a `WHERE` condition so that we only fetch the record with that particular id.\n\n## Final Code\n\n```ruby\nrequire 'mysql2'\n\nmodule MySqlServer\n  module Database\n    class Connect\n      attr_reader :mysql_client, :table, :primary_column\n\n      def initialize(table, primary_column)\n        @table = table\n        @primary_column = primary_column\n      end\n\n      def fetch_all\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table}\")\n\n          result.entries\n        end\n      end\n\n      def fetch_one(id)\n        perform_mysql_operation do\n          result = mysql_client.query(\"SELECT * from #{table} WHERE #{primary_column}=#{id}\")\n\n          result.entries\n        end\n      end\n\n      private\n\n      def connect_to_db\n        host = ENV['MYSQL_SERVER_IP']\n        username = ENV['MYSQL_USERNAME']\n        password = ENV['MYSQL_PASSWORD']\n        database = ENV['MYSQL_DB_NAME']\n\n        Mysql2::Client.new(username: username, password: password, database: database, host: host)\n      end\n\n      def perform_mysql_operation\n        raise ArgumentError, 'No block was given' unless block_given?\n\n        begin\n          @mysql_client = connect_to_db\n\n          yield\n        rescue StandardError => e\n          raise e\n        ensure\n          mysql_client&.close\n        end\n      end\n    end\n  end\nend\n```\n\nIn this part, we created a service that connects to an external MySQL server and performs basic select operations. In the next part, we will learn how to perform basic insert and update operations.\n\n**Image Credits:** Cover Image by fabio on Unsplash"
        },
        {
          "id": "articles-fix-missing-top-level-class-documentation-comment-rubocop",
          "title": "[Fix] Missing top level class documentation comment Rubocop",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, rubocop, lint",
          "url": "/articles/fix-missing-top-level-class-documentation-comment-rubocop/",
          "content": "RuboCop is one of the best ways to enforce best practices in a Rails project. While working on a project with RuboCop enabled, it's normal to stumble upon the warning: **Missing top-level class documentation comment. [Style/Documentation]**. When this happens, we have three options to fix or disable the warning.\n\n## Warning\n\n- Missing top-level class documentation comment. [Style/Documentation]\n\n## Options Available for Fix\n\nYou can disable or fix this warning using any of these three options:\n\n1. Disable the cop in the whole project\n2. Disable the cop in only one class\n3. Add a comment just above the class declaration\n\n### Option 1: Disable the cop in the whole project\n\nMost of the classes we write are self-describing, meaning as a developer you can easily make sense of what the class is doing. Normally I don't find this rule very useful, so most of the time I disable it in the whole project. Add the following to your configuration file to disable the cop project-wide:\n\n```yml\n# .rubocop.yml\n\nStyle/Documentation:\n  Enabled: false\n```\n\n### Option 2: Disable the cop in only one class\n\nIf you feel this cop is important in your project and don't want to disable it in the configuration file, you can disable it in only one class as required.\n\n```ruby\n# app/models/user.rb\n\n# rubocop:disable Style/Documentation\nclass User\nend\n```\n\n### Option 3: Add a comment just above the class declaration\n\nYou can also fix the warning by adding a documentation comment above the class declaration.\n\n```ruby\n# app/models/ftp_service.rb\n\n# Service to download ftp files from the server\nclass FtpService\nend\n```\n\nThough this article is specific to resolving **Missing top-level class documentation comment. [Style/Documentation]**, the first two options work the same way for any other warning RuboCop throws; just substitute the cop name.\n\nDid I miss any option that you are using? Let me know in the comments below.\n\n**Image Credits:** Cover Image by Matt Popovich on Unsplash"
        },
        {
          "id": "articles-action-mailbox-with-postfix-part-2",
          "title": "Setup Action Mailbox with Postfix - Part 2",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, tutorial, web development",
          "url": "/articles/action-mailbox-with-postfix-part-2/",
          "content": "_NOTE_: This article was first posted on The Dev Post.\n\nThis is the second part of a two-part tutorial on setting up Action Mailbox with Postfix. In this part, we will configure Postfix in the production server to forward incoming emails to our Rails app so Action Mailbox can process them.\n\nIf you haven't read the first part, where we set up Action Mailbox and test it in development, you can read it here.\n\n## You should have\n\n- Postfix configured in production server (same server as your rails app)\n- Existing app built with rails 6\n- Ruby with rbenv setup\n- Patience\n\n## Steps\n\nLet's log in to our production server first.\n\n### Step 1: Create bash script\n\nCreate a script inside `/usr/local/bin/` to forward incoming emails to our rails app\n\n```shell\n$ nano email_forwarder.sh\n```\n\nAdd the following to the script\n\n```shell\n#!/bin/sh\nexport HOME=YOUR_HOME_PATH\nexport PATH=YOUR_PATH\nexport RBENV_ROOT=YOUR_RBENV_PATH\n\ncd /path/to/your/project && bin/rails action_mailbox:ingress:postfix URL='https://truemark.com.np/rails/action_mailbox/relay/inbound_emails' INGRESS_PASSWORD='YOUR_INGRESS_PASSWORD'\n```\n\nReplace the values of `HOME`, `PATH`, `RBENV_ROOT`, `URL` and `INGRESS_PASSWORD` as described below:\n\n- Copy your home directory for **HOME**\n\n`cd` and copy what you get from the `pwd` command\n\n```shell\n$ cd\n$ pwd\n```\n\n- Copy the output of `echo $PATH` and `which rbenv` for **PATH** and **RBENV_ROOT** respectively\n\n```shell\n$ echo $PATH\n$ which rbenv\n```\n\n- Copy the password you previously added to the `credentials.yml` file or your ENV file, as described in part 1, for **INGRESS_PASSWORD**\n\nFor **URL**, if your application lived at `https://example.com`, the full command would look like this:\n\n`bin/rails action_mailbox:ingress:postfix URL=https://example.com/rails/action_mailbox/relay/inbound_emails INGRESS_PASSWORD=YOUR_STRONG_PASSWORD`\n\n### Step 2: Configure Postfix to Pipe Incoming emails to script\n\nWe will follow the 
steps as described here.\n\n- Create `/etc/postfix/virtual_aliases` to add a catch-all alias; **localuser** needs to be an existing local user:\n\n```file\n# /etc/postfix/virtual_aliases\n@mydomain.tld   localuser@mydomain.tld\n```\n\n- Create `/etc/postfix/transport` to add a transport mapping. \"forward_to_rails\" can be whatever you want; it will be used later in `master.cf`\n\n```file\n# /etc/postfix/transport\nmydomain.tld    forward_to_rails:\n```\n\n- Next, both transport and virtual_aliases need to be compiled into Berkeley DB files:\n\n```shell\n$ sudo postmap /etc/postfix/virtual_aliases\n$ sudo postmap /etc/postfix/transport\n```\n\n- Add the transport to `/etc/postfix/master.cf`\n\n```file\n# /etc/postfix/master.cf\nforward_to_rails   unix  -       n       n       -       -       pipe\n  flags=Xhq user=deploy:deploy argv=/usr/local/bin/email_forwarder.sh\n  ${nexthop} ${user}\n```\n\nWe should specify **user** so the script is run by that user and not by postfix or nobody. The format is `user=user:group`, so `user=deploy:deploy` means user deploy, group deploy.\n\n- Add the following to `/etc/postfix/main.cf`\n\n```file\n# /etc/postfix/main.cf\ntransport_maps = hash:/etc/postfix/transport\nvirtual_alias_maps = hash:/etc/postfix/virtual_aliases\n```\n\nYou can view the postfix log with `tail -f /var/log/mail.log`.\n\nYou should now have everything needed to receive email in your rails app. Test it with any of your email providers; just send an email to `email@your-configured-domain.com` and check if it shows up in the log. If you have any comments or suggestions, please let me know in the comments below.\n\n## Similar Articles\n\nIf you are interested in seeing how this same process can be accomplished with other ingress options, you can check the articles below:\n\n- Action Mailbox with SendGrid\n- Deploy Action Mailbox To Postmark [External Link] from Cody Norman\n\n**References:** Action Mailbox, Pipe incoming mails to script\n\n**Image Credits:** Cover Image by Clker-Free-Vector-Images from Pixabay"
        },
        {
          "id": "articles-action-mailbox-with-postfix-part-1",
          "title": "Setup Action Mailbox with Postfix - Part 1",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, tutorial, web development",
          "url": "/articles/action-mailbox-with-postfix-part-1/",
          "content": "_NOTE_: This article was first posted on The Dev Post.\n\nThis is the first part of a two-part tutorial on setting up Action Mailbox with Postfix. In this part, we will implement Action Mailbox and test it in development.\n\nIf you are only looking to configure Postfix in the production server to pipe emails, you can read the second part here.\n\nRails 6 was released with many awesome features, and Action Mailbox is one of those that have come to make life easier. From the official Action Mailbox guide:\n\n> Action Mailbox routes incoming emails to controller-like mailboxes for processing in Rails. It ships with ingresses for Mailgun, Mandrill, Postmark, and SendGrid. You can also handle inbound mails directly via the built-in Exim, Postfix, and Qmail ingresses.\n\nSo basically, Action Mailbox can be used to forward all incoming emails to your Rails app and process them further as required, like storing attachments, creating records in your db from the email body, and many more.\n\n## Skills required to follow the tutorial\n\nIntermediate:\n\n- Rails\n- Linux skills to work with commands in the server where your app has been deployed\n\n## Requirements\n\n- Setup Action Mailbox with relay option for Postfix\n- Receive incoming emails through relay (Postfix)\n- Pipe Postfix to forward all incoming emails to our shell script\n- Process Email in the mailbox as required\n\n## Resources Already Available\n\n- Tutorial to implement and test action mailbox in development.\n- Some questions in Stack Overflow but without required answers for our implementation! 
Frustrating!\n\n## You should have\n\n- Existing app built with rails 6\n\n## Steps\n\nFirst of all, we will set up Action Mailbox and test it on our local machine.\n\n### Step 1: Setup action mailbox\n\n- Install the migrations needed for InboundEmail and ensure Active Storage is set up:\n\n```shell\n$ rails action_mailbox:install\n$ rails db:migrate\n```\n\n### Step 2: Ingress Configuration\n\nWe will be configuring Postfix among the various available options.\n\n- Tell Action Mailbox to accept emails from an SMTP relay:\n\n```ruby\n# config/environments/production.rb\nconfig.action_mailbox.ingress = :relay\n```\n\n### Step 3: Generate Password for authenticating requests\n\nGenerate a strong password that Action Mailbox can use to authenticate requests to the relay ingress.\n\nUse `rails credentials:edit` to add the password to your application's encrypted credentials under `action_mailbox.ingress_password`, where Action Mailbox will automatically find it:\n\n```yml\naction_mailbox:\n  ingress_password: YOUR_STRONG_PASSWORD\n```\n\nIf you are using the **nano** editor, you can edit credentials with the following command:\n\n```shell\n$ EDITOR=\"nano\" rails credentials:edit\n```\n\nAlternatively, you can also provide the password in the `RAILS_INBOUND_EMAIL_PASSWORD` environment variable. 
If you are using the `figaro` gem, it is as easy as:\n\n```yml\nRAILS_INBOUND_EMAIL_PASSWORD: 'YOUR_STRONG_PASSWORD'\n```\n\n### Step 4: Setup a mailbox\n\nNow we should set up a mailbox that will process all incoming emails as we require.\n\n- Generate a new mailbox\n\n```shell\n$ bin/rails generate mailbox forwards\n```\n\nThis will create `forwards_mailbox` inside `app/mailboxes`\n\n```ruby\n# app/mailboxes/forwards_mailbox.rb\nclass ForwardsMailbox < ApplicationMailbox\n  def process\n  end\nend\n```\n\n### Step 5: Route incoming emails to the mailbox\n\n- Accept a single email domain\n\n```ruby\n# app/mailboxes/application_mailbox.rb\nclass ApplicationMailbox < ActionMailbox::Base\n  routing /@email-domain.com/i => :forwards\nend\n```\n\n- Accept multiple email domains\n\n```ruby\n# app/mailboxes/application_mailbox.rb\nclass ApplicationMailbox < ActionMailbox::Base\n  routing /@(email-domain|another-email-domain).com/i => :forwards\nend\n```\n\nThis regex matching tells the application mailbox to forward or route all emails coming from `@email-domain.com` to our `forwards_mailbox`. For e.g. if we configure it to be `/.*@gmail.com/i` and our rails app receives email to `john-doe@gmail.com`, since this email matches with the pattern `@gmail.com`, it will be forwarded to our `forwards_mailbox` where we can further process it.\n\nNote: Your mailbox name should match the name you've given it in the routing params i.e. `forwards` will route to `forwards_mailbox`.\n\n### Step 6: Test in development\n\nFor testing in development, Action Mailbox provides a UI to test inbound emails in the development environment. To access this, fire up the Rails server first\n\n```shell\n$ rails s\n```\n\nNow go to `http://localhost:3000/rails/conductor/action_mailbox/inbound_emails` and click on `Deliver new inbound email`. Fill in all required details and then click `Deliver inbound email`. Ohh wait! 
before that, let's add `byebug` to our `process` method so we know Action Mailbox is actually forwarding our emails to the right place.\n\n```ruby\n# app/mailboxes/forwards_mailbox.rb\nclass ForwardsMailbox < ApplicationMailbox\n  def process\n    byebug\n  end\nend\n```\n\nNow deliver the inbound email from the conductor UI and the request should stop at the `byebug` breakpoint inside `process`, confirming the mailbox is receiving our emails. Once everything works in development, you can configure Postfix in the production server by following part 2 here.\n\nIf you have any confusion or suggestions, please let me know in the comment section below.\n\n## Similar Articles\n\nIf you are interested in seeing how this same process can be accomplished with other ingress options, you can check the articles below:\n\n- Action Mailbox with SendGrid\n- Deploy Action Mailbox To Postmark [External Link] from Cody Norman\n\n**References:** Action Mailbox\n\n**Image Credits:** Cover Image by Muhammad Ribkhan from Pixabay"
        },
        {
          "id": "articles-reset-password-react-rails",
          "title": "Reset password in react and rails app with devise",
          "collection": {
            "label": "articles",
            "name": "Posts"
          },
          "categories": "articles",
          "tags": "ruby on rails, reactjs, tutorial, web development",
          "url": "/articles/reset-password-react-rails/",
          "content": "_NOTE_: This article was first posted on Truemark Blog\n\nRecently, when I was working on a project, I was assigned the task of implementing password reset for a Rails/React app set up with Devise. As usual, I searched on Google; there were a lot of tutorials for applications built with full-stack Rails, but I couldn't find any tutorial implementing this particular feature with React as the frontend. So, after solving the problem, I decided to write my own so that it will be easier for other developers who are looking for a way to implement it with React, like I was assigned to.\n\nAuthentication is a must when your app is running in a real environment, so that it is only accessible to those who are authorized to use it. When we are talking about authentication in Rails, Devise is our go-to gem, providing easy and flexible authentication. We won't be going into depth about Devise today. You can always learn it from other sources, as tutorials for it are readily available.\n\n## Skills required to follow the tutorial\n\nBasics of\n\n- Rails\n- React\n- Devise\n\n## Requirements For the Features\n\n- When a user receives a password reset instruction email, they should be redirected to the React app.\n- Email design should be changed to what was given by the designer.\n\n## What was Available?\n\n- Tutorial to implement reset password feature in the Rails app only.\n- Plain and simple email design provided by Devise.\n\nIf you have worked with Devise, then you must be aware that when you initialize Devise in your Rails app, it creates multiple routes and controllers to let you handle most user-related logic that is normally hidden. 
The route to reset the password is provided by Devise, and we will use it first to send the email with password reset instructions.\n\n## Steps\n\nFollow these steps and do as instructed below.\n\n### Step 1: Send the Reset Password Instruction to the User\n\n- Hit `/users/password` with the following request to get the password reset email provided by Devise\n\nRequest Sample:\n\n```json\n{\n    \"user\":\n    {\n        \"email\":\"emailtoreset@email.com\"\n    }\n}\n```\n\n- The reset instructions will be plain and simple. When you click on the link provided in the email, it will redirect you to the Rails app. But the problem here is, we want the user to be redirected to our React app.\n\n### Step 2: Create Custom Mailer\n\n- Create a new mailer. I have created UserMailer, but you can create it with any name you want as long as you and other developers understand it. Next, we should extend the Devise mailer as shown below:\n\n```ruby\nclass UserMailer < Devise::Mailer\n  helper :application\n  include Devise::Controllers::UrlHelpers\n  default template_path: 'devise/mailer'\n  default from: ENV['sender_email'] # I am using figaro to store environment variables so I am accessing the email from the application.yml file with this code\n  before_action :add_inline_attachments!\n\n  def reset_password_instructions(record, token, opts = {})\n    super\n  end\n\n  private\n\n  def add_inline_attachments!\n    attachments.inline['your-logo.png'] = File.read(Rails.root.join('app/assets/images/your-logo.png'))\n  end\nend\n```\n\n### Step 3: Override the Reset Password Method\n\n- We need to override the method so that we can create a new mailer view to add our own design to the email and access the token to use for resetting the forgotten password.\n\n### Step 4: Create a Mailer View\n\n- Create a mailer view inside `views/{your_custom_mailer}` named `reset_password_instructions.html.erb`\n- Add HTML content to reflect the design you want or have been provided by the designer.\n\n### Step 5: Link to Redirect User to React app\n\n- You can access the reset token in your view with `@token` as shown below.\n- The following code snippet reflects the use of the token (replace `your-react-app.com` with your React app's domain):\n\n```html\n<a href=\"https://your-react-app.com/reset-password?reset_password_token=<%= @token %>\">\n  Choose a new password\n</a>\n```\n\nNow when you try to send the email again, you will get the same old email instead of your custom-designed one. Right? That's not good. Now, let's fix that issue so that we can get our own custom-designed email with our own instructions.\n\n### Step 6: Update Devise Initializer File\n\n- To get the custom-designed email, you need to tell Devise to use the custom mailer you just created.\n- For this, go to the Devise initializer under `config/initializers` and search for `config.mailer`. The name has to be the same as the mailer you created in the second step. Here I have changed it to UserMailer:\n\n```ruby\n# Configure the class responsible to send e-mails.\nconfig.mailer = 'UserMailer'\n```\n\nNow try to send the email again and… magic! You are all set; you should get the new email you just added, with a link redirecting to the React app, from where you can handle the rest of the process. Good work!\n\nThis is how you send a password reset email with a custom design and instructions in React and Rails. I hope this article helped you. If you have any suggestions or confusion, please let me know in the comment section below."
        },
        {
          "id": "notes-devise-raise-error-when-old-password-is-used",
          "title": "Devise raise validations error when new and old passwords are same",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby on rails, devise",
          "url": "/notes/devise-raise-error-when-old-password-is-used/",
          "content": "Authentication is a deal-breaking feature in any application nowadays. In Rails, Devise makes authentication a breeze; install a gem, run a few commands, and you have authentication working in your app.\n\nToday, with the help of Devise, we will look into a solution for a very common feature request: throw a validation error if a user tries to change their password but submits the same old password.\n\n## Skills required to follow the tutorial\n\nIntermediate:\n\n- Rails\n\n## You should have\n\n- Existing Rails app with authentication already handled using Devise\n\n## Validation Implementation\n\nLet's dive into the code now.\n\nAdd the following validation to the model that is used for authentication. It follows the approach from the Stack Overflow answer linked in the references: compare the incoming plain-text password against the previously stored encrypted password with `Devise::Encryptor.compare`, and add an error when they match.\n\n```ruby\nclass User < ApplicationRecord\n  devise :database_authenticatable, :recoverable, :rememberable, :validatable\n\n  validate :password_not_same_as_old, if: :encrypted_password_changed?, on: :update\n\n  private\n\n  def password_not_same_as_old\n    return unless Devise::Encryptor.compare(self.class, encrypted_password_was, password)\n\n    errors.add(:password, 'must be different from the old password')\n  end\nend\n```\n\nWith this in place, updating the password to the same old value fails with a validation error, while a genuinely new password saves as before.\n\n## References\n\n- [Stack Overflow] Rails + Devise: How do I get an error message if password is not changed?\n\n## Image Credits\n\n- Cover Image by alexander ehrenhöfer on Unsplash"
        },
        {
          "id": "notes-run-cron-job-manually",
          "title": "Run Cron Job Manually",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "cron",
          "url": "/notes/run-cron-job-manually/",
          "content": "Cron Job has made scheduling of various tasks super easy, but debugging those scripts run from the Cron Job is equally hard. This is because the Cron Job always runs the command you provide at one specific time, which means there are only two ways for you to run the Cron Job:\n\n- Wait for the Cron Job to run at the time you have given, which could be seconds, minutes or even hours away\n- Run the Cron Job every minute and see if it throws any error (most people do this)\n\nWhat if I tell you there is also a third way: run the Cron Job manually.\n\nThis approach will help us run any of our scripts immediately, without having to wait for crontab to execute our job at a specific time. It will also run the job by replicating the same environment cron uses while running its jobs. Super helpful for debugging any cron-related issue, if you ask me.\n\n## Steps\n\nLet's dig into the approach of running the Cron Job manually then.\n\n### Step 1: Get PATHs your normal Cron Job uses\n\nThe biggest difference between running a script directly from the command line and running it from the Cron Job is the difference in PATH values.\n\nA lot of times, a Cron Job fails to execute because PATH doesn't include the needed executables, e.g. for Ruby, Perl, etc. 
and due to these executables not being found script that was running properly in development or even from command line in the production server will no more work when running the same script from the Cron Job.\n\nRun the command `crontab -e` to edit your crontab and add the following entry to the top (or anywhere in the file):\n\n```crontab\n* * * * * /usr/bin/env > /home/deploy/cron-env\n```\n\nPlease note that `/home/deploy` is the root path of my logged in user in the Production server, it could be different for you.\n\nWith the above entry in the crontab, we are telling the Cron Job to run every minute and add PATHs it gets from the command \"/usr/bin/env\" to a file named \"cron-env\"\n\nWait for a minute and check the content inside the file \"cron-env\" with the command `cat cron-env`, it should have the content similar to this:\n\n```bash\nHOME=/home/deploy\nLOGNAME=deploy\nPATH=/usr/bin:/bin\nLANG=en_US.UTF-8\nSHELL=/bin/sh\nPWD=/home/deploy\n```\n\nPlease note that \"deploy\" is the name of the logged in user.\n\nYou should remove the Cron Job we added previously from the Crontab since it has served it's purpose of providing us with PATHs we will require to run commands as a Cron Job.\n\n### Step 2: Create a bash script to run any command similar to Cron Job\n\nCreate a new file with the command `nano run-as-cron` and add the following inside:\n\n```bash\n#!/bin/bash\n\n/usr/bin/env -i $(cat /home/deploy/cron-env) \"$@\"\n\n```\n\nHere is what the above command is doing:\n\n- `/usr/bin/env` is the path to the env command, which is used to find and execute a specified command with a modified environment\n- `-i` option tells env to start with an empty environment, ignoring the inherited environment variables.\n- `$(cat /home/deploy/cron-env)` uses the \"cat\" command to read the contents of the file located at /home/deploy/cron-env and then uses command substitution `$(...)` to include the contents of the file as arguments to the env command. 
The contents of the file are expected to be environment variable assignments for any command that is run using this bash script.\n- `\"$@\"` is a special variable which represents all the arguments passed to the script or command. In this context, it allows the command executed by env to receive the same arguments that were passed to the script containing this line.\n\n  For example, if you run the command `./run-as-cron /path/to/script -with arguments` then everything after \"./run-as-cron\" is passed on as-is and executed by this script.\n\n### Step 3: Run any command you want by replicating the Cron behavior\n\nNow we are ready to run any command we want using the same environment variables Cron uses during its run.\n\nYou can run the command you want now with:\n\n```cmd\n$ ./run-as-cron /path/to/script --with arguments --and parameters\n```\n\nFor example, if you have a script that, say, reads a CSV file and parses it as JSON, then you can run that command now with:\n\n```cmd\n$ ./run-as-cron /home/deploy/production/scripts/parse-csv-as-json '/home/deploy/production/csv-files/employees.csv'\n```\n\nThis assumes the script takes the file path as an argument.\n\nAnd that's it, you should now be able to run the Cron Job manually and debug the result immediately! 
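Before pointing the wrapper at a real script, a quick sanity check is to make it print the environment it will hand to your command; the output should match the contents of `cron-env` exactly. A minimal sketch (the `/tmp` paths below are illustrative; in practice you would use the `cron-env` file and `run-as-cron` script from the steps above):

```shell
# Recreate a minimal cron-env for demonstration; Step 1 produces the real one.
cat > /tmp/cron-env <<'EOF'
HOME=/home/deploy
PATH=/usr/bin:/bin
SHELL=/bin/sh
EOF

# Same technique run-as-cron uses: start from an empty environment (-i),
# load only the variables Cron would provide, then run the given command.
# Running /usr/bin/env as that command prints the resulting environment.
/usr/bin/env -i $(cat /tmp/cron-env) /usr/bin/env
```

If something your script needs (a Ruby shim directory, `GEM_HOME`, etc.) is missing from this output, that is usually the cause of a job that works in your shell but fails under Cron.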
\n\nIf you are a Ruby on Rails developer, follow along for a bit more as I have a bonus tip for you.\n\n### Bonus: Run Ruby on Rails tasks replicating the Cron behavior\n\nBeing a Rails developer myself, I thought of attaching a bonus example on running a Rake task by executing it as a Cron Job manually.\n\nI am assuming that you already have the whenever gem configured and have added some Rake tasks to \"config/schedule.rb\".\n\nNow, for example, if you have a Rake task that sends you an email once a day with the list of users who registered yesterday, then the command could look something like the one below:\n\n```cmd\n$ ./run-as-cron /bin/bash -l -c 'cd /home/deploy/production/your-app/current && RAILS_ENV=production /home/deploy/.rbenv/shims/bundle exec rake users:registered_yesterday --silent >> log/whenever.log 2>&1'\n```\n\nI have just copied the command whenever executes when we deploy the Rails app with \"Capistrano\". Here is a short explanation of the bash command and options used above:\n\n- `/bin/bash` specifies the Bash shell to be used for running the command.\n- `-l` option makes Bash act as if it had been invoked as a login shell. This means it will read certain login-specific configuration files.\n- `-c '...'` option allows you to pass a command as a string for Bash to execute. The command inside the single quotes is the actual command being executed.\n- `cd /home/deploy/production/your-app/current && RAILS_ENV=production /home/deploy/.rbenv/shims/bundle exec rake users:registered_yesterday` tells the script to change the working directory to the folder \".../current\" and execute the Rake task.\n- `>> log/whenever.log 2>&1` appends both standard output and standard error to the file \"log/whenever.log\". 
This is a common technique to capture both normal output and errors in a log file.\n\n## Conclusion\n\nIn this short and to-the-point tutorial we have learnt how to run a Cron Job manually and debug issues immediately, instead of having to wait minutes or hours between attempts.\n\nI hope you have also learnt about some new bash command options and techniques, as I did while writing this blog. Lastly, I hope it saves you a lot of time in the future when debugging any Cron Job.\n\nThank you for reading. Happy Tinkering!\n\n## References\n\n- Running a cron job manually and immediately [Server Fault]\n\n## Image Credits\n\n- Cover Image by Icons8 Team on Unsplash"
        },
        {
          "id": "notes-ubuntu-temporary-failure-in-name-resolution",
          "title": "[Solved] Ubuntu 22 Temporary failure in name resolution",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ubuntu",
          "url": "/notes/ubuntu-temporary-failure-in-name-resolution/",
          "content": "## Background\n\nIn one of the client project, we had to upgrade our existing Ubuntu server from the version 20 to 22. Upgrade was smooth but after the upgrade was done, we started noticing the issue of \"Temporary failure in name resolution\".\n\n## When does this error occur?\n\nThe \"Temporary failure in name resolution\" error occurs when the system cannot translate a website name into an IP address. Somehow it got messed up during the upgrade\n\n## Fix\n\nTo fix the issue, you can hit the following commands; this fix was tested on an upgrade Ubuntu at version 22.04.2\n\n```\n$ apt install netplan.io\n$ systemctl unmask systemd-networkd.service\n$ systemctl unmask systemd-resolved.service\n$ ENABLE_TEST_COMMANDS=1 netplan migrate\n$ netplan try\n\n# WARNING: This will immediately log you out of the server and restart it, if you are working with Production server; run with care.\n$ reboot\n\n$ apt purge ifupdown resolvconf\n$ ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf\n```\n\nTada, and that should fix the issue 🎉\n\n## Conclusion\n\nAre there any other solutions you tried and it worked? Let us know in the comments below.\n\nThank you for reading. Until next time! Happy tinkering.\n\n## References:\n\n- \"Temporary failure in name resolution\" after upgrading 22.04 to 22.10\n- Migrate from ifupdown to netplan\n\n## Image Credits\n\n- Cover Image by Albert Stoynov on Unsplash"
        },
        {
          "id": "notes-error-while-installing-mysql2-in-m1-mac",
          "title": "[Solved] Error while Installing mysql2 Gem in M1 Mac",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "mysql, ruby on rails",
          "url": "/notes/error-while-installing-mysql2-in-m1-mac/",
          "content": "In Ruby on Rails applications with mysql2 gem, mysql2 gem always threw error when trying in the new M1 Mac. The error always said \"ld: library not found for -lzstd\" and \"make failed\".\n\nLet's resolve this issue today!\n\n## Assumptions\n\n1. You are using rbenv for ruby\n2. You are using homebrew\n\n## Error Message\n\nWhenever I did bundle install inside the project, I got the following error:\n\n```\nlinking shared-object mysql2/mysql2.bundle\nld: library not found for -lzstd\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\nmake: *** [mysql2.bundle] Error 1\n\nmake failed, exit code 2\n```\n\n## Solution\n\nTo resolve the issue, you will have to provide the location for mysql installation in homebrew and install the mysql2 gem separately.\n\n1. Find out the version of mysql installed in your machine\n\n  Run the following command in the command line:\n\n  ```\n  $ mysql --version\n  mysql  Ver 8.0.27 for macos12.5 on arm64 (Homebrew)\n  ```\n\n  Note the mysql version; here 8.0.27 and move to next step.\n\n2. Install mysql2 gem separately\n\n  To resolve the make error we will have to install mysql2 gem separately. To do that run the following command by changing the mysql version you got in previous step.\n\n  ```\n  rbenv exec gem install mysql2 -- \\\n --with-mysql-lib=/opt/homebrew/Cellar/mysql/8.0.27/lib \\\n --with-mysql-dir=/opt/homebrew/Cellar/mysql/8.0.27 \\\n --with-mysql-config=/opt/homebrew/Cellar/mysql/8.0.27/bin/mysql_config \\\n --with-mysql-include=/opt/homebrew/Cellar/mysql/8.0.27/include \n  ```\n\n2. 
Run \"bundle install\" from your project root\n\n  Move to the project root and run `bundle install` in the command line.\n\n## Still getting the same error?\n\n  If you are still getting error, you need to also hit the command given below.\n\n  ```\n  # check zstd version\n  $ ls /opt/homebrew/Cellar/zstd\n  1.5.0\n\n  $ bundle config --local build.mysql2 \\\n    \"--with-ldflags=-L/opt/homebrew/Cellar/zstd/1.5.0/lib\"\n  ```\n\n  Run `bundle install` again from inside the project root and the error should now go away.\n\n## Downside\n\nThe downside of this solution is we have to run this command every time the mysql version is updated using homebrew.\n\nSo for example if your mysql version changes from 8.0.27 to 8.0.30 then you will again get this error. And in that case you will have to install the mysql2 gem separately again by using the steps above.\n\n## Conclusion\n\nIf you have any other solutions for this problem, do let us know in the comment section below.\n\nThank you for reading. Happy coding!\n\n## References\n\n- [Stack Overflow]  An error occurred while installing mysql2 (0.4.8), and Bundler cannot continue\n- [Github] bundle install fails with Gem::Ext::BuildError\n\n## Image Credits\n\n- Cover Image by Rubaitul Azad on Unsplash"
        },
        {
          "id": "notes-rbenv-shims-ruby-argument-list-too-long",
          "title": "[Solved] .rbenv/shims/ruby: Argument list too long",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby on rails, ruby",
          "url": "/notes/rbenv-shims-ruby-argument-list-too-long/",
          "content": "I couldn't access any ruby command in the linux server where we have hosted the Rails app for one of my client. I had to access rails console to update some database records manually but I couldn't and got stuck in this issue for over 2 days.\n\nI encountered two issues along the way:\n\n- ruby version not found\n- .rbenv/shims/ruby: Argument list too long\n\nSolving the second issue solved the first issue too!\n\n## Problem\n\nInside the root folder of the project , `ruby -v` always returned that ruby 2.7.0 was not found and need to be installed. When I tried installing ruby 2.7.0 via rbenv, it said version already exists.\n\nI tried to execute `ruby -v` outside of the project just to be sure that it was not a problem with ruby version. It took a really long time to process the command which returned with following error:\n\n```bash\n/home/deploy/.rbenv/libexec/rbenv-exec: line 47: /home/deploy/.rbenv/shims/ruby: Argument list too long\n```\n\nGoogle searches didn't yield any relevant results.\n\nIn the process (after searching more for around two days), I stumbled upon the rbenv issue rbenv: cannot rehash. I had never imagined these issues to be related at all. I executed the command to rehash the rbenv to see if it executes successfully instead of throwing the error mentioned in the Github issue.\n\nBoom! same issue; rbenv could not be rehashed.\n\nSolution as mentioned in the replies was to delete the file \".rbenv-shim\", and rehash rbenv again. It worked!\n\n## Solution\n\nExecute the command `rbenv rehash`. 
It will return the location of the file you have to delete:\n\n```bash\nrbenv: cannot rehash: /home/deploy/.rbenv/shims/.rbenv-shim exists\n```\n\nHere, \"/home/deploy/.rbenv/shims/.rbenv-shim\" is the location of the file on my machine; it could be different on yours.\n\nRemove the file to resolve the issue:\n\n`rm /home/deploy/.rbenv/shims/.rbenv-shim`\n\nNow when you run the command `ruby -v`, it will return the version of ruby currently installed on your machine.\n\n## Reason behind the issue\n\nIt happens when a previous rehash of rbenv was killed prematurely.\n\nQuoting from a reply to the Github issue:\n\n> During the rehash process, rbenv writes out the temporary file .rbenv-shim to indicate that the rehash is in progress. Then, if a parallel rbenv rehash process tries to run at the same time, it will fail because the file already exists. This guards against race conditions in parallel rehashes.\n>\n> It seems like .rbenv-shim file was never cleaned up after a rehash that ran earlier. That earlier process might have been killed prematurely and never cleaned up after itself.\n\n## Conclusion\n\nI am not sure if the solution to the issue `.rbenv/shims/ruby: Argument list too long` is always to delete the \".rbenv-shim\" file, but hey, it's worth a try.\n\nIf the solution to your problem was different from this, please let us know below in the replies and I will be sure to include it in this blog so that it helps others.\n\nThank you for reading. Happy coding!\n\n## References\n\n- rbenv: cannot rehash [Github]\n\n## Image Credits\n\n- Cover Image by Artem Maltsev on Unsplash"
        },
        {
          "id": "notes-update-multiple-records-at-once-in-rails",
          "title": "Update Multiple Records at Once in Rails",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby on rails",
          "url": "/notes/update-multiple-records-at-once-in-rails/",
          "content": "Rails provides a built-in **create** method for adding multiple records in single line of code or in more technical term \"batch create\". For update, if we want to update attributes with same value for all available records then we can do so with the method `update_all`. \n\nBut what if we want to update multiple attributes at once and for multiple records? How do we \"batch update\" in Rails?\n\nWe will be looking at the answer to that question today in this blog.\n\nFor updating multiple records at once, there may be two cases; when we want to update\n\n- Same attribute/s in all rows\n- Different attributes in each row\n\n## Update same attribute/s in all rows\n\nTo update same attributes with same values in all rows of the table, we can use the Rails method **update_all**\n\nFor e.g. If we want to update all users with `first_name` \"John\" to \"Jessica\", we can do so with following code:\n\n```cmd\nUser.where(first_name: 'John').update_all(first_name: 'Jessica')\n```\n\n## Update different attributes in each row\n\nLet's suppose we have a model User and we want to update existing records inside with different **name**.\n\nFor e.g. we want to update records with the following JSON:\n\n```ruby\nformatted_users = [\n  {\n    id: 1,\n    name: 'John Doe'\n  },\n  {\n    id: 2,\n    name: 'Jessica Jones'\n  },\n  {\n    id: 3,\n    name: 'Robert Junior'\n  }\n]\n```\n\nDid you notice? Each user has different name that needs to be updated.\n\nLet's see how we can update multiple records like these at once in Rails.\n\n1. 
Index records by their id\n\n    First of all, we should index all records by their id. For that we will be using **index_by**, which returns a hash of records keyed by id.\n\n    ```ruby\n      grouped_users = formatted_users.index_by { |user| user[:id] }\n\n      # index_by will return the following hash\n      # => {1=>{:id=>1, :name=>\"John Doe\"}, 2=>{:id=>2, :name=>\"Jessica Jones\"}, 3=>{:id=>3, :name=>\"Robert Junior\"}}\n    ```\n\n2. Update grouped records\n\n    After grouping all records by their id, we pass all ids as the first argument and their attributes as the second argument to the method **update**. This way all our records will be updated at once without us having to loop through each record and call `update` each time.\n\n    ```ruby\n      User.update(grouped_users.keys, grouped_users.values)\n    ```\n\n## Conclusion\n\nThis way we can update multiple records with different attributes from a hash or JSON.\n\nOne thing to note: this solution is not optimized for large sets of records, because for each record we will be hitting the database with a separate update query. That can take significant memory and time for a large set of records.\n\nDo you have more optimized solutions? Let us know in the comments.\n\nThank you for reading. Happy Coding!\n\n## References\n\n- Is there anything like batch update in rails? [Stack Overflow]\n- Updating multiple records at the same time rails active record [Cba Bhusal Blog]\n\n## Image Credits\n\n- Cover Image by salvatore ventura on Unsplash"
        },
        {
          "id": "notes-override-default-date-format-in-rails-admin",
          "title": "Override Default Date Format in Rails Admin",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby on rails, rails admin",
          "url": "/notes/override-default-date-format-in-rails-admin/",
          "content": "Rails Admin is a Rails engine that provides an easy-to-use interface for managing your data. It's perfect for the cases where we want admin dashboard quickly for CRUD (Create, Read, Update and Delete) operations.\n\nWhen using engines, it can be difficult to override its default behavior. It was the same case for overriding the default date format. It was tricky as I didn't know exactly where to look at.\n\nAfter some research, I found out that Rails Admin uses **long** date and time format from the locale. We can check the related code in official gem repository.\n\nLine for the exact code may change in the future, if it has, you can search for the code below:\n\n```rb\n  register_instance_option :date_format do\n    :long\n  end\n```\n\n## Override default date format\n\nTo override the default format of the date and display the format we want in our UI, we will need to add the required format in our locale files so the values inside the engine are overridden.\n\nAdd the following to `config/locale/en.yml`\n\n     ```rb\n       en:\n         date:\n           formats:\n             long: \"%Y-%m-%d\"\n         time:\n           formats:\n             long: \"%Y-%m-%d %H:%M:%S\"\n     ```\n\nPlease change format of the date and time as required for your application.\n\nIf you have other locales that your Rails app supports, you can update the date formats as required in related locale file by copying this exact code and updating content inside key \"long\"\n\n_NOTE_: date is for datatype **date** and time is for data type **datetime**\n\n## Conclusion\n\nIf you restart the rails server and reload the UI, you should be able to see the date format you added.\n\nThanks for reading. Happy coding!\n\n## Image Credits\n\n- Cover Image by Estée Janssens on Unsplash"
        },
        {
          "id": "notes-fix-rails-auto-increment-id-postgres-error",
          "title": "[Fix] Rails Auto Increment ID Postgres Error",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby on rails, postgresql",
          "url": "/notes/fix-rails-auto-increment-id-postgres-error/",
          "content": "## Error\n\nActiveRecord::RecordNotUnique Exception: PG::UniqueViolation: ERROR:  duplicate key value violates unique constraint \"users_pkey\"\nDETAIL:  Key (id)=(43957) already exists.\n\n## Detail\n\nYou normally run into this error when you restore database from another source, for e.g. production or staging server.\n\nThis happens because of database sequence for Postgres that is stored in local machine is not the same as what comes from restored database and same id can be assigned twice when auto incrementing by Rails application.\n\n## Solution\n\nWe can reset the sequence of the table that is stored in the local machine by Postgres to fix this issue.\n\n1. Go to rails console\n   \n   `rails c`\n\n2. Reset the Postgres sequence for the table\n\n    You can reset the Postgres sequence with the following command:\n\n    `ActiveRecord::Base.connection.reset_pk_sequence!('table_name')`\n\n    E.g. Assuming the table name is **users**, you can do the following:\n    \n    `ActiveRecord::Base.connection.reset_pk_sequence!('users')` \n\n## Conclusion\n\nAfter resetting the sequence of table stored by Postgres, new records will be created without any issues.\n\nThanks for reading. Happy Coding!\n\n## References\n\n- Rails auto-assigning id that already exists [Stack Overflow] \n\n## Image Credits\n\n- Cover Image by Brett Jordan on Unsplash"
        },
        {
          "id": "notes-fix-rails-server-is-already-running",
          "title": "[Fix] Rails server is already running",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby on rails",
          "url": "/notes/fix-rails-server-is-already-running/",
          "content": "## Error\n\n\"A server is already running. Check **.../tmp/pids/server.pid**\"\n\nThis error means that the server is already running in the port you are trying to start the rails server on; normally 3000.\n\n## Reasons for Error\n\nThis error can occur in two cases:\n\n1. You are running the Rails server on port 3000 in another tab of the terminal\n\n2. You suspended the server abruptly\n     \n    This normally happens when you stop the server with `ctrl+z` instead of `ctrl+c` to exit the Rails server. `ctrl+z` suspends the process but doesn't close the server running on port 3000 meaning the server is still running on background.\n\n    This can also happen when you close the terminal that the rails server was running on.\n\n## Solution\n\n### Rails server is running on port 3000 in another tab of the terminal\n\nYou can just close the server running on another tab on port 3000 to resolve this issue.\n\n### Server suspended abruptly\n\nSolution for this case is to kill the process running in the background on port 3000\n\n1. Find the process id for the rails server port\n   \n     If the port you are running the rails server is different than 3000, you should replace 3000 with the port number as required.\n\n     ```cmd\n       lsof -wni tcp:3000\n     ```\n\n     You should be able to see the output like this:\n\n     ```cmd\n      COMMAND   PID USER   FD   TYPE            DEVICE SIZE/OFF NODE NAME\n      ruby    16660 cool   14u  IPv4 0x89786e1f70a36a3      0t0  TCP 127.0.0.1:hbci (LISTEN)\n      ruby    16660 cool   15u  IPv6 0x89786e1d82f7aeb      0t0  TCP [::1]:hbci (LISTEN)\n     ``` \n\n2. Copy value in PID column, here **16660**\n\n3. Kill the process\n\n     ```cmd\n       kill -9 16660\n     ```\n\n     You should see the following output:\n\n     ```cmd\n       [1]  + 10975 killed     rails s\n     ```\n\n 4. 
Try running the server again\n\n## Conclusion\n\nThe Rails server should be up and running without any errors now.\n\nThank you for reading. Happy Coding!\n\n## References\n\n- A server is already running [Stack Overflow]\n\n## Image Credits\n\n- Cover Image by Olav Ahrens Røtne on Unsplash"
        },
        {
          "id": "notes-git-remove-files-and-folders-from-remote",
          "title": "Remove files or folders from remote git",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "git",
          "url": "/notes/git-remove-files-and-folders-from-remote/",
          "content": "GIT is great, it has made collaboration with other developers so easy, I can't thank GIT enough. But GIT is vast and not every command remains on my mind. I find myself googling over and over again to get that right GIT command that can solve my problem.\n\nRecently when working on one of the project that had just started, I accidentally pushed IDE folder to remote repository and I was there googling again, so I thought, why not write blog for this?. I can always come straight to my blog if this happens again and I can also help my fellow developers this way, right?\n\nAll sensitive information and IDE related folders should be added to gitignore so they are not tracked by git. You may already know this, but it doesn't help if your file or folder is already in the remote repository. Today we will learn how we can remove files or folders from remote repository. Let's get started!\n\n## Remove file or folder from remote repo only\n\n```shell\n# Remove a single file\ngit rm --cached password.txt\n\n# Remove a single folder\ngit rm --cached -rf .idea\n```\n\n## Remove file or folder from both remote repo and local\n\n```shell\n# Remove a single file\ngit rm password.txt\n\n# Remove a single folder\ngit rm -rf .idea\n```\n\n[[notice | Don't forget to add file or folder to gitignore]]\n| After removing the file or folder, we shouldn't forget to add them to **gitignore** before we commit and push to the repo again. Or we will be back to the start of blog removing those again!\n\nStraight and sweet, that's it. Any confusions? Have a better solution? Please comment below, it's never a bad idea to have a healthy conversation. Thank you. See you again!\n\n**References:** Remove file or folder only from remote repo\n\n**Image Credits:** Cover Image by Mahmudul Hasan Shaon from Unsplash"
        },
        {
          "id": "notes-open-google-play-store-from-react-native",
          "title": "Open google play store from react native app",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "react native, mobile development",
          "url": "/notes/open-google-play-store-from-react-native/",
          "content": "User ratings are very valuable to business as they play a crucial part in people's purchasing decisions; be it restaurants, movie tickets or in the current context, our react native app. You must have seen prompts when you are surfing through any android app or playing games, that ask you to rate the app in google play store.\n\nAre you wondering how you can emulate same behavior in our react native app that is live and has real users who you always wanted to cater? Getting user ratings or let's say taking our user to google play store is possible in react native through the use of Linking. There are also many packages that we can use who will have Linking under the hood, but today we will be trying very simple solution without using any packages.\n\n## Simple Example\n\nIf you don't have time to go through whole blog, this is what you will have to add to your code on button click asking to rate your app in google play store:\n\n```js\nLinking.openURL('market://details?id=com.whatsapp');\n```\n\n## Detailed Example\n\nIn this example, we will create a button and add a method that can take our user to google play store.\n\n```js\n# components/rateApp/rateAppButton.js\nimport React from 'react';\nimport { View, Text, Linking, TouchableHighlight, StyleSheet } from 'react-native';\nimport FontAwesome5 from 'react-native-vector-icons/FontAwesome5';\n\nconst styles = StyleSheet.create({\n  rateUsButton: {\n    flexDirection: 'row',\n    alignItems: 'center',\n    backgroundColor: '#00875F',\n    padding: 15,\n    justifyContent: 'flex-start',\n  },\n\n  rateUsText: {\n    fontSize: 18,\n    marginLeft: 10,\n    color: colors.WHITE,\n  },\n});\n\nconst RateAppButton = () => {\n  const openPlayStore = () => {\n    // it's a good idea to add this in .env file and use it from there\n    const GOOGLE_PACKAGE_NAME = 'com.whatsapp';\n\n    Linking.openURL(`market://details?id=${GOOGLE_PACKAGE_NAME}`);\n  };\n\n  return (\n    \n      \n        \n        
Rate us on google play\n      \n    \n  );\n};\n\nexport default RateAppButton;\n```\n\nAre you using some package to take your user to play store? Let us know in the comments below if you have other ideas for the same problem.\n\n**References:** How to Open Google Play Store from your React Native App, Ask to Rate Your React Native App on Google\n\n**Image Credits:** Cover Image by 200 Degrees from Pixabay"
        },
        {
          "id": "notes-extract-key-value-from-hash-ruby-on-rails",
          "title": "Extract key or value from hash in ruby on rails",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby, ruby on rails, web development",
          "url": "/notes/extract-key-value-from-hash-ruby-on-rails/",
          "content": "When I was recently working in one of the client project, I had to communicate with external mariadb server to store records from react/rails app, that means I would get activerecord hash from our app which I had to convert to pure sql query and send it to external server for storing.\n\nIf you have worked with sql queries previously then you must know that keys and values must be separated for insert operations like\n\n```sql\nINSERT INTO users (first_name, last_name, email) VALUES (John, Doe, john@email.com)\n```\n\nI could convert attributes to hash easily using `as_json` to get the format `{\"first_name\"=>\"John\", \"last_name\"=>\"Doe\", \"email\"=>\"john@email.com\"}`. But I had to extract keys and values separately so that attributes can be accurately formatted and ready for insert and update operations. Let me show you how I extracted keys and values from the hash and formatted them as required for the operations.\n\nLet's assume we have: `user = {\"first_name\"=>\"John\", \"last_name\"=>\"Doe\", \"email\"=>\"john@email.com\"}`\n\n## Extract single key or value\n\nIf we only want **email**\n\n```rails-console\n// For key\nuser.extract!(\"email\").keys # [\"email\"]\n\n// For value\n\n# with extract\nuser.extract!(\"email\").values # [\"john@email.com\"]\n\n# simply\nuser['email'] # \"john@email.com\"\n```\n\n## Extract multiple keys or values\n\nIf we want **first_name** and **last_name** but not **email**\n\n```rails-console\n// For keys\nuser.extract!(*[\"first_name\", \"last_name\"]).keys # [\"first_name\", \"last_name\"]\n\n// For values\nuser.extract!(*[\"first_name\", \"last_name\"]).values # [\"John\", \"Doe\"]\n```\n\n## Extract all keys or values\n\n```rails-console\n// For keys\nuser.keys # [\"first_name\", \"last_name\", \"email\"]\n\n// For values\nuser.values # [\"John\", \"Doe\", \"john@email.com\"]\n\n```\n\nDo you know more elegant or alternative way to extract keys and values from hashes? 
If you do, please share it in the comments below.\n\n**Image Credits:** Cover Image by Jan Baborák from Unsplash"
        },
        {
          "id": "notes-generate-unique-random-number-token-ruby-rails",
          "title": "Generate unique random numbers or tokens in ruby on rails",
          "collection": {
            "label": "notes",
            "name": "Posts"
          },
          "categories": "notes",
          "tags": "ruby, ruby on rails, web development",
          "url": "/notes/generate-unique-random-number-token-ruby-rails/",
          "content": "Sometimes we need random or unique numbers and tokens in our web apps. Here is a quick solution for generating random unique numbers or tokens using Ruby on Rails.\n\n## Generate Random Number or Token\n\nFor generating secure random numbers and tokens we use `SecureRandom`, which is suitable for generating session keys in HTTP cookies, etc. If you are only looking to generate random numbers, you can use `rand`. Here, we are using `SecureRandom.random_number` for numbers and `SecureRandom.hex` for tokens. `SecureRandom` also provides a variety of other methods such as `alphanumeric`, `base64`, `urlsafe_base64`, `uuid`, etc., which you can read further about in the SecureRandom documentation.\n\n```ruby\ndef generate_random_number\n  SecureRandom.random_number(10000000)\n\n  # OR using the rand method\n  # rand(10000000)\nend\n\ndef generate_token\n  SecureRandom.hex(10)\nend\n```\n\n## Generate Random Token or Number with Prefix\n\nYou can prefix the token with any string you like by returning the prefixed value instead of the bare token, like `\"AC#{token}\"`.\n\n```ruby\ndef generate_token_with_prefix\n  token = SecureRandom.hex(10)\n\n  # Assuming AC is the string you would like to prefix\n  \"AC#{token}\"\nend\n```\n\n## Generate Unique Random Token\n\nFor generating random tokens that are unique, we make use of Ruby's `loop`. Here, we keep generating tokens until one meets our requirement; that is, we break out of the loop and return the token only if it doesn't already exist in our database's users table.\n\n```ruby\nclass User < ApplicationRecord\n  def generate_unique_token\n    loop do\n      token = SecureRandom.hex(10)\n      break token unless User.exists?(token: token)\n    end\n  end\nend\n```\n\n**Reference:** Makandra Cards\n\n**Image Credits:** Cover Image by 955169 from Pixabay"
        },
        {
          "id": "book-reviews-ikigai",
          "title": "Ikigai [But Not a Review]",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/ikigai/",
          "content": "Ikigai: The Japanese Secret to a Long and Happy Life is a book written by Hector Garcia and Francesc Miralles. It talks about finding a purpose in life, which makes life meaningful and makes a person happy.\n\nThe authors have researched quite well and provide a great outlook on long-living Japanese people. The book also talks about diet plans, exercises, staying focused on tasks, etc.\n\nThough ikigai means \"the purpose that one has in their life\", there is very little in this book on what we can do to find that purpose. But still, it's a good book to read.\n\n## The art of staying young and growing old\n\n- \"Only staying active will make you want to live a hundred years.\" - Japanese Proverb\n\n### The five Blue Zones\n\n- The keys to longevity are diet, exercise, finding a purpose in life (an ikigai), and forming strong social ties - that is, having a broad circle of friends and good family relations.\n\n### A lot of sitting will age you\n- Spending too much time seated at work or at home not only reduces muscular and respiratory fitness but also increases appetite and curbs the desire to participate in activities.\n- Being sedentary can lead to hypertension, imbalanced eating, cardiovascular disease, osteoporosis, and even certain kinds of cancer.\n- It's easy to be less sedentary; it just takes a bit of effort and a few changes to your routine. We can access a more active lifestyle that makes us feel better inside and out - we just have to add a few ingredients to our everyday habits:\n\n  - Walk to work, or just go on a walk for at least twenty minutes each day.\n  - Use your feet instead of an elevator or escalator. 
This is good for your posture, your muscles, and your respiratory system, among other things.\n  - Participate in social or leisure activities so that you don't spend too much time in front of the television.\n  - Replace your junk food with fruit and you'll have less of an urge to snack, and more nutrients in your system.\n  - Get the right amount of sleep. Seven to nine hours is good, but any more than that makes us lethargic.\n  - Play with children or pets, or join a sports team. This not only strengthens the body but also stimulates the mind and boosts self-esteem.\n  - Be conscious of your daily routine in order to detect harmful habits and replace them with more positive ones.\n\n### A model's best-kept secret\n- Most of those who make their living as models claim to sleep between nine and ten hours the night before a fashion show. This gives their skin a taut, wrinkle-free appearance and a healthy, radiant glow.\n- Science has shown that sleep is a key anti-aging tool, because when we sleep we generate melatonin, a hormone that occurs naturally in our bodies.\n- Melatonin is a great ally in preserving youth. However, melatonin production decreases after age thirty. 
We can compensate for this by:\n\n  - Eating a balanced diet and getting more calcium.\n  - Soaking up a moderate amount of sun each day.\n  - Getting enough sleep.\n  - Avoiding stress, alcohol, tobacco, and caffeine, all of which make it harder to get a good night's rest, depriving us of the melatonin we need.\n\n### An ode to longevity\n\n- During our stay in Ogimi, the village that holds the Guinness record for longevity, a woman who was about to turn 100 years old sang the following song for us in a mixture of Japanese and the local dialect:\n\n    \"To keep healthy and have a long life,\n    eat just a little of everything with relish,\n    go to bed early, get up early, and then go out for a walk.\n\n    We live each day with serenity and we enjoy the journey.\n\n    To keep healthy and have a long life,\n    we get on well with all of our friends.\n\n    Spring, summer, fall, winter,\n    we happily enjoy all the seasons.\n\n    The secret is to not get distracted by how old the fingers are; from the fingers to the head and back once again.\n\n    If you keep moving with your fingers working, 100 years will come to you.\"\n\n## From logotherapy to ikigai - How to live longer and better by finding your purpose\n\n### What is logotherapy?\n- Everything can be taken from a man but one thing: the last of the human freedoms - to choose one's attitude in any given set of circumstances, to choose one's own way.\n\n### Better living through logotherapy\n\n- A few key ideas\n  - We don't create the meaning of life, we discover it.\n  - We each have a unique reason for being, which can be adjusted or transformed many times over the years.\n  - Just as worry often brings about precisely the thing that was feared, excessive attention to desire (or \"hyper-intention\") can keep that desire from being fulfilled.\n  - Humor can help break negative cycles and reduce anxiety.\n  - We all have the capacity to do noble or terrible things. 
The side of the equation we end up on depends on our decisions, not on the condition in which we find ourselves.\n\n### Naikan meditation\n- If you are angry and want to fight, think about it for three days before coming to blows. After those three days, the intense desire to fight will pass on its own.\n\n## Find Flow In Everything You Do - How to turn work and free time into spaces for growth\n\n- \"We are what we repeatedly do. Excellence, then, is not an act but a habit.\" - Aristotle\n\n### Going with the flow\n\nImagine you are skiing down one of your favorite slopes. Powdery snow flies up on both sides of you like white sand. Conditions are perfect.\n\nYou are entirely focused on skiing as well as you can. You know exactly how to move at each moment. There is no future, no past. There is only the present. You feel the snow, your skis, your body, and your consciousness united as a single entity. You are completely immersed in the experience, not thinking about or distracted by anything else. Your ego dissolves, and you become part of what you are doing.\n\nWe've all felt our sense of time vanish when we lose ourselves in an activity we enjoy. We start cooking and before we know it, several hours have passed. We spend an afternoon with a book and forget about the world going by until we notice the sunset and realize we haven't eaten dinner.\n\nThe opposite can also happen. When we have to complete a task we don't want to do, every minute feels like a lifetime and we can't stop looking at our watch.\n\nThe funny thing is that someone else might really enjoy the same task, but we want to finish as quickly as possible.\n\nWhat makes us enjoy doing something so much that we forget about whatever worries we might have while we do it? When are we happiest? 
These questions can help us discover our ikigai.\n\n### The power of flow\n\n- These questions are also at the heart of psychologist Mihaly Csikszentmihalyi's research into the experience of being completely immersed in what we are doing.\n- Csikszentmihalyi called this state \"flow\", and described it as the pleasure, delight, creativity, and process when we are completely immersed in life.\n- When we flow, we are focused on a concrete task without any distractions. Our mind is \"in order\". The opposite occurs when we try to do something while our mind is on other things.\n- If you often find yourself losing focus while working on something you consider important, there are several strategies you can employ to increase your chances of achieving flow.\n\n#### The Seven Conditions for Achieving Flow\n\nAccording to researcher Owen Schaffer of DePaul University, the requirements for achieving flow are:\n\n  1. Knowing what to do\n  2. Knowing how to do it\n  3. Knowing how well you are doing\n  4. Knowing where to go (where navigation is involved)\n  5. Perceiving significant challenges\n  6. Perceiving significant skills\n  7. Being free from distractions\n\n##### Strategy 1: Choose a difficult task (but not too difficult!)\n\n- Schaffer's model encourages us to take on tasks that we have a chance of completing but that are slightly outside our comfort zone.\n- Every task or job has a set of rules, and we need a set of skills to follow them. 
If the rules for completing a task are too basic relative to our skill set, we will likely get bored.\n- If, on the other hand, we assign ourselves a task that is too difficult, we won't have the skills to complete it and will almost certainly give up.\n- The ideal is to find a middle path, something aligned with our abilities but just a bit of a stretch, so we experience it as a challenge.\n- We want to see challenges through to the end because we enjoy the feeling of pushing ourselves.\n- Add a little something extra, something that takes you out of your comfort zone.\n\n##### Strategy 2: Have a clear, concrete objective\n\n- Video games - played in moderation - board games, and sports are great ways to achieve flow, because the objective tends to be very clear: Beat your rival or your own record while following a set of explicitly defined rules.\n- It is much more important to have a compass pointing to a concrete objective than to have a map.\n- Having a clear objective is important in achieving flow, but we also have to know how to leave it behind when we get down to business. Once the journey has begun, we should keep this objective in mind without obsessing over it.\n- When Olympic athletes compete for gold medals, they can't stop to think how pretty the medal is. They have to be present in the moment - they have to \"flow\". 
If they lose focus for a second, thinking how proud they'll be to show the medal to their parents, they'll almost certainly commit an error at a critical moment and will not win the competition.\n- \"A happy man is too satisfied with the present to dwell on the future.\" - Albert Einstein\n\n##### Strategy 3: Concentrate on a single task\n\n- This is perhaps one of the greatest obstacles we face today, with so much technology and so many distractions.\n- We often think that combining tasks will save us time, but scientific evidence shows that it has the opposite effect.\n- Our brains can take in millions of bits of information but can only actually process a few dozen per second. When we say we're multitasking, what we're really doing is switching back and forth between tasks very quickly. Unfortunately, we're not computers adept at parallel processing. We end up spending all our energy alternating between tasks, instead of focusing on doing one of them well.\n- Technology is great, if we're in control of it. It's not so great if it takes control of us.\n\n### Humans as ritualistic beings\n\n- Don't worry about the outcome - it will come naturally. Happiness is in the doing, not in the result.\n\n## Masters of Longevity - Words of wisdom from the longest-living people in the world\n\n\n1. \"Eat and sleep, and you'll live a long time. You have to learn to relax.\" - Misao Okawa (age 117)\n2. \"I've never eaten meat in my life.\" - Maria Capovilla (age 116)\n3. \"Everything's fine.\" - Jeanne Calment (age 122)\n4. \"If you keep your mind and body busy, you'll be around a long time.\" - Walter Breuning (age 114)\n\n## Lessons from Japan's Centenarians - Traditions and proverbs for happiness and longevity\n\nOver the course of a week we conducted a total of one hundred interviews, asking the elder members of the community about their life philosophy, their ikigai, and the secrets to longevity:\n\n1. Don't worry\n    - \"The secret to a long life is not to worry. 
And to keep your heart young - don't let it grow old. Open your heart to people with a nice smile on your face. If you smile and open your heart, your grandchildren and everyone else will want to see you.\"\n\n    - \"The best way to avoid anxiety is to go out in the street and say hello to people. I do it every day. I go out there and say, 'Hello!' and 'See you later!' Then I go home and care for my vegetable garden. In the afternoon, I spend time with friends.\"\n\n2. Cultivate good habits\n    - \"I feel joy every morning waking up at six and opening the curtains to look out at my garden, where I grow my own vegetables. I go right outside to check on my tomatoes, my mandarin oranges... I love the sight of them - it relaxes me. After an hour in the garden I go back inside and make breakfast.\"\n\n    - \"The key to staying sharp in old age is in your fingers. From your fingers to your brain, and back again. If you keep your fingers busy, you'll live to see one hundred.\"\n\n    - \"I get up at four every day. I set my alarm for that time, have a cup of coffee, and do a little exercise, lifting my arms. That gives me energy for the rest of the day.\"\n\n    - \"Working. If you don't work, your body breaks down.\"\n\n    - \"I exercise every day, and every morning I go for a little walk.\"\n\n    - \"Eating vegetables - it helps you live longer.\"\n\n    - \"To live a long time you need to do three things: exercise to stay healthy, eat well, and spend time with people.\"\n\n3. Nurture your friendships every day\n    - \"Getting together with my friends is my most important ikigai. We all get together here and talk - it's very important, and that's one of my favorite things in life.\"\n\n    - \"Talking each day with the people you love, that's the secret to a long life.\"\n\n    - \"I wake up at five every morning, leave the house, and walk to the sea. Then I go to a friend's house and we have tea together. 
That's the secret to long life: getting together with people, and going from place to place.\"\n\n4. Live an unhurried life\n    - \"My secret to a long life is always saying to myself, 'Slow down', and 'Relax'. You live much longer if you're not in a hurry.\"\n\n    - \"Doing many different things every day. Always staying busy, but doing one thing at a time, without getting overwhelmed.\"\n\n    - \"The secret to long life is going to bed early, waking up early, and going for a walk. Living peacefully and enjoying the little things. Getting along with your friends. Spring, summer, fall, winter ... enjoying each season, happily.\"\n\n5. Be optimistic\n    - \"Every day I say to myself, 'Today will be full of health and energy. Live it to the fullest.'\"\n\n    - \"Laugh. Laughter is the most important thing. I laugh wherever I go.\"\n\n    - \"The most important thing in life is to keep smiling.\"\n\n    - \"There's no secret to it. The trick is just to live.\"\n\n### Keys to the Ogimi Lifestyle\n\n- One hundred percent of the people we interviewed keep a vegetable garden, and most of them also have fields of tea, mangoes, shikuwasa, and so on.\n- All belong to some form of neighborhood association, where they feel cared for as though by family.\n- They celebrate all the time, even little things. Music, song, and dance are essential parts of daily life.\n- They have an important purpose in life, or several.\n\nThey have an ikigai, but they don't take it too seriously. They are relaxed and enjoy all that they do.\n\n- They are very proud of their traditions and local culture.\n- They are passionate about everything they do, however insignificant it might seem.\n- Locals have a strong sense of yuimaaru - recognizing the connection between people. They help each other with everything from work in the fields (harvesting sugar cane or planting rice) to building houses and municipal projects. 
Our friend Miyagi, who ate dinner with us on our last night in town, told us that he was building a new home with the help of all his friends, and that we could stay there the next time we were in Ogimi.\n- They are always busy, but they occupy themselves with tasks that allow them to relax. We didn't see a single old grandpa sitting on a bench doing nothing.\n\nThey are always coming and going - to sing karaoke, visit with neighbors, or play a game of gateball.\n\n## The Ikigai Diet - What the world's longest living people eat and drink\n\n### Okinawa's miracle diet\n\nBradley J. Willcox and D. Craig Willcox joined Makoto Suzuki's research team and published a book considered the bible on the subject of nutrition, \"The Okinawa Program\". They reached the following conclusions:\n\n- _Locals eat a wide variety of foods_, especially vegetables. Variety seems to be key. A study of Okinawa's centenarians showed that they ate 206 different foods, including spices, on a regular basis. They ate an average of eighteen different foods each day, a striking contrast to the nutritional poverty of our fast-food culture.\n- _They eat at least five servings of fruits and vegetables every day_. At least seven types of fruits and vegetables are consumed by Okinawans on a daily basis. The easiest way to check if there is enough variety on your table is to make sure you're \"eating the rainbow\". A table featuring red peppers, carrots, spinach, cauliflower, and eggplant, for example, offers great color and variety. Vegetables, potatoes, legumes, and soy products such as tofu are the staples of an Okinawan's diet. More than 30 percent of their daily calories come from vegetables.\n- _Grains are the foundation of their diet._ Japanese people eat white rice every day, sometimes adding noodles. 
Rice is the primary food in Okinawa as well.\n- _They rarely eat sugar_, and even if they do, it's cane sugar.\n\n### Hara hachi bu\n\n- This brings us back to the 80 percent rule we mentioned in the first chapter, a concept known in Japanese as hara hachi bu. It's easy to do: When you notice you're almost full but could have a little more ... just stop eating!\n\n### So, eat less to live longer?\n\n- Few would challenge this idea. Without taking it to the extreme of malnutrition, of course, eating fewer calories than our bodies ask for seems to increase longevity. The key to staying healthy while consuming fewer calories is eating food with a high nutritional value.\n- An alternative to following the 80 percent rule on a daily basis is to fast for one or two days each week. The 5:2 (or fasting) diet recommends two days of fasting (consuming fewer than five hundred calories) every week and eating normally on the other five days.\n\nAmong its many benefits, fasting helps cleanse the digestive system and allows it to rest.\n\n### 15 natural antioxidants found in the Okinawan diet\n\n#### The Antioxidant Canon, for Westerners\n\nIn 2010 the UK's Daily Mirror published a list of foods recommended by experts to combat aging. 
Among these foods readily available in the West are:\n\n- Vegetables such as broccoli and chard, for their high concentration of water, minerals, and fiber\n- Oily fish such as salmon, mackerel, tuna, and sardines, for all the antioxidants in their fat\n- Fruits such as citrus, strawberries, and apricots; they are an excellent source of vitamin C and help eliminate toxins from the body\n- Berries such as blueberries and goji berries; they are rich in phytochemical antioxidants\n- Dried fruits, which contain vitamins and antioxidants and give you energy\n- Grains such as oats and wheat, which give you energy and contain minerals\n- Olive oil, for its antioxidant effects that show in your skin\n- Red wine, in moderation, for its antioxidant and vasodilatory properties\n\nFoods that should be eliminated are refined sugar and grains, processed baked goods, and prepared foods, along with cow's milk and all its derivatives. Following this diet will help you feel younger and slow the process of premature aging.\n\n## Resilience And Wabi-Sabi - How to face life's challenges without letting stress and worry age you\n\n### What is resilience?\n\n- One thing that everyone with a clearly defined ikigai has in common is that they pursue their passion no matter what.\n\n    They never give up, even when the cards seem stacked against them or they face one hurdle after another.\n- Resilience isn't just the ability to persevere. It is also an outlook we can cultivate to stay focused on the important things in life rather than what is most urgent, and to keep ourselves from being carried away by negative emotions.\n- \"Fall seven times, rise eight\" - Japanese Proverb\n- Resilient people know how to stay focused on their objectives, on what matters, without giving in to discouragement. Their flexibility is the source of their strength. They know how to adapt to change and to reversals of fortune. 
They concentrate on the things they can control and don't worry about those they can't.\n\n### Emotional resilience through Buddhism\n\n- Gautama Buddha realized that a wise person should not ignore life's pleasures. A wise person can live with these pleasures but should always remain conscious of how easy it is to be enslaved by them.\n- Zeno abandoned Cynicism's teachings to found the school of Stoicism, which centers on the idea that there is nothing wrong with enjoying life's pleasures as long as they don't take control of your life as you enjoy them. You have to be prepared for those pleasures to disappear.\n- The goal is not to eliminate all feelings and pleasures from our lives, but to eliminate negative emotions.\n\n### What's the worst thing that could happen?\n\n- We finally land our dream job, but after a little while we are already hunting for a better one. We win the lottery and buy a nice car but then decide we can't live without a sailboat. We finally win the heart of the man or woman we've been pining for and suddenly find ourselves with a wandering eye.\n\nPeople can be insatiable.\n\n- The Stoics believed that these kinds of desires and ambitions are not worth pursuing. The objective of the virtuous person is to reach a state of tranquility: the absence of negative feelings such as anxiety, fear, shame, vanity, and anger, and the presence of positive feelings such as happiness, love, serenity, and gratitude.\n\n    In order to keep their minds virtuous, the Stoics practiced something like negative visualization: They imagined the worst thing that could happen in order to be prepared if certain privileges and pleasures were taken from them.\n\n### Meditating for healthier emotions\n\n- In addition to negative visualization and not giving in to negative emotions, another central tenet of Stoicism is \"knowing what we can control and what we can't\".\n\nWorrying about things that are beyond our control accomplishes nothing. 
We should have a clear sense of what we can change and what we can't, which in turn will allow us to resist giving in to negative emotions.\n- \"It's not what happens to you, but how you react that matters.\" - Epictetus\n\n### The here and now, and the impermanence of things\n\n- The present is all that exists, and it is the only thing we can control. Instead of worrying about the past or future, we should appreciate things just as they are in the moment, in the now.\n- \"The only moment in which you can be truly alive is the present moment.\" - Thich Nhat Hanh\n- We should never forget that everything we have and all the people we love will disappear at some point.\n- Being aware of the impermanence of things does not have to make us sad; it should help us love the present moment and those who surround us.\n\n### Beyond resilience: Antifragility\n\n- \"Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.\" - Nassim Nicholas Taleb\n\n#### How can we be more antifragile?\n\n##### Step 1: Create more options\n\n- Instead of having a single salary, try to find a way to make money from your hobbies, at other jobs, or by starting your own business. If you have only one salary, you might be left with nothing should your employer run into trouble, leaving you in a position of fragility. On the other hand, if you have several options and you lose your primary job, it might just happen that you end up dedicating more time to your secondary job, and maybe even making more money at it. You would have beaten that stroke of bad luck and would be, in that case, antifragile.\n- Right now you might be thinking, \"I don't need more than one salary and I'm happy with the friends I've always had. Why should I add anything new?\" It might seem like a waste of time to add caution to our lives, because extraordinary things don't ordinarily happen. We slip into a comfort zone. 
But the unexpected always happens, sooner or later.\n\n##### Step 2: Bet conservatively in certain areas and take many small risks in others\n\nIf you have $10,000 saved up, you might put $9,000 of that into an index fund or a fixed-term deposit, and invest the remaining $1,000 in ten start-ups with huge growth potential - say, $100 in each.\n\nOne possible scenario is that three of the companies fail (you lose $300), the value of three other companies goes down (you lose another $100 or $200), the value of three goes up (you make $100 or $200), and the value of one of the start-ups increases twenty-fold (you make nearly $2,000, or maybe even more).\n\nYou still make money, even if three of the businesses go completely belly-up. You've benefited from the damage, just like the Hydra.\n\nThe key to becoming antifragile is taking on small risks that might lead to great reward, without exposing ourselves to dangers that might sink us, such as investing $10,000 in a fund of questionable reputation that we saw advertised in the newspaper.\n\n##### Step 3: Get rid of the things that make you fragile\n\n- What makes us fragile? Certain people, things, and habits generate losses for us and make us vulnerable. Who and what are they?\n- To build resilience into our lives, we shouldn't fear adversity, because each setback is an opportunity for growth. If we adopt an antifragile attitude, we'll find a way to get stronger with every blow, refining our lifestyle and staying focused on our ikigai.\n- Taking a hit or two can be viewed as either a misfortune or an experience that we can apply to all areas of our lives, as we continually make corrections and set new and better goals. As Taleb writes in Antifragile, \"We need randomness, mess, adventures, uncertainty, self-discovery, near-traumatic episodes, all these things that make life worth living.\"\n\n## Conclusion\n\n- Our ikigai is different for all of us, but one thing we have in common is that we are all searching for meaning. 
When we spend our days feeling connected to what is meaningful to us, we live more fully; when we lose the connection, we feel despair.\n- Modern life estranges us more and more from our true nature, making it very easy for us to lead lives lacking in meaning. Powerful forces and incentives (money, power, attention, success) distract us on a daily basis; don't let them take over your life.\n- Our intuition and curiosity are very powerful internal compasses to help us connect with our ikigai. Follow those things you enjoy, and get away from or change those you dislike. Be led by your curiosity, and keep busy doing things that fill you with meaning and happiness. It doesn't need to be a big thing: we might find meaning in being good parents or in helping our neighbors.\n- Life is not a problem to be solved. Just remember to have something that keeps you busy doing what you love while being surrounded by the people who love you.\n\nThe ten rules of ikigai:\n\n1. Stay active; don't retire.\n    - Those who give up the things they love doing and do well lose their purpose in life. That's why it's so important to keep doing things of value, making progress, bringing beauty or utility to others, helping out, and shaping the world around you, even after your \"official\" professional activity has ended.\n\n2. Take it slow\n    - Being in a hurry is inversely proportional to quality of life. As the old saying goes, \"Walk slowly and you'll go far.\" When we leave urgency behind, life and time take on new meaning.\n\n3. Don't fill your stomach\n    - Less is more when it comes to eating for long life, too. According to the 80 percent rule, in order to stay healthier longer, we should eat a little less than our hunger demands instead of stuffing ourselves.\n\n4. Surround yourself with good friends\n    - Friends are the best medicine, there for confiding worries over a good chat, sharing stories that brighten your day, getting advice, having fun, dreaming ... 
in other words, living.\n\n5. Get in shape for your next birthday\n    - Water moves; it is at its best when it flows fresh and doesn't stagnate. The body you move through life in needs a bit of daily maintenance to keep it running for a long time. Plus, exercise releases hormones that make us feel happy.\n\n6. Smile\n    - A cheerful attitude is not only relaxing - it also helps make friends. It's good to recognize the things that aren't so great, but we should never forget what a privilege it is to be in the here and now in a world so full of possibilities.\n\n7. Reconnect with nature\n    - Though most people live in cities these days, human beings are made to be part of the natural world. We should return to it often to recharge our batteries.\n\n8. Give thanks\n    - To your ancestors, to nature, which provides you with the air you breathe and the food you eat, to your friends and family, to everything that brightens your days and makes you feel lucky to be alive. Spend a moment every day giving thanks and you'll watch your stockpile of happiness grow.\n\n9. Live in the moment\n    - Stop regretting the past and fearing the future. Today is all you have. Make the most of it. Make it worth remembering.\n\n10. Follow your ikigai\n    - There is a passion inside you, a unique talent that gives meaning to your days and drives you to share the best of yourself until the very end. If you don't know what your ikigai is yet, as Viktor Frankl says, your mission is to discover it.\n\n## Conclusion of the blog\n\nI hope you learned something valuable from this extract. Thank you for reading!\n\n**Image Credits:** Cover Image by Tarun Savvy from Unsplash"
        },
        {
          "id": "book-reviews-rework",
          "title": "Rework [But Not a Review]",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/rework/",
          "content": "One word after reading Rework by David Heinemeier Hansson (DHH) and Jason Fried was \"relatable\".\n\nThe authors have done a great job of keeping complex ideas simple, so the book is easy to understand, and if you are in a management position at a company, then a lot of what is said in this book just \"clicks\".\n\n## First Chapter\n\n- If you write a big plan, you'll most likely never look at it anyway.\n\n## Workaholism\n\n- Workaholics try to make up for intellectual laziness with brute force. This results in inelegant solutions.\n- Workaholics make the people who don't stay late feel inadequate for \"merely\" working reasonable hours. That leads to guilt and poor morale all around.\n- Workaholics aren't heroes. They don't save the day, they just use it up. The real hero is already home because they figured out a faster way to get things done.\n\n## Scratch your own itch\n\n- The easiest, most straightforward way to create a great product or service is to make something \"you\" want to use.\n\n## Start making something\n\n- We all have that one friend who says, \"I had the idea for eBay. If only I had acted on it, I'd be a billionaire!\" That logic is pathetic and delusional. Having the idea for eBay has nothing to do with actually creating eBay. What you do is what matters, not what you think or say or plan.\n\n## No time is no excuse\n\n- When you want something bad enough, you make the time - regardless of your other obligations. The truth is most people just don't want it bad enough. Then they protect their ego with the excuse of time. \n\n    Don't let yourself off the hook with excuses. 
It's entirely your responsibility to make your dreams come true.\n\n- The \"perfect\" time never arrives.\n\n    You're always too young or old or busy or broke or something else.\n\n## Start a business, not a startup\n\n- A business without a path to profit isn't a business, it's a hobby.\n\n## Building to flip is building to flop\n\n- \"What's your exit strategy?\"\n\n    You hear it when you're just beginning. \n\n    What is it with people who can't even start building something without knowing how they're going to leave it?\n\n    Would you go into a relationship planning the breakup? Would you meet a divorce lawyer the morning of your wedding? That would be ridiculous, right?\n\n## Embrace constraints\n\n- \"I don't have enough time/money/people/experience.\"\n\n    Stop whining. Less is a good thing. Constraints are advantages in disguise. Limited resources force you to make do with what you've got. There's no room to waste. And that forces you to be creative.\n\n## Making the call is making progress\n\n- When you put off decisions, they pile up. And piles end up ignored, dealt with in haste, or thrown out. As a result, the individual problems in those piles stay unresolved.\n\n    Whenever you can, swap \"Let's think about it\" for \"Let's decide on it.\" Commit to making decisions. Don't wait for the perfect solution. Decide and move forward.\n\n    You want to get into the rhythm of making choices. When you get in that flow of making decision after decision, you build momentum and boost morale. Decisions are progress.\n\n- Don't postpone decisions in the hope that a perfect answer will come to you later. It won't. You're as likely to make a great call today as you are tomorrow.\n- It doesn't matter how much you plan; you'll still get some stuff wrong anyway. 
Don't make things worse by overanalyzing and delaying before you even get going.\n\n## Tone is in your fingers\n\n- It's tempting for people to obsess over tools instead of what they're going to do with those tools.\n\n    Many amateur golfers think they need expensive clubs. But it's the swing that matters, not the club. Give Tiger Woods a set of cheap clubs and he'll still destroy you.\n\n    You also see it in people who want to blog, podcast or shoot videos for their business but get hung up on which tools to use. The content is what matters. You can spend tons on fancy equipment, but if you've got nothing to say ... well, you've got nothing to say.\n\n## Interruption is the enemy of productivity\n\n- When do you get most of your work done? If you're like most people, it's at night or early in the morning. It's no coincidence that these are the times when nobody else is around.\n\n    Interruptions break your workday into a series of work moments. You can't get meaningful things done when you're constantly going start, stop, start, stop.\n\n## Go to sleep\n\n- Forgoing sleep is a bad idea. Sure, you get those extra hours right now, but you pay in spades later: You destroy your creativity, morale, and attitude.\n\n    Once in a while, you can pull an all-nighter if you fully understand the consequences. Just don't make it a habit. If it becomes a constant, the costs start to mount.\n\n    1. Stubbornness: When you're really tired, it always seems easier to plow down whatever bad path you happen to be on instead of reconsidering the route. The finish line is a constant mirage and you wind up walking in the desert way too long.\n\n    2. Lack of creativity: Creativity is one of the first things to go when you lose sleep. What distinguishes people who are ten times more effective than the norm is not that they work ten times as hard; it's that they use their creativity to come up with solutions that require one-tenth of the effort. 
Without sleep, you stop coming up with one-tenth solutions.\n\n    3. Diminished morale: When your brain isn't firing on all cylinders, it loves to feed on less demanding tasks. Like reading yet another article about stuff that doesn't matter. When you're tired, you lose motivation to attack the big problems.\n\n    4. Irritability: Your ability to remain patient and tolerant is severely reduced when you're tired. If you encounter someone who's acting like a fool, there's a good chance that person is suffering from sleep deprivation.\n\n    These are just some of the costs you incur when not getting sleep. Yet some people still develop a masochistic sense of honor about sleep deprivation. They even brag about how tired they are. Don't be impressed. It'll come back to bite them in the ass.\n\n## Long lists don't get done\n\n- Start making smaller to-do lists. Long lists collect dust. When's the last time you finished a long list of things? You might have knocked off the first few, but chances are you eventually abandoned it.\n\n    Long lists are guilt trips. The longer the list of unfinished items, the worse you feel about it. And at a certain point, you just stop looking at it because it makes you feel bad. Then you stress out and the whole thing turns into a big mess.\n\n    A better way is to break a long list down into a bunch of smaller lists. For example, break a single list of a hundred items into ten lists of ten items. That means when you finish an item on a list, you've completed 10 percent of that list, instead of 1 percent.\n\n    Yes, you still have the same amount of stuff left to do. But now you can look at the small picture and find satisfaction, motivation and progress. \n\n    And a quick suggestion about prioritization: Don't prioritize with numbers or labels. Instead, prioritize visually. Put the most important things at the top. 
When you're done with that, the next thing on the list becomes the next most important thing. That way you'll only have a single next most important thing to do at a time.\n\n## Don't copy\n\n- Be influenced, but don't steal.\n- The problem with blindly copying is it skips understanding - and understanding is how you grow. You have to understand why something works or why something is the way it is. When you just copy and paste, you miss that. You just repurpose the last layer instead of understanding all the layers underneath.\n\n## Who cares what they're doing?\n\n- What's the point of worrying about things you can't control?\n\n## Let your customers outgrow you\n\n- People and situations change. You can't be everything to everyone.\n\n## Build an audience\n\n- All companies have customers. Lucky companies have fans. But the most fortunate companies have audiences. An audience can be your secret weapon.\n\n    When you build an audience, you don't have to buy people's attention - they give it to you. This is a huge advantage.\n\n    So build an audience. Speak, write, blog, tweet, make videos - whatever. Share information that's valuable and you'll slowly but surely build a loyal audience. Then when you need to get the word out, the right people will already be listening.\n\n## Emulate chefs\n\n- You've probably heard of Emeril Lagasse, Mario Batali and Paula Deen. They're great chefs, but there are a lot of great chefs out there. So why do you know these few better than others? Because they share everything they know. They put their recipes in cookbooks and show their techniques on cooking shows.\n\n    So emulate famous chefs. They cook, so they write cookbooks. What do you do? What are your \"recipes\"? What's your \"cookbook\"? What can you tell the world about how you operate that's informative, educational, and promotional?\n\n## Nobody likes plastic flowers\n\n- Don't be afraid to show your flaws. Imperfections are real and people respond to real. 
It's why we like real flowers that wilt, not perfect plastic ones that never change. Don't worry about how you're supposed to act. Show the world what you're really like, warts and all.\n- When something becomes too polished, it loses its soul. It seems robotic.\n\n## Drug dealers get it right\n\n- Drug dealers are astute business people. They know their product is so good they're willing to give a little away for free upfront. They know you'll be back for more - with money.\n\n    Emulate drug dealers. Make your product so good, so addictive, so \"can't miss\" that giving customers a small free taste makes them come back with cash in hand.\n\n## You don't create a culture\n\n- You don't create a culture. It happens.\n\n    This is why new companies don't have a culture. Culture is the byproduct of consistent behavior.\n\n    If you encourage people to share, then sharing will be built into your culture. If you reward trust, then trust will be built in.\n\n    Culture is not policy. Culture is action, not words.\n\n## Decisions are temporary\n\n- \"But what if ...?\" \"What happens when ...?\" \"Don't we need to plan for ...?\"\n\n    Don't make up problems you don't have yet. It's not a problem until it's a real problem. Most of the things you worry about never happen anyway. \n\n## They're not 13\n\n- When you treat people like children, you get children's work. Yet that's exactly how a lot of companies and managers treat their employees. Employees need to ask permission before they can do anything.\n\n    When everything constantly needs approval, you create a culture of non-thinkers. You create a boss-versus-worker relationship that screams, \"I don't trust you.\"\n\n## Send people home at 5\n\n- You don't need more hours, you need \"better\" hours.\n- When people have something to do at home, they get down to business. They get their work done at the office because they have somewhere else to be. They find ways to be more efficient because they have to. 
They need to pick up their kids or get to choir practice. So they use their time wisely.\n- \"If you want something done, ask the busiest person you know.\" You want busy people. People who have a life outside of work. People who care about more than one thing. You shouldn't expect the job to be someone's entire life - at least not if you want to keep them around for a long time.\n\n## Don't scar on the first cut\n\n- The second something goes wrong, the natural tendency is to create a policy. \"Someone's wearing shorts? We need a dress code!\" No, you don't. You just need to tell John not to wear shorts again.\n\n    Don't create a policy because one person did something wrong once. Policies are only meant for situations that come up over and over again.\n\n## Conclusion\n\nHope you learnt something valuable from this extract. Thank you for reading.\n\n**Image Credits:** Cover Image by Robert Anasch from Unsplash"
        },
        {
          "id": "book-reviews-zero-to-one",
          "title": "Zero to One [But Not a review]",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/zero-to-one/",
          "content": "Doing what someone else already knows how to do takes the world from 1 to n, adding more of something familiar. But when you do something new, you go from 0 to 1. The next Bill Gates will not build an operating system. The next Larry Page or Sergey Brin won’t make a search engine. Tomorrow’s champions will not win by competing ruthlessly in today’s marketplace. They will escape competition altogether, because their businesses will be unique.\n\nZero to One presents at once an optimistic view of the future of progress in America and a new way of thinking about innovation: it starts by learning to ask the questions that lead you to find value in unexpected places.\n\n## Last Mover Advantage\n\n- Start from a niche market and gradually expand to dominate.\n- Don't disrupt the market, and avoid competition as much as possible.\n\n## Follow the money\n\n- People who understand the power law will hesitate more than others when it comes to founding a new venture: they know how tremendously successful they could become by joining the very best company while it's growing fast.\n\n## Secrets\n\n- Every one of today's most famous and familiar ideas was once unknown and unsuspected.\n- From an early age, we are taught the right way to do things is to proceed one very small step at a time, day by day, grade by grade. 
If you overachieve and end up learning something that's not on the test, you won't receive credit for it.\n- Most people think only in terms of what they've been taught; schooling itself aims to impart conventional wisdom.\n\n## Foundations\n\n- A startup messed up at its foundation cannot be fixed.\n- As a founder, your first job is to get the first things right, because you cannot build a great company on a flawed foundation.\n\n## The Mechanics of Mafia\n\n- \"Company culture\" doesn't exist apart from the company itself: no company has a culture; every company is a culture.\n- The lawyers I worked with ran a valuable business, and they were impressive individuals one by one. But the relationships between them were oddly thin. They spent all day together, but few of them seemed to have much to say to each other outside the office. Why work with a group of people who don't even like each other?\n\n## If You Build It, Will They Come?\n\n- Advertising doesn't exist to make you buy a product right away; it exists to embed subtle impressions that will drive sales later.\n- If you have invented something new but you haven't invented an effective way to sell it, you have a bad business - no matter how good the product.\n\n## Man And Machine\n\n- Computers are complements for humans, not substitutes.\n- Computers are tools, not rivals.\n- As we find new ways to use computers, they won't just get better at the kinds of things people already do; they'll help us to do what was previously unimaginable.\n\nHope you learnt something valuable from this extract. Thank you for reading.\n\n**Image Credits:** Cover Image by Oscar Nilsson from Unsplash"
        },
        {
          "id": "book-reviews-the-subtle-art-of-not-giving-a-fuck",
          "title": "The Subtle Art of Not Giving a Fuck [But Not a review]",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/the-subtle-art-of-not-giving-a-fuck/",
          "content": "In life, we care too much about every single thing. In Mark Manson's own fucking words, \"We give too many fucks in our life\". \n\nThis behavior of ours makes us miserable and unhappy. So, what's the solution? The author says the fewer fucks you give, the happier you will be. In life, there is no such thing as not giving a fuck, because one way or the other we end up having to give a fuck about something. So, how do we know what to give a fuck about?\n\nIn \"The Subtle Art of Not Giving a Fuck\", Mark Manson teaches us ways to find what we truly care for, so that we can spend our lives giving a fuck about only those things and lead a happy life. If you lead a \"normal\" life, this book will blow your mind, because in this era where every self-help book is all about making you feel good, this book makes you confront the \"facts\" of real life.\n\n## Chapter 1: Don't Try\n\n- Everyone and their TV commercial wants you to believe that the key to a good life is a nicer job, or a more rugged car, or a prettier girlfriend, or a hot tub with an inflatable pool for the kids. The world is constantly telling you that the path to a better life is more, more, more - buy more, own more, make more, fuck more, be more. Why? My guess: because giving a fuck about more stuff is good for business.\n- The key to a good life is not giving a fuck about more; it's giving a fuck about less, giving a fuck about only what is true and immediate and important.\n\n### The feedback loop from hell\n\n- Our society today, through the wonders of consumer culture and hey-look-my-life-is-cooler-than-yours social media, has bred a whole generation of people who believe that having these negative experiences - anxiety, fear, guilt, etc. - is totally not okay.\n- Back in Grandpa's day, he would feel like shit and think to himself, \"Gee whiz, I sure do feel like a cow turd today. But hey, I guess that's just life. Back to shoveling hay.\"\n\n    But now? 
Now if you feel like shit for even five minutes, you're bombarded with 350 images of people totally happy and having amazing fucking lives, and it's impossible to not feel like there's something wrong with you.\n- Because there's an infinite amount of things we can now see or know, there are also an infinite number of ways we can discover that we don't measure up, that we're not good enough, that things aren't as great as they could be. And this rips us apart inside.\n- The desire for more positive experience is itself a negative experience. And, paradoxically, the acceptance of one's negative experience is itself a positive experience.\n- You're going to die one day. I know that's kind of obvious, but I just wanted to remind you in case you'd forgotten. You and everyone you know are going to be dead soon. And in the short amount of time between here and there, you have a limited amount of fucks to give. Very few, in fact. And if you go around giving a fuck about everything and everyone without conscious thought or choice - well, then you're going to get fucked.\n\n### The subtle art of not giving a fuck\n\n- So what does not giving a fuck mean? Let's look at three \"subtleties\" that should help clarify the matter.\n\n#### Subtlety#1: Not giving a fuck doesn't mean being indifferent; it means being comfortable with being different.\n\n- There's no such thing as not giving a fuck. You must give a fuck about something. It's part of our biology to always care about something and therefore to always give a fuck.\n\n    The question then is, What do we give a fuck about? What are we choosing to give a fuck about? 
And how can we not give a fuck about what ultimately does not matter?\n\n#### Subtlety#2: To not give a fuck about adversity, you must first give a fuck about something more important than adversity.\n\n- If you find yourself consistently giving too many fucks about trivial shit that bothers you - your ex-boyfriend's new Facebook picture, how quickly the batteries die in the TV remote, missing out on yet another two-for-one sale on hand sanitizer - chances are you don't have much going on in your life to give a legitimate fuck about. And that's your real problem. Not the hand sanitizer. Not the TV remote.\n- I once heard an artist say that when a person has no problems, the mind automatically finds a way to invent some. I think what most people consider \"life problems\" are really just side effects of not having anything more important to worry about.\n- It then follows that finding something important and meaningful in your life is perhaps the most productive use of your time and energy. Because if you don't find that meaningful something, your fucks will be given to meaningless and frivolous causes.\n\n#### Subtlety#3: Whether you realize it or not, you are always choosing what to give a fuck about.\n\n- Maturity is what happens when one learns to only give a fuck about what's truly fuckworthy.\n\n## Chapter 2: Happiness is a problem\n\n- Life itself is a form of suffering. The rich suffer because of their riches. The poor suffer because of their poverty. People without a family suffer because they have no family. People with a family suffer because of their family. People who pursue worldly pleasures suffer because of their worldly pleasures. People who abstain from worldly pleasures suffer because of their abstention.\n- Pain and loss are inevitable, and we should let go of trying to resist them.\n- There is a premise that underlies a lot of our assumptions and beliefs. 
The premise is that happiness is algorithmic, that it can be worked for and earned and achieved. If I achieve X, then I can be happy. If I look like Y, then I can be happy. If I can be with a person like Z, then I can be happy.\n\n    This premise, though, is the problem. Happiness is not a solvable equation. Dissatisfaction and unease are inherent parts of human nature and, as we'll see, necessary components to creating happiness.\n\n### The Misadventures of Disappointment Panda\n\n- We suffer for the simple reason that suffering is biologically useful. It is nature's preferred agent for inspiring change. We have evolved to always live with a certain degree of dissatisfaction and insecurity, because it's the mildly dissatisfied and insecure creature that's going to do the most work to innovate and survive.\n- We are wired to become dissatisfied with whatever we have and satisfied by only what we don't have. This constant dissatisfaction has kept our species fighting and striving, building and conquering.\n- Problems never fucking go away - they just improve. Warren Buffett's got money problems; the drunk hobo down at Kwik-E-Mart's got money problems. Buffett's just got better money problems than the hobo. All of life is like this.\n- Life is essentially an endless series of problems. The solution to one problem is merely the creation of the next one.\n- Don't hope for a life without problems. There's no such thing. Instead, hope for a life full of good problems.\n\n### Happiness Comes from Solving Problems\n\n- True happiness occurs only when you find the problems you enjoy having and enjoy solving.\n- Whatever your problems are, the concept is the same: solve problems; be happy. Unfortunately, for many people, life doesn't feel that simple. That's because they fuck things up in at least one of two ways:\n\n    1. Denial: Some people deny that their problems exist in the first place. 
And because they deny reality, they must constantly delude or distract themselves from reality. This may make them feel good in the short term, but it leads to a life of insecurity, neuroticism, and emotional repression.\n\n    2. Victim Mentality: Some choose to believe that there is nothing they can do to solve their problems, even when they in fact could. Victims seek to blame others for their problems or blame outside circumstances. This may make them feel better in the short term, but it leads to a life of anger, helplessness, and despair.\n\n### Emotions are Overrated\n\n- Whatever makes us happy today will no longer make us happy tomorrow, because our biology always needs something more.\n- We like the idea that there's some form of ultimate happiness that can be attained. We like the idea that we can alleviate all of our suffering permanently. We like the idea that we can feel fulfilled and satisfied with our lives forever. But we \"cannot\".\n\n### Choose Your Struggle\n\n- What determines your success isn't, \"What do you want to enjoy?\" The relevant question is, \"What pain do you want to sustain?\" The path to happiness is a path full of shit heaps and shame.\n- You want the reward and not the struggle. You want the result and not the process. You are in love not with the fight but only the victory. And life doesn't work that way.\n- Our struggles determine our successes.\n\n## Chapter 3: You are not Special\n\n### Things Fall Apart\n\n- The truth is that there's no such thing as a personal problem. If you've got a problem, chances are millions of other people have had it in the past, have it now, and are going to have it in the future.\n- The benefits of the Internet and social media are unquestionably fantastic. In many ways, this is the best time in history to be alive. But perhaps these technologies are having some unintended social side effects. 
Perhaps these same technologies that have liberated and educated so many are simultaneously enabling people's sense of entitlement more than ever before.\n\n### The Tyranny of Exceptionalism\n\n- Most of us are pretty average at most things we do. Even if you're exceptional at one thing, chances are you're average or below average at most other things. That's just the nature of life.\n- To become truly great at something, you have to dedicate shit-tons of time and energy to it. And because we all have limited time and energy, few of us ever become truly exceptional at more than one thing, if anything at all.\n- Brilliant businesspeople are often fuckups in their personal lives.\n- Many celebrities are probably just as clueless about life as the people who gawk at them and follow their every move.\n- Technology has solved old economic problems by giving us new psychological problems.\n- The Internet has not just open-sourced information; it has also open-sourced insecurity, self-doubt, and shame.\n\n### B-b-b-but, If I'm not Going to be Special or Extraordinary, What's the Point?\n\n- It has become an accepted part of our culture today to believe that we are all destined to do something truly extraordinary. The fact that this statement is inherently contradictory - after all, if everyone were extraordinary, then by definition 'no one' would be extraordinary - is missed by most people.\n- A lot of people are afraid to accept mediocrity because they believe that if they accept it, they'll never achieve anything, never improve, and their life won't matter.\n\n    This sort of thinking is dangerous. Once you accept the premise that a life is worthwhile only if it is truly notable and great, then you basically accept the fact that most of the human population (including yourself) sucks and is worthless.\n\n## Chapter 4: The Value of Suffering\n\n### The Self-Awareness Onion\n\n- Problems may be inevitable, but the meaning of each problem is not. 
We get to control what our problems mean based on how we choose to think about them, the standard by which we choose to measure them.\n\n### Shitty Values\n\n1. Pleasure\n\n    Pleasure is great, but it's a horrible value to prioritize your life around. Pleasure is a false god. Research shows that people who focus their energy on superficial pleasures end up more anxious, more emotionally unstable, and more depressed. Pleasure is the most superficial form of life satisfaction and therefore the easiest to obtain and the easiest to lose.\n\n2. Material Success\n\n    Many people measure their self-worth based on how much money they make or what kind of car they drive or whether their front lawn is greener and prettier than the next-door neighbor's.\n\n    Research shows that once one is able to provide for basic physical needs (food, shelter, and so on), the correlation between happiness and worldly success quickly approaches zero.\n\n3. Always Being Right\n\n    As humans, we're wrong pretty much constantly, so if your metric for life success is to be right - well, you're going to have a difficult time rationalizing all of the bullshit to yourself.\n\n    People who base their self-worth on being right about everything prevent themselves from learning from their mistakes.\n\n4. Staying Positive\n\n    While there is something to be said for \"staying on the sunny side of life\", the truth is, sometimes life sucks, and the healthiest thing you can do is admit it.\n\n    Denying negative emotions leads to experiencing deeper and more prolonged negative emotions and to emotional dysfunction. 
Constant positivity is a form of avoidance, not a valid solution to life's problems.\n\n\"One day, in retrospect, the years of struggle will strike you as the most beautiful\" - Freud\n\nSome of the greatest moments of one's life are not pleasant, not successful, not known, and not positive.\n\n### Defining Good and Bad Values\n\n- Good values are 1) reality-based, 2) socially constructive, and 3) immediate and controllable.\n- Bad values are 1) superstitious, 2) socially destructive, and 3) not immediate or controllable.\n\n- When we have poor values - that is, poor standards we set for ourselves and others - we are essentially giving fucks about the things that don't matter, things that in fact make our life worse. But when we choose better values, we are able to divert our fucks to something better - toward things that matter, things that improve the state of our well-being and that generate happiness, pleasure and success as side effects.\n- This is what \"self-improvement\" is really about: prioritizing better values, choosing better things to give a fuck about. Because when you give better fucks, you get better problems. And when you get better problems, you get a better life.\n\n## Chapter 5: You are Always Choosing\n\n- Often the only difference between a problem being painful or being powerful is a sense that we chose it, and that we are responsible for it.\n- If you're miserable in your current situation, chances are it's because you feel like some part of it is outside your control - that there's a problem you have no ability to solve, a problem that was somehow thrust upon you without your choosing.\n\n### The Choice\n\n- We don't always control what happens to us. But we always control how we interpret what happens to us, as well as how we respond.\n- Whether we like it or not, we are always taking an active role in what's occurring to and within us. We are always interpreting the meaning of every moment and every occurrence. 
We are always choosing the values by which we live and the metrics by which we measure everything that happens to us. Often the same event can be good or bad, depending on the metrics we choose to use.\n- In reality, there is no such thing as not giving a single fuck. It's impossible. We must all give a fuck about something. The real question is, What are we choosing to give a fuck about? What values are we choosing to base our actions on? What metrics are we choosing to use to measure our life? And are those good choices - good values and good metrics?\n\n### The Responsibility/Fault Fallacy\n\n- With great responsibility comes great power.\n- The more we choose to accept responsibility in our lives, the more power we will exercise over our lives.\n- Nobody else is ever responsible for your situation but you. Many people may be to blame for your unhappiness, but nobody is ever responsible for your unhappiness but you. This is because you always get to choose how you see things, how you react to things, how you value things.\n\n### Genetics and the Hand We're Dealt\n\n- We all get dealt cards. Some of us get better cards than others. And while it's easy to get hung up on our cards, and feel we got screwed over, the real game lies in the choices we make with those cards, the risks we decide to take, and the consequences we choose to live with. People who consistently make the best choices in the situations they're given are the ones who eventually come out ahead in poker, just as in life. And it's not necessarily the people with the best cards.\n\n### Victimhood Chic\n\n- Rather than report on real stories and real issues, the media find it much easier (and more profitable) to find something mildly offensive, broadcast it to a wide audience, generate outrage, and then broadcast that outrage back across the population in a way that outrages yet another part of the population. 
This triggers a kind of bullshit pinging back and forth between two imaginary sides, meanwhile distracting everyone from real societal problems. \n\n## Chapter 6: You're Wrong About Everything (But So Am I)\n\n- Growth is an endlessly iterative process. When we learn something new, we don't go from wrong to right. Rather, we go from wrong to slightly less wrong. And when we learn something additional, we go from slightly less wrong to even less wrong than that, and so on. We are always in the process of approaching truth and perfection without actually ever reaching truth or perfection.\n- Instead of striving for certainty, we should be in constant search of doubt: doubt about our own beliefs, doubt about our own feelings, doubt about what the future may hold for us unless we get out there and create it for ourselves. Instead of looking to be right all the time, we should be looking for how we're wrong all the time. Because we are.\n- Being wrong opens us up to the possibility of change. Being wrong brings the opportunity for growth.\n\n### The Dangers of Pure Certainty\n\n- In the mid-1990s, psychologist Roy Baumeister began researching the concept of evil. Basically, he looked at people who do bad things and at why they do them.\n\n    At the time it was assumed that people did bad things because they felt horrible about themselves - that is, they had low self-esteem. One of Baumeister's first surprising findings was that this was often not true. In fact, it was usually the opposite. Some of the worst criminals felt pretty damn good about themselves. 
And it was this feeling good about themselves in spite of the reality around them that gave them the sense of justification for hurting and disrespecting others.\n- Evil people never believe that they are evil; rather, they believe that everyone else is evil.\n- Those who believe they know everything learn nothing.\n- The more we admit we don't know, the more opportunities we gain to learn.\n\n### Manson's Law of Avoidance\n\n- The more something threatens your identity, the more you will avoid it.\n- There's a certain comfort that comes with knowing how you fit in the world. Anything that shakes up that comfort - even if it could potentially make your life better - is inherently scary.\n- I had a friend who was a party guy, always going out drinking and chasing girls. After years of living the \"high life\", he found himself terribly lonely, depressed and unhealthy. He wanted to give up his party lifestyle. Yet he never changed. For years he went on, empty night after empty night, bottle after bottle. Always some excuse. Always some reason he couldn't slow down.\n\n    Giving up that lifestyle threatened his identity too much. The Party Guy was all he knew how to be. To give that up would be like committing psychological hara-kiri.\n- I say don't find yourself. I say never know who you are. Because that's what keeps you striving and discovering. And it forces you to remain humble in your judgements and accepting of the differences in others.\n\n### How to Be a Little Less Certain of Yourself\n\n- Questioning ourselves and doubting our own thoughts and beliefs is one of the hardest skills to develop. But it can be done. Here are some questions that will help you breed a little more uncertainty in your life.\n\n    Question #1: What if I'm wrong?\n        - It's worth remembering that for any change to happen in your life, you must be wrong about something. 
If you're sitting there, miserable day after day, then that means you're already wrong about something major in your life, and until you're able to question yourself to find it, nothing will change.\n\n    Question #2: What would it mean if I were wrong?\n        - Many people are able to ask themselves if they're wrong, but few are able to go the extra step and admit what it would mean if they were wrong. That's because the potential meaning behind our wrongness is often painful.\n\n    Question #3: Would being wrong create a better or a worse problem than my current problem, for both myself and others?\n        - This is the litmus test for determining whether we've got some pretty solid values going on, or we're totally neurotic fuckwads taking our fucks out on everyone, including ourselves.\n        - The goal here is to look at which problem is better. Because after all, as Disappointment Panda said, life's problems are endless.\n\n## Chapter 7: Failure is the Way Forward\n\n- Failure is a relative concept.\n- Making money by itself is a lousy metric by which to measure ourselves. You could make plenty of money and be miserable, just as you could be broke and be pretty happy. So why use money as a means to measure your self-worth?\n\n### The Failure/Success Paradox\n\n- When Pablo Picasso was an old man, he was sitting in a cafe in Spain, doodling on a used napkin. He was nonchalant about the whole thing, drawing whatever amused him in that moment.\n\n    A woman sitting near him was looking on in awe. After a few moments, Picasso finished his coffee and crumpled up the napkin to throw away as he left.\n\n    The woman stopped him. \"Wait,\" she said. \"Can I have that napkin you were just drawing on? I'll pay you for it.\"\n\n    \"Sure,\" Picasso replied. \"Twenty thousand dollars.\"\n\n    The woman's head jolted back as if he had just flung a brick at her. \"What? It took you like two minutes to draw that.\"\n\n    \"No, ma'am,\" Picasso said. 
\"It took me over sixty years to draw this.\" He stuffed the napkin in his pocket and walked out of the cafe.\n- Improvement on anything is based on thousands of tiny failures, and the magnitude of your success is based on how many times you've failed at something.\n- If someone is better than you at something, then it's likely because they've failed at it more than you have.\n- If someone is worse than you, it's likely because they haven't been through all of the painful learning experiences you have.\n- If you think about a young child trying to learn to walk, that child will fall down and hurt itself hundreds of times. But at no point does that child ever stop and think, \"Oh, I guess walking just isn't for me. I'm not good at it.\"\n\n### Avoiding failure is something we learn at some later point in life\n\n- For many of us, our proudest achievements come in the face of the greatest adversity. Our pain often makes us stronger, more resilient, more grounded.\n- Many people, when they feel some form of pain or anger or sadness, drop everything and attempt to numb out whatever they're feeling. Their goal is to get back to \"feeling good\" again as quickly as possible, even if that means substances or deluding themselves or returning to their shitty values.\n- Learn to sustain the passion you've chosen. When you choose a new value, you are choosing to introduce a new form of pain into your life. Relish it. Savor it. Welcome it with open arms. Then act despite it.\n\n    This is going to feel impossibly hard at first. But you can start simple. You're going to feel as though you don't know what to do. But we've discussed this: you don't know anything. Even when you think you do, you really don't know what the fuck you're doing. So really, what is there to lose?\n- Life is about not knowing and then doing something anyway. All of life is like this. It never changes. Even when you're happy. Even when you're farting fairy dust. 
Even when you win the lottery and buy a small fleet of Jet Skis, you still won't know what the hell you're doing. Don't ever forget that. And don't ever be afraid of that.\n\n### The \"Do Something\" Principle\n\n- If you're stuck on a problem, don't sit there and think about it; just start working on it. Even if you don't know what you're doing, the simple act of working on it will eventually cause the right ideas to show up in your head.\n- Don't just sit there. Do something. The answers will follow.\n- Action isn't just the effect of motivation; it's also the cause of it.\n- If you want to accomplish something but don't feel motivated or inspired, then you assume you're just screwed. There's nothing you can do about it. It's not until a major emotional life event occurs that you can generate enough motivation to actually get off the couch and do something.\n- If you lack the motivation to make an important change in your life, do something - anything, really - and then harness the reaction to that action as a way to begin motivating yourself.\n\n    I call this the \"do something\" principle.\n- If we follow the \"do something\" principle, failure feels unimportant. When the standard of success becomes merely acting - when any result is regarded as progress and important, when inspiration is seen as a reward rather than a prerequisite - we propel ourselves ahead. We feel free to fail, and that failure moves us forward.\n- The \"do something\" principle not only helps us overcome procrastination, but it's also the process by which we adopt new values. 
If you're in the midst of an existential shitstorm and everything feels meaningless - if all the ways you used to measure yourself have come up short and you have no idea what's next, if you know that you've been hurting yourself chasing false dreams, or if you know that there's some better metric you should be measuring yourself with but you don't know how - the answer is the same: \"Do something.\"\n\n    That \"something\" can be the smallest viable action toward something else. It can be anything.\n\n## Chapter 8: The Importance of Saying No\n\n- Travel is a fantastic self-development tool, because it extricates you from the values of your culture and shows you that another society can live with entirely different values and still function and not hate themselves.\n\n### Rejection Makes Your Life Better\n\n- We all must give a fuck about something, in order to value something. And to value something, we must reject that which is not that something. To value X, we must reject non-X.\n- People can't solve your problems for you. And they shouldn't try, because that won't make you happy. You can't solve others' problems for them either, because that likewise won't make them happy.\n- If you make a sacrifice for someone you care about, it needs to be because you want to, not because you feel obligated or because you fear the consequences of not doing so.\n\n    If your partner is going to make a sacrifice for you, it needs to be because they genuinely want to, not because you've manipulated the sacrifice through anger or guilt. Acts of love are valid only if they're performed without conditions or expectations.\n- People with strong boundaries are not afraid of a temper tantrum, an argument, or getting hurt. 
People with weak boundaries are terrified of those things and will constantly mold their own behavior to fit the highs and lows of their relationship's emotional roller coaster.\n\n    People with strong boundaries understand that it's unreasonable to expect two people to accommodate each other 100 percent and fulfill every need the other has. People with strong boundaries understand that they may hurt someone's feelings sometimes, but ultimately they can't determine how others feel. People with strong boundaries understand that a healthy relationship is not about controlling one another's emotions, but rather about each partner supporting the other in their individual growth and in solving their own problems.\n\n### How to Build Trust\n\n- When our highest priority is to always make ourselves feel good, or to always make our partner feel good, then nobody ends up feeling good. And our relationship falls apart without our even knowing it.\n\n    Without conflict, there can be no trust. Conflict exists to show us who is there for us unconditionally and who is just there for the benefits. No one trusts a yes-man. The pain in our relationship is necessary to cement our trust in each other and produce greater intimacy.\n- For a relationship to be healthy, both people must be willing and able to both say no and hear no.\n\n### Freedom Through Commitment\n\n- More is not always better. In fact, the opposite is true. We are actually often happier with less. When we're overloaded with opportunities and options, we suffer from what psychologists refer to as the paradox of choice. Basically, the more options we are given, the less satisfied we become with whatever we choose, because we're aware of all the other options we're potentially forfeiting.\n\n## Chapter 9: ... And Then You Die\n\n- Death scares us. 
And because it scares us, we avoid thinking about it, talking about it, sometimes even acknowledging it, even when it's happening to someone close to us.\n\n    Yet, in a bizarre, backwards way, death is the light by which the shadow of all of life's meaning is measured. Without death, everything would feel inconsequential, all experience arbitrary, all metrics and values suddenly zero.\n\n### Something Beyond Our Selves\n\nErnest Becker was an academic outcast. When he was dying of colon cancer, he decided to write a book about death.\n\nBecker died in 1974. His book, _The Denial of Death_, would win the Pulitzer Prize and become one of the most influential intellectual works of the twentieth century.\n\n_The Denial of Death_ essentially makes two points:\n\n1. Humans are unique in that we're the only animals that can conceptualize and think about ourselves abstractly. Dogs don't sit around and worry about their careers. Cats don't think about their past mistakes or wonder what would have happened if they'd done something differently. Monkeys don't argue over future possibilities, just as fish don't sit around wondering if other fish would like them more if they had longer fins.\n\n    As humans, we're blessed with the ability to imagine ourselves in hypothetical situations, to contemplate both the past and the future, to imagine other realities or situations where things might be different. And it's because of this unique mental ability, Becker says, that we all, at some point, become aware of the inevitability of our own death. Because we're able to conceptualize alternate versions of reality, we are the only animal capable of imagining a reality without ourselves in it.\n\n    This realization causes what Becker calls \"death terror\", a deep existential anxiety that underlies everything we think or do.\n\n2. Becker's second point starts with the premise that we essentially have two \"selves\". 
The first self is the physical self - the one that eats, sleeps, snores and poops. The second self is our conceptual self - our identity, or how we see ourselves.\n\n    Becker's argument is this: We are all aware on some level that our physical self will eventually die, that this death is inevitable, and that its inevitability - on some unconscious level - scares the shit out of us. Therefore, in order to compensate for our fear of the inevitable loss of our physical self, we try to construct a conceptual self that will live forever. This is why people try so hard to put their names on buildings, on statues, on the spines of books. It's why we feel compelled to spend so much time giving ourselves to others, especially to children, in the hopes that our influence - our conceptual self - will last way beyond our physical self. That we will be remembered and revered and idolized long after our physical self ceases to exist.\n\n    Becker called such efforts our \"immortality projects\", projects that allow our conceptual self to live on way past the point of our physical death. Whether it be through mastering an art form, conquering a new land, gaining great riches, or simply having a large and loving family that will live on for generations, _all the meaning in our life is shaped by this innate desire to never truly die._\n\n### The Sunny Side of Death\n\n- Nothing makes you present and mindful like being mere inches away from your own death.\n- \"The fear of death follows from the fear of life. A man who lives fully is prepared to die at any time.\"\n- Confronting the reality of our own mortality is important because it obliterates all the crappy, fragile, superficial values in life. 
While most people whittle away their days chasing another buck, or a little bit more fame and attention, or a little bit more assurance that they're right or loved, death confronts all of us with a far more painful and important question: What is your legacy?\n\n    How will the world be different and better when you're gone? What mark will you have made? What influence will you have had? They say that a butterfly flapping its wings in Africa can cause a hurricane in Florida; well, what hurricane will you leave in your wake?\n\n    As Becker pointed out, this is arguably the only truly important question in our life. Yet we avoid thinking about it. One, because it's hard. Two, because it's scary. Three, because we have no fucking clue what we're doing.\n- Death is the only thing we know with any certainty. It is the correct answer to all of the questions we should ask but never do.\n- We are so materially well off, yet so psychologically tormented in so many low-level and shallow ways. People relinquish all responsibility, demanding that society cater to their feelings and sensibilities. People hold on to arbitrary certainties and try to enforce them on others, often violently, in the name of some made-up righteous cause. People, high on a sense of false superiority, fall into inaction and lethargy for fear of trying something worthwhile and failing at it.\n- Bukowski once wrote, \"We're all going to die, all of us. What a circus! That alone should make us love each other, but it doesn't. We are terrorized and flattened by life's trivialities; we are eaten up by nothing.\"\n\n## Conclusion\n\nI love this book because Mark throws all the facts straight in my face, unlike every other \"make you feel good\" book. If you are allergic to the \"Fuck\" word, fret not: after reading this book, you will be comfortable with it.\n\nThere are hard truths, a lot of stories, and plain facts in this book. This book stands out from the crowd. 
Worth every minute.\n\nHope you learnt something new. Thank you for reading.\n\n**Image Credits:** Cover Image by Jason Hogan from Unsplash"
        },
        {
          "id": "book-reviews-the-4-hour-workweek",
          "title": "The 4-Hour Workweek [But Not a review]",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/the-4-hour-workweek/",
"content": "Forget the old concept of retirement and the rest of the deferred-life plan – there is no need to wait and every reason not to, especially in unpredictable economic times. Whether your dream is escaping the rat race, experiencing high-end world travel, or earning a monthly five-figure income with zero management, The 4-Hour Workweek is the blueprint.\n\nThis book changed the way I think about my professional life. I was wired to think that 9-to-5 jobs are normal and that it is \"what it is\". Then I read this book; it teaches you ways to delegate all your boring work to others, so you can focus on the most important work. The author suggests taking all the vacations we are saving for retirement right now, all while teaching us about lifestyle redesign to help us achieve that.\n\n## Chronology of a Pathology\n\n- \"An expert is a person who has made all the mistakes that can be made in a very narrow field.\" - Niels Bohr\n\n## Getting off the wrong train\n\n- \"Civilization had too many rules for me, so I did my best to rewrite them\" - Bill Cosby\n\n## Rules that change the rules\n\n- \"I can’t give you a surefire formula for success, but I can give you a formula for failure: try to please everybody all the time.\" - Herbert Bayard Swope\n\n## Pessimism: Defining the nightmare\n\n- \"Action may not always bring happiness, but there is no happiness without action.\" — Benjamin Disraeli\n- To do or not to do? To try or not to try? Most people will vote no, whether they consider themselves brave or not. Uncertainty and the prospect of failure can be very scary noises in the shadows. Most people will choose unhappiness over uncertainty.\n\n## Uncovering fear disguised as optimism\n\n- Are you better off than you were one year ago, one month ago, or one week ago? If not, things will not improve by themselves. 
If you are kidding yourself, it is time to stop and plan for a jump.\n- To enjoy life, you don’t need fancy nonsense, but you do need to control your time and realize that most things just aren’t as serious as you make them out to be. \n\n### Q&A: Questions and Actions\n\nSpend a few minutes on each answer. \n\n1. Define your nightmare, the absolute worst that could happen if you did what you are considering. \n\n    What doubts, fears, and “what-ifs” pop up as you consider the big changes you can—or need—to make? Envision them in painstaking detail. Would it be the end of your life? What would be the permanent impact, if any, on a scale of 1–10? Are these things really permanent? How likely do you think it is that they would actually happen? \n\n2. What steps could you take to repair the damage or get things back on the upswing, even if temporarily? \n\n    Chances are, it’s easier than you imagine. How could you get things back under control? \n\n3. What are the outcomes or benefits, both temporary and permanent, of more probable scenarios? \n\n    Now that you’ve defined the nightmare, what are the more probable or definite positive outcomes, whether internal (confidence, self-esteem, etc.) or external? What would the impact of these more-likely outcomes be on a scale of 1–10? How likely is it that you could produce at least a moderately good outcome? Have less intelligent people done this before and pulled it off? \n\n4. If you were fired from your job today, what would you do to get things under financial control? \n\n    Imagine this scenario and run through questions 1–3 above. If you quit your job to test other options, how could you later get back on the same career track if you absolutely had to? \n\n5. What are you putting off out of fear? \n\n    Usually, what we most fear doing is what we most need to do. That phone call, that conversation, whatever the action might be—it is fear of unknown outcomes that prevents us from doing what we need to do. 
Define the worst case, accept it, and do it. I’ll repeat something you might consider tattooing on your forehead: What we fear doing most is usually what we most need to do. As I have heard said, a person’s success in life can usually be measured by the number of uncomfortable conversations he or she is willing to have. Resolve to do one thing every day that you fear. I got into this habit by attempting to contact celebrities and famous businesspeople for advice. \n\n6. What is it costing you — financially, emotionally, and physically — to postpone action? \n\n    Don’t only evaluate the potential downside of action. It is equally important to measure the atrocious cost of inaction. If you don’t pursue those things that excite you, where will you be in one year, five years, and ten years? How will you feel having allowed circumstance to impose itself upon you and having allowed ten more years of your finite life to pass doing what you know will not fulfill you? If you telescope out 10 years and know with 100% certainty that it is a path of disappointment and regret, and if we define risk as “the likelihood of an irreversible negative outcome,” inaction is the greatest risk of all. \n\n7. What are you waiting for? \n\n    If you cannot answer this without resorting to the previously rejected concept of good timing, the answer is simple: You’re afraid, just like the rest of the world. Measure the cost of inaction, realize the unlikelihood and repairability of most missteps, and develop the most important habit of those who excel and enjoy doing so: action. \n\n## System Reset\n\n- \"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.\" — George Bernard Shaw, Maxims for Revolutionists\n\n## Doing the unrealistic is easier than doing the realistic\n\n- If you are insecure, guess what? The rest of the world is, too. 
Do not overestimate the competition and underestimate yourself. You are better than you think.\n\n## Comfort Challenge\n\n- The most important actions are never comfortable.\n\n## Step 2-E is for elimination\n\n- \"One does not accumulate but eliminate. It is not daily increase but daily decrease. The height of cultivation always runs to simplicity.\" — Bruce Lee\n- Being busy is a form of laziness—lazy thinking and indiscriminate action.\n\n## The 9-5 Illusion and Parkinson's Law\n\n- Since we have 8 hours to fill, we fill 8 hours. If we had 15, we would fill 15. If we have an emergency and need to suddenly leave work in 2 hours but have pending deadlines, we miraculously complete those assignments in 2 hours. \n\n### Questions and actions\n\n- You are the average of the five people you associate with most, so do not underestimate the effects of your pessimistic, unambitious, or disorganized friends. If someone isn’t making you stronger, they’re making you weaker.\n\n## The low information diet\n\n- \"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking.\" — Albert Einstein\n\n### Q&A: Questions And Actions\n\n- \"Learning to ignore things is one of the great paths to inner peace.\" — ROBERT J. SAWYER\n- More is not better, and stopping something is often 10 times better than finishing it. Develop the habit of non-finishing that which is boring or unproductive if a boss isn’t demanding it.\n\n## Interrupting interruption and the art of refusal\n\n- \"Do your own thinking independently. Be the chess player, not the chess piece.\" — Ralph Charell\n\n## But I am an employee, how does this help me?\n\n- If I can do it better than an assistant, why should I pay them at all? 
Because the goal is to free your time to focus on bigger and better things.\n\n## Lifestyle design in action\n\n- Don’t call it a problem if you can avoid it.\n\n## Income autopilot 1\n\n- \"As to methods there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.\" — Ralph Waldo Emerson\n\n## Why to begin with the end in mind: A cautionary tale\n\n- Creating demand is hard. Filling demand is much easier. Don’t create a product, then seek someone to sell it to. Find a market — define your customers — then find or develop a product for them.\n- It is said that if everyone is your customer, then no one is your customer.\n- The main benefit of your product should be explainable in one sentence or phrase. How is it different and why should I buy it? ONE sentence or phrase, folks. Apple did an excellent job of this with the iPod. Instead of using the usual industry jargon with GB, bandwidth, and so forth, they simply said, “1,000 songs in your pocket.” Done deal. Keep it simple and do not move ahead with a product until you can do this without confusing people. \n\n## Tools and tricks\n\n- Reinventing the wheel is expensive—become an astute observer of what is already working and adapt it.\n\n## Income autopilot 3\n\n### MBA (Management by Absence)\n\n- \"A company is stronger if it is bound by love rather than by fear. 
If the employees come first, then they’re happy.\" — Herb Kelleher \n\n## How to escape the office\n\n- \"By working faithfully eight hours a day, you may eventually get to be a boss and work twelve hours a day.\" — ROBERT FROST, American poet \n\n## Caste to castaway\n\n- Work wherever and whenever you want, but get your work done.\n\n### Questions and actions\n\n- Don’t underestimate how much your company needs you.\n\n## Beyond Repair\n\n- \"All courses of action are risky, so prudence is not in avoiding danger (it’s impossible), but calculating risk and acting decisively. Make mistakes of ambition and not mistakes of sloth. Develop the strength to do bold things, not the strength to suffer.\" — Niccolo Machiavelli, The Prince\n\n## Pride and punishment\n\n- “But, you don’t understand my situation. It’s complicated!” But is it really? Don’t confuse the complex with the difficult. Most situations are simple — many are just emotionally difficult to act upon. The problem and the solution are usually obvious and simple. It’s not that you don’t know what to do. Of course you do. You are just terrified that you might end up worse off than you are now. I’ll tell you right now: If you’re at this point, you won’t be worse off. \n\n### Questions and Actions\n\n- \"Only those who are asleep make no mistakes.\" — Ingvar Kamprad, founder of IKEA, world’s largest furniture brand\n- In the world of action and negotiation, there is one principle that governs all others: The person who has more options has more power.\n\n## The Birth of Mini-Retirements and the Death of Vacations\n\n- \"There is more to life than increasing its speed.\" — Mohandas Gandhi\n- Learn to slow down.\n\n## Postpartum Depression: It’s Normal\n\n- Man is so made that he can only find relaxation from one kind of labor by taking up another. 
— ANATOLE FRANCE, author of The Crime of Sylvestre Bonnard\n- Too much free time is no more than fertilizer for self-doubt and assorted mental tail-chasing.\n\n## Frustrations and Doubts: You’re Not Alone\n\n- \"People say that what we are seeking is a meaning for life. I don’t think this is what we’re really seeking. I think what we’re seeking is an experience of being alive.\" — JOSEPH CAMPBELL, The Power of Myth\n- If you can’t define it or act upon it, forget it.\n\n## The Point of It All: Drumroll, Please\n\n- \"What man actually needs is not a tensionless state but rather the striving and struggling for a worthwhile goal, a freely chosen task.\" — VIKTOR E. FRANKL\n\n### Questions and Actions\n\n- \"The miracle is not to walk on water. The miracle is to walk on the green earth, dwelling deeply in the present moment and feeling truly alive.\" — THICH NHAT HANH\n- Full-time work isn’t bad if it’s what you’d rather be doing.\n\n### The Top 13 New Rich Mistakes\n\n- If you don’t make mistakes, you’re not working on hard enough problems. \n- One of the most universal causes of self-doubt and depression: trying to impress people you don’t like.\n- Money doesn’t change you; it reveals who you are when you no longer have to be nice.\n\n## The choice minimal lifestyle: 6 formulas for more output and less overwhelm\n\n- Is your weekend really free if you find a crisis in the inbox Saturday morning that you can’t address until Monday morning?\n- If you don’t prioritize, everything seems urgent and important.\n- Work is not all of life. \n- Never tell yourself “I’ll just get it done this weekend.”\n- Being busy is not the same as being productive.\n\nHope you learnt something valuable from this extract. Thank you for reading.\n\n**Image Credits:** Cover Image by Kevin Bhagat from Unsplash
        },
        {
          "id": "book-reviews-valuable-lessons-from-how-to-win-friends-and-influence-people",
          "title": "Valuable lessons from How to Win Friends And Influence People",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/valuable-lessons-from-how-to-win-friends-and-influence-people/",
"content": "\"How to Win Friends and Influence People\" is a bestseller by Dale Carnegie published in 1936 (I was amazed the book was this old; well, a good book never gets old!). It is a self-help book where the author teaches various techniques through real-life encounters and shows how those techniques have been implemented by well-known people in the past.\n\nThis blog is for all of you:\n\n1. Already read the book?\n\n   Read on, and revise these very helpful and somewhat hard techniques (you know what I am talking about).\n\n2. Haven't read the book?\n\n   Read on; I will let you in on all the secret techniques the author has provided in the book. This will act as a very quick summary for you until you can read it in full.\n\nOne last thing, don't forget to read **\"Father Forgets\"** [below](#father-forgets); it's an amazing story and the only one from the book I have included here.\n\nBelow I list all the valuable lessons I learnt from Dale Carnegie in \"How to Win Friends and Influence People\".\n\n### You can't win an argument\n\n- There is only one way under heaven to get the best out of an argument - and that is to avoid it. Avoid it as you would avoid rattlesnakes and earthquakes.\n- When one yells, the other should listen - because when two people yell, there is no communication, just noise and bad vibrations.\n- Don't raise your voice, improve your argument.\n- Rule 1: **Avoid arguments.**\n\n### How to avoid making enemies\n\n- Rule 2: **Never tell the other person, 'You're wrong.'**\n\n### If you're wrong, admit it\n\n- Rule 3: **If you are wrong, admit it.**\n\n### A drop of honey\n\n- Rule 4: **Begin in a friendly way.**\n\n### Secret of Socrates\n\n- When you disagree, don't try to argue. Instead, think about how you can make the other party say \"yes, yes\" first; this will lead to them having an open attitude and acceptance. 
And in the end you can get them to agree with you if you are right; otherwise, you should admit your mistake.\n- Develop success from failures. Discouragement and failure are two of the surest stepping stones to success.\n- Rule 5: **Get the other person to agree with you immediately.**\n\n### The safety valve in handling complaints\n\n- Listen and don't interrupt while others are talking. This applies in business, interviews, etc. Let them do the talking.\n- \"If you want enemies, excel your friends; but if you want friends, let your friends excel you.\" - La Rochefoucauld. Why is that true? Because when our friends excel us, they feel important; but when we excel them, they - or at least some of them - will feel inferior and envious.\n- Rule 6: **Let the other person do more of the talking**\n\n### How to get cooperation\n\n- Rule 7: **Let the other person have ownership of the idea.**\n\n### The magic formula\n\n- Remember that other people may be totally wrong. But they don't think so. Don't condemn them. Any fool can do that. Try to understand them. Only wise, tolerant, exceptional people even try to do that.\n- Do the thing you fear to do and keep on doing it... that is the quickest and surest way ever yet discovered to conquer fear.\n- Rule 8: **Try to see things from the other's point of view.**\n\n### What everybody wants\n\n- Rule 9: **Be sympathetic to the other person's ideas and desires.**\n\n### An appeal that everybody likes\n\n- \"Criticism, like rain, should be gentle enough to nourish a man's growth without destroying his roots\" - Frank A. Clark\n- Try honestly to see things from the other person's point of view.\n- Rule 10: **Appeal to the nobler motives.**\n\n### The movies do it. TV does it. Why don't you do it?\n\n- Rule 11: **Dramatise your ideas.**\n\n### When nothing else works, try this\n\n- The way to get things done is to stimulate competition. 
I do not mean in a sordid money-getting way, but in the desire to excel.\n- All men have fears, but the brave put down their fears and go forward, sometimes to death, but always to victory.\n- Fear doesn't exist anywhere except in the mind.\n- Rule 12: **Challenges generate excitement**\n\n### Do this and you'll be welcomed anywhere\n\n- You can make more friends in two months by becoming genuinely interested in other people than you can in two years by trying to get other people interested in you.\n- If you want others to like you, if you want to develop real friendships, if you want to help others at the same time as you help yourself, keep this principle in mind: show a genuine interest in other people.\n- \"All the effort in the world won't matter if you're not inspired.\" - Chuck Palahniuk\n- Rule 13: **Show a genuine interest in other people.**\n\n### Make a good first impression\n\n- It isn't what you have or who you are or where you are or what you are doing that makes you happy, or unhappy. It is what you think about it. For example, two people may be in the same place, doing the same thing; both may have about an equal amount of money and prestige - and yet one may be miserable and the other happy. Why? Because of a different mental attitude.\n- \"There is nothing either good or bad, but thinking makes it so\" - Shakespeare\n- \"Most folks are about as happy as they make up their minds to be.\" - Abe Lincoln\n- \"A man without a smiling face must not open a shop\" - Chinese Proverb\n- Your smile is a messenger of your goodwill. Your smile brightens the lives of all who see it. To someone who has seen a dozen people frown, scowl or turn their faces away, your smile is like the sun breaking through the clouds. 
Especially when that someone is under pressure from his bosses, his customers, his teachers or parents or children, a smile can help him realise that all is not hopeless - that there is joy in the world.\n- Be sympathetic with the other person's ideas and desires.\n- Rule 14: **A smile is the best ornament you can wear.**\n\n### If you don't do this, you are headed for trouble\n\n- \"A single day is enough to make us a little larger or, another time, a little smaller.\" - Paul Klee\n- Everybody in the world is seeking happiness - and there is one sure way to find it. That is by controlling your thoughts. Happiness doesn't depend on outward conditions. It depends on inward conditions.\n- Rule 15: **A person's name is the sweetest and most important sound in any language to him or her.**\n\n### Become a good conversationalist\n\n- If you aspire to be a good conversationalist, be an attentive listener.\n- Remember that the people you are talking to are a hundred times more interested in themselves and their wants and problems than they are in you and your problems. A person's toothache means more to that person than a famine in China which kills a million people. A boil on one's neck interests one more than forty earthquakes in Africa. Think of that the next time you start a conversation.\n- Feeling sorry for yourself, and your present condition, is not only a waste of energy but the worst habit you could possibly have.\n- Rule 16: **Be a good listener.**\n\n### How to interest people\n\n- \"When we give cheerfully and accept gratefully, everyone is blessed.\" - Maya Angelou\n- Rule 17: **Talk in relation to the other person's interests.**\n\n### How to make people like you instantly\n\n- Always make the other person feel important.\n- Every man I meet is my superior in some way. In that, I learn of him.\n- \"In the end, those who demean others only disrespect themselves.\" - D.B. Harrop\n- First ask yourself: What is the worst that can happen? Then prepare to accept it. 
Then proceed to improve on the worst.\n- Rule 18: **Make the other person feel important.**\n\n### If you must find fault, this is the way to begin\n\n- If you want to find fault in someone, begin with praise and then move on to what you have to say.\n- Rule 19: **Begin with praise.**\n\n### How to criticise and not be hated for it\n\n- Many people begin their criticism with sincere praise followed by the word 'but' and ending with a critical statement.\n\n  For example, in trying to change a child's careless attitude towards studies, we might say, 'We're really proud of you, Johnnie, for raising your grades this term. But if you had worked harder on your algebra, the results would have been better.'\n\n  In this case, the person who is on the other end might feel encouraged until they hear the word 'but'. They might then question the sincerity of the original praise. To them, the praise seemed only to be a contrived lead-in to a critical inference of failure. Credibility would be strained, and we probably would not achieve our objectives of changing their attitude. This could be easily overcome by changing the word 'but' to 'and'. 'We're really proud of you, Johnnie, for raising your grades this term, and by continuing the same conscientious efforts next term, your algebra grade can be up with all the others.'\n\n  Now, they would accept the praise because there was no follow-up of an inference of failure. 
We have called their attention to the behavior we wished to change indirectly, and the chances are they will try to live up to our expectations.\n\n- \"Criticism is something we can avoid easily by saying nothing, doing nothing, and being nothing.\" - Aristotle\n- Flaming enthusiasm, backed up by horse sense and persistence, is the quality that most frequently makes for success.\n- Rule 20: **Call attention to people's mistakes indirectly.**\n\n### Talk about your own mistakes first\n\n- \"If you have no critics you'll likely have no success.\" - Malcolm X\n- If only the people who worry about their liabilities would think about the riches they do possess, they would stop worrying.\n- Rule 21: **Before criticising the other person, talk about your own mistakes.**\n\n### No one likes to take orders\n\n- Asking questions not only makes an order more palatable; it often stimulates the creativity of the person whom you ask. People are more likely to accept an order if they have had a part in the decision that caused the order to be issued.\n- Rule 22: **Ask questions instead of giving direct orders.**\n\n### Let the other person save face\n\n- Even if we are right and the other person is definitely wrong, we only destroy ego by causing someone to lose face.\n- \"I have no right to say or do anything that diminishes a man in his own eyes. What matters is not what I think of him, but what he thinks of himself. Hurting a man in his dignity is a crime.\" - Antoine de Saint-Exupéry\n- A real leader will always let the other person save face.\n- If you believe in what you are doing, then let nothing hold you up in your work. Much of the best work of the world has been done against seeming impossibilities. The thing is to get the work done.\n- Rule 23: **Let the other person save face.**\n\n### How to encourage people\n\n- Let us praise even the slightest improvement. 
That inspires the other person to keep on improving.\n- When criticism is minimized and praise emphasized, the good things people do will be reinforced and the poorer things will atrophy for lack of attention.\n- Abilities wither under criticism; they blossom under encouragement. To become a more effective leader of people, praise every improvement.\n- Rule 24: **Praise every improvement**\n\n### Give a dog a good name\n\n- \"The average person can be led readily if you have his or her respect and if you show that you respect that person for some kind of ability.\" - Samuel Vauclain\n- If you want to excel in that difficult leadership role of changing the attitude or behavior of others, give the other person a good reputation to live up to.\n- If you can't sleep, then get up and do something instead of lying there worrying. It's the worry that gets you, not the lack of sleep.\n- Rule 25: **Give the other person a good reputation to live up to.**\n\n### Make the fault seem easy to correct\n\n- Praise people for the things they do right instead of emphasizing their mistakes. Criticism discourages people, while praise encourages them and gets the best out of them.\n- If you want to help others to improve, use encouragement.\n- Act enthusiastic and you will be enthusiastic.\n- Rule 26: **Use Encouragement**\n\n### Making people glad to do what you want\n\n- Always make the other person happy about doing the things you suggest.\n- The effective leader should keep the following guidelines in mind when it is necessary to change attitudes or behavior:\n\n  1. Be sincere. Do not promise anything that you cannot deliver. Forget about the benefits to yourself and concentrate on the benefits to the other person.\n  2. Know exactly what it is you want the other person to do.\n  3. Be empathetic. Ask yourself what it is the other person really wants.\n  4. Consider the benefits that person will receive from doing what you suggest.\n  5. 
Match those benefits to the other person's wants.\n  6. When you make your request, put it in a form that will convey to the other person the idea that he personally will benefit.\n\n     We could give a curt order like this: 'John, we have customers coming in tomorrow and I need the stockroom cleaned out. So sweep it out, put the stock in neat piles on the shelves and polish the counter.' Or we could express the same idea by showing John the benefits he will get from doing the task: 'John, we have a job that should be completed right away. If it is done now, we won't be faced with it later. I am bringing some customers in tomorrow to show our facilities. I would like to show them the stockroom, but it is in poor shape. If you could sweep it out, put the stock in neat piles on the shelves and polish the counter, it would make us look efficient and you will have done your part to provide a good company image.'\n\n- \"Let us be grateful to people who make us happy, they are the charming gardeners who make our souls blossom.\" - Marcel Proust\n- Rule 27: **Make the other person happy about doing whatever you suggest.**\n\n### Think before you criticise\n\n- \"I learned thirty years ago that it is foolish to scold. I have enough trouble overcoming my own limitations without fretting over the fact that God has not seen fit to distribute evenly the gift of intelligence.\" - John Wanamaker\n- When dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion, creatures bristling with prejudices and motivated by pride and vanity.\n- \"I will speak ill of no man and speak all the good I know of everybody.\" - Benjamin Franklin\n- \"A great man shows his greatness by the way he treats little men.\" - Carlyle\n- Instead of condemning people, let's try to understand them. Let's try to figure out why they do what they do. 
That's a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness. \"To know all is to forgive all\".\n- Rule 28: **Don't condemn, complain or criticise.**\n\n#### FATHER FORGETS\n\nListen Son, I am saying this as you lie asleep, one little hand crumpled under your cheek and blonde curls sticky over your wet forehead. I have broken into your room alone. Just a few minutes ago, as I sat reading my paper in the library, a stifling wave of remorse swept over me. Guilty, I came to your bedside.\n\nThese are the things I was thinking, son: I had been cross to you. I scolded you as you were dressing for school because you gave your face a mere dab with the towel. I took you to task for not cleaning your shoes. I called out angrily when you threw some of your things on the floor.\n\nAt breakfast I found fault, too. You spilled things. You gulped down your food. You put your elbows on the table. You spread butter too thick on your bread. As you started off to play and I made for my train, you turned and waved a hand and called, \"Goodbye, Daddy!\" I frowned, and said in reply, \"Hold your shoulders back!\"\n\nThen it began all over again late this afternoon. As I came up the road I spied you, down on your knees, playing marbles. There were holes in your socks. I humiliated you before your friends by marching you ahead of me to the house. Socks were expensive, and if you had to buy them you would be more careful! Imagine that, son, from a father.\n\nDo you remember later, when I was reading in the library, how you came timidly, with sort of a hurt look in your eyes? I glanced up over my paper, impatient at the interruption; you hesitated at the door. \"What is it that you want?\" I snapped.\n\nYou said nothing, but ran across in one tempestuous plunge, threw your arms around my neck and kissed me, your small arms tightened with affection that God had set blooming in your heart, which even neglect could not wither. 
Then you were gone, pattering up the stairs.\n\nWell, Son, it was shortly afterwards that my paper slipped from my hands and a terrible sickening fear came over me. What was habit doing to me? The habit of finding fault, of reprimanding; this was my reward to you for being a boy. It was not that I did not love you: it was that I expected too much of you. I was measuring you by the yardstick of my own years.\n\nThere is so much that was good, fine and true in your character. The little heart of yours was as big as the dawn itself over the hills. This was shown by your spontaneous impulse to rush in and kiss me good night. Nothing else mattered tonight. Son, I have come to your bedside in the darkness, I have knelt there, ashamed!\n\nIt is a feeble atonement; I know you would not understand these things if I told you in your waking hours. Tomorrow I will be a real daddy! I will chum with you, suffer when you suffer and laugh when you laugh. I will bite my tongue when impatient words come. I will keep saying as if it were a ritual: \"He is nothing but a boy--a little boy.\"\n\nI am afraid I have visualized you as a man. Yet as I see you now, Son, crumpled and weary in your bed, I see that you are still a baby. Yesterday you were in your mother's arms, your head on her shoulder. I have asked too much, too much!\n\n*Instead of condemning and criticizing others, perhaps it would be better to try to understand them, to try to figure out why they do what they do. That's a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness, rather than contempt...!!!*\n\n### The big secret of dealing with people\n\n- There is only one way under high heaven to get anybody to do anything. And that is by making the other person want to do it.\n- Be hearty in your approbation and lavish in your praise.\n- Everyone should be respected as an individual, but no one idolised. 
- Albert Einstein\n- If you want to be enthusiastic, act enthusiastic.\n- Rule 29: **Appreciation should be honest, not flattery.**\n\n### Understand the other's point of view\n\n- \"If there is any one secret of success, it lies in the ability to get the other person's point of view and see things from that person's angle as well as from your own.\" - Henry Ford\n- \"What you do makes a difference, and you have to decide what kind of difference you want to make.\" - Jane Goodall\n- If you want to conquer fear, don't sit home and think about it. Go out and get busy.\n- Rule 30: **Arouse in the other person a desire for the object.**\n\n[[notice | Read the book!!!]]\n|The book is full of interesting stories that I haven't covered here. You will find the book interesting if you found these lessons helpful.\n\nTo conclude, these techniques read really well, but personally I have found them very hard to apply. At the same time, how can you know whether they work until you try?\n\nI am trying and applying some of these great techniques:\n\n- I always think twice before saying anything (especially when messaging; harder in person, though)\n- I have started being \"lavish in praise\"\n- I try not to see mistakes in others and instead try to put myself in their shoes before making any decisions\n\nThank you for reading it this far. Or did you just scroll to the end? LOL. Stay awesome and healthy.\n\n**Image Credits:** Cover Image by Felix Rostig from Unsplash"
        },
        {
          "id": "book-reviews-top-quotes-and-important-points-from-atomic-habits",
          "title": "Top Quotes and important points from Atomic Habits",
          "collection": {
            "label": "book_reviews",
            "name": "Posts"
          },
          "categories": "book-reviews",
          "tags": "quotes, not a review",
          "url": "/book-reviews/top-quotes-and-important-points-from-atomic-habits/",
          "content": "Atomic Habits, a book by James Clear, is a must-read for anyone wanting to change or create a habit. It provides very practical ways to build good habits and break bad ones.\n\nTogether with different strategies and tools to form a new habit, it also contains a range of striking quotes and points. Among them, there is one in particular that I keep remembering:\n\n> A genius is not born, but is educated and trained.\n\nI doubt I will ever forget this one. Below I have listed some of the quotes and points which I thought were important, chapter by chapter.\n\n### Introduction\n\n- We all deal with setbacks, but in the long run, the quality of our lives often depends on the quality of our habits.\n- With the same habits, you’ll end up with the same results. But with better habits, anything is possible.\n- It is so easy to overestimate the importance of one defining moment and underestimate the value of making small improvements on a daily basis.\n\n### How Your Habits Shape Your Identity\n\n- Are you reading books and learning something new each day? Tiny battles like these are the ones that will define your future self.\n- Becoming the best version of yourself requires you to continuously edit your beliefs, and to upgrade and expand your identity.\n- The real reason habits matter is not because they can get better results (although they can do that), but because they can change your beliefs about yourself.\n\n### How to Build Better Habits in Simple Steps\n\n- The ultimate purpose of habits is to solve the problems of life with as little energy and effort as possible.\n\n### The 1st Law: Make it Obvious\n\n- There are no good habits and bad habits. There are only effective habits.\n- The process of behavior change always starts with awareness. 
You need to be aware of your habits before you can change them.\n\n### Motivation is Overrated; Environment Often Matters More\n\n- Small changes in context can lead to large changes in behavior over time.\n\n### The Secret to Self Control\n\n- Once a habit is formed, it is unlikely to be forgotten.\n- People with high self control tend to spend less time in tempting situations. It's easier to avoid temptation than resist it.\n\n### The 2nd Law: Make it Attractive\n\n- The more attractive an opportunity is, the more likely it is to become habit-forming.\n\n### The role of family and friends in shaping your habits\n\n- The culture we live in determines which behaviors are attractive to us.\n- We tend to adopt habits that are praised and approved of by our culture because we have a strong desire to fit in and belong to the tribe.\n- A genius is not born, but is educated and trained.\n- One of the most effective things you can do to build better habits is to join a culture where (1) your desired behavior is the normal behavior and (2) you already have something in common with the group.\n- The normal behavior of the tribe often overpowers the desired behavior of the individual. Most days, we’d rather be wrong with the crowd than be right by ourselves.\n\n### The 3rd Law: Make it Easy\n\n- Walk slowly, but never backward.\n- The most effective form of learning is practice, not planning.\n- If you want to master a habit, the key is to start with repetition, not perfection.\n\n### The law of least effort\n\n- Business is a never-ending quest to deliver the same result in an easier fashion.\n\n### How to stop procrastinating by using the two-minute rule\n\n- The two-minute rule states, \"When you start a new habit, it should take less than two minutes to do.\"\n\n### The 4th law: Make it satisfying\n\n- We are more likely to repeat a behavior when the experience is satisfying.\n- The cardinal rule of behavior change: What is immediately rewarded is repeated. 
What is immediately punished is avoided.\n- The vital thing in getting a habit to stick is to feel successful, even if it's in a small way.\n- In the beginning, you need a reason to stay on track. This is why immediate rewards are essential. They keep you excited while the delayed rewards accumulate in the background.\n\n### How to stick with good habits every day\n\n- Habits need to be enjoyable if they are going to stick.\n- One of the most satisfying feelings is the feeling of making progress.\n- A habit tracker is a simple way to measure whether you did a habit - like marking an X on a calendar.\n- Habit trackers and other visual forms of measurement can make your habits satisfying by providing clear evidence of your progress.\n- Don't break the chain. Try to keep your habit streak alive.\n\n### How an accountability partner can change everything\n\n- An accountability partner can create an immediate cost to inaction. We care deeply about what others think of us, and we do not want others to have a lesser opinion of us.\n\n### The truth about talent (When genes matter and when they don't)\n\n- The secret of maximizing your odds of success is to choose the right field of competition.\n- Genes cannot be easily changed, which means they provide a powerful advantage in favorable circumstances and a serious disadvantage in unfavorable circumstances.\n- Play a game that favors your strengths.\n- Genes do not eliminate the need for hard work. They clarify it. They tell us what to work hard on.\n- Until you work as hard as those you admire, don’t explain away their success as luck.\n\n### The Goldilocks rule: How to stay motivated in life and work\n\n- The greatest threat to success is not failure but boredom.\n- As habits become routine, they become less interesting and less satisfying. We get bored.\n- No habit will stay interesting forever. 
At some point, everyone faces the same challenge on the journey of self-improvement: you have to fall in love with boredom.\n- Stepping up when it’s annoying or painful or draining to do so, that’s what makes the difference between a professional and an amateur.\n- Anyone can work hard when they feel motivated. It’s the ability to keep going when work isn’t exciting that makes the difference.\n- Professionals stick to the schedule; amateurs let life get in the way.\n\n### The downside of creating good habits\n\n- Habits + Deliberate Practice = Mastery\n\n### Little Lessons from the Four Laws\n\n- Happiness is simply the absence of desire.\n\n[[notice | Confession time]]\n|I wasn't sure if anyone would be interested in reading these types of blogs, so I ran a Twitter poll. The response was decent, so here we are! I had written down these points just so I could look at them in the future because I liked them. I hope you enjoy reading them as much as I enjoyed noting them down.\n\nIf you have come this far, congratulations! You have reached the end of the blog. Thank you for reading.\n\n**Image Credits:** Cover Image by Lala Azizli from Unsplash"
        },
          {
            "id": "notes",
            "title": "Notes",
            "categories": "",
            "tags": "",
            "url": "/notes/",
            "content": "Quick notes and solutions to common software development problems. For longer, more thorough writing — see the articles section."
          },
          {
            "id": "notes-page-2",
            "title": "Notes (Page 2)",
            "categories": "",
            "tags": "",
            "url": "/notes/page/2/",
            "content": "Quick notes and solutions to common software development problems. For longer, more thorough writing — see the articles section."
          },
          {
            "id": "articles",
            "title": "Articles",
            "categories": "",
            "tags": "",
            "url": "/articles/",
            "content": "Technical articles about Ruby on Rails, web development and software engineering. For smaller, more regular tidbits — see the notes section."
          },
          {
            "id": "articles-page-2",
            "title": "Articles (Page 2)",
            "categories": "",
            "tags": "",
            "url": "/articles/page/2/",
            "content": "Technical articles about Ruby on Rails, web development and software engineering. For smaller, more regular tidbits — see the notes section."
          },
          {
            "id": "articles-page-3",
            "title": "Articles (Page 3)",
            "categories": "",
            "tags": "",
            "url": "/articles/page/3/",
            "content": "Technical articles about Ruby on Rails, web development and software engineering. For smaller, more regular tidbits — see the notes section."
          },
          {
            "id": "book-reviews",
            "title": "Book Reviews",
            "categories": "",
            "tags": "",
            "url": "/book-reviews/",
            "content": "Reviews and takeaways from books I've read."
          }
]
