Hey there! Are you new to these parts of the internet? If you are, then allow me to be your guide in understanding what Mixster really is and why this little corner of the web has become an integral part of who I am over the years. You'll also get to know what we do here.
Lush green trees, monsoon weather, soul-cooling winds blowing in from the tumultuous waters of the beach, where the sound of waves splashing over the dry sand can be heard by people walking by. Now, just imagine a tech conference about a piece of software that powers almost half of the web today, held at the very place I just described. How could you stop yourself? I couldn’t. Here’s a brief Twitter story of my adventures at React India.
This year has been amazing for me. For my communities, my projects, my initiatives, my interests, my work, my life, and of course Mixster. Hmm… a lot of “my” in that sentence. Let’s fix that. 2019 also gave me my fair share of letdowns, losses, failures, anxiety, depression, breakups, server crashes, and bugs. And I am sure I haven’t seen the last of them. But that hasn’t stopped me from pausing my life for a moment, looking back, and writing this amazing post again. That’s why you have been sent a link to this post. Because you are special and you did something great this year. Little things that I like to acknowledge and appreciate.
Now, while people rush towards the finish line in this silly pursuit of happiness, I want to be different. I would like to be the one who stops just for a day, looks back, and says thanks to all the amazing people that mattered, people that helped, folks without whom I wouldn’t be here. This is me, uncut, taking names and giving hugs per name taken. That’s what Acknowledgement-ations is all about. This is Vipul Gupta, and he is feeling grateful and full of love. This one is going to be extremely long, and some of you are extremely busy, so feel free to just search for your name for a faster turnaround.
In this tutorial, we will cover the basics of setting up Graphite and Logster for Apache logs. The first step is to set up an instance of Graphite. Read the following tutorial to install Graphite on your system and get it running. Later, we can use the same instance with Logster to implement logging.
Graphite is an enterprise-ready infrastructure monitoring solution that can plug into existing infrastructure and solve the problems of time-series data storage, performance measurement, and data visualization. It is easily deployed as a platform in the cloud or on-prem. It is a mature and reliable open-source monitoring solution that handles monitoring for numerous large companies. With an extensive set of integrations and tools available, Graphite can be adapted to your needs with different storage backends, data collection agents, visualization tools, anomaly detection, and alerting.
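To make the data-collection side concrete: Graphite's Carbon daemon accepts metrics over a simple plaintext protocol, one `path value timestamp` line per data point, on port 2003 by default. Here is a minimal sketch of feeding it a metric, assuming a Carbon instance listening locally on the default port (the metric name `apache.requests.count` is just an illustrative example):

```python
import socket
import time

def format_metric(path, value, timestamp=None):
    """Build one line of Carbon's plaintext protocol: '<path> <value> <ts>\n'."""
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {ts}\n"

def send_metric(path, value, host="127.0.0.1", port=2003):
    """Ship a single data point to a Carbon plaintext listener over TCP."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(format_metric(path, value).encode("ascii"))

# Example (only works with Carbon actually running locally):
# send_metric("apache.requests.count", 42)
```

This is exactly the kind of line Logster will emit for us automatically once it is parsing the Apache logs, so you rarely need to write this by hand.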
Busy servers and applications have a lot of things to monitor, and the stats come in several forms: stats about your servers, about the applications running on those servers, and loads of metrics that need to be collected and monitored properly. Collecting and processing these stats helps with decisions about scaling, system performance, troubleshooting, and more in your configuration. For monitoring to be precise, the system needs loads of data, and the more data collected, the better the chances of understanding what is happening at any point in time. In this blog, we will place two popular collection programs, StatsD and CollectD, head to head to see which works better in diverse use cases, and list their pros and cons. Let’s go over why StatsD and CollectD are called daemons.
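One difference shows up right at the wire level: StatsD receives application metrics as tiny fire-and-forget UDP packets of the form `bucket:value|type` (port 8125 by default), pushed from your own code, while CollectD is a daemon that gathers system stats itself via plugins. As a minimal sketch of the StatsD side, assuming a daemon on localhost with the default port (the bucket names are illustrative):

```python
import socket

def statsd_packet(bucket, value, metric_type="c", rate=None):
    """Build a StatsD wire-format packet, e.g. 'apache.hits:1|c'.

    metric_type is 'c' (counter), 'ms' (timer), 'g' (gauge), etc.;
    rate, if given, appends a sample-rate suffix like '|@0.1'.
    """
    packet = f"{bucket}:{value}|{metric_type}"
    if rate is not None:
        packet += f"|@{rate}"
    return packet

def send_counter(bucket, value=1, host="127.0.0.1", port=8125):
    """Fire-and-forget a counter increment over UDP, StatsD's transport."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_packet(bucket, value).encode("ascii"), (host, port))
    finally:
        sock.close()
```

Because it is UDP, a missing StatsD daemon never slows down or crashes the application doing the sending, which is a big part of StatsD's appeal.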
As soon as one hears about Kubernetes, or K8s, some people’s minds run off to faraway lands wondering what this complex piece of technology really is. With this post, I will do my best to bring some unique clarity to the subject with the help of my favorite sitcom, The Office. This is for people who know nothing, know very little, or should know nothing about the technology but still want to know what the hype is about. It’s for everyone. Also, a bit of a disclaimer.
“There was an idea. Vipul knows this, called the Mixster Author initiative. The idea was to bring together a group of remarkable people to see if they could become something more. To see if they could collaborate when Mixster needed them to, to write content that I alone never could.”
Adapted from Nick Fury’s speech in The Avengers (2012)
As developers, we all love keeping our files in the cloud securely so that we can access them anytime, anywhere. What better service for that than Microsoft OneDrive? Microsoft OneDrive is a file hosting and synchronization service operated by Microsoft as part of its web version of Office. The official OneDrive client is not available for Linux. And that ain’t fair.
Why should we have to use the blue screen of death in the first place to access all our files and folders? We could take the browser route as well, but no one likes signing in and out each time they want to access their files. What if I told you that you can use OneDrive natively on your Linux machine, without the need for any client or third-party software?
Even though Microsoft loves Linux now, the “love” kind of dries up when you start to think about downloading a OneDrive client for your favorite Linux distro, only to find that there is no real client out there that fulfills your needs. I needed an all-in-one solution for securely syncing my files to OneDrive without worrying about losing them.
Some time back, Vipul on Mixster wrote an awesome review of Insync as the all-in-one solution for your Google Drive client troubles on Linux. We are following up after Insync contacted us to write another review, this time for their new OneDrive Linux client. Long story short, it changed my mind: from using the OneDrive web app as my standard way of accessing my files to using Insync’s awesome OneDrive client. Here’s a small review of how useful I found it to be. Let’s get started!
YouTube runs thousands of servers and streams videos to millions of viewers. We tend to assume a server is a single physical machine; in reality, servers are often virtual machines, many of them running on top of a single physical machine, across the thousands of computers in a data center. Wanna know more about them? Click –>
There is a lovely, oversized, semi-woolen, red-and-black checkered shirt that I frequently wear and indiscriminately slip into every evening after returning to my room. Mind you, it is stolen property—I finagled it from my father when I was exactly 6 years old and wildly impatient to obtain a cape-like attire. You see, it was imperative for me and my playmates to have a superhero costume handy that evening in case it rained and confined us indoors, which it did. Thus, the thievery.
Years later, the shirt has lost its association with colorful tales of faux heroism and has become something of a curio, only better: it is wearable, warm, and immeasurably comforting. I might now associate it with home, or just homeyness, and on some days it is not even home that I miss, but a sense of belonging that I constantly find lacking in recent times—the past year, to be precise—making me feel adrift and lonely. It is not unique, nor is it novel; most people have this ‘loneliness’ complaint nowadays. The scale of this affliction is scary, and its direct relationship with the number of people alive today, confounding. But everyone finds their own coping mechanism, I think.
It’s 4 o’clock on a rainy Friday morning, and one more Google Summer of Code has ended. My second, to be exact. Time for me to hang up my boots and start writing another report. Probably my last on this subject. There’s a lot to write.
Well, as far as the flow goes, CerberusValidator works with schemas that are in a mapping structure: basically, any dict whose values are themselves dicts describing the rules (such as the type) for each value. If you don’t get it, then check this out: https://docs.python-cerberus.org/en/stable/
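To make "mapping structure" concrete, here is a rough, stdlib-only sketch of the idea. This is not Cerberus itself — the real Validator supports far more rules, different error messages, and does not require fields by default — just a simplified stand-in showing what a schema-as-dict-of-rule-dicts means:

```python
# A mapping-style schema: a dict whose values are dicts of rules per field.
schema = {
    "name": {"type": "string"},
    "age": {"type": "integer"},
}

# Simplified stand-in for schema validation (type rules only; Cerberus
# itself handles many more rules and treats missing fields as optional).
TYPES = {"string": str, "integer": int}

def validate(document, schema):
    """Return a dict of field -> error message; empty dict means valid."""
    errors = {}
    for field, rules in schema.items():
        if field not in document:
            errors[field] = "required field missing"
        elif not isinstance(document[field], TYPES[rules["type"]]):
            errors[field] = f"must be of {rules['type']} type"
    return errors

print(validate({"name": "Vipul", "age": 21}, schema))    # {}
print(validate({"name": "Vipul", "age": "21"}, schema))  # flags 'age'
```

With actual Cerberus you would instead construct a `Validator(schema)` and call `.validate(document)`, but the shape of the schema dict is the same.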
But Cerberus only cares about the schema and data it gets from the user, not where they come from. Most of our users will be giving the schema in the form of either URLs or paths to files, which was fine by us until somewhere around week 12, when I realized I had forgotten to handle that properly in the code. Nothing to be afraid of; I had to redo some old functions, and actually improved a lot of old code in the process. How time flies by. Damn.
Not much is left to be done, except writing a few more tests, a lot of testing, and merging it all to master. I am confident we can make it before August 19. Let’s see. Fingers crossed. This is vipulgupta2048 signing off for the second-to-last time here. I won’t be going anywhere, if that’s what you’re thinking.
There is a lot of work to be done at ScrapingHub x The Scrapy Project. Looking forward to new challenges.