[Featured Content // Reading Time – 1 min // Please do read]
Hey there! Are you new to these parts of the internet? If you are, then allow me to be your guide and help you understand what Mixster is to me, why this little corner of the web has been an integral part of who I am and who I have become over the years, and what we really do here.
As developers, we all love keeping our files in the cloud securely so that we can access them anytime, anywhere. What better service to do that with than Microsoft OneDrive? Microsoft OneDrive is a file hosting and synchronization service operated by Microsoft as part of the web version of Office. The official OneDrive client, however, is not available for Linux. And that ain't fair.
Why should we have to use the blue screen of death in the first place to access all our files and folders? We can take the browser route as well, but no one likes to sign in and out each time they want to access their files. What if I told you that you can use OneDrive seamlessly on your Linux machine, as if it were native?
Even though Microsoft loves Linux now, the “love” kind of dries up when you start thinking about downloading a OneDrive client for your favorite Linux distro, only to find that there is no official client out there that fulfills your needs. I needed an all-in-one solution for securely syncing my files to OneDrive without worrying about losing them.
Some time back, Vipul wrote an awesome review here on Mixster of Insync, the all-in-one solution to your Google Drive client troubles on Linux. Following up on that, Insync reached out to us to review their new OneDrive client for Linux. Long story short, it changed my mind: I went from using the OneDrive web interface as my standard way of accessing my files to using Insync’s awesome OneDrive client. Here’s a small review of how useful I found it. Let’s get started!
YouTube runs thousands of servers and streams videos to millions of viewers. We tend to assume a server is a single physical machine, but servers are often virtual machines, many of them running on top of a single physical machine, spread across the thousands of computers in a data center. Wanna know more about them? Click –>
There is a lovely, oversized, semi-woolen, red-and-black checkered shirt that I frequently wear and indiscriminately slip into every evening after returning to my room. Mind you, it is stolen property—I finagled it from my father when I was exactly 6 years old and wildly impatient to obtain a cape-like attire. You see, it was imperative for me and my playmates to have a superhero costume handy that evening in case it rained and confined us indoors, which it did. Thus, the thievery.
Years later, the shirt has lost its association with colorful tales of faux heroism and has become somewhat of a curio, only better: it is wearable, warm and immeasurably comforting. I might now associate it with home, or just homeyness, and on some days it is not even home that I miss, but a sense of belonging that I constantly find lacking in recent times—the past year, to be precise—making me feel adrift and lonely. It is not unique, nor is it novel; most people have this ‘loneliness’ complaint nowadays. The scale of this affliction is scary, and its direct relationship with the number of people alive today, confounding. But everyone finds their own coping mechanism, I think.
It’s 4 o’clock on a rainy Friday morning, and one more Google Summer of Code has ended. My second, to be exact. Time for me to hang up my boots and start writing another report, probably my last on this subject matter. There’s a lot to write.
Well, as far as the flow goes, CerberusValidator works with schemas in a mapping structure: basically a dict whose values are themselves dicts specifying the type (and other rules) for each field. If you don’t get it, check this out: https://docs.python-cerberus.org/en/stable/
But Cerberus only cares about the schema and the data it gets from the user, not where they come from. Most of our users will supply the schema as either a URL or a path to a file, which was fine by us until somewhere around week 12, when I realized I had forgotten to handle that properly in the code. Nothing to be afraid of; I had to redo some old functions, and actually improved a lot of old code in the process. How time flies by. Damn.
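For context, handling both kinds of sources might look roughly like this. `load_schema` is a hypothetical helper sketched for illustration (assuming JSON-encoded schemas), not the project’s actual code:

```python
import json
from pathlib import Path
from urllib.request import urlopen

def load_schema(source: str) -> dict:
    """Load a schema dict from either a URL or a local file path.

    Hypothetical helper for illustration, not the project's code.
    """
    if source.startswith(("http://", "https://")):
        # Remote schema: fetch the URL and parse the JSON body.
        with urlopen(source) as resp:
            return json.load(resp)
    # Local schema: read the file from disk and parse it.
    return json.loads(Path(source).read_text())
```

Either way, what Cerberus finally receives is the same plain mapping, which is why the source handling could be reworked independently of the validation logic.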
Not much is left to be done, except writing a few more tests and a lot of testing. And merging it into master. I am confident we can make it before August 19. Let’s see. Fingers crossed. This is vipulgupta2048 signing off for the second-to-last time here. I won’t be going anywhere, if that’s what you’re thinking.
There is a lot of work to be done at ScrapingHub x The Scrapy Project. Looking forward to new challenges.