Keep It Clean – Music management redefined

The Music directory is the second most chaotic directory after Downloads, and that is surely not a good thing for music lovers. Everything is fine while you are using your favorite music manager to listen to songs, but the moment you have to visit the music directory to copy some songs to an external device, you realize what a mess it is. The enormous number of songs in one folder makes loading the directory slow and sometimes leaves it unresponsive. Arranging such a large volume of music manually is time consuming and redundant: even if you arrange it all once, it will be the same mess again within a few weeks. So what to do? We need some automated way to do this arrangement, and this is where my new project comes into existence.

Keep It Clean is a Perl script which arranges your music files based on artists, albums, year, you name it. Right now the script is in its infancy and can only arrange music based on albums. You can find the source code on GitHub. If you are a music lover, or a Perl programmer, or both, or something else, and want to contribute, feel free to send pull requests. Its functionality is limited right now, but in my opinion it's a treat for all music lovers for whom digging through the music directory is a nightmare.
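The script itself is in Perl, but the core idea fits in a few lines of any language. Here is a minimal Java sketch of the same album-based arrangement, assuming ID3v1-tagged MP3s; the field offsets come from the ID3v1 spec, not from the script itself, and a real tool would also sanitize album names before using them as folder names:

```java
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AlbumSorter {
    public static void main(String[] args) throws Exception {
        Path musicDir = Paths.get(args.length > 0 ? args[0] : ".");
        try (DirectoryStream<Path> songs = Files.newDirectoryStream(musicDir, "*.mp3")) {
            for (Path song : songs) {
                String album = readAlbum(song);
                if (album.isEmpty()) album = "Unknown Album";
                // One folder per album; files with no readable tag land in "Unknown Album".
                Path albumDir = musicDir.resolve(album);
                Files.createDirectories(albumDir);
                Files.move(song, albumDir.resolve(song.getFileName()));
            }
        }
    }

    // An ID3v1 tag lives in the last 128 bytes of an MP3, starting with "TAG";
    // the album field is the 30 bytes at offset 63 of that block.
    static String readAlbum(Path song) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(song.toFile(), "r")) {
            if (f.length() < 128) return "";
            f.seek(f.length() - 128);
            byte[] tag = new byte[128];
            f.readFully(tag);
            if (tag[0] != 'T' || tag[1] != 'A' || tag[2] != 'G') return "";
            return new String(tag, 63, 30, StandardCharsets.ISO_8859_1).trim();
        }
    }
}
```

Arranging by artist or year is the same loop with a different offset into the tag block.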

I hope this proves to be a worthy project for all music lovers.

P.S. This is my second blog post within 24 hours. I'm actually getting the hang of it 😛


YouTube Reverse Engineered!!!

This is the blog post I have wanted to write since I created the application named "YTGrabber". It's a YouTube video download API. Sounds weird, huh? But it's true: it gives you the download links for YouTube videos from their watch URL (e.g. http://youtube.com/watch?v=videoid ). The API is RESTful in nature and responds in JSON, so integrating it with any programming language is easy. Moreover, I built a web application based on this API.

You might be wondering what the need is for yet another downloader when there are several already. But that's the difference: the downloaders available, whether browser extensions or even web apps, all have some dependencies. Some require client-side Java, others require you to use specific browsers. This application cuts all that crap of client-side Java and just needs a web browser to function properly. It works smoothly on older browsers too (with some minor defects in the UI which I'll fix soon), though an HTML5 and CSS3 compliant browser is a plus. Moreover, it gives you all possible format options (like MP4, 3GP, FLV), including HD ones. Building this application filled me with great enthusiasm and joy. I look forward to reverse engineering some other Google services (next up is Google Books).
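Because the API is plain REST plus JSON, calling it really is a few lines in any language. Here is a minimal Java 11 sketch of a client; the host and query parameter are placeholders, since this post doesn't pin down the exact route:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class YTGrabberClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: "ytgrabber.example/api?v=..." is only a
        // placeholder for wherever the API is deployed.
        String videoId = args.length > 0 ? args[0] : "videoid";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://ytgrabber.example/api?v=" + videoId))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body lists download links per format (MP4, 3GP, FLV, HD...);
        // feed it to any JSON parser to pick the one you want.
        System.out.println(response.body());
    }
}
```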

P.S. This application is hosted here on OpenShift cloud hosting. The source code of the API is on GitHub.

Caching in the cuisines

"With every new semester comes a new project" ;) and this semester it was a call for a Web Tech project. I chose an entirely different concept for it: showing cuisines and beverages against their geographical locations. It sounds weird, but that's the challenge, so I started picking out the services which could be helpful. After two days I was all prepared to start working on it. With JSP and Servlets for my webapp, I chose OpenShift cloud hosting and a handful of other services, namely Google Maps and Yummly Recipe Search. Building the UI was not a tough task: a few days of work and a good UI was there.

The most difficult task was to cache in the cuisines and beverages for all the countries (there are 190 countries listed on Google Maps). After searching for an authoritative source of such information, Wikipedia and some other websites (with limited information) were my options. I started caching dishes by writing a crawler in Java, tweaking that crawler many times for just one website, and soon I had 860 dishes for just 64 countries. Not a bad start, and I was really happy that investing 3 hours in this paid off. But when I checked these dishes against my recipe provider, things were not the same: recipes for many of the dishes simply were not there. So I decided to write another crawler to check the cached dishes against the recipes.

That crawler seemed to work fine for the first 200 or so dishes, but all of a sudden it gave me a NullPointerException which made no sense there. After some tweaks I ran it again, but with no luck: it failed again. After some more tweaks it finally ran correctly and counted 587 dishes with recipes. Now it was time to save all those dishes in a markup file, so that I don't have to fetch them every time, but the crawler started giving me random failures again. Keeping my fingers crossed, I'm still trying to figure out the cause of the exception. Wishing for good results.
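I can't reproduce the exact failure here, but the usual culprit in a checker like this is an unguarded null coming back from the lookup a few hundred items in. A minimal sketch of the defensive version, where lookupRecipe() is a hypothetical stand-in for the Yummly Recipe Search call (its real request and response shapes are not shown in this post):

```java
import java.util.ArrayList;
import java.util.List;

public class RecipeChecker {
    public static void main(String[] args) {
        // Stand-in for the 860 cached dishes.
        List<String> cached = List.of("Biryani", "Poutine", "Ceviche");
        List<String> withRecipes = new ArrayList<>();
        for (String dish : cached) {
            String recipe = lookupRecipe(dish);
            // A null anywhere in the response chain is exactly where a
            // NullPointerException sneaks in mid-run; guard instead of assuming.
            if (recipe == null || recipe.isEmpty()) continue;
            withRecipes.add(dish);
        }
        System.out.println(withRecipes.size() + " dishes have recipes");
    }

    // Placeholder: the real checker would call the recipe search API
    // and return null when nothing matches.
    static String lookupRecipe(String dish) {
        return dish.length() % 2 == 0 ? dish + " recipe" : null;
    }
}
```

Skipping the bad entries and logging them is usually friendlier than letting one missing recipe kill a three-hour crawl.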