I will show you how to build a WiFi robot from scratch. It is a very straightforward project and can easily be handled by an intermediate robot builder; completing the robot takes a couple of hours. I also attached an Android phone running an IP camera app to this robot and connected it to the WiFi to get a live video feed. If you are using a WiFi router to command the robot from a PC/laptop, reduce the router's `Beacon Interval` from the default of 100 to 40 to get a real-time response.
Video of complete robot driving:

Items needed

  1. NodeMCU ESP8266 https://www.amazon.in/ESP8266-NodeMcu-WiFi-Development-Board/dp/B00UY8C3N0/
  2. Power bank
  3. Robot platform
  4. Motor driver
  5. Motors as per requirement
  6. Jumper wires
  7. Breadboard (optional)

img: split screen on the phone

 

img 1. robot bottom view

 

Step 1

Attach the motors to the platform and join the wires: join the left-side motors' black wires together, and do the same for the red wires.

Step 2

Attach the motor wires to the motor driver as shown in the image above.

Step 3

Take the power supply wires and the data jumper wires out through the middle hole of the platform, and connect them to the pins as follows.

IN1 -> D1

IN2 -> D2

IN3 -> D3

IN4 -> D4

Connect both Enable pins to the 3V3 pin of the ESP8266 NodeMCU.

My robot works without connecting GND, so I am skipping that. Otherwise, you can attach the GND of the NodeMCU to the GND of the motor driver.

Now connect the cables to the power bank and the robot is ready.

The program is written in Lua. We can control this robot with an Android app, or from a PC/laptop running Linux.

  1. The app can be found at this link. To use the NodeMCU softAP function, put nodemcu-wifi.lua on the NodeMCU.
  2. To control the robot from a PC/laptop, connect the PC to the same WiFi and use the file python-tcp-getch-linux.py to drive the robot from a command-line terminal (a minimal sketch of such a client is shown below). Put the nodemcu-wifi.lua program on the NodeMCU if you want your robot to connect to the WiFi router at home.
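To give an idea of the PC-side control, here is a minimal sketch of a keyboard tele-operation client in the spirit of python-tcp-getch-linux.py. The IP address, port, and single-character commands below are assumptions; they must match whatever the Lua program on the NodeMCU actually listens for.

# sketch of a keyboard control client (Linux); IP, port, and command letters are assumptions
import socket
import sys
import termios
import tty

ROBOT_IP = "192.168.4.1"   # assumed NodeMCU softAP address
ROBOT_PORT = 80            # assumed TCP port opened by the Lua program

def getch():
    """Read a single key press without waiting for Enter (Linux only)."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ROBOT_IP, ROBOT_PORT))

print("w/a/s/d to drive, x to stop, q to quit")
while True:
    key = getch()
    if key == "q":
        break
    if key in "wasdx":                 # assumed command letters
        sock.sendall(key.encode())
sock.close()

Each key press is sent as a single character over TCP, and the Lua program on the NodeMCU maps it to the D1-D4 motor pins.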

I will write another post on how to flash the NodeMCU with the Lua firmware.

Use this command to put the program on the NodeMCU with luatool:

python esp8266/luatool/luatool.py --port /dev/ttyUSB0 --src init.lua --dest init.lua --restart

The programs are now kept on GitHub at this link.

Don't forget to attach the Android phone with the IP camera app to see the live video feed.

Hi all,

It is nothing very new, but it is quite new for me. I tried to make a search engine like Google and Yahoo. It is really not easy to make one. It is also not very useful for you until it is mature enough to answer your queries; that may take some weeks and a lot of processing power. It is continuously growing its database, so you will see better results every minute, and it is fast enough to give it a try. You can ask anything you want. Here is the link https://bringmefast.com

This is a copy of the source code of the crawler that is used for collecting data from the web.

https://github.com/vishvendrasingh/searchEngineCrawler/blob/master/se.py

Title: BringMeFast search engine

All news at one place, shivalink.com

All news at one place. Why did I do this? I know you guys also don't have time to scroll through all the newspapers like me, but we love to read them. We have no option except to skip or scroll fast, and when people talk about the news we just do not know how the story ended. Being an engineer, I cannot stay deprived of it, so it is my call to make the news easier to read, all in one place.

Currently I have combined three newspapers, my favourite ones, but you can tell me if you want your favourite newspaper or channel added.

I also wanted to go a step further, so I set this news engine up to send emails every 3 hours. It is built with Elasticsearch (a search-engine type database), Python, Postfix (to send mail through a queue), and some PHP at the front end. It can store billions of news items, and you can search and dig deep into the history, starting from yesterday 🙂 hehe, as it is so new. I hope you guys will like this. Here is the link, enjoy.

http://shivalink.com/news.php
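To illustrate the 3-hourly email idea, here is a rough sketch of a digest job: it queries Elasticsearch for recent items and hands the mail to the local Postfix queue. The index name, field names, addresses, and recipient are assumptions, not the actual shivalink.com setup.

# sketch of a 3-hourly news digest; index/field names and addresses are assumptions
import smtplib
from email.mime.text import MIMEText
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# fetch news indexed in the last 3 hours (assumed index "news" with title/url/timestamp fields)
resp = es.search(index="news", body={
    "query": {"range": {"timestamp": {"gte": "now-3h"}}},
    "size": 50,
})

lines = ["%s\n%s" % (hit["_source"]["title"], hit["_source"]["url"])
         for hit in resp["hits"]["hits"]]

msg = MIMEText("\n\n".join(lines))
msg["Subject"] = "News digest - last 3 hours"
msg["From"] = "news@shivalink.com"
msg["To"] = "subscriber@example.com"

# Postfix on localhost:25 queues the mail for delivery
with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)

A job like this can be run from cron every 3 hours to produce the digest frequency described above.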

Credits: Zainab Bhatia, Nirav Patel, Abhijeet Deshani
These guys really helped me a lot with this. Thank you so much from me and my social media friends.

Hi everyone, how could I have missed crawler 2.0 and posted 3.0 before it? Here I am posting the 2.0 crawler with multiprocessing support 😉 Actually, the thread-based 3.0 crawler was easier to develop, and now it is time for the final 2.0 release.

Why am I making a crawler? Actually, my friends Abhijeet and Zainab and I were thinking of making a basic search engine. But we know there are already better ones than ours, so we thought we could do something better with this crawler thing, and now one more guy has joined us, Mr Nirav, a highly skilled person who works on highly critical projects.

Now I am more confident of finishing all this in time and making an automatic system that will post everything new on bestindianwear.com. We can say it will be a basic AI (Artificial Intelligence) project. Abhijeet is working quite hard on it.

Thanks guys, I do not feel alone, and your efforts make the journey enjoyable. Cheers to everyone, we will be finishing this soon 🙂

https://github.com/vishvendrasingh/crawler/blob/master/crawler_2.0_stable.py
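To show the multiprocessing idea in isolation (this is only an illustration, not the actual crawler_2.0_stable.py), here is a tiny sketch that fetches a batch of URLs in parallel with a worker pool:

# illustration of the multiprocessing idea: fetch pages in parallel with a pool of workers
from multiprocessing import Pool
import requests

def fetch(url):
    """Download one page; return (url, html) or (url, None) on failure."""
    try:
        resp = requests.get(url, timeout=10)
        return url, resp.text
    except requests.RequestException:
        return url, None

if __name__ == "__main__":
    urls = ["http://example.com", "http://example.org"]  # placeholder seed list
    with Pool(processes=4) as pool:
        for url, html in pool.map(fetch, urls):
            print(url, "ok" if html else "failed")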

Crazy day. I indexed a 30GB file containing 53 million lines of JSON data into Elasticsearch. Then I tried Kibana with it; it was really enjoyable, done over my drink. The link to Kibana is shivalink.com:5601.

The link to Elasticsearch is shivalink.com:9200.

The toughest part was unzipping the 5GB bz2 file using all cores. I used pbzip2 but it did not work in my case. Then I found lbzip2 (lbzip2 -d myfile.json.bz2); it was really fast and used all my cores efficiently. The file turned out to be 30GB after decompression. Then, how could we insert it into Elasticsearch? As I am very new to this, I found esbulk and started with it. I inserted 45 million entries and then it became too slow, so I had no option other than stopping it right there.
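esbulk is a separate tool, but in principle this is what it does: stream the JSON lines into Elasticsearch in bulk batches. Here is a rough Python equivalent using the elasticsearch client's helpers.bulk; the index name and document structure are assumptions.

# sketch of bulk-indexing a JSON-lines file into Elasticsearch; index name is an assumption
import json
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

def actions(path, index="bigdata"):
    """Yield one bulk action per JSON line of the file."""
    with open(path) as f:
        for line in f:
            yield {"_index": index, "_source": json.loads(line)}

# helpers.bulk batches the actions and reports how many documents were indexed
ok, errors = helpers.bulk(es, actions("myfile.json"), chunk_size=5000, raise_on_error=False)
print("indexed:", ok)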

Then I came up with the idea of using tail -n with the number of remaining entries and inserting just those back. I did it successfully. Now I can say I kind of know big data 🙂 feeling happy.
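The same tail -n trick can be done in Python by skipping the lines that were already indexed and bulk-inserting only the rest. This short sketch reuses the setup (es, helpers, json) from the previous snippet; the 45 million figure is the count the first run managed to insert.

# the tail -n trick in Python: skip the already-indexed lines and insert only the rest
from itertools import islice

ALREADY_INDEXED = 45_000_000   # entries the first run managed to insert

def remaining_actions(path, index="bigdata"):
    with open(path) as f:
        for line in islice(f, ALREADY_INDEXED, None):  # start after the indexed lines
            yield {"_index": index, "_source": json.loads(line)}

helpers.bulk(es, remaining_actions("myfile.json"), chunk_size=5000, raise_on_error=False)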

crawler

I completed the coding of the recursive crawler. It was fun, a lot of hard work, some meditation, and lots of Google, but I finally did it. My friend Abhijeet asked me to make a recursive crawler and I was wondering how I could do that, so I came up with this idea of keeping two lists:

1. Processed list (all crawled URLs are stored here)

2. Unprocessed list (all new URLs are stored here)

Now if a new URL already exists in either of these lists, skip it and move on. Happy crawling guys 🙂

This program does the following (a minimal sketch of the idea follows the list):

  1. stores data in MongoDB
  2. parses the HTML into page title, meta data and meta keywords
  3. error handling saves it from breaking in case a page request fails
  4. it does not follow any domain other than the given one
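Here is a minimal sketch of the two-list, same-domain idea; the actual code is in the repo linked below. The seed URL and the MongoDB connection details are assumptions.

# illustration of the processed/unprocessed-list crawler; seed URL and MongoDB details are assumptions
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup
from pymongo import MongoClient

seed = "http://example.com"            # placeholder start URL
domain = urlparse(seed).netloc
collection = MongoClient("localhost", 27017)["crawler"]["pages"]

processed = set()                       # all crawled URLs
unprocessed = [seed]                    # all new URLs waiting to be crawled

while unprocessed:
    url = unprocessed.pop()
    if url in processed:
        continue
    processed.add(url)
    try:
        resp = requests.get(url, timeout=10)   # error handling keeps the crawl from breaking
    except requests.RequestException:
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    # store the parsed page: title, meta description and meta keywords
    meta = {m.get("name", ""): m.get("content", "") for m in soup.find_all("meta")}
    collection.insert_one({
        "url": url,
        "title": soup.title.string if soup.title else "",
        "description": meta.get("description", ""),
        "keywords": meta.get("keywords", ""),
    })
    # queue new links, but never follow a domain other than the given one
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"])
        if urlparse(link).netloc == domain and link not in processed and link not in unprocessed:
            unprocessed.append(link)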

Here is the link https://github.com/vishvendrasingh/crawler.git