
Do you want a comprehensive listing of restaurants, complete with addresses and ratings, when you go on holiday? Of course you do, because it makes finding your way much easier, and the simplest way to build one is web scraping.

Data scraping, or web scraping, extracts data from a website to your local machine. The results come in spreadsheet form, so you can have the whole listing of restaurants around you, with their addresses and ratings, in one easy spreadsheet!

Here at Web Screen Scraping, we use Python 3 scripts for scraping food and restaurant data, so having Python installed is extremely useful. To run and test the script, we used Google Colab, which lets us execute Python scripts in the cloud.

Since our purpose is to get a complete list of places, extracting Google Maps data is the answer! With Google Maps scraping, it's easy to collect a place's name, type, coordinates, address, phone number, ratings, and other vital data. To get started, we can use the Google Places API, which makes retrieving places data very easy.
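To illustrate, a Nearby Search request is just an HTTP GET against the Places API endpoint; the key below is a placeholder, and the other parameters match the ones we set up in the next step:

https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=-8.705833,115.261377&radius=1000&keyword=restaurant&key=YOUR_API_KEY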

1st Step: Which data is needed?

Here, we will search for the phrase "restaurants around me" in Sanur, Bali, within a radius of 1 km. So the parameters are 'restaurant', the coordinates of Sanur Beach, and a radius of 1,000 m.

Let’s translate that into Python:

coordinates = ['-8.705833, 115.261377']
keywords = ['restaurant']
radius = '1000'
api_key = 'acbhsjbfeur2y8r'  # insert your API key here

The 'keyword' parameter returns places whose listing contains 'restaurant' anywhere in it. It works better than the 'name' or 'type' parameters because it gives us a complete list of places whose name or type contains 'restaurant'. For example, it would catch restaurants named Sushi Tei and Se'i Sapi. If we used 'name' instead, we would only get places whose name contains the word 'restaurant'; if we used 'type', we would only get places whose declared type is 'restaurant'. The drawback of using 'keyword' is that the results take extra time to clean; a rough comparison of the three parameters is sketched below.
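As a rough illustration (not part of the original script; note that Google has since deprecated the 'name' parameter), here is how the same search would look with each parameter:

base = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json'
common = '?location=-8.705833,115.261377&radius=1000&key=' + api_key

url_keyword = base + common + '&keyword=restaurant'  # matches name, type, address, and more
url_name = base + common + '&name=restaurant'        # matches place names only
url_type = base + common + '&type=restaurant'        # matches the declared place type only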

2nd Step: Import the required libraries:

import pandas as pd, numpy as np
import requests
import json
import time
from google.colab import files

Have you noticed "from google.colab import files"? Because we run the script in Google Colab, we need the google.colab library to open and save data files.

3rd Step: Write the code that collects data based on the parameters from the 1st Step.

final_data = []  # collects one row per place

for coordinate in coordinates:
    for keyword in keywords:
        url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=' + coordinate + '&radius=' + str(radius) + '&keyword=' + str(keyword) + '&key=' + str(api_key)
        while True:
            print(url)
            respon = requests.get(url)
            jj = json.loads(respon.text)
            results = jj['results']
            for result in results:
                name = result['name']
                place_id = result['place_id']
                lat = result['geometry']['location']['lat']
                lng = result['geometry']['location']['lng']
                rating = result.get('rating')  # not every place has a rating
                types = result['types']
                vicinity = result['vicinity']
                data = [name, place_id, lat, lng, rating, types, vicinity]
                final_data.append(data)
            time.sleep(5)  # the next_page_token needs a few seconds to become valid
            if 'next_page_token' not in jj:
                break
            else:
                next_page_token = jj['next_page_token']
                url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?key=' + str(api_key) + '&pagetoken=' + str(next_page_token)

labels = ['Place Name', 'Place ID', 'Latitude', 'Longitude', 'Rating', 'Types', 'Vicinity']

This code collects each place's name, ID, rating, latitude/longitude, types, and vicinity for every keyword and coordinate. Because Google displays only 20 entries per page, we have to use 'next_page_token' to scrape the data of the next page. Suppose there are 40 restaurants near Sanur: Google will return the results on two pages. For 65 results, there will be four pages.
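As a quick sanity check (not part of the original script), the page count is just the result count divided by 20, rounded up:

import math

for total in (40, 65):
    print(total, 'results ->', math.ceil(total / 20), 'pages')
# 40 results -> 2 pages
# 65 results -> 4 pages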

The maximum number of places we can extract per query is 60; that is Google's rule. For example, suppose 140 restaurants sit within a 1 km radius of our starting point. Only 60 of the 140 will be returned. So, to avoid gaps, we have to choose the radius and coordinates carefully. Make sure the radius isn't so wide that only 60 points come back where there are many more, and not so small that you end up managing a long list of coordinates. Neither extreme is efficient, so it pays to understand the area in advance; one common workaround is sketched below.
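One workaround, sketched here as an illustration (not from the original script; near the equator, about 0.009 degrees of latitude is roughly 1 km), is to tile a wide area with several smaller search circles instead of one large one:

step = 0.009  # roughly 1 km in degrees near the equator (an approximation)
center_lat, center_lng = -8.705833, 115.261377

# Nine overlapping 1 km searches instead of one wide search;
# duplicate places can be dropped later via the 'Place ID' column.
coordinates = ['{}, {}'.format(center_lat + i * step, center_lng + j * step)
               for i in (-1, 0, 1)
               for j in (-1, 0, 1)]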

4th Step: Save the data to your local machine

export_dataframe_1_medium = pd.DataFrame.from_records(final_data, columns=labels)
export_dataframe_1_medium.to_csv('export_dataframe_1_medium.csv')

Last Step: Combine all the steps into one complete script:

import pandas as pd, numpy as np
import requests
import json
import time

# Parameters
final_data = []
coordinates = ['-8.705833, 115.261377']
keywords = ['restaurant']
radius = '1000'
api_key = 'acbhsjbfeur2y8r'  # insert your Places API key here

for coordinate in coordinates:
    for keyword in keywords:
        url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=' + coordinate + '&radius=' + str(radius) + '&keyword=' + str(keyword) + '&key=' + str(api_key)
        while True:
            print(url)
            respon = requests.get(url)
            jj = json.loads(respon.text)
            results = jj['results']
            for result in results:
                name = result['name']
                place_id = result['place_id']
                lat = result['geometry']['location']['lat']
                lng = result['geometry']['location']['lng']
                rating = result.get('rating')  # not every place has a rating
                types = result['types']
                vicinity = result['vicinity']
                data = [name, place_id, lat, lng, rating, types, vicinity]
                final_data.append(data)
            time.sleep(5)  # the next_page_token needs a few seconds to become valid
            if 'next_page_token' not in jj:
                break
            else:
                next_page_token = jj['next_page_token']
                url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?key=' + str(api_key) + '&pagetoken=' + str(next_page_token)

labels = ['Place Name', 'Place ID', 'Latitude', 'Longitude', 'Rating', 'Types', 'Vicinity']

export_dataframe_1_medium = pd.DataFrame.from_records(final_data, columns=labels)
export_dataframe_1_medium.to_csv('export_dataframe_1_medium.csv')

Now it's easy to download the data from Google Colab. Just click the arrow button on the left-side pane, then click 'Files' to download the data!
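Alternatively, the download can be triggered from the notebook itself using the google.colab helper we imported in the 2nd Step:

files.download('export_dataframe_1_medium.csv')  # prompts a browser download of the CSV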

Your extracted data is saved in CSV format and can be visualized with tools you already know: R, Python, Tableau, and more. We visualized it with Kepler.gl, a WebGL-powered, data-agnostic, high-performance web application for geospatial analytic visualizations.
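For reference, here is a minimal sketch of loading the CSV into Kepler.gl inside a notebook, assuming the keplergl package is installed (pip install keplergl):

import pandas as pd
from keplergl import KeplerGl

df = pd.read_csv('export_dataframe_1_medium.csv')
restaurant_map = KeplerGl(height=500)                # an empty interactive map widget
restaurant_map.add_data(data=df, name='restaurants') # layer the scraped places onto it
restaurant_map                                       # renders the map in the notebook output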

This is how the resulting data looks in a spreadsheet:

And, this is how it looks in a Kepler.gl map:

We can see 59 restaurants near Sanur Beach. Just add names and ratings to the map, and we're ready to hunt for food around the area!

Still not sure how to scrape food data with Google Maps data scraping? Contact Web Screen Scraping for more details.
