Find exciting new NFTs using Python and Tweepy

Photo by Xavier von Erlach on Unsplash

Table of Contents

  1. Introduction
  2. Step 1: Obtain NFT popularity from Twitter
  3. Step 2: Collect upcoming NFT projects
  4. Step 3: Filter exciting NFT collections
  5. Step 4: Putting it all together


From music and pictures of bored monkeys to the first Twitter message: in 2021, NFTs skyrocketed in popularity. Besides being an inventive form of artistic expression, these non-fungible tokens have become compelling investments. The NFT sales volume keeps growing, as one of today’s most famous collections shows: the Bored Ape Yacht Club. Its apes initially sold for 0.08 ETH each ($320); the collection now has a floor price of 70 ETH ($233,000).

An investor’s dream is to acquire NFTs with such potential. Unfortunately, trimming down interesting projects has become time-consuming due to the many new collections launched daily. It would be great if we could delegate this task to our computers so we can focus on selecting the finest artwork and most enthusiastic NFT communities.

Tech Stack: Python ≥ 3.8, Tweepy, Pandas, Requests, JSON

This comprehensive step-by-step tutorial will teach you how to build a filter for exciting and upcoming NFT projects. We’ll use Python, Twitter, and an NFT listing site as the main tools. The listing site is an excellent place to find new NFTs, while Twitter can be used to discover more about the popularity of collections.

This tutorial covers the following topics:

  1. We will look up the number of followers on Twitter to determine the project’s popularity.
  2. We will gather data on upcoming launches from an NFT listing site.
  3. We will filter exciting projects based on an items/followers ratio threshold.

Step 1: Obtain NFT popularity from Twitter

The Twitter API can be used to retrieve the follower count of users. In this tutorial, we’ll use this value as a measure of popularity. To get the number of followers from an NFT collection via the Twitter API, we first need to request an API Key and API Key Secret. The following steps will guide you through the process of obtaining these keys:

  1. Visit the Getting started page and apply for a developer account.
  2. Once approved, you should click Add App and Create New via Projects & Apps. First, set up the app environment by selecting Staging. After this, you can pick a name for your new application.
  3. On the last page, Twitter provides you with the keys and tokens of your newly created application. Don’t forget to save them because it is the last time they are fully displayed.

Now that you have created the API Key and API Key Secret, we can extract the follower count of Twitter users. The following code snippet uses the Bored Ape Yacht Club page as an example. First, assign your own application’s API Key and API Key Secret to the API_KEY and API_SECRET_KEY variables. After this, set up an API connection. Lastly, get the BoredApeYC user and extract its followers_count. Be aware that your output might differ from the output in this article.


BoredApeYC has 418222 followers

Step 2: Collect upcoming NFT projects

The listing site hosts a curated list of upcoming NFT collections, including their Twitter IDs. We will extract this information and build our dataset of upcoming NFT collections. To extract the data, we should first understand how it is loaded on the page.

Open the page with upcoming projects and right-click -> Inspect. This opens the developer tools in your browser. Then, click on the Network tab and filter for XHR requests (a JavaScript API for communication between the client and the server). Upon reloading the page, you’ll see the upcoming2 request in the left pane. The response of this request provides NFT projects in JSON format to fill the web page. Instead of accessing the data via the browser, we can retrieve it with some help from Python.

We will use the URL from the previous paragraph to download the data. The requests module is used to fetch the content of the response. The JSON module is then used to load this data into a JSON object. The output of the first item is printed and shown underneath the code snippet.
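A sketch of the download step, assuming you paste the URL of the upcoming2 XHR request (copied from the browser’s Network tab) into UPCOMING_URL; the variable and function names here are ours:

```python
import json
import requests

# Paste the URL of the "upcoming2" XHR request here, copied from the
# browser's developer tools; the value below is a placeholder.
UPCOMING_URL = "..."

def parse_upcoming(raw_text):
    """Load the raw response body into a list of project dicts."""
    return json.loads(raw_text)

def fetch_upcoming(url):
    """Download the upcoming-collections JSON and parse it."""
    response = requests.get(url, timeout=30)
    return parse_upcoming(response.text)
```

With a valid URL, print(fetch_upcoming(UPCOMING_URL)[0]) yields a dict like the one shown below.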


{
  'id': 'animegangnft',
  'Project': 'Anime Gang NFT',
  'Image Count': 4,
  'Short Description': 'Anime character avatar NFTs',
  'Max Items': '10000',
  'Price Text': 'Presale: 0.075 ETH\nSale: 0.09 ETH',
  'Price': '0.75 presale 0.09 official',
  'Presale Date': '2022-01-15T20:10:00.000Z',
  'Sale Date': '2022-01-15T20:10:00.000Z',
  'Website': '',
  'Discord': '',
  'TwitterId': 'AnimeGangNFT',
  'Listed Date': '2021-12-28T00:00:00.000Z'
}

Step 3: Filter exciting NFT collections

Now that there is a way to collect upcoming NFT collections and their popularity, we would like to filter out the exciting projects. The value of a project increases if all items sell out. Therefore, we need to check whether there are enough followers to sell all art pieces. The max items/followers ratio is used for this. This value specifies the number of NFTs every follower should buy, on average, for all items in a collection to sell out. For example, if the value is 0.1, only 1 out of 10 followers has to buy one NFT for the collection to sell out. If the ratio is 1.5, every follower would have to buy 1.5 NFTs. We will remove a project if a follower has to buy more than 1 NFT. The following pseudocode outlines this filter.

item_followers_ratio = max_items / followers_count
if item_followers_ratio <= 1:
    keep collection
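The pseudocode translates directly into a small helper (a sketch; the function name is ours, and the threshold of 1 follows the rule described above):

```python
def is_interesting(max_items, followers_count, threshold=1.0):
    """Keep a collection only if, on average, each follower needs to buy
    at most `threshold` NFTs for the whole collection to sell out."""
    if followers_count == 0:
        return False  # no followers: the ratio is undefined, so skip
    return max_items / followers_count <= threshold
```

For example, is_interesting(10000, 418222) returns True (each follower only needs to buy about 0.024 NFTs), while is_interesting(10000, 5000) returns False.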

Step 4: Putting it all together

In the previous sections, we have seen how to create each component of the filtering script. All that’s left to do is put them together. Almost all lines of code have been covered before, except for interesting_collections = [] and the try-except clause. We use the interesting_collections variable to store all exciting projects with a max items/followers ratio of 1 or lower. We also don’t want to restart the script whenever an error occurs, so the try-except clause handles any failed Twitter API calls by printing the error and continuing the loop. The resulting list of interesting collections is then transformed into a pandas DataFrame and stored as a CSV file.
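A hedged sketch of that main loop, with the Twitter lookup passed in as a callable so the filter logic stays easy to test; the function name and the 'Items/Followers' output field are ours, while the 'TwitterId' and 'Max Items' fields match the JSON shown in Step 2:

```python
def collect_interesting(projects, get_followers, threshold=1.0):
    """Filter projects whose max items / followers ratio is at most
    `threshold`.

    `projects` is the list of dicts from Step 2; `get_followers` maps a
    Twitter handle to a follower count (e.g. a Tweepy lookup).
    """
    interesting_collections = []
    for project in projects:
        try:
            followers = get_followers(project["TwitterId"])
            ratio = int(project["Max Items"]) / followers
            if ratio <= threshold:
                project["Items/Followers"] = ratio
                interesting_collections.append(project)
        except Exception as error:
            # Handle failed Twitter lookups by printing the error and
            # continuing the loop instead of restarting the script.
            print(f"Skipping {project.get('id')}: {error}")
    return interesting_collections
```

Passing the result to pandas, as shown next, saves it as a CSV file.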

df = pd.DataFrame(interesting_collections)
df.to_csv('interesting_collections.csv', index=False)

It is worth mentioning that the free version of the Twitter API has a rate limit of 900 user lookups every 15 minutes. When testing, it is better to limit the for-loop to a handful of collections, for example: for idx in range(10):.


The script’s output provides an overview of projects worthy of further investigation. You could also sort the CSV file on the max items/followers ratio. This list gives a quick overview of the most promising NFT projects based on popularity. At last, you can spend your time more efficiently picking the next NFT investment.
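Sorting the saved file can be sketched as follows (the 'Items/Followers' column name is an assumption carried over from the loop above):

```python
import pandas as pd

def sort_by_ratio(df, column="Items/Followers"):
    """Sort collections so the easiest-to-sell-out projects come first."""
    return df.sort_values(column)

# Example usage: reload the saved file and rank it.
# ranked = sort_by_ratio(pd.read_csv("interesting_collections.csv"))
```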

Thank you for reading my post. I hope that you have learned something new! You can follow me for more articles related to this subject.