Leaving Meetup.com And Extracting Past Event Data Without API Access

It's no secret that meetup.com have raised their prices over the last couple of years. Whilst this doesn't impact some of the larger user groups who have lots of members, it is basically killing smaller groups with no income streams.

I have no idea how tiny groups with fewer than 50 members can afford £150+ every 6 months on top of paying for venues, refreshments, hosting space, and everything else a small group needs. That's not a fixed cost either; there's a good chance that meetup.com will increase their prices further.

The local Drupal user group I manage in Manchester is one of those groups affected. We just don't have the income or the user numbers to afford meetup.com any more. So, after 14 years we have stopped the subscription and the group will disappear from meetup within the next few weeks.

But what about those 14 years of events? They are stored in meetup.com and there is no way of getting hold of that data without API access, which costs a lot of money. I suppose I could just go through all those events and copy/paste the data into a CSV file, but surely there is a better way?

In this article I will show how I dug into the meetup.com site to see how the data was presented, and how I was able to MacGyver a solution in PHP to download the past data for the group as a CSV file.

Digging Into The Data

I can see a list of the old events on the page at https://www.meetup.com/nwdrupal/events/?type=past, and as I scroll down the page the old events are loaded in as chunks. Note that this page might not exist when you read this article, but that's where the page used to be.

Looking at what is going on in the browser's developer tools, I can see that the data is loaded from the path /gql2 with a payload that requests the next chunk of data from some sort of API layer.

A screenshot of a web browser in developer mode, looking at the API requests made to the gql2 API layer.

The payload we send to this endpoint has the following data.

{
  "operationName": "getPastGroupEvents",
  "variables": {
    "urlname": "nwdrupal",
    "beforeDateTime": "2026-01-30T08:12:37.288Z",
    "after": "MzA3NjgxNTk3OjE3NDcxNTkyMDAwMDA="
  },
  "extensions": {
    "persistedQuery": {
      "version": 1,
      "sha256Hash": "9463f7c9ab5b08db3f2172223c806fb48993508781cd939184d9151c75214e3a"
    }
  }
}

From what I can make out, this call has the following important parts:

  • urlname - This is the name of our group, in this case "nwdrupal".
  • beforeDateTime - The cut-off date for our data, so we will find information about events before this time. This appears to be an ISO 8601 date time string (the best kind of date format).
  • after - This is the pagination control for the API. The string here is the ID of the current page in the series.
  • sha256Hash - This value is generated by meetup somewhere and is required for the API to respond. I'm not sure if this string is tied to my user account or if it's just a generic string, as logging out or changing groups didn't seem to change it.

It's not clear how the sha256Hash value is generated, but it stayed the same for this entire set of data. In fact, I was able to extract this string once and just reuse it for the duration of the data export. I'd imagine (and sort of hope) that the string may be different for your group, but this is the value I had on the page.

Let's see if we can extract this call into a standard curl command.

curl -sSL -X POST https://www.meetup.com/gql2 -d '{"operationName":"getPastGroupEvents","variables":{"urlname":"nwdrupal","beforeDateTime":"2026-01-30T08:12:37.288Z","after":"MzA3NjgxNTk3OjE3NDcxNTkyMDAwMDA="},"extensions":{"persistedQuery":{"version":1,"sha256Hash":"9463f7c9ab5b08db3f2172223c806fb48993508781cd939184d9151c75214e3a"}}}'

Yes! This works well and returns a 34KB chunk of JSON data that contains the next 10 events in the list. There's a lot of data here, but we can pick out the important parts.

Looking at the JSON returned, we can see our event data. The top of the file has the following structure, which tells us some information about our group, the current organiser's user account, and the number of items in the paginated data.

{
  "data": {
    "groupByUrlname": {
      "id": "4396922",
      "organizer": {
        "id": "11408007",
        "isStarterOrganizer": false,
        "__typename": "Member"
      },
      "events": {
        "totalCount": 173,
        "pageInfo": {
          "endCursor": "MzA3NjgxNTk3OjE3NDcxNTkyMDAwMDA=",
          "hasNextPage": true,
          "__typename": "PageInfo"
        },
..

Also important here is the endCursor value, which points to the next page in the data set. Using this we can perform a request to get the next page, and so on, until we have all of the past event data. Why is the pagination pointer an encoded string? I have no idea, but using it we can query the meetup system for the next page of results.
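Out of curiosity, the cursor looks like plain base64 rather than a hash. Decoding the one from the payload above suggests it is just an internal event ID and a millisecond timestamp joined with a colon:

```php
// Decode the pagination cursor from the example payload above.
// The result looks like "<event id>:<unix timestamp in milliseconds>".
$cursor = 'MzA3NjgxNTk3OjE3NDcxNTkyMDAwMDA=';
$decoded = base64_decode($cursor);

echo $decoded . PHP_EOL; // 307681597:1747159200000
```

So the cursor seems to encode where the previous page ended, which is consistent with how it behaves in the requests.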

The JSON data also contains our events, stored in an edges array. This holds all the information about each event, including the date and time, the title and description, plus a number of fields to do with member attendance and tickets.

{
  "data": {
    "groupByUrlname": {
..
      },
      "events": {
.. 
        },
        "edges": [
          {
            "node": {
              "id": "311961727",
              "title": "NWDUG monthly meetup January 2026",
              "eventUrl": "https://www.meetup.com/nwdrupal/events/311961727/",
              "description": "The Zoom details will be available to anyone who RSVPs to the event.\n\nWe endeavor to have guest speakers for each meeting, so if you have something you'd like to present, please let us know!\n\nWe will also have the usual news and events roundup. Our virtual get together are always warm and welcoming - we hope to see you there.",
              "aeoDescription": null,
              "isSaved": false,
..

This is the second page of data in our list, so to get the first page of results all we need to do is alter the query a little and remove the after parameter from the variables section of the payload.

curl -sSL -X POST https://www.meetup.com/gql2 -d '{"operationName":"getPastGroupEvents","variables":{"urlname":"nwdrupal","beforeDateTime":"2026-01-29T08:24:58.294Z"},"extensions":{"persistedQuery":{"sha256Hash":"9463f7c9ab5b08db3f2172223c806fb48993508781cd939184d9151c75214e3a"}}}'

This returns the first page of results, which contains the endCursor parameter in the same way.

This means that to download all of the data we just need to make an initial call to get the first page, and then use the pagination data to extract the rest.

Let's throw some code together to do just that!

Extracting The Past Event Data Using PHP

Using the above information we can easily extract the past event data using a few calls to this JSON endpoint.

The first thing to do is set up a little function that will accept a JSON payload and make the call to the endpoint using PHP's built-in curl functions. Having a function means that we can call this multiple times without curl boilerplate spread throughout the code base.

function getMeetupData($jsonPayload) {
  $curl_handle = curl_init();
  curl_setopt($curl_handle, CURLOPT_URL, "https://www.meetup.com/gql2");
  curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
  curl_setopt($curl_handle, CURLOPT_POST, 1);
  curl_setopt($curl_handle, CURLOPT_POSTFIELDS, $jsonPayload);
  $result = curl_exec($curl_handle);
  if (curl_errno($curl_handle)) {
    echo 'Error:' . curl_error($curl_handle);
  }
  curl_close($curl_handle);
  return json_decode($result);
}

The return value from this function is the data extracted from the JSON results.

By the way, I created this code by passing the curl commands above into my curl command tool, which converted the command directly to the PHP you see above.

Now we need to define the payloads we will use to perform our queries. I used the sprintf function to create templates of the payloads. I find that making templates in this way means I can reuse them over and over again, and it also removes the need for complicated and ugly string concatenation. There's a variable here to store the sha256Hash key, which is needed for the requests to succeed.

$sha = 'put the sha key from the calls in the browser here';

$urlname = 'put the group name parameter here';

$rootDataUrlTemplate = '{"operationName":"getPastGroupEvents","variables":{"urlname":"%s","beforeDateTime":"2026-01-29T08:24:58.294Z"},"extensions":{"persistedQuery":{"sha256Hash":"%s"}}}';
$rootDataUrl = sprintf($rootDataUrlTemplate, $urlname, $sha);

$pagesDataUrlTemplate = '{"operationName":"getPastGroupEvents","variables":{"urlname":"%s","beforeDateTime":"2026-01-29T08:24:58.294Z","after":"%s"},"extensions":{"persistedQuery":{"version":1,"sha256Hash":"%s"}}}';
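As a quick sanity check (my addition, not part of the original script), we can fill the template with dummy values and confirm the result still parses as JSON before sending it anywhere:

```php
// The same template as above, repeated here so the check is self-contained.
$rootDataUrlTemplate = '{"operationName":"getPastGroupEvents","variables":{"urlname":"%s","beforeDateTime":"2026-01-29T08:24:58.294Z"},"extensions":{"persistedQuery":{"sha256Hash":"%s"}}}';

// Fill the template with dummy values; a null json_decode() result would
// mean the template produced broken JSON.
$payload = sprintf($rootDataUrlTemplate, 'nwdrupal', 'dummy-sha-value');
$decoded = json_decode($payload);

echo $decoded->variables->urlname . PHP_EOL; // nwdrupal
```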

Next, we need to make the initial request and then extract the next page hash from the data. We just call the getMeetupData() function and extract the data by jumping into the correct place in the payload.

$data = getMeetupData($rootDataUrl);
$nextPage = $data->data->groupByUrlname->events->pageInfo->endCursor;
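To show what that property chain is walking through, here is the same lookup against a trimmed copy of the sample response from earlier (cut down by me to just the pagination fields):

```php
// A trimmed version of the response shown earlier, used to demonstrate
// pulling the pagination cursor out of the decoded object.
$json = '{"data":{"groupByUrlname":{"events":{"totalCount":173,"pageInfo":{"endCursor":"MzA3NjgxNTk3OjE3NDcxNTkyMDAwMDA=","hasNextPage":true}}}}}';
$data = json_decode($json);

// The same chain of properties as in the script above.
$nextPage = $data->data->groupByUrlname->events->pageInfo->endCursor;

echo $nextPage . PHP_EOL; // MzA3NjgxNTk3OjE3NDcxNTkyMDAwMDA=
```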

We want to store the data in a CSV format, so let's set that file handle up here.

$eventsCsv = 'events.csv';
$eventsCsvHandle = fopen($eventsCsv, 'w');
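Optionally, writing a header row first makes the resulting CSV easier to open in a spreadsheet later. This is a small addition of mine rather than part of the original script:

```php
// Optional: write a header row so the CSV columns are labelled,
// matching the fields extracted in the loop below.
$eventsCsv = 'events.csv';
$eventsCsvHandle = fopen($eventsCsv, 'w');
fputcsv($eventsCsvHandle, ['id', 'title', 'description', 'date']);
```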

Now we just need to grab the next page of results from the JSON endpoint, extract the data we want, and drop it into a CSV file. Then we can find the endCursor data for the next page and grab the next page of data (if it exists).

I use a do/while loop here as it will execute the code in the loop before checking the condition, which means we can extract data from the first page of results before going on to the second page.

// Get the page of results whilst the $nextPage variable is not null.
do {
  foreach ($data->data->groupByUrlname->events->edges as $event) {
    // Extract data from the JSON payload.
    $id = $event->node->id;
    $title = $event->node->title;
    $description = $event->node->description;
    $date = $event->node->dateTime;

    // Write data to the CSV file.
    $writeMe = [
      $id,
      $title,
      $description,
      $date,
    ];
    fputcsv($eventsCsvHandle, $writeMe);
  }
  // Create the payload for the next page and call it to get the next page of results.
  $pagesDataUrl = sprintf($pagesDataUrlTemplate, $urlname, $nextPage, $sha);
  $data = getMeetupData($pagesDataUrl);
  $nextPage = $data->data->groupByUrlname->events->pageInfo->endCursor;
} while ($nextPage !== NULL);

// Close the CSV file handle now that we are done with it.
fclose($eventsCsvHandle);

Running this (via the command line) took about 10 seconds, and I now have a CSV of the group's 173 past events, which I can convert into HTML and upload to our website for historical reference.

I should note that there is very little error detection in this code, so if something goes wrong it will just error and stop. This isn't a production-ready system for rock-solid meetup.com integration; it's more of a one-shot script to grab the data and never be run again.
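If you do want a little more protection, PHP 8's nullsafe operator makes the cursor lookup fail softly instead of fatally when a request comes back empty. Again, this is a suggestion of mine rather than part of the original script:

```php
// Simulate a failed request: json_decode() returns null on invalid JSON.
$data = json_decode('not valid json');

// With the nullsafe operator, a missing or malformed response yields null
// instead of a fatal "property of null" error, which ends the loop cleanly.
$nextPage = $data?->data?->groupByUrlname?->events?->pageInfo?->endCursor ?? NULL;

var_dump($nextPage); // NULL
```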

Also, the Drupal group has very rarely added any formatting to its event descriptions. Some groups use markdown in their event descriptions, so you may need to pass the data through a markdown tool to convert it from that format.

The data, though, is now in a usable state.

Conclusion

I did originally attempt to load the page using PHP and scrape the site that way. The problem is that the original page is just a frontend to a JavaScript app and actually contains very minimal markup alongside a massive load of JSON, which I didn't want to pick apart.

When looking into the requests made from the past events page, I easily found the endpoint being called, which apparently doesn't need any authentication at all (outside of a simple hash) and will let me just pull data from the site. In fact, I was able to extract past event data from a couple of different groups on the site.

I'm in two minds as to whether this is a security problem. Whilst you can even grab attendance information quite easily, and this includes usernames and profile pictures, there is nothing there that isn't already available on the site.

This article is intended to help out people who need to extract past data from the platform, and not as a mechanism of attack or abuse.

I hope that some of this material is useful to your group if you want to save historic event information. If you are leaving Meetup.com and would like some assistance in getting the historic information for your group then please get in touch. You can also join the #! code discord server and chat to people there if you get stuck.
