rsapiget downloads files using the new Rapidshare API

It was brought to my attention by a message in our forums that the download methods described in the “Use wget or curl to download from RapidShare Premium” article are no longer valid. Rapidshare has introduced a new API for account and file management. After a quick read of the Rapidshare API documentation, it was quite clear that the download methods that use regular cookies are no longer supported. I decided to spend some time with this API and write a Python script that can download files both as a free user and as a registered Pro user. I hereby publish this simple Rapidshare client. I wrote it merely as an exercise and to compensate for the outdated information in that old article. I do not have a Rapidshare Pro account at this time and I use such file-hosting services very rarely, so the client has not been tested with a Pro account. If you are a Pro user, your feedback is welcome.

Update: Thanks to sharkic’s feedback, this guide has now been improved with complete instructions on how to use wget and curl with the Rapidshare API. See the new sections at the end of the article.

The Python implementation of a Rapidshare downloader

Instead of writing a wrapper script around wget or curl, I decided to go ahead with a pure Python Rapidshare downloader, which works with both free and Pro accounts. The script is called rsapiget.

Here is the code:

#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
#  rsapiget - A simple command-line downloader that supports the Rapidshare API.
#
#  Homepage:
#
#  Copyright (c) 2010 George Notaras
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.

__version__ = '0.1.0'

# Configuration BEGIN
LOGIN = ''
PASSWORD = ''
USE_SSL = False
VERIFY_MD5SUM = False
# Configuration END

import sys
import urllib
import time

try:
    import hashlib
    md5 = hashlib.md5
except ImportError:     # Python < 2.5
    import md5
    md5 = md5.new

def info(msg):
    sys.stdout.write('%s\n' % msg)
    sys.stdout.flush()

def error(msg):
    sys.stderr.write('%s\n' % msg)
    sys.stderr.flush()
    sys.exit(1)

def transfer_progress(blocks_transfered, block_size, file_size):
    percent = float((blocks_transfered * block_size * 100) / file_size)
    progress = float(blocks_transfered * block_size / 1024)
    downspeed = (float(blocks_transfered * block_size) / float(time.time() - starttime)) / 1024
    sys.stdout.write("Complete: %.0f%% - Downloaded: %.2fKb - Speed: %.3fkb/s\r" % (percent, progress, downspeed))
    sys.stdout.flush()

def download(source, target):
    global starttime
    starttime = time.time()
    filename, headers = urllib.urlretrieve(source, target, transfer_progress)
    sys.stdout.write('Complete: 100%\n')
    sys.stdout.flush()
    for ss in headers:
        if ss.lower() == "content-disposition":
            # 9 == len("filename=")
            filename = headers[ss][headers[ss].find("filename=") + 9:]
    urllib.urlcleanup()     # Clear the cache
    return filename

def verify_file(remote_md5sum, filename):
    f = open(filename, "rb")
    m = md5()
    while True:
        block = f.read(8192)
        if not block:
            break
        m.update(block)
    f.close()
    md5sum = m.hexdigest()
    return md5sum == remote_md5sum

def main():
    if len(sys.argv) != 2:
        error('Need Rapidshare link as argument')

    file_link = sys.argv[1]

    try:
        rapidshare_com, files, fileid, filename = file_link.rsplit('/')[-4:]
    except ValueError:
        error('Invalid Rapidshare link')

    if not rapidshare_com.endswith('rapidshare.com') or files != 'files':
        error('Invalid Rapidshare link')

    if USE_SSL:
        proto = 'https'
        info('SSL is enabled')
    else:
        proto = 'http'

    if VERIFY_MD5SUM:
        info('MD5 sum verification is enabled')

    info('Downloading: %s' % file_link)

    if filename.endswith('.html'):
        target_filename = filename[:-5]
    else:
        target_filename = filename
    info('Save file as: %s' % target_filename)

    # API parameters
    params = {
        'sub': 'download_v1',
        'fileid': fileid,
        'filename': filename,
        'try': '1',
        'withmd5hex': '0',
        }

    if VERIFY_MD5SUM:
        params.update({
            'withmd5hex': '1',
            })

    if LOGIN and PASSWORD:
        params.update({
            'login': LOGIN,
            'password': PASSWORD,
            })

    params_string = urllib.urlencode(params)

    api_url = '%s://api.rapidshare.com/cgi-bin/rsapi.cgi' % proto

    # Get the first API response
    conn = urllib.urlopen('%s?%s' % (api_url, params_string))
    data = conn.read()
    #print data
    conn.close()

    # Parse response
    try:
        key, value = data.split(':', 1)
    except ValueError:
        error('Unexpected API response: %s' % data)

    if key == 'ERROR':
        error(value.strip())

    try:
        server, dlauth, countdown, remote_md5sum = value.split(',')
    except ValueError:
        error('Unexpected API response: %s' % data)

    # Wait for n seconds (free accounts only)
    if int(countdown):
        for t in range(int(countdown), 0, -1):
            sys.stdout.write('Waiting for %s seconds...\r' % t)
            sys.stdout.flush()
            time.sleep(1)
        info('Waited for %s seconds. Proceeding with file download...' % countdown)

    # API parameters for file download
    dl_params = {
        'sub': 'download_v1',
        'fileid': fileid,
        'filename': filename,
        }

    if LOGIN and PASSWORD:
        dl_params.update({
            'login': LOGIN,
            'password': PASSWORD,
            })
    else:
        dl_params.update({
            'dlauth': dlauth,
            })

    dl_params_string = urllib.urlencode(dl_params)

    download_link = '%s://%s/cgi-bin/rsapi.cgi?%s' % (proto, server, dl_params_string)

    downloaded_filename = download(download_link, target_filename)

    if VERIFY_MD5SUM:
        if remote_md5sum.lower() == 'not found':
            info('Remote MD5 sum is not available. Skipping MD5 sum verification...')
        elif downloaded_filename:
            if verify_file(remote_md5sum.lower(), downloaded_filename):
                info('Downloaded and verified %s' % downloaded_filename)
            else:
                error('The downloaded file could not be verified')
        else:
            error('Will not verify. File not found: %s' % downloaded_filename)

    info('Operation Complete')

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        error('\nOperation aborted by user')

Save the code in a file called rsapiget.py.

Usage is very simple:

python rsapiget.py <rapidshare_link>

There are some configuration options at the top of the script you may need to check out:

  • LOGIN, PASSWORD: If you are a registered Pro user, set your username and password here. As a Pro user, you never have to wait for the download to start. Otherwise, leave both blank.
  • USE_SSL: Set to True to force the client to communicate with the rapidshare servers over an encrypted connection. Note that, according to the docs, this is more expensive in terms of Rapidshare points, so it is disabled by default.
  • VERIFY_MD5SUM: If this is set to True, the downloaded file’s integrity will be verified. The docs say that this results in more API calls than not using md5 verification, so this is disabled by default as well.
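If you want to feed the script a whole list of links, a thin wrapper is enough. The following is only a sketch: the filename rsapiget.py and the urls.txt link-list format (one link per line, blank lines and # comments ignored) are assumptions, not part of the script above.

```python
# Hypothetical batch wrapper around the rsapiget script.
# Assumes the script is saved as rsapiget.py next to this wrapper.
import subprocess
import sys

def read_links(path):
    """Return the non-empty, non-comment lines of a link list file."""
    links = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                links.append(line)
    return links

def download_all(listfile):
    """Run one rsapiget process per link; stop the batch on the first failure."""
    for link in read_links(listfile):
        if subprocess.call([sys.executable, 'rsapiget.py', link]) != 0:
            break

# usage: download_all('urls.txt')
```

Each link gets its own process, so one failed download does not corrupt the state of the next one.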

Although the old article includes a small download-server implementation in Bash, I have not tested whether it works with this client.

Please note that this script is work in progress and I might update the code in the following days. So, check back often for updates.

Download from Rapidshare API using wget

All credit for this method goes to sharkic (see comments).

I admit that when I was checking out the API, I had completely overlooked the withcookie option of the getaccountdetails_v1 subroutine. Also, I was not aware that it is now possible for free users to have an account with Rapidshare.

So, to sum up sharkic’s feedback, here is how it is done. The following requires that you have signed up with Rapidshare. Of course, downloading files using wget and the instructions below requires a Rapidshare Pro account.

First, save the cookie data. This has to be done once:

wget -q -O - \
    --post-data="sub=getaccountdetails_v1&withcookie=1&login=LOGIN&password=PASSWORD" \
    https://api.rapidshare.com/cgi-bin/rsapi.cgi \
    | grep cookie | cut -d '=' -f 2 > .rapidshare_cookie

Substitute LOGIN and PASSWORD with your Rapidshare account’s username and password.
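The same cookie retrieval can also be done from Python. This is only a sketch of what the wget pipeline above does; the parsing step assumes the getaccountdetails_v1 response format of one key=value pair per line, and fetch_cookie is a hypothetical helper using the Python 2 urllib module (like the script above).

```python
# Sketch of the cookie pipeline in Python. extract_cookie() is the
# equivalent of: grep cookie | cut -d '=' -f 2
API_URL = 'https://api.rapidshare.com/cgi-bin/rsapi.cgi'

def extract_cookie(api_response):
    """Pick the value of the 'cookie=...' line out of an API response."""
    for line in api_response.splitlines():
        line = line.strip()
        if line.startswith('cookie='):
            return line.split('=', 1)[1]
    return None

def fetch_cookie(login, password):
    """POST getaccountdetails_v1 and return the account cookie (Python 2)."""
    import urllib  # Python 2 standard library, as in the script above
    params = urllib.urlencode({
        'sub': 'getaccountdetails_v1',
        'withcookie': '1',
        'login': login,
        'password': password,
    })
    return extract_cookie(urllib.urlopen(API_URL, params).read())
```

Save the returned string to .rapidshare_cookie and the wget/curl commands below can use it unchanged.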

Now, you can download files using:

wget --no-cookies --header="Cookie: enc=`cat .rapidshare_cookie`" <rapidshare_link>

Download from Rapidshare API using curl

The curl method is a derivative of the wget method. First save the cookie with:

curl --data "sub=getaccountdetails_v1&withcookie=1&login=LOGIN&password=PASSWORD" \
    https://api.rapidshare.com/cgi-bin/rsapi.cgi \
    | grep cookie | cut -d '=' -f 2 > .rapidshare_cookie

Substitute LOGIN and PASSWORD with your Rapidshare account’s username and password.

Now, you can download files using:

curl -L -O --cookie "enc=`cat .rapidshare_cookie`" <rapidshare_link>
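The saved cookie can also be attached from Python by sending it as a request header, exactly as curl does with --cookie. This is only a sketch under the assumption that the cookie has been saved to .rapidshare_cookie as shown above; download_with_cookie is a hypothetical helper, not part of the rsapiget script.

```python
# Sketch: download through the saved premium cookie (Python 2 urllib2).
def cookie_header(cookie):
    """Build the same 'Cookie: enc=...' header the wget/curl commands send."""
    return {'Cookie': 'enc=%s' % cookie.strip()}

def download_with_cookie(url, cookie_file='.rapidshare_cookie'):
    import urllib2  # Python 2 only
    cookie = open(cookie_file).read()
    req = urllib2.Request(url, headers=cookie_header(cookie))
    resp = urllib2.urlopen(req)
    target = url.rsplit('/', 1)[-1]  # save under the link's file name, like curl -O
    out = open(target, 'wb')
    out.write(resp.read())
    out.close()
    return target
```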

Enjoy! Your feedback is welcome.

rsapiget downloads files using the new Rapidshare API by George Notaras is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright © 2010 - Some Rights Reserved


About George Notaras

George Notaras is the editor of the G-Loaded Journal, a technical blog about Free and Open-Source Software. George, among other things, is an enthusiastic, self-taught GNU/Linux system administrator. He has created this web site to share the IT knowledge and experience he has gained over the years with other people. George primarily uses CentOS and Fedora. He has also developed some open-source software projects in his spare time.

35 responses on “rsapiget downloads files using the new Rapidshare API”

  1. Tom Permalink →

    Thank you again, its working for me:)

  2. sharkic Permalink →

wget --no-cookies --header="Cookie: enc=YOURCOOKIE" ""

  3. George Notaras Post authorPermalink →

    @sharkic: That’s awesome! Thanks for your feedback! I had completely overlooked the withcookie option of the getaccountdetails_v1 subroutine. Also, I was not aware that it is possible for free users to have an account with Rapidshare. I’ll update the post and add this information.

  4. sharkic Permalink →

    Wget option:

    first “read” cookie

$ wget -qO- '' | grep cookie | cut -d'=' -f2

    and you will get something like this:

    then use:

$ wget --no-cookies --header="Cookie: enc=YOURCOOKIE"


  5. polar Permalink →

    I am very excited with what I just read in this page. Cheers!!!

  6. taipan Permalink →

It’s not working? API usage changed?

  7. George Notaras Post authorPermalink →

    @taipan: Hi. Could you be more specific about which method you refer to? I just tested the python-based downloader (code is in the post). I downloaded a test file as a free user and it seems to work fine. The wget and curl methods require a pro account and cannot be used with a free account.

  8. martin Permalink →

    Hey George,

    thanks for the python script, works like a charm. It would be great though, if you could make it that the configuration parameters can be supplied from the command line. Especially an adjustable download folder would be great.

    Btw, the wget workaround doesn’t seem to work for me either.


  9. George Notaras Post authorPermalink →

    @Marty: These are some useful features. As soon as I find some free time in the weekend I’ll add these and probably make a small project out of this script.

    Thanks for your feedback.

  10. Yannis Permalink →

    The wget method is working for me as a premium user.

    Thanks George and sharkic ! I was terrified there wouldn’t be a command line workaround from that advanced-looking new API!

  11. Me, David Permalink →


    Thanks for these methods, for me personally they all work.
    However I prefer curl or python because of what they print to the user’s screen (stdout ?), but can’t get them to work when multiple urls are in a file

    For curl I tried:

    curl -K urls.txt

    where urls.txt is in this format

    url = "http......"

    For the python I have absolutely no idea how to …

    1. George Notaras Post authorPermalink →

      Hi David. I intend to make a small utility out of this python snippet, but, unfortunately, I haven’t found the time to do so yet.

  12. ferti Permalink →

    both wget and curl return an error page in test.avi instead of the real content
    Any idea?

  13. Daniel Oliveira Permalink →

    Hello, the python code is able to download links within a file?

    $ python links_rs.txt

  14. machinat Permalink →

    Wget doesn’t work for me too with premium account. The funny thing is that four days back I had used wget to download files using the load-cookie option. The same thing doesn’t work now.

    The python script works perfectly but the problem is that it can’t resume downloads. With my type of connection that is critical.

    How is the –header option different from the –load-cookie option? I had used the rs cookie entry from Firefox previously, and the cookie is exactly the same as that from your method.

  15. machinat Permalink →

    Possibly due to the API change, rs changed the download file to a javascript snippet (what ferti mentions). Basically, replacing the download link with the location the js snippet points to gives the actual file (escape the ! for bash).

    This is what I had done a few days back to download files using wget using –load-cookie. Now it doesn’t work anymore. This is a bummer.

  16. who Permalink →

    As a premium user and with enabled direct downloads, you are able to download through http authentication.
    Ex.: wget --http-user= --http-password=
    No need to hasle around with cookies, etc.

  17. Milan Permalink →

    Here is a simple bash script implementing the above method using curl.
    – you can submit a download list both in a file or on the command line (multiple URLs)
    – the download list in a file will only download lines starting with “http://” and containing “rapidshare”
    – already downloaded files are skipped

    USER=LOGIN
    PASS=PASSWORD
    COOK=$(mktemp)
    TEMP=$(mktemp)
    curl -s -d "sub=getaccountdetails_v1&withcookie=1&login=$USER&password=$PASS" \
        https://api.rapidshare.com/cgi-bin/rsapi.cgi | grep cookie | cut -d= -f2 > $COOK
    if [ -f "$1" ]; then
        cat "$1" | grep "^http\:\/\/" | grep "rapidshare" > $TEMP
    else
        echo $@ | tr [:space:] "\n" > $TEMP
    fi
    while read URL; do
        if [ ! -f $(basename ${URL}) ]; then
            echo "Downloading file: $(basename $URL)"
            curl -L --header "Cookie: enc=$(cat $COOK)" -O "$URL"
        else
            echo "Skipping - file exists: $(basename $URL)"
        fi
    done < "$TEMP"
    rm $COOK $TEMP
  18. George Notaras Post authorPermalink →

    Thanks all for your quality feedback.

    I intended to improve this script and write a proper command line utility, but my free time hasn’t been much during the last months. But, this will happen.

  19. anthony Permalink →

    many thanks for link to the Rapidshare API documentation. It helped me to update my old good bash script for free RS download.

  20. anthony Permalink →

    If somebody interested, my bash script is appended. It is just for free account, but works well even if user is hidden behind NAT and has to compete with other users. Enjoy.

    #   Purpose: Automate    #
    # files download using the free account #
    #     freeware (c) 20110312vaton        #
    # usage:
    # All RS links you want to download put into the "input.txt" file;
    # then run the script. All downloaded files you will find in the
    # "downloaded" subdir.
    # Successfully processed links are moved from the "input.txt" to
    # the "done.txt" file, while links with problems (file not found
    # or 10 unsuccesfull retries) are moved to the "bad.txt" file.
    # All received messages are logged in the "messages.txt" file
    # (quite usefull if something goes wrong - else just delete it).
    INF=input.txt                                  # input links file name
    DONEF=done.txt                                 # done links file name
    BADF=bad.txt                                   # bad links file name
    TMPF=wget-out.tmp                              # temporary file name
    MSGF=messages.txt                              # messages log file name
    # timer function
    # usage: timer <seconds> "<text before counter>" "<text after counter>"
    timer () {
      TIME=$1
      for i in `seq $TIME -1 1`; do
        gecho -ne "\r${2:-""} $(printf "%03d" ${i})s ${3:-""}    "
        sleep 1
      done
      gecho -ne "\r${2:-""} DONE                                        \n"
    }
    #### main ####
    # delete old temporary file
    if [ -f $TMPF ]; then                          # remove temporary file
      rm $TMPF
    fi
    retry=1                                        # set retry counter
    first="YES"                                    # set first pass flag
    # input file processing loop starts here
    if [ `wc -l $INF | cut -d " " -f 1` = 0 ]; then
      gecho "nothing to do; check the input.txt file !!"
      exit 1
    fi
    while [ `wc -l $INF | cut -d " " -f 1` != 0 ]; do
      read line < $INF
      line=`gecho -n "$line" | sed 's!\r!!g'`      # remove CR
      if [ "$line" = "" ]; then                    # line is empty
        sed -i '1 d' $INF; rm `ls | grep sed`      # remove line from input file,
        retry=1                                    # ... reset retry counter
        first="YES"                                #    ... set first pass flag
        continue;                                  #       ... and check next line 
      else                                         # create URL for rapidshare api call
        N1=${line#*//}                             # remove initial "http://"
        N2=${N1#*/}                                # extract
        N3=${N2#*/}
        FID=${N3%/*}                               # file ID
        FNAME=`basename "$line"`                   # ... and file name
        # build the URL for download request
        URL="http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=download_v1&fileid=${FID}&filename=${FNAME}&try=1"
      fi
      # print header on screen
      if [ "$first" = "YES" ]; then                 # print info
        gecho "`wc -l $INF | cut -d " " -f 1` links in the $INF file"
        gecho "Downloading file $FNAME"
        first="NO"                                  # clear first pass flag
      fi
      # send request to rapidshare and save response to temporary file
      wget -q -O $TMPF $URL
      # save mesage to log file
      message="$(sed 's@location=\"/#!download.*@@' $TMPF)"
      gecho "[`gdate +"%Y-%m-%d %H:%M.%S"`] ${message}[length=${#message}]" >> $MSGF
      if [ "$(echo "$message" | egrep "ERROR:")" != "" ]; then  # error messages processing
        # check for "File not found" error
        if [ "$(echo "$message" | egrep "File not found")" != "" ]; then  # file not found
          gecho "$line" >> $BADF                    # copy link to bad list
          gecho "FILE NOT FOUND"                    # print error message on screen
          sed -i '1 d' $INF; rm `ls | grep sed`     # remove processed line from input file,
          retry=1                                   # ... reset retry counter
          first="YES"                               #    ... set first pass flag
          continue;                                 #       ... and download next file 
        fi
        # check for "address busy" error
        if [ "$(echo "$message" | egrep "You need RapidPro to download")" != "" ]; then
          timer 300 "Address busy; waiting 300 sec -" "before next try"
          continue;
        fi
        # check for number (should be the wait time in seconds)
        waittime=$(echo "$message" | egrep -o "[0-9]* " | tr -d '\n' | tr -d ' ')
        if [ "$waittime" != "" ]; then              # number found, process it
          if [ $waittime -gt 12 ]; then             # run timer
            CD=`expr $waittime / 4`                 # ...with the 3/4 of wait time value
            CD=`expr $waittime - $CD`
            timer $CD "Waiting $CD of $waittime sec -" "to retry"
            continue;                               # ... and try again
          fi
          if [ $waittime -gt 2 ]; then              # run timer with 1 sec wait time
            timer 1 "Waiting 1 of $waittime sec -" "to retry"
            continue;                               # ... and try again
          fi
          # waittime is less or equal to 2 sec here
          gecho -n ">"
          continue;                                 # no wait, try again immediately
        else                                        # unexpected error message
          if [ $retry -lt 10 ]; then                # retry
            timer 30 "Address busy; waiting 30 sec -" "before next try"
            gecho -n "?"                            # print '?' on screen
            retry=`expr $retry '+' 1`               # increase retry counter
            continue;                               # try again
          else                                      # too many retries, skip to next item
            gecho "$line" >> $BADF
            sed -i '1 d' $INF; rm `ls | grep sed`   # remove processed link from input file,
            retry=1                                 # reset retry counter,
            first="YES"                             # set first pass flag ...
            continue;                               # ... and download next file 
          fi
        fi                                         ## end of wait time processing
      fi                                           ## end of ERROR message processing
      if [ "$(echo "$message" | egrep "DL:")" != "" ]; then  # ticket data processing
        gecho "got ticket"
        A1=${message#*:}                            # remove initial "DL:"
        HOST=${A1%%,*}                              # extract hostname,
        A2=${A1#*,}                                 # ... then
        AUTH=${A2%,*}                               # ... dlauth
        CD=${A2#*,}                                 #     ... and countdown
        timer $CD "Waiting $CD sec -" "to download" # wait for time specified by countdown
        URL="http://${HOST}/cgi-bin/rsapi.cgi?sub=download_v1&fileid=${FID}&filename=${FNAME}&dlauth=${AUTH}"
        wget -O downloaded/${FNAME} $URL            # download file
        gecho "${line}" >> $DONEF                   # copy link to done list
        sed -i '1 d' $INF; rm `ls | grep sed`       # remove processed link from input file,
        retry=1                                     # reset retry counter,
        first="YES"                                 # set first pass flag
      fi                                           ## end of ticket message processing
    done                                           ## end of while loop
    gecho "press ENTER to exit"                    # wait for ENTER before closing terminal window
    read
  21. George Notaras Post authorPermalink →

    Hi Antony. Thanks for contributing your work.

    PS: Your comment had stuck in a wordpress unapproved comment queue. Sorry for taking me several days to notice it.

  22. Me, David Permalink →

    Hi David. I intend to make a small utility out of this python snippet, but, unfortunately, I haven’t found the time to do so yet.

    George, any progress of feeding the python script text files containing a set of URLs ?

  23. DaveQB Permalink →
    if [ "$#" -eq 1 ]
    then
            INPUT="$1"
            if [ -f "$1" ]
            then
                    F=" -i "
            fi
            wget -c --limit-rate=280k --auth-no-challenge --user= --password= $F "${INPUT}"
    else
            printf "Usage: $0 \n\n"
            exit 1
    fi
    exit 0

    This has been working for me for a few years now. Needs a relatively new version of wget, but not that new.

  24. DaveQB Permalink →

    Put in your username and password into that command line, obviously.

  25. bma Permalink →

    @ “Me, David”:

    while read URL; do rsapiget "$URL"; done < urls.txt
  26. tbfvrs Permalink →


    I wonder how to download from a list of files using wget… ?

    wget --no-cookies --header="Cookie: enc=`cat .rapidshare_cookie`" -i List.txt

    Is the above phrase ok ?
    Anyone can help me ?

  27. Miten Permalink →

    gives invalid routine called error.

  28. superuser Permalink →

    Subrutine is now: getaccountdetails

  29. sarb Permalink →

    i have changed the subroutine to “getaccountdetails” in place of “download_v1”. it gives the following error:

    conn = urllib.urlopen('%s?%s' % (api_url, params_string))
    File "C:\Python27\Lib\", line 84, in urlopen
    File "C:\Python27\Lib\", line 205, in open
    return getattr(self, name)(url)
    File "C:\Python27\Lib\", line 342, in open_http
    File "C:\Python27\Lib\", line 951, in endheaders
    File "C:\Python27\Lib\", line 811, in _send_output
    File "C:\Python27\Lib\", line 773, in send
    File "C:\Python27\Lib\", line 754, in connect
    self.timeout, self.source_address)
    File "C:\Python27\Lib\", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
    IOError: [Errno socket error] [Errno 11004] getaddrinfo failed

    please help

  30. Starous Permalink →

    @ anthony


    with your script I get this error msg. What is wrong?
    I’m using it on a Raspberry Pi with Raspbian.

    File "", line 33
    SyntaxError: invalid syntax