rsapiget downloads files using the new Rapidshare API

It was brought to my attention by a message in our forums that the download methods described in the “Use wget or curl to download from RapidShare Premium” article are no longer valid: Rapidshare has introduced a new API for account and file management. After a quick read of the Rapidshare API documentation, it was quite clear that the download methods that use regular cookies are no longer supported. I decided to spend some time with this API and write a Python script that can download files both as a free user and as a registered Pro user. I hereby publish this simple Rapidshare client. I wrote it merely as an exercise and to compensate for the outdated information in that old article. I do not have a Rapidshare Pro account at this time, and I use such file-hosting services very rarely, so the client has not been tested with a Pro account. If you are a Pro user, your feedback is welcome.

Update: Thanks to sharkic’s feedback, this guide has now been improved by providing complete instructions on how to use wget and curl with the Rapidshare API. See the new sections at the end of the article.

The python implementation of a Rapidshare downloader

Instead of writing a wrapper script around wget or curl, I decided to go ahead with a pure Python Rapidshare downloader, which works with both free and Pro accounts. The script is called rsapiget.

Here is the code:

#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
#  rsapiget - A simple command-line downloader that supports the Rapidshare API.
#
#  Homepage: http://www.g-loaded.eu/rsapiget-download-rapidshare-api/
#
#  Copyright (c) 2010 George Notaras, G-Loaded.eu, CodeTRAX.org
#
#  Licensed under the Apache License, Version 2.0 (the "License");
#  you may not use this file except in compliance with the License.
#  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
#
 
# Configuration BEGIN
LOGIN = ''
PASSWORD = ''
USE_SSL = False
VERIFY_MD5SUM = False
# Configuration END
 
__version__ = '0.1.0'
 
import sys
import os
import urllib
import subprocess
import time
try:
    import hashlib
    md5 = hashlib.md5
except ImportError:
    import md5
    md5 = md5.new
 
def info(msg):
    sys.stdout.write('%s\n' % msg)
    sys.stdout.flush()
 
def error(msg):
    sys.stderr.write('%s\n' % msg)
    sys.stderr.flush()
    sys.exit(1)
 
def transfer_progress(blocks_transferred, block_size, file_size):
    percent = float(blocks_transferred * block_size * 100) / file_size
    progress = float(blocks_transferred * block_size) / 1024
    downspeed = (float(blocks_transferred * block_size) / (time.time() - starttime)) / 1024
    sys.stdout.write("Complete: %.0f%% - Downloaded: %.2fKB - Speed: %.3fKB/s\r" % (percent, progress, downspeed))
    sys.stdout.flush()
 
def download(source, target):
    global starttime
    starttime = time.time()
    filename, headers = urllib.urlretrieve(source, target, transfer_progress)
    sys.stdout.write('Complete: 100%\n')
    sys.stdout.flush()
    for ss in headers.keys():
        if ss.lower() == "content-disposition":
            # Keep only the value after "filename=" in the header
            filename = headers[ss][headers[ss].find("filename=") + len("filename="):]
    urllib.urlcleanup()     # Clear the urlretrieve cache
    return filename
 
def verify_file(remote_md5sum, filename):
    f = open(filename, "rb")
    m = md5()
    while True:
        block = f.read(32384)
        if not block:
            break
        m.update(block)
    md5sum = m.hexdigest()
    f.close()
    return md5sum == remote_md5sum
 
def main():
    if len(sys.argv) != 2:
        error('Need Rapidshare link as argument')
 
    file_link = sys.argv[1]
 
    try:
        rapidshare_com, files, fileid, filename = file_link.rsplit('/')[-4:]
    except ValueError:
        error('Invalid Rapidshare link')
    if not rapidshare_com.endswith('rapidshare.com') or files != 'files':
        error('Invalid Rapidshare link')
 
    if USE_SSL:
        proto = 'https'
        info('SSL is enabled')
    else:
        proto = 'http'
 
    if VERIFY_MD5SUM:
        info('MD5 sum verification is enabled')
 
    info('Downloading: %s' % file_link)
 
    if filename.endswith('.html'):
        target_filename = filename[:-5]
    else:
        target_filename = filename
    info('Save file as: %s' % target_filename)
 
    # API parameters
 
    params = {
        'sub': 'download_v1',
        'fileid': fileid,
        'filename': filename,
        'try': '1',
        'withmd5hex': '0',
        }
 
    if VERIFY_MD5SUM:
        params.update({
            'withmd5hex': '1',
            })
 
    if LOGIN and PASSWORD:
        params.update({
            'login': LOGIN,
            'password': PASSWORD,
            })
 
    params_string = urllib.urlencode(params)
 
    api_url = '%s://api.rapidshare.com/cgi-bin/rsapi.cgi' % proto
 
    # Get the first error response
    conn = urllib.urlopen('%s?%s' % (api_url, params_string))
    data = conn.read()
    #print data
    conn.close()
 
    # Parse response
    try:
        key, value = data.split(':')
    except ValueError:
        error(data)
    try:
        server, dlauth, countdown, remote_md5sum = value.split(',')
    except ValueError:
        error(data)
 
    # Wait for n seconds (free accounts only)
    if int(countdown):
        for t in range(int(countdown), 0, -1):
            sys.stdout.write('Waiting for %s seconds...\r' % t)
            sys.stdout.flush()
            time.sleep(1)
        info('Waited for %s seconds. Proceeding with file download...' % countdown)
 
    # API parameters for file download
 
    dl_params = {
        'sub': 'download_v1',
        'fileid': fileid,
        'filename': filename,
        }
 
    if LOGIN and PASSWORD:
        dl_params.update({
            'login': LOGIN,
            'password': PASSWORD,
            })
    else:
        dl_params.update({
            'dlauth': dlauth,
            })
 
    dl_params_string = urllib.urlencode(dl_params)
 
    download_link = '%s://%s/cgi-bin/rsapi.cgi?%s' % (proto, server, dl_params_string)
 
    downloaded_filename = download(download_link, target_filename)
 
    if VERIFY_MD5SUM:
        if remote_md5sum.lower() == 'not found':
            info('Remote MD5 sum is not available. Skipping MD5 sum verification...')
        elif downloaded_filename:
            if verify_file(remote_md5sum.lower(), downloaded_filename):
                info('Downloaded and verified %s' % downloaded_filename)
            else:
                error('The downloaded file could not be verified')
        else:
            error('Will not verify. File not found: %s' % downloaded_filename)
 
    info('Operation Complete')
 
if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        error('\nAborted')

Save the code in a file called: rsapiget.py

Usage is very simple:

python rsapiget.py <rapidshare_link>
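For reference, the link-parsing step that the script's `main()` performs can be shown as a standalone sketch (written in modern Python 3, unlike the Python 2 script above). The expected link shape is http://rapidshare.com/files/&lt;fileid&gt;/&lt;filename&gt;:

```python
def parse_rapidshare_link(link):
    """Split a Rapidshare link into (fileid, filename).

    Mirrors the rsplit() check in rsapiget's main(): raises
    ValueError if the link does not look like
    http://rapidshare.com/files/<fileid>/<filename>.
    """
    host, files, fileid, filename = link.rsplit('/')[-4:]
    if not host.endswith('rapidshare.com') or files != 'files':
        raise ValueError('Invalid Rapidshare link')
    return fileid, filename

# Example:
# parse_rapidshare_link('http://rapidshare.com/files/130403982/test.zip')
# -> ('130403982', 'test.zip')
```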

There are some configuration options at the top of the script you may need to check out:

  • LOGIN, PASSWORD: If you are a registered Pro user, set your username and password here. Being a pro user you will never have to wait for the download to start. Otherwise leave blank.
  • USE_SSL: Set to True to force the client to communicate with the rapidshare servers over an encrypted connection. Note that, according to the docs, this is more expensive in terms of Rapidshare points, so it is disabled by default.
  • VERIFY_MD5SUM: If this is set to True, the downloaded file’s integrity will be verified. The docs say that this results in more API calls than not using md5 verification, so this is disabled by default as well.

The old article also includes a small download-server implementation in Bash; I haven’t tested whether it works with this client.

Please note that this script is work in progress and I might update the code in the following days. So, check back often for updates.

Download from Rapidshare API using wget

All credit for this method goes to sharkic (see comments).

I admit that when I was checking out the API, I had completely overlooked the withcookie option of the getaccountdetails_v1 subroutine. Also, I was not aware that it is now possible for free users to have an account with Rapidshare.

So, to sum up sharkic’s feedback, here is how it is done. The following assumes that you have signed up with Rapidshare. Note that downloading files with wget as described below requires a Rapidshare Pro account.

First, save the cookie data. This has to be done once:

wget -q -O - \
    --post-data="sub=getaccountdetails_v1&withcookie=1&login=LOGIN&password=PASSWORD" \
    https://api.rapidshare.com/cgi-bin/rsapi.cgi \
    | grep cookie | cut -d '=' -f 2 > .rapidshare_cookie

Substitute LOGIN and PASSWORD with your Rapidshare account’s username and password.
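The grep/cut pipeline above simply isolates the value of the cookie line in the API response. The same extraction can be sketched in Python 3; note that the sample response below is made up for illustration and only approximates the real getaccountdetails_v1 output:

```python
def extract_cookie(api_response):
    """Return the value of the 'cookie=' line from a
    getaccountdetails_v1 response, or None if absent."""
    for line in api_response.splitlines():
        if line.startswith('cookie='):
            return line.split('=', 1)[1]
    return None

# Hypothetical sample response, one key=value pair per line:
sample = 'accountid=12345\ntype=prem\ncookie=AJSLDKAJSDLAKS10923EK\n'
print(extract_cookie(sample))  # AJSLDKAJSDLAKS10923EK
```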

Now, you can download files using:

wget --no-cookies --header="Cookie: enc=`cat .rapidshare_cookie`" http://rapidshare.com/files/XXXXXXXX/test.zip

Download from Rapidshare API using curl

The curl method is a derivative of the wget method. First save the cookie with:

curl --data "sub=getaccountdetails_v1&withcookie=1&login=LOGIN&password=PASSWORD" \
    https://api.rapidshare.com/cgi-bin/rsapi.cgi \
    | grep cookie | cut -d '=' -f 2 > .rapidshare_cookie

Substitute LOGIN and PASSWORD with your Rapidshare account’s username and password.

Now, you can download files using:

curl -L -O --cookie "enc=`cat .rapidshare_cookie`" http://rapidshare.com/files/XXXXXXXX/test.zip
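Under the hood, both the curl and wget methods perform a plain HTTP GET with a `Cookie: enc=...` header. A Python 3 sketch of constructing such a request looks like this (the cookie value is hypothetical, and the actual download call is left commented out since it needs a live server):

```python
import urllib.request

# Hypothetical cookie value, as read from the .rapidshare_cookie file
cookie = 'AJSLDKAJSDLAKS10923EK'

req = urllib.request.Request(
    'http://rapidshare.com/files/XXXXXXXX/test.zip',
    headers={'Cookie': 'enc=%s' % cookie},
)
# urllib.request.urlopen(req) would then perform the authenticated download.
print(req.get_header('Cookie'))  # enc=AJSLDKAJSDLAKS10923EK
```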

Enjoy! Your feedback is welcome.

rsapiget downloads files using the new Rapidshare API by George Notaras, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright © 2010 - Some Rights Reserved

About George Notaras

George Notaras is the editor of G-Loaded Journal, a technical blog about Free and Open-Source Software. George is a GNU/Linux enthusiast, a self-taught programmer and system administrator. He strongly believes that "knowledge is power" and has created this web site to share the IT knowledge and experience he has gained over the years with other people. George primarily uses CentOS and Fedora and spends some of his spare time developing open-source software. Follow George on Twitter: @gnotaras

39 responses on “rsapiget downloads files using the new Rapidshare API”

  1. sharkic

    Wget option:

    first “read” cookie

    $ wget -qO- 'https://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=getaccountdetails_v1&withcookie=1&type=prem&login=USERNAME&password=PASSWORD' | grep cookie | cut -d'=' -f2

    and you will get something like this:
    AJSLDKAJSDLAKS10923EKJSDQA09128731KN23JK123097K1JL2H3KL1

    then use:

    $ wget --nocookies --header="Cookie: enc=YOURCOOKIE"

    :)

  2. sharkic

    wget --no-cookies --header="Cookie: enc=YOURCOOKIE" ""

  3. sharkic

    wget --no-cookies --header="Cookie: enc=YOURCOOKIE" url

  4. George Notaras (post author)

    @sharkic: That’s awesome! Thanks for your feedback! I had completely overlooked the withcookie option of the getaccountdetails_v1 subroutine. Also, I was not aware that it is possible for free users to have an account with Rapidshare. I’ll update the post and add this information.

  5. polar

    I am very excited with what I just read in this page. Cheers!!!

  6. George Notaras (post author)

    @taipan: Hi. Could you be more specific about which method you refer to? I just tested the python-based downloader (code is in the post). I downloaded a test file as a free user and it seems to work fine. The wget and curl methods require a pro account and cannot be used with a free account.

  7. martin

    Hey George,

    thanks for the python script, works like a charm. It would be great though, if you could make it that the configuration parameters can be supplied from the command line. Especially an adjustable download folder would be great.

    Btw, the wget workaround doesn’t seem to work for me either.

    Cheers!
    Marty

  8. George Notaras (post author)

    @Marty: These are some useful features. As soon as I find some free time in the weekend I’ll add these and probably make a small project out of this script.

    Thanks for your feedback.

  9. Yannis

    The wget method is working for me as a premium user.

    Thanks George and sharkic ! I was terrified there wouldn’t be a command line workaround from that advanced-looking new API!

  10. Me, David

    Hi,

    Thanks for these methods; for me personally they all work.
    However, I prefer curl or python because of what they print to the user's screen (stdout?), but I can't get them to work when multiple URLs are in a file.

    For curl I tried:

    curl -K urls.txt

    where urls.txt is in this format

    url = "http......"

    For the python I have absolutely no idea how to …

    1. George Notaras (post author)

      Hi David. I intend to make a small utility out of this python snippet, but, unfortunately, I haven’t found the time to do so yet.

  11. ferti

    Hi,
    both wget and curl return
    [script type="text/javascript"]location="/#!download|4034l3|4325453222|test.avi|366278";[/script]
    in test.avi instead of the real content.
    Any idea?

  12. machinat

    Wget doesn’t work for me either with a premium account. The funny thing is that four days back I had used wget to download files using the load-cookie option. The same thing doesn’t work now.

    The python script works perfectly but the problem is that it can’t resume downloads. With my type of connection that is critical.

    How is the --header option different from the --load-cookie option? I had used the rs cookie entry from Firefox previously, and the cookie is exactly the same as that from your method.

  13. machinat

    Possibly due to the API change, rs changed the download file to a javascript snippet (what ferti mentions). Basically replacing the download link with the location in the js snippet points to the actual file (escape the ! for bash).

    This is what I had done a few days back to download files with wget using --load-cookie. Now it doesn’t work anymore. This is a bummer.

  14. script kiddy

    I found a way to download via wget. You will basically have to do the same things the script does. You need to:

    1. open the URL http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=download_v1&fileid=<FILEID>&filename=<FILENAME> either in your browser or with wget. Rapidshare will tell you a download server
    2. Replace api.rapidshare.com with the server Rapidshare told you and append &login=<LOGIN>&password=<PASSWORD>
    The whole URL should look something like this: http://rsXXXdt.rapidshare.com/cgi-bin/rsapi.cgi?sub=download_v1&fileid=12345678&filename=Example-File.zip&login=foo&password=bar
    3. That’s it, you’re good to go. You can GET like this with wget, curl and probably any other program.

    I guess there should be a more convenient way using cookies, but this worked for me…

  15. who

    As a premium user with direct downloads enabled, you are able to download through HTTP authentication.
    Ex.: wget --http-user=<USERNAME> --http-password=<PASSWORD> <URL>
    No need to hassle around with cookies, etc.

  16. Milan

    Here is a simple bash script implementing the above method using curl.
    Features:
    - you can submit a download list either in a file or on the command line (multiple URLs)
    - from a download list in a file, only lines starting with "http://" and containing "rapidshare" are downloaded
    - already downloaded files are skipped

    #!/bin/bash
    USER="username"
    PASS="password"
    COOK=$(mktemp)
    TEMP=$(mktemp)
     
    curl -s -d "sub=getaccountdetails_v1&withcookie=1&login=$USER&password=$PASS" https://api.rapidshare.com/cgi-bin/rsapi.cgi | grep cookie | cut -d= -f2  > $COOK
     
    if [ -f "$1" ]; then
       cat "$1" | grep "^http\:\/\/" | grep "rapidshare" >$TEMP
    else
        echo $@ |tr [:space:] "\n" >$TEMP
    fi
     
    while read URL; do
        if [ ! -f $(basename ${URL}) ]; then
    	echo "Downloading file: $(basename $URL)"
    	curl -L --header "Cookie: enc=$(cat $COOK)" -O "$URL"
        else
    	echo "Skipping - file exists: $(basename $URL)"
        fi
    done < "$TEMP"
     
    rm $COOK $TEMP
  17. George Notaras (post author)

    Thanks all for your quality feedback.

    I intended to improve this script and write a proper command line utility, but my free time hasn’t been much during the last months. But, this will happen.

  18. anthony

    Hi,
    many thanks for link to the Rapidshare API documentation. It helped me to update my old good bash script for free RS download.

  19. anthony

    If anybody is interested, my bash script is appended. It is just for a free account, but it works well even if the user is hidden behind NAT and has to compete with other users. Enjoy.

    #!/bin/bash
     
    #########################################
    #   Purpose: Automate Rapidshare.com    #
    # files download using the free account #
    #     freeware (c) 20110312vaton        #
    #########################################
     
    # usage:
     
    # All RS links you want to download put into the "input.txt" file;
    # then run the script. All downloaded files you will find in the
    # "downloaded" subdir.
     
    # Successfully processed links are moved from the "input.txt" to
    # the "done.txt" file, while links with problems (file not found
    # or 10 unsuccessful retries) are moved to the "bad.txt" file.
    # All received messages are logged in the "messages.txt" file
    # (quite useful if something goes wrong - else just delete it).
     
    INF=input.txt                                  # input links file name
    DONEF=done.txt                                 # done links file name
    BADF=bad.txt                                   # bad links file name
    TMPF=wget-out.tmp                              # temporary file name
    MSGF=messages.txt                              # messages log file name
     
     
    # timer function
    # usage: timer <seconds> "<prefix text>" "<suffix text>"
     
    timer()
    {
      TIME=${1:-960}
      for i in `seq $TIME -1 1`; do
        gecho -ne "\r${2:-""} $(printf "%03d" ${i})s ${3:-""}    "
        sleep 1
      done
      gecho -ne "\r${2:-""} DONE                                        \n"
    }
     
     
    #### main ####
     
    # delete old temporary file
     
    if [ -f $TMPF ]; then                          # remove temporary file
      rm $TMPF
    fi
     
    retry=1                                        # set retry counter
    first="YES"                                    # set first pass flag
     
    # input file processing loop starts here
     
    if  [ `wc -l $INF | cut -d " " -f 1` = 0 ]; then
      gecho "nothing to do; check the input.txt file !!"
      gecho
    fi
     
    while [ `wc -l $INF | cut -d " " -f 1` != 0 ]; do
     
      read line < $INF
      line=`gecho -n "$line" | sed 's!\r!!g'`      # remove CR
     
      if [ "$line" = "" ]; then                    # line is empty
        sed -i '1 d' $INF; rm `ls | grep sed`      # remove line from input file,
        retry=1                                    # ... reset retry counter
        first="YES"                                #    ... set first pass flag
        continue;                                  #       ... and check next line 
     
      else                                         # create URL for rapidshare api call
     
        N1=${line#*//}                             # remove initial "http://"  
        N2=${N1#*/}                                # extract
        N3=${N2#*/}
        FID=${N3%/*}                               # file ID
        FNAME=`basename "$line"`                   # ... and file name
     
        # build the URL for download request
     
        URL="http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=download_v1&try=1&fileid=${FID}&filename=${FNAME}"
     
      fi
     
      # print header on screen
     
      if [ "$first" = "YES" ]; then                 # print info
        gecho "`wc -l $INF | cut -d " " -f 1` links in the $INF file"
        gecho "Downloading file $FNAME"
        first="NO"                                  # clear first pass flag
      fi
     
      # send request to rapidshare and save response to temporary file
     
      wget -q -O $TMPF $URL
     
      # save mesage to log file
     
      message="$(sed 's@location=\"/#!download.*@@' $TMPF)"
      gecho "[`gdate +"%Y-%m-%d %H:%M.%S"`] ${message}[length=${#message}]" >> $MSGF
     
      if [ "$(echo "$message" | egrep "ERROR:")" != "" ]; then  # error messages processing
     
        # check for "File not found" error
     
        if [ "$(echo "$message" | egrep "File not found")" != "" ]; then  # file not found
     
          gecho "$line" >> $BADF                    # copy link to bad list
          gecho "FILE NOT FOUND"                    # print error message on screen
          gecho
     
          sed -i '1 d' $INF; rm `ls | grep sed`     # remove processed line from input file,
          retry=1                                   # ... reset retry counter
          first="YES"                               #    ... set first pass flag
          continue;                                 #       ... and download next file 
        fi
     
        # check for "address busy" error
     
        if [ "$(echo "$message" | egrep "You need RapidPro to download")" != "" ]; then  # address busy
          timer 300 "Address busy; waiting 300 sec -" "before next try"
        fi
     
        # check for number (should be the wait time in seconds)
     
        waittime=$(echo "$message" | egrep -o "[0-9]* " | tr -d '\n' | tr -d ' ')
     
        if [ "$waittime" != "" ]; then              # number found, process it
     
          if [ $waittime -gt 12 ]; then             # run timer
            CD=`expr $waittime / 4`                 # ...with the 3/4 of wait time value
            CD=`expr $waittime - $CD`
            timer $CD "Waiting $CD of $waittime sec -" "to retry"
            continue;                               # ... and try again
          fi
     
          if [ $waittime -gt 2 ]; then              # run timer with 1 sec wait time
            timer 1 "Waiting 1 of $waittime sec -" "to retry"
            continue;                               # ... and try again
          fi
     
          # waittime is less than or equal to 2 sec here
     
          gecho -n ">"
          continue;                                 # no wait, try again immediately
     
        else                                        # unexpected error message
     
          if [ $retry -lt 10 ]; then                # retry
     
            timer 30 "Address busy; waiting 30 sec -" "before next try"
            gecho -n "?"                            # print '?' on screen
            retry=`expr $retry '+' 1`               # increase retry counter
            continue;                               # try again
     
          else                                      # too many retries, skip to next item
     
            gecho "$line" >> $BADF                  # copy link to bad list
            gecho
     
            sed -i '1 d' $INF; rm `ls | grep sed`   # remove processed link from input file,
            retry=1                                 # reset retry counter,
            first="YES"                             # set first pass flag ...
            continue;                               # ... and download next file 
     
          fi 
     
        fi                                         ## end of wait time processing
      fi                                           ## end of ERROR message processing
     
      if [ "$(echo "$message" | egrep "DL:")" != "" ]; then  # ticket data processing
        gecho "got ticket"
     
        A1=${message#*:}                            # remove initial "DL:"
        A2=${A1%,*}                                 # extract
        A3=${A2%,*}
        A4=${A2#*,}
        HOST=${A3%,*}                               # hostname,
        AUTH=${A4%,*}                               # ... dlauth
        CD=${A4#*,}                                 #     ... and countdown
     
        URL="http://${HOST}/cgi-bin/rsapi.cgi?sub=download_v1&dlauth=${AUTH}&bin=1&fileid=${FID}&filename=${FNAME}"
     
        timer $CD "Waiting $CD sec -" "to download" # wait for time specified by countdown
        wget -O downloaded/${FNAME} $URL            # download file
     
        gecho "${line}" >> $DONEF                   # copy link to done list
     
        sed -i '1 d' $INF; rm `ls | grep sed`       # remove processed link from input file,
        retry=1                                     # reset retry counter,
        first="YES"                                 # set first pass flag
     
      fi                                           ## end of ticket message processing
    done                                           ## end of while loop
     
    gecho "press ENTER to exit"                     # wait for ENTER before closing terminal window
    read
  20. George Notaras (post author)

    Hi Antony. Thanks for contributing your work.

    PS: Your comment had stuck in a wordpress unapproved comment queue. Sorry for taking me several days to notice it.

  21. Me, David

    "Hi David. I intend to make a small utility out of this python snippet, but, unfortunately, I haven’t found the time to do so yet."

    George, any progress of feeding the python script text files containing a set of URLs ?

  22. DaveQB
    #!/bin/bash
     
    INPUT="$1"
     
    if [ "$#" -eq 1 ]
    then
            if [ -f "$1" ]
            then
                    F=" -i "
            else
                    F=""
            fi
                    wget -c --limit-rate=280k --auth-no-challenge --user=<USERNAME> --password=<PASSWORD> $F "${INPUT}"
    else
            printf "Usage: $0 <url|file>\n\n"
            exit 1
    fi
    exit 0

    This has been working for me for a few years now. Needs a relatively new version of wget, but not that new.

  23. bma

    @ “Me, David”:

    while read URL; do rsapiget "$URL"; done < urls.txt
  24. tbfvrs

    Hi

    I wonder how to download from a list of files using wget… ?

    wget --no-cookies --header="Cookie: enc=`cat .rapidshare_cookie`" -i List.txt

    Is the above phrase ok ?
    Anyone can help me ?

  25. files search engine

    Such good documentation. I really needed it. I was wondering how to get a file’s size on the Rapidshare host, and now I know.

  26. sarbjit

    I have changed the subroutine to "getaccountdetails" in place of "download_v1". It gives the following error:

    conn = urllib.urlopen('%s?%s' % (api_url, params_string))
    File "C:\Python27\Lib\urllib.py", line 84, in urlopen
    return opener.open(url)
    File "C:\Python27\Lib\urllib.py", line 205, in open
    return getattr(self, name)(url)
    File "C:\Python27\Lib\urllib.py", line 342, in open_http
    h.endheaders(data)
    File "C:\Python27\Lib\httplib.py", line 951, in endheaders
    self._send_output(message_body)
    File "C:\Python27\Lib\httplib.py", line 811, in _send_output
    self.send(msg)
    File "C:\Python27\Lib\httplib.py", line 773, in send
    self.connect()
    File "C:\Python27\Lib\httplib.py", line 754, in connect
    self.timeout, self.source_address)
    File "C:\Python27\Lib\socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
    IOError: [Errno socket error] [Errno 11004] getaddrinfo failed

    please help

  27. Starous

    @ anthony

    Hi,

    with your script I get this error msg. What is wrong?
    I\m using it on Raspberry Pi with Rasbian.


    File "rsapiget_free.py", line 33
    TIME=${1:-960}
    ^
    SyntaxError: invalid syntax
