Scrapy uses Request and Response objects for crawling web sites.
Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.
Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.
A Request object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, thus generating a Response.
A dict that contains arbitrary metadata for this request. This dict is empty for new Requests, and is usually populated by different Scrapy components (extensions, middlewares, etc). So the data contained in this dict depends on the extensions you have enabled.
This dict is shallow copied when the request is cloned using the copy() or replace() methods.
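For illustration, here is a rough sketch of how that shallow copy behaves (the meta keys used here are made up):

request = Request("http://www.example.com", meta={'depth': 1, 'trail': []})
request2 = request.copy()

request2.meta['depth'] = 2           # top-level keys are independent copies
request2.meta['trail'].append('x')   # but mutable values are still shared with request.meta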
When you copy a request using the Request.copy() or Request.replace() methods the callback of the request is not copied by default. This is because of legacy reasons along with limitations in the underlying network library, which doesn’t allow sharing Twisted deferreds.
For example:
request = Request("http://www.example.com", callback=myfunc)
request2 = request.copy() # doesn't copy the callback
request3 = request.replace(callback=request.callback)
In the above example, request2 is a copy of request but it has no callback, while request3 is a copy of request and also contains the callback.
The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the Response object downloaded as its first argument.
Example:
def parse_page1(self, response):
    request = Request("http://www.example.com/some_page.html",
                      callback=self.parse_page2)
    return request

def parse_page2(self, response):
    # this would log http://www.example.com/some_page.html
    self.log("Visited %s" % response.url)
In some cases you may be interested in passing arguments to those callback functions so you can receive those arguments later, when the response is downloaded. There are two ways for doing this:
- using a lambda function (or any other function/callable)
- using the Request.meta attribute.
Here’s an example of logging the referer URL of each page using each mechanism. Keep in mind, however, that the referer URL can be accessed more easily via response.request.url.
Using lambda function:
def parse_page1(self, response):
    myarg = response.url
    request = Request("http://www.example.com/some_page.html",
                      callback=lambda r: self.parse_page2(r, myarg))
    return request

def parse_page2(self, response, referer_url):
    self.log("Visited page %s from %s" % (response.url, referer_url))
Using Request.meta:
def parse_page1(self, response):
    request = Request("http://www.example.com/some_page.html",
                      callback=self.parse_page2)
    request.meta['referer_url'] = response.url
    return request

def parse_page2(self, response):
    referer_url = response.request.meta['referer_url']
    self.log("Visited page %s from %s" % (response.url, referer_url))
Here is the list of built-in Request subclasses. You can also subclass the Request class to implement your own custom functionality.
The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses the ClientForm library (bundled with Scrapy) to pre-populate form fields with form data from Response objects.
The FormRequest class adds a new argument to the constructor. The remaining arguments are the same as for the Request class and are not documented here.
Parameter: formdata (dict or iterable of tuples) – is a dictionary (or iterable of (key, value) tuples) containing HTML Form data which will be url-encoded and assigned to the body of the request.
The FormRequest objects support the following class method in addition to the standard Request methods:
Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see Using FormRequest.from_response() to simulate a user login.
Keep in mind that this method is implemented using ClientForm whose policy is to automatically simulate a click, by default, on any form control that looks clickable, like an <input type="submit">. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using JavaScript, the default from_response() (and ClientForm) behaviour may not be the most appropriate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it) you can use the clickdata argument.
The other parameters of this class method are passed directly to the FormRequest constructor.
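For instance, a minimal sketch of disabling the click (the form field name and the parse_results callback below are hypothetical):

def parse(self, response):
    # the page's form is normally filled and submitted by JavaScript, so we
    # avoid simulating a click on its submit control
    return [FormRequest.from_response(response,
                formdata={'query': 'scrapy'},
                dont_click=True,
                callback=self.parse_results)]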
If you want to simulate an HTML form POST in your spider, and send a couple of key-value fields, you could return a FormRequest object (from your spider) like this:
return [FormRequest(url="http://www.example.com/post/action",
                    formdata={'name': 'John Doe', 'age': '27'},
                    callback=self.after_post)]
It is usual for web sites to provide pre-populated form fields through <input type="hidden"> elements, such as session related data or authentication tokens (for login pages). When scraping, you’ll want these fields to be automatically pre-populated and only override a couple of them, such as the user name and password. You can use the FormRequest.from_response() method for this job. Here’s an example spider which uses it:
from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy import log

class LoginSpider(BaseSpider):
    domain_name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return [FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return

        # continue scraping with authenticated session...
A Response object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.
The Request object that generated this response. This attribute is assigned in the Scrapy engine, after the response and request have passed through all Downloader Middlewares. In particular, this means that:
Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.
TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.
TextResponse objects support a new constructor argument, in addition to the base Response arguments. The remaining functionality is the same as for the Response class and is not documented here.
Parameter: encoding (string) – is a string which contains the encoding to use for this response. If you create a TextResponse object with a unicode body it will be encoded using this encoding (remember the body attribute is always a string). If encoding is None (default value), the encoding will be looked up in the response headers and body instead.
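As a minimal sketch of that behaviour (the URL and body here are arbitrary), a unicode body is encoded with the given encoding before being stored:

from scrapy.http import TextResponse

response = TextResponse(url="http://www.example.com/",
                        body=u"caf\u00e9",
                        encoding='utf-8')
# response.body is now the utf-8 encoded byte string 'caf\xc3\xa9'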
TextResponse objects support the following attributes in addition to the standard Response ones:
A string with the encoding of this response. The encoding is resolved in the following order:
TextResponse objects support the following methods in addition to the standard Response ones:
Returns the body of the response as unicode. This is equivalent to:
response.body.decode(response.encoding)
But not equivalent to:
unicode(response.body)
Since, in the latter case, you would be using your system default encoding (typically ascii) to convert the body to unicode, instead of the response encoding.
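For example, for a response whose encoding resolved to 'utf-8' and whose body contains the text "café", the two conversions differ as follows:

response.body.decode(response.encoding)   # u'caf\xe9' -- decoded with the response encoding
unicode(response.body)                     # raises UnicodeDecodeError under the default ascii codec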