Check for overflow during the chunk trailer by removing an unnecessary check in the PARSING_HEADER macro. This forces the parser to abort if the chunk trailer contains more than HTTP_MAX_HEADER_SIZE bytes of data.
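For illustration, a minimal sketch of the kind of header-size guard this macro controls; HTTP_MAX_HEADER_SIZE and PARSING_HEADER are real http_parser names, but the state list and macro body below are assumptions, not the upstream source:

    #include <stdint.h>

    #define HTTP_MAX_HEADER_SIZE (80 * 1024)

    enum state {
      s_header_field,
      s_header_value,
      s_chunk_trailer,          /* hypothetical stand-in for trailer parsing */
      s_headers_almost_done,
      s_body
    };

    /* Any state at or before s_headers_almost_done counts as header parsing,
     * so chunk-trailer bytes are charged against the same limit as headers. */
    #define PARSING_HEADER(s) ((s) <= s_headers_almost_done)

    /* Called once per consumed byte; a non-zero return means the headers (or
     * the chunk trailer) exceeded HTTP_MAX_HEADER_SIZE and the parse aborts. */
    static int header_size_exceeded(uint32_t *nread, enum state s) {
      return PARSING_HEADER(s) && ++*nread > HTTP_MAX_HEADER_SIZE;
    }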
Without this change, it is possible to trigger an assertion failure by
continuing to call http_parser_execute after it has returned an error.
Specifically, the parser could be called with parser->state ==
s_chunk_size_almost_done and parser->flags & F_CHUNKED set. F_CHUNKED
could then be cleared and an error hit; in that case, the parser would
return with F_CHUNKED clear but parser->state still equal to
s_chunk_size_almost_done, causing an assertion failure on the next call.
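For context, a hedged sketch of the calling pattern involved, assuming the familiar four-argument http_parser_execute signature and the convention that a short return value signals an error (details vary between versions):

    #include <stddef.h>
    #include "http_parser.h"   /* assumed header name */

    /* Feed one buffer to the parser.  Returns 0 on success, -1 on error.
     * The key point: once http_parser_execute reports an error, this parser
     * must not be handed more data -- continuing to call it is what exposed
     * the stale s_chunk_size_almost_done state described above. */
    static int feed(http_parser *parser, http_parser_settings *settings,
                    const char *buf, size_t len) {
      size_t nparsed = http_parser_execute(parser, settings, buf, len);
      return nparsed == len ? 0 : -1;
    }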
There are alternative solutions possible, including simply saving all of
the fields (state included) on error.
I didn't add a test case because this is a bit annoying to test, but I
can add one if necessary.
acceptable_header[x] is always assigned to a variable of type char, so
the 'unsigned' is unnecessary.
The other arrays can be of type int8_t/uint8_t to save space.
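A small illustration of the space saving; the table name and contents below are made up for the example, not the real http_parser tables:

    #include <stdint.h>
    #include <stdio.h>

    /* Two lookup tables holding the same small values: widening the element
     * type to int quadruples the footprint even though every entry fits in
     * a signed byte. */
    static const int    hex_value_int[256] = { ['0'] = 0, ['9'] = 9, ['a'] = 10, ['f'] = 15 };
    static const int8_t hex_value_i8[256]  = { ['0'] = 0, ['9'] = 9, ['a'] = 10, ['f'] = 15 };

    int main(void) {
      printf("int table: %zu bytes, int8_t table: %zu bytes\n",
             sizeof(hex_value_int), sizeof(hex_value_i8));   /* 1024 vs. 256 */
      return 0;
    }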
Yay valgrind testing
I don't believe that this actually mattered at all, because state was
initialized correctly, and flags would be set to 0 almost immediately
anyway.
This matters because char is signed by default on x86, so bytes with
values above 127 produce a negative index into lowcase and could
theoretically have survived the pass through it (assuming there was some
non-zero data stored just before the lowcase array); see the sketch below.
This also fixes test failures from the previous commit.
It also adds support for the LOCK method, which was previously
missing.
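A minimal sketch of the signed-char pitfall just described, assuming a lowcase-style table that maps acceptable bytes to their lowercase form and everything else to 0 (contents heavily abbreviated):

    #include <stdio.h>

    /* Illustrative stand-in for the real lowcase table. */
    static const char lowcase[256] = { ['A'] = 'a', ['B'] = 'b', ['Z'] = 'z',
                                       ['a'] = 'a', ['b'] = 'b', ['z'] = 'z' };

    int main(void) {
      char ch = (char)0xC3;                /* a byte above 127, e.g. a UTF-8 lead byte */

      /* BAD: with char signed (as on x86), the index is -61 and reads whatever
       * happens to sit in memory before the table, so the byte may appear
       * acceptable:
       *     char c = lowcase[ch];
       */
      char c = lowcase[(unsigned char)ch]; /* OK: index 195, entry is 0, byte rejected */
      printf("%d\n", c);                   /* prints 0 */
      return 0;
    }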
This brings the size of http_parser from 44 bytes to 32 bytes. It
also makes the code substantially shorter, at a slight cost in
craziness.
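A hedged sketch of the kind of packing that produces this saving; the field names mirror http_parser's, but the widths and layout below are assumptions, not the actual definition:

    #include <stddef.h>
    #include <stdint.h>

    /* Storing enum-valued bookkeeping in single bytes instead of full ints
     * is what shrinks the struct. */
    struct parser_sketch {
      unsigned char type;           /* request or response                       */
      unsigned char flags;          /* F_CHUNKED, F_TRAILING, ...                */
      unsigned char state;          /* current parser state                      */
      unsigned char header_state;   /* sub-state while matching header names     */
      uint32_t      nread;          /* bytes counted toward HTTP_MAX_HEADER_SIZE */
      size_t        content_length; /* bytes remaining in the current body       */
      void         *data;           /* opaque pointer for the application        */
      /* ... HTTP version, method, status code, upgrade flag, ...                */
    };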
Currently this test fails because short method strings do not cause
failures even when they are unknown methods. However, long unknown
method strings do cause errors.
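A hedged sketch of what such a test could look like; the request string, helper name, and error convention are illustrative:

    #include <assert.h>
    #include <string.h>
    #include "http_parser.h"   /* assumed header name */

    /* An unknown method that is as short as a real one ("QWER" vs. "POST")
     * should be rejected just like a long unknown method string. */
    static void test_short_unknown_method(void) {
      http_parser parser;
      http_parser_settings settings;
      memset(&settings, 0, sizeof(settings));            /* no callbacks */
      http_parser_init(&parser, HTTP_REQUEST);

      const char *req = "QWER / HTTP/1.1\r\n\r\n";
      size_t len = strlen(req);
      size_t nparsed = http_parser_execute(&parser, &settings, req, len);
      assert(nparsed != len);   /* expect an error, not a fully parsed request */
    }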
This saves space in the structure (it is now 28 bytes on x86) and makes
the handling of content_length more consistent between chunked and
non-chunked encoding.
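A hedged sketch of the unified bookkeeping this allows: a single content_length counter tracks whatever remains of the current unit of body, whether that unit is a fixed-length body or one chunk (illustrative, not the actual http_parser.c code):

    #include <stddef.h>
    #include <stdint.h>

    /* Consume up to `available` body bytes.  The same code path works whether
     * content_length was set from a Content-Length header or from the size
     * line of the current chunk; when it reaches 0, the surrounding state
     * machine decides whether the body is finished or another chunk follows. */
    static size_t consume_body(uint64_t *content_length, size_t available) {
      size_t to_read = (*content_length < available) ? (size_t)*content_length
                                                     : available;
      *content_length -= to_read;
      return to_read;
    }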
This fixes a possible issue where a very large body (one that involves
more than 80*1024 calls to http_parser_execute) would cause the next
request parsed with that parser to return an error, because the parser
believes it has hit an overflow condition.
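A hedged sketch of one way to avoid the false positive: never charge body or chunk-data bytes against the header limit, and clear the counter when a new message starts (illustrative only; the actual fix may differ):

    #include <stdint.h>

    #define HTTP_MAX_HEADER_SIZE (80 * 1024)

    /* Clear the header byte counter at message start so a previous, very
     * large body can never make the next request look like an overflow. */
    static void start_new_message(uint32_t *nread) {
      *nread = 0;
    }

    /* Only bytes consumed while parsing headers are counted. */
    static int charge_header_byte(uint32_t *nread, int parsing_header) {
      if (!parsing_header)
        return 0;                                        /* body data: not counted */
      return (++*nread > HTTP_MAX_HEADER_SIZE) ? -1 : 0; /* -1: abort the parse */
    }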