| Commit message | Author | Age | Files | Lines |

- Add tests for UTF-16 decoding and failures
- Add getutf8.pl to assist with UTF-16 decode testing
- Re-add test_decode_cycle() which was accidentally removed earlier
- Rename bytestring.dat to octets-escaped.dat
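
For context on what those tests cover: JSON's \uXXXX escapes carry UTF-16 code units, so codepoints outside the BMP arrive as surrogate pairs that must be combined and then re-encoded as UTF-8. A minimal sketch of those two steps (illustrative code, not the module's implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Combine a UTF-16 surrogate pair (e.g. from "\uD83D\uDE00") into a
 * Unicode codepoint.  Returns -1 for an invalid pair. */
long surrogate_to_codepoint(unsigned hi, unsigned lo)
{
    if (hi < 0xD800 || hi > 0xDBFF || lo < 0xDC00 || lo > 0xDFFF)
        return -1;
    return 0x10000 + (((long)(hi - 0xD800) << 10) | (lo - 0xDC00));
}

/* Encode a codepoint as UTF-8; returns the number of bytes written. */
size_t utf8_encode(long cp, unsigned char buf[4])
{
    if (cp < 0x80) {
        buf[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {
        buf[0] = (unsigned char)(0xC0 | (cp >> 6));
        buf[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    }
    if (cp < 0x10000) {
        buf[0] = (unsigned char)(0xE0 | (cp >> 12));
        buf[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        buf[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    }
    buf[0] = (unsigned char)(0xF0 | (cp >> 18));
    buf[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
    buf[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
    buf[3] = (unsigned char)(0x80 | (cp & 0x3F));
    return 4;
}
```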
- Rename API for consistency:
  - sparse_ratio() -> encode_sparse_array()
  - max_depth() -> encode_max_depth()
  - invalid_numbers() -> refuse_invalid_numbers()
- Adjust sparse array handling:
  - Add "safe" option to allow small sparse arrays regardless of the ratio.
  - Generate an error by default instead of converting an array into an
    object (POLA: principle of least astonishment).
- Update invalid number handling:
  - Allow decoding invalid numbers by default, since many JSON
    implementations output NaN/Infinity.
  - Throw an error by default when attempting to encode NaN/Infinity,
    since the RFC explicitly states they are not permitted.
  - Support specifying invalid number configuration separately for
    encode/decode.
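
On the encode side, refusing invalid numbers reduces to a finiteness check before the value is formatted. A sketch using a hypothetical helper name, not the module's actual function:

```c
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

/* RFC 4627 defines no representation for NaN or Infinity, so an
 * encoder that refuses invalid numbers must check the value before
 * printing it.  Returns 0 and fills buf on success, -1 for a
 * non-finite value (the caller raises the Lua error). */
int encode_number(double num, char *buf, size_t len)
{
    if (isinf(num) || isnan(num))
        return -1;
    snprintf(buf, len, "%.14g", num);
    return 0;
}
```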
Escaping the forward slash can be useful when including JSON output
in HTML (e.g., embedded in SCRIPT tags).
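
Concretely, with "/" escaped the byte sequence "</script>" can never appear inside an encoded string, so the output cannot prematurely close a surrounding SCRIPT element. An illustrative helper, not part of the module's API:

```c
#include <assert.h>
#include <string.h>

/* Copy `in` to `out`, escaping each "/" as "\/".  `out` must have
 * room for the worst case (2x the input length, plus the NUL). */
void escape_forward_slash(const char *in, char *out)
{
    while (*in) {
        if (*in == '/')
            *out++ = '\\';
        *out++ = *in++;
    }
    *out = '\0';
}
```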
- Add __gc metatable method to clean up json_config_t userdata.
The preallocated buffer removes the need for buffer length checks while
processing strings and results in a 10-15% speedup.
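
The idea in outline, with illustrative names rather than the real strbuf API: a JSON string of n bytes escapes to at most 6n + 2 output bytes ("\u00XX" escapes plus the two quotes), so one worst-case reservation up front removes every per-character bounds check from the hot loop:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal growable buffer sketch (error handling elided). */
typedef struct { char *buf; size_t len, size; } sbuf_t;

void sbuf_ensure(sbuf_t *s, size_t extra)
{
    if (s->len + extra > s->size) {
        s->size = s->len + extra;
        s->buf = realloc(s->buf, s->size);
    }
}

/* Caller guarantees capacity; no length check here. */
void sbuf_append_char_unsafe(sbuf_t *s, char c)
{
    s->buf[s->len++] = c;
}

void append_quoted(sbuf_t *s, const char *str)
{
    sbuf_ensure(s, 6 * strlen(str) + 2);   /* worst case, checked once */
    sbuf_append_char_unsafe(s, '"');
    for (; *str; str++)
        sbuf_append_char_unsafe(s, *str);  /* escaping itself elided */
    sbuf_append_char_unsafe(s, '"');
}
```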
Add strbuf_reset() to reset string length and hide the string
implementation.
Replace json_escape_char() with a static char2escape[] lookup table.
Escape all unprintable ASCII (0-31, 127) and JSON special characters
(double quote, backslash).
Dynamic creation of the char2escape table has been left commented out
due to an apparent performance hit. The performance loss may be due to
memory/page alignment (unknown).
Rename parsing lookup table from ch2escape to escape2char for consistency.
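
The shape of such a table, sketched with only a handful of entries (the real one covers all of 0-31 and 127 plus the double quote and backslash):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* One entry per byte value; NULL means the byte needs no escaping.
 * Illustrative subset only. */
static const char *char2escape[256] = {
    ['"']  = "\\\"",
    ['\\'] = "\\\\",
    ['\n'] = "\\n",
    ['\t'] = "\\t",
    [0x1F] = "\\u001f",
    [127]  = "\\u007f",
};

/* A single indexed load replaces a chain of comparisons per byte. */
const char *escape_for(unsigned char c)
{
    return char2escape[c];   /* NULL => emit the byte unchanged */
}
```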
- Use strbuf_append_mem() for small static strings (~2% speedup).
- Use &json_config_key for storing registry data. It's more unique
and faster than a text string.
- Use strbuf_append_char_unsafe() for string quotes (~4% speedup).
- Use strbuf_append_number() instead of strbuf_append_fmt(). It is
much simpler and avoids the potential for 2 expensive calls to
vsnprintf().
- Make encoding buffer persistent across calls to avoid extra
malloc/free (~4% speedup on example2.json).
These performance improvements can be much more pronounced depending
on the data, e.g. small strings, numbers, booleans, etc.
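
For the number case, the point is that a printf-style append may have to call vsnprintf() twice (once to size the output, once to write it), while formatting directly into a reserved worst-case slot needs a single call. A sketch with an assumed format string and buffer size:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NUMBER_MAX 32   /* generous worst case for a %.14g double */

/* Format a number into space the caller has already reserved;
 * returns the number of characters written.  Illustrative, not the
 * real strbuf_append_number(). */
size_t append_number(char *dest, double num)
{
    return (size_t)snprintf(dest, NUMBER_MAX, "%.14g", num);
}
```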
- Add Makefile and RPM spec file
- Add cjson.version variable
- Fix typo and comment
- Change "while" to "for" loop
Change strict_numbers to control whether json.decode will parse an
expanded set of numbers (Hex, Inf, NaN).
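
C's strtod() already accepts the expanded forms, so lenient decoding can lean on it directly, while strict decoding must reject tokens that fall outside the JSON number grammar. A rough sketch; the real parser's validation is more thorough:

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* Parse a number token.  In strict mode, refuse anything that does
 * not begin like a JSON number (so "inf"/"nan" fall out) and refuse
 * hex prefixes.  Returns 0 on success, -1 on rejection. */
int decode_number(const char *tok, int strict, double *out)
{
    char *end;

    if (strict && !(tok[0] == '-' || (tok[0] >= '0' && tok[0] <= '9')))
        return -1;
    if (strict && tok[0] == '0' && (tok[1] == 'x' || tok[1] == 'X'))
        return -1;
    *out = strtod(tok, &end);
    return end == tok ? -1 : 0;
}
```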
Add runtime configuration for generating Inf/NaN encoding errors through
cjson.strict_numbers().
Was: Cannot serialise <location>: <type>
Now: Cannot serialise <type>: <reason>
- Always report the correct index of the token error.
- Use value.string to report what was found instead of just T_ERROR.
- Fix inverted unicode escape error detection.
Allow maximum nesting depth and sparse array ratio to be configured at
runtime via the sparse_ratio() and max_depth() functions.
Throw exceptions when encoding excessively nested structures.
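
The guard itself can be simple: the recursive encode walk threads a depth counter and returns an error past the configured maximum, rather than risking C stack exhaustion on deeply nested input. An illustrative skeleton:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a nested Lua table: each node optionally wraps one
 * child.  Illustrative only. */
typedef struct node { struct node *child; } node_t;

/* Each nesting level bumps `depth`; exceeding `max_depth` aborts the
 * walk with an error instead of recursing further. */
int encode_value(const node_t *v, int depth, int max_depth)
{
    if (depth > max_depth)
        return -1;               /* "too deeply nested" error */
    if (v->child)
        return encode_value(v->child, depth + 1, max_depth);
    return 0;
}
```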
Detect and encode very sparse arrays as objects. This prevents something
like:
    { [1000000] = "nullfest" }
from generating a huge array.
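
One plausible form of the sparseness test (illustrative names; the actual ratio logic may differ): treat an array as excessively sparse when its largest index exceeds the configured ratio times the number of entries it actually holds:

```c
#include <assert.h>

/* With `items` integer keys present and a maximum index of
 * `max_index`, the table above has items = 1, max_index = 1000000,
 * so it trips the check and is encoded as an object instead of a
 * million-entry array.  A ratio of 0 disables the check. */
int is_excessively_sparse(long max_index, long items, long ratio)
{
    return ratio > 0 && max_index > items * ratio;
}
```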
- Convert lua_json_init() into luaopen_cjson() to support dynamic .so
loading.
- Rename "json" to "cjson" to reduce conflicts with other JSON modules.
- Remove unnecessary *_pcall_* API. Lua calls are fast enough,
even through C.
- Encode empty tables as objects
- Add support for decoding all UCS-2 escape codes.