
Trait Methods

Trait \RdKafka\FFI\Methods

The descriptions of librdkafka methods and constants are extracted from the official documentation.

Methods

getFFI()

public static getFFI (  ): \FFI
Returns
\FFI

rd_kafka_version()

public static rd_kafka_version (  ): int|null

Returns the librdkafka version as an integer.

See also
See RD_KAFKA_VERSION for how to parse the integer format.
Use rd_kafka_version_str() to retrieve the version as a string.
Returns
int|null int - Version integer.
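
As described by RD_KAFKA_VERSION, the version integer is hex-encoded as 0xMMmmrrxx (major, minor, revision, pre-release). A small helper (a sketch; the function name is illustrative, not part of the binding) can render the integer returned above as a readable string:

```php
<?php
// Decode librdkafka's hex-encoded version integer, format 0xMMmmrrxx
// (e.g. 0x010902ff => "1.9.2"; the xx byte is 0xff for releases).
function rdKafkaVersionString(int $version): string
{
    $major    = ($version >> 24) & 0xff;
    $minor    = ($version >> 16) & 0xff;
    $revision = ($version >> 8) & 0xff;

    return sprintf('%d.%d.%d', $major, $minor, $revision);
}
```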

rd_kafka_version_str()

public static rd_kafka_version_str (  ): string|null

Returns the librdkafka version as a string.

Returns
string|null const char* - Version string

rd_kafka_get_debug_contexts()

public static rd_kafka_get_debug_contexts (  ): string|null

Retrieve supported debug contexts for use with the "debug" configuration property. (runtime)

Returns
string|null const char* - Comma-separated list of available debugging contexts.

rd_kafka_get_err_descs()

public static rd_kafka_get_err_descs ( 
    \FFI\CData|null $errdescs, 
    \FFI\CData|null $cntp
 ): void
Parameters
errdescs \FFI\CData|null const struct rd_kafka_err_desc**
cntp \FFI\CData|null size_t*

rd_kafka_err2str()

public static rd_kafka_err2str ( 
    int $err
 ): string|null

Returns a human readable representation of a kafka error.

Parameters
err int rd_kafka_resp_err_t - Error code to translate
Returns
string|null const char*

rd_kafka_err2name()

public static rd_kafka_err2name ( 
    int $err
 ): string|null

Returns the error code name (enum name).

Parameters
err int rd_kafka_resp_err_t - Error code to translate
Returns
string|null const char*

rd_kafka_last_error()

public static rd_kafka_last_error (  ): int

Returns the last error code generated by a legacy API call in the current thread.

The legacy APIs are the ones using errno to propagate error value, namely:

  • rd_kafka_topic_new()
  • rd_kafka_consume_start()
  • rd_kafka_consume_stop()
  • rd_kafka_consume()
  • rd_kafka_consume_batch()
  • rd_kafka_consume_callback()
  • rd_kafka_consume_queue()
  • rd_kafka_produce()

The main use for this function is to avoid converting system errno values to rd_kafka_resp_err_t codes for legacy APIs.

Remarks
The last error is stored per thread; if multiple rd_kafka_t handles are used in the same application thread the developer needs to make sure rd_kafka_last_error() is called immediately after a failed API call.
errno propagation from librdkafka is not safe on Windows and should not be used; use rd_kafka_last_error() instead.
Returns
int rd_kafka_resp_err_t
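
The pattern above can be sketched as follows. This is an untested illustration that assumes ext-ffi, librdkafka, and a class (here called Library) that uses this trait; $rkt, $partition, $msgflags and $payload are placeholders:

```php
<?php
// Hypothetical sketch: check the thread-local last error immediately
// after a legacy API call signals failure by returning -1.
$ret = Library::rd_kafka_produce($rkt, $partition, $msgflags, $payload, strlen($payload), null, 0, null);
if ($ret === -1) {
    $err = Library::rd_kafka_last_error();
    // Translate to the enum name and a human-readable description.
    printf("produce failed: %s (%s)\n",
        Library::rd_kafka_err2name($err),
        Library::rd_kafka_err2str($err));
}
```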

rd_kafka_errno2err()

public static rd_kafka_errno2err ( 
    int|null $errnox
 ): int

Converts the system errno value errnox to a rd_kafka_resp_err_t error code upon failure from the following functions:

  • rd_kafka_topic_new()
  • rd_kafka_consume_start()
  • rd_kafka_consume_stop()
  • rd_kafka_consume()
  • rd_kafka_consume_batch()
  • rd_kafka_consume_callback()
  • rd_kafka_consume_queue()
  • rd_kafka_produce()
Remarks
A better alternative is to call rd_kafka_last_error() immediately after any of the above functions return -1 or NULL.
Deprecated:
Use rd_kafka_last_error() to retrieve the last error code set by the legacy librdkafka APIs.
See also
rd_kafka_last_error()
Parameters
errnox int|null int - System errno value to convert
Returns
int rd_kafka_resp_err_t - Appropriate error code for errnox

rd_kafka_errno()

public static rd_kafka_errno (  ): int|null

Returns the thread-local system errno.

On most platforms this is the same as errno but in case of different runtimes between library and application (e.g., Windows static DLLs) this provides a means for exposing the errno librdkafka uses.

Remarks
The value is local to the current calling thread.
Deprecated:
Use rd_kafka_last_error() to retrieve the last error code set by the legacy librdkafka APIs.
Returns
int|null int

rd_kafka_fatal_error()

public static rd_kafka_fatal_error ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $errstr, 
    int|null $errstr_size
 ): int

Returns the first fatal error set on this client instance, or RD_KAFKA_RESP_ERR_NO_ERROR if no fatal error has occurred.

This function is to be used with the Idempotent Producer and error_cb to detect fatal errors.

Generally all errors raised by error_cb are to be considered informational and temporary; the client will try to recover from all errors in a graceful fashion (by retrying, etc.).

However, some errors should logically be considered fatal to retain consistency; in particular a set of errors that may occur when using the Idempotent Producer and the in-order or exactly-once producer guarantees can't be satisfied.

Parameters
rk \FFI\CData|null rd_kafka_t* - Client instance.
errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location that must be of at least errstr_size bytes. The errstr is only written to if there is a fatal error.
errstr_size int|null size_t - Writable size in errstr.
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR if no fatal error has been raised, else any other error code.

rd_kafka_test_fatal_error()

public static rd_kafka_test_fatal_error ( 
    \FFI\CData|null $rk, 
    int $err, 
    string|null $reason
 ): int

Trigger a fatal error for testing purposes.

Since there is no practical way to trigger real fatal errors in the idempotent producer, this method allows an application to trigger fabricated fatal errors in tests to check its error handling code.

Parameters
rk \FFI\CData|null rd_kafka_t* - Client instance.
err int rd_kafka_resp_err_t - The underlying error code.
reason string|null const char* - A human readable error reason. Will be prefixed with “test_fatal_error: ” to differentiate from real fatal errors.
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR if a fatal error was triggered, or RD_KAFKA_RESP_ERR__PREV_IN_PROGRESS if a previous fatal error has already been triggered.

rd_kafka_topic_partition_destroy()

public static rd_kafka_topic_partition_destroy ( 
    \FFI\CData|null $rktpar
 ): void

Destroy a rd_kafka_topic_partition_t.

Remarks
This must not be called for elements in a topic partition list.
Parameters
rktpar \FFI\CData|null rd_kafka_topic_partition_t*

rd_kafka_topic_partition_list_new()

public static rd_kafka_topic_partition_list_new ( 
    int|null $size
 ): \FFI\CData|null

Create a new list/vector Topic+Partition container.

Remarks
Use rd_kafka_topic_partition_list_destroy() to free all resources in use by a list and the list itself.
See also
rd_kafka_topic_partition_list_add()
Parameters
size int|null int - Initial allocated size used when the expected number of elements is known or can be estimated. Avoids reallocation and possibly relocation of the elems array.
Returns
\FFI\CData|null rd_kafka_topic_partition_list_t* - A newly allocated Topic+Partition list.

rd_kafka_topic_partition_list_destroy()

public static rd_kafka_topic_partition_list_destroy ( 
    \FFI\CData|null $rkparlist
 ): void
Parameters
rkparlist \FFI\CData|null rd_kafka_topic_partition_list_t*

rd_kafka_topic_partition_list_add()

public static rd_kafka_topic_partition_list_add ( 
    \FFI\CData|null $rktparlist, 
    string|null $topic, 
    int|null $partition
 ): \FFI\CData|null

Add topic+partition to list.

Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t* - List to extend
topic string|null const char* - Topic name (copied)
partition int|null int32_t - Partition id
Returns
\FFI\CData|null rd_kafka_topic_partition_t* - The object which can be used to fill in additional fields.

rd_kafka_topic_partition_list_add_range()

public static rd_kafka_topic_partition_list_add_range ( 
    \FFI\CData|null $rktparlist, 
    string|null $topic, 
    int|null $start, 
    int|null $stop
 ): void

Add range of partitions from start to stop inclusive.

Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t* - List to extend
topic string|null const char* - Topic name (copied)
start int|null int32_t - Start partition of range
stop int|null int32_t - Last partition of range (inclusive)

rd_kafka_topic_partition_list_del()

public static rd_kafka_topic_partition_list_del ( 
    \FFI\CData|null $rktparlist, 
    string|null $topic, 
    int|null $partition
 ): int|null

Delete partition from list.

Remarks
Any held indices to elems[] are unusable after this call returns 1.
Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t* - List to modify
topic string|null const char* - Topic name to match
partition int|null int32_t - Partition to match
Returns
int|null int - 1 if partition was found (and removed), else 0.

rd_kafka_topic_partition_list_del_by_idx()

public static rd_kafka_topic_partition_list_del_by_idx ( 
    \FFI\CData|null $rktparlist, 
    int|null $idx
 ): int|null

Delete partition from list by elems[] index.

See also
rd_kafka_topic_partition_list_del()
Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t*
idx int|null int
Returns
int|null int - 1 if partition was found (and removed), else 0.

rd_kafka_topic_partition_list_copy()

public static rd_kafka_topic_partition_list_copy ( 
    \FFI\CData|null $src
 ): \FFI\CData|null

Make a copy of an existing list.

Parameters
src \FFI\CData|null const rd_kafka_topic_partition_list_t* - The existing list to copy.
Returns
\FFI\CData|null rd_kafka_topic_partition_list_t* - A new list fully populated to be identical to src

rd_kafka_topic_partition_list_set_offset()

public static rd_kafka_topic_partition_list_set_offset ( 
    \FFI\CData|null $rktparlist, 
    string|null $topic, 
    int|null $partition, 
    int|null $offset
 ): int

Set the offset for the given topic and partition in the list.

Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t*
topic string|null const char*
partition int|null int32_t
offset int|null int64_t
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION if partition was not found in the list.
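
Taken together, the list functions above are typically used like this. A sketch, assuming ext-ffi, librdkafka, and a class (here called Library) that uses this trait:

```php
<?php
// Build a Topic+Partition list, set a start offset, then free it.
$list = Library::rd_kafka_topic_partition_list_new(2);
Library::rd_kafka_topic_partition_list_add($list, 'mytopic', 0);
Library::rd_kafka_topic_partition_list_add($list, 'mytopic', 1);

// Returns RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION if the element is missing.
$err = Library::rd_kafka_topic_partition_list_set_offset($list, 'mytopic', 0, 42);

// ... pass $list to an assign/commit API here ...

Library::rd_kafka_topic_partition_list_destroy($list);
```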

rd_kafka_topic_partition_list_find()

public static rd_kafka_topic_partition_list_find ( 
    \FFI\CData|null $rktparlist, 
    string|null $topic, 
    int|null $partition
 ): \FFI\CData|null

Find element by topic and partition.

Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t*
topic string|null const char*
partition int|null int32_t
Returns
\FFI\CData|null rd_kafka_topic_partition_t* - a pointer to the first matching element, or NULL if not found.

rd_kafka_topic_partition_list_sort()

public static rd_kafka_topic_partition_list_sort ( 
    \FFI\CData|null $rktparlist, 
    \FFI\CData|\Closure $cmp, 
    \FFI\CData|object|string|null $opaque
 ): void

Sort list using comparator cmp.

If cmp is NULL the default comparator will be used that sorts by ascending topic name and partition.

The opaque value is provided as the cmp_opaque argument to cmp.

Parameters
rktparlist \FFI\CData|null rd_kafka_topic_partition_list_t*
cmp \FFI\CData|\Closure int (*)(const void *, const void *, void *)
opaque \FFI\CData|object|string|null void*

rd_kafka_headers_new()

public static rd_kafka_headers_new ( 
    int|null $initial_count
 ): \FFI\CData|null

Create a new headers list.

Parameters
initial_count int|null size_t - Preallocate space for this number of headers. Any number of headers may be added, updated and removed regardless of the initial count.
Returns
\FFI\CData|null rd_kafka_headers_t*

rd_kafka_headers_destroy()

public static rd_kafka_headers_destroy ( 
    \FFI\CData|null $hdrs
 ): void
Parameters
hdrs \FFI\CData|null rd_kafka_headers_t*

rd_kafka_headers_copy()

public static rd_kafka_headers_copy ( 
    \FFI\CData|null $src
 ): \FFI\CData|null
Parameters
src \FFI\CData|null const rd_kafka_headers_t*
Returns
\FFI\CData|null rd_kafka_headers_t*

rd_kafka_header_add()

public static rd_kafka_header_add ( 
    \FFI\CData|null $hdrs, 
    string|null $name, 
    int|null $name_size, 
    \FFI\CData|object|string|null $value, 
    int|null $value_size
 ): int

Add a header with the given name and value (copied); the size does not include the null-terminator.

Parameters
hdrs \FFI\CData|null rd_kafka_headers_t* - Headers list.
name string|null const char* - Header name.
name_size int|null ssize_t - Header name size (not including the null-terminator). If -1 the name length is automatically acquired using strlen().
value \FFI\CData|object|string|null const void* - Pointer to header value, or NULL (set size to 0 or -1).
value_size int|null ssize_t - Size of header value. If -1 the value is assumed to be a null-terminated string and the length is automatically acquired using strlen().
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR__READ_ONLY if the headers are read-only, else RD_KAFKA_RESP_ERR_NO_ERROR.

rd_kafka_header_remove()

public static rd_kafka_header_remove ( 
    \FFI\CData|null $hdrs, 
    string|null $name
 ): int

Remove all headers for the given key (if any).

Parameters
hdrs \FFI\CData|null rd_kafka_headers_t*
name string|null const char*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR__READ_ONLY if the headers are read-only, RD_KAFKA_RESP_ERR__NOENT if no matching headers were found, else RD_KAFKA_RESP_ERR_NO_ERROR if headers were removed.

rd_kafka_header_get_last()

public static rd_kafka_header_get_last ( 
    \FFI\CData|null $hdrs, 
    string|null $name, 
    \FFI\CData|object|string|null $valuep, 
    \FFI\CData|null $sizep
 ): int

Find last header in list hdrs matching name.

Remarks
The returned pointer in valuep includes a trailing null-terminator that is not accounted for in sizep.
The returned pointer is only valid as long as the headers list and the header item is valid.
Parameters
hdrs \FFI\CData|null const rd_kafka_headers_t* - Headers list.
name string|null const char* - Header to find (last match).
valuep \FFI\CData|object|string|null const void** - (out) Set to a (null-terminated) const pointer to the value (may be NULL).
sizep \FFI\CData|null size_t* - (out) Set to the value’s size (not including null-terminator).
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR if an entry was found, else RD_KAFKA_RESP_ERR__NOENT.
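
Reading a header value back involves FFI out-parameters. A sketch, assuming ext-ffi, librdkafka, and a class (here called Library) that uses this trait; getFFI() (documented above) provides the FFI instance whose type definitions are used for the out-parameters:

```php
<?php
$ffi = Library::getFFI();

$hdrs = Library::rd_kafka_headers_new(4);
Library::rd_kafka_header_add($hdrs, 'trace-id', -1, 'abc123', -1);

// Out-parameters for the value pointer and its size.
$value = $ffi->new('const void *');
$size = $ffi->new('size_t');

$err = Library::rd_kafka_header_get_last($hdrs, 'trace-id', \FFI::addr($value), \FFI::addr($size));
if ($err === 0 /* RD_KAFKA_RESP_ERR_NO_ERROR */) {
    // Copy the bytes into a PHP string; the pointer is only valid
    // while $hdrs is alive.
    $payload = \FFI::string($value, $size->cdata);
}

Library::rd_kafka_headers_destroy($hdrs);
```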

rd_kafka_header_get()

public static rd_kafka_header_get ( 
    \FFI\CData|null $hdrs, 
    int|null $idx, 
    string|null $name, 
    \FFI\CData|object|string|null $valuep, 
    \FFI\CData|null $sizep
 ): int

Iterator for headers matching name.

   Same semantics as rd_kafka_header_get_last()
Parameters
hdrs \FFI\CData|null const rd_kafka_headers_t* - Headers to iterate.
idx int|null size_t - Iterator index, start at 0 and increment by one for each call as long as RD_KAFKA_RESP_ERR_NO_ERROR is returned.
name string|null const char* - Header name to match.
valuep \FFI\CData|object|string|null const void** - (out) Set to a (null-terminated) const pointer to the value (may be NULL).
sizep \FFI\CData|null size_t* - (out) Set to the value’s size (not including null-terminator).
Returns
int rd_kafka_resp_err_t

rd_kafka_header_get_all()

public static rd_kafka_header_get_all ( 
    \FFI\CData|null $hdrs, 
    int|null $idx, 
    \FFI\CData|null $namep, 
    \FFI\CData|object|string|null $valuep, 
    \FFI\CData|null $sizep
 ): int

Iterator for all headers.

   Same semantics as rd_kafka_header_get()
See also
rd_kafka_header_get()
Parameters
hdrs \FFI\CData|null const rd_kafka_headers_t*
idx int|null size_t
namep \FFI\CData|null const char**
valuep \FFI\CData|object|string|null const void**
sizep \FFI\CData|null size_t*
Returns
int rd_kafka_resp_err_t

rd_kafka_message_destroy()

public static rd_kafka_message_destroy ( 
    \FFI\CData|null $rkmessage
 ): void
Parameters
rkmessage \FFI\CData|null rd_kafka_message_t*

rd_kafka_message_timestamp()

public static rd_kafka_message_timestamp ( 
    \FFI\CData|null $rkmessage, 
    \FFI\CData|null $tstype
 ): int|null

Returns the message timestamp for a consumed message.

The timestamp is the number of milliseconds since the epoch (UTC).

tstype (if not NULL) is updated to indicate the type of timestamp.

Remarks
Message timestamps require broker version 0.10.0 or later.
Parameters
rkmessage \FFI\CData|null const rd_kafka_message_t*
tstype \FFI\CData|null rd_kafka_timestamp_type_t*
Returns
int|null int64_t - message timestamp, or -1 if not available.

rd_kafka_message_latency()

public static rd_kafka_message_latency ( 
    \FFI\CData|null $rkmessage
 ): int|null

Returns the latency for a produced message measured from the produce() call.

Parameters
rkmessage \FFI\CData|null const rd_kafka_message_t*
Returns
int|null int64_t - the latency in microseconds, or -1 if not available.

rd_kafka_message_headers()

public static rd_kafka_message_headers ( 
    \FFI\CData|null $rkmessage, 
    \FFI\CData|null $hdrsp
 ): int

Get the message header list.

The returned pointer in *hdrsp is associated with the rkmessage and must not be used after destruction of the message object, or after the header list is replaced with rd_kafka_message_set_headers().

Remarks
Headers require broker version 0.11.0.0 or later.
As an optimization the raw protocol headers are parsed on the first call to this function.
Parameters
rkmessage \FFI\CData|null const rd_kafka_message_t*
hdrsp \FFI\CData|null rd_kafka_headers_t**
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR if headers were returned, RD_KAFKA_RESP_ERR__NOENT if the message has no headers, or another error code if the headers could not be parsed.

rd_kafka_message_detach_headers()

public static rd_kafka_message_detach_headers ( 
    \FFI\CData|null $rkmessage, 
    \FFI\CData|null $hdrsp
 ): int

Get the message header list and detach the list from the message making the application the owner of the headers. The application must eventually destroy the headers using rd_kafka_headers_destroy(). The message's headers will be set to NULL.

Otherwise same semantics as rd_kafka_message_headers()

See also
rd_kafka_message_headers
Parameters
rkmessage \FFI\CData|null rd_kafka_message_t*
hdrsp \FFI\CData|null rd_kafka_headers_t**
Returns
int rd_kafka_resp_err_t

rd_kafka_message_set_headers()

public static rd_kafka_message_set_headers ( 
    \FFI\CData|null $rkmessage, 
    \FFI\CData|null $hdrs
 ): void

Replace the message's current headers with a new list.

Remarks
The existing headers object, if any, will be destroyed.
Parameters
rkmessage \FFI\CData|null rd_kafka_message_t* - The message to set headers.
hdrs \FFI\CData|null rd_kafka_headers_t* - New header list. The message object assumes ownership of the list, the list will be destroyed automatically with the message object. The new headers list may be updated until the message object is passed or returned to librdkafka.

rd_kafka_header_cnt()

public static rd_kafka_header_cnt ( 
    \FFI\CData|null $hdrs
 ): int|null

Returns the number of header key/value pairs.

Parameters
hdrs \FFI\CData|null const rd_kafka_headers_t* - Headers to count
Returns
int|null size_t

rd_kafka_message_status()

public static rd_kafka_message_status ( 
    \FFI\CData|null $rkmessage
 ): int

Returns the message's persistence status in the topic log.

Remarks
The message status is not available in on_acknowledgement interceptors.
Parameters
rkmessage \FFI\CData|null const rd_kafka_message_t*
Returns
int rd_kafka_msg_status_t

rd_kafka_conf_new()

public static rd_kafka_conf_new (  ): \FFI\CData|null

Create configuration object.

When providing your own configuration to the rd_kafka_*_new_*() calls the rd_kafka_conf_t object needs to be created with this function which will set up the defaults. I.e.:

rd_kafka_t *rk;
rd_kafka_conf_t *myconf;
rd_kafka_conf_res_t res;
char errstr[512];

myconf = rd_kafka_conf_new();
res = rd_kafka_conf_set(myconf, "socket.timeout.ms", "600",
                        errstr, sizeof(errstr));
if (res != RD_KAFKA_CONF_OK)
   die("%s\n", errstr);

rk = rd_kafka_new(..., myconf);

Please see CONFIGURATION.md for the default settings or use rd_kafka_conf_properties_show() to provide the information at runtime.

The properties are identical to the Apache Kafka configuration properties whenever possible.

Remarks
A successful call to rd_kafka_new() will assume ownership of the conf object and rd_kafka_conf_destroy() must not be called.
See also
rd_kafka_new(), rd_kafka_conf_set(), rd_kafka_conf_destroy()
Returns
\FFI\CData|null rd_kafka_conf_t* - A new rd_kafka_conf_t object with defaults set.

rd_kafka_conf_destroy()

public static rd_kafka_conf_destroy ( 
    \FFI\CData|null $conf
 ): void
Parameters
conf \FFI\CData|null rd_kafka_conf_t*

rd_kafka_conf_dup()

public static rd_kafka_conf_dup ( 
    \FFI\CData|null $conf
 ): \FFI\CData|null

Creates a copy/duplicate of configuration object conf.

Remarks
Interceptors are NOT copied to the new configuration object.
See also
rd_kafka_interceptor_f_on_conf_dup
Parameters
conf \FFI\CData|null const rd_kafka_conf_t*
Returns
\FFI\CData|null rd_kafka_conf_t*

rd_kafka_conf_dup_filter()

public static rd_kafka_conf_dup_filter ( 
    \FFI\CData|null $conf, 
    int|null $filter_cnt, 
    \FFI\CData|null $filter
 ): \FFI\CData|null
Parameters
conf \FFI\CData|null const rd_kafka_conf_t*
filter_cnt int|null size_t
filter \FFI\CData|null const char**
Returns
\FFI\CData|null rd_kafka_conf_t*

rd_kafka_conf_set()

public static rd_kafka_conf_set ( 
    \FFI\CData|null $conf, 
    string|null $name, 
    string|null $value, 
    \FFI\CData|null $errstr, 
    int|null $errstr_size
 ): int

Sets a configuration property.

conf must have been previously created with rd_kafka_conf_new().

Fallthrough: Topic-level configuration properties may be set using this interface in which case they are applied on the default_topic_conf. If no default_topic_conf has been set one will be created. Any subsequent rd_kafka_conf_set_default_topic_conf() calls will replace the current default topic configuration.

Remarks
Setting properties or values that were disabled at build time due to missing dependencies will return RD_KAFKA_CONF_INVALID.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
name string|null const char*
value string|null const char*
errstr \FFI\CData|null char*
errstr_size int|null size_t
Returns
int rd_kafka_conf_res_t - rd_kafka_conf_res_t to indicate success or failure. In case of failure errstr is updated to contain a human readable error string.
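
From PHP the errstr out-buffer is created with FFI. A sketch, assuming ext-ffi, librdkafka, and a class (here called Library) that uses this trait:

```php
<?php
$conf = Library::rd_kafka_conf_new();

// Writable buffer that receives the error string on failure.
$errstr = \FFI::new('char[512]');

$res = Library::rd_kafka_conf_set($conf, 'socket.timeout.ms', '600', $errstr, \FFI::sizeof($errstr));
if ($res !== 0 /* RD_KAFKA_CONF_OK */) {
    throw new \RuntimeException('conf_set failed: ' . \FFI::string($errstr));
}
```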

rd_kafka_conf_set_events()

public static rd_kafka_conf_set_events ( 
    \FFI\CData|null $conf, 
    int|null $events
 ): void
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
events int|null int

rd_kafka_conf_set_background_event_cb()

public static rd_kafka_conf_set_background_event_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $event_cb
 ): void

Generic event callback to be used with the event API to trigger callbacks for rd_kafka_event_t objects from a background thread serving the background queue.

How to use:

  1. First set the event callback on the configuration object with this function, followed by creating an rd_kafka_t instance with rd_kafka_new().
  2. Get the instance's background queue with rd_kafka_queue_get_background() and pass it as the reply/response queue to an API that takes an event queue, such as rd_kafka_CreateTopics().
  3. As the response event is ready and enqueued on the background queue the event callback will be triggered from the background thread.
  4. Prior to destroying the client instance, lose your reference to the background queue by calling rd_kafka_queue_destroy().

The application must destroy the rkev passed to the event_cb using rd_kafka_event_destroy().

The event_cb opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Remarks
This callback is a specialized alternative to the poll-based event API described in the Event interface section.
The event_cb will be called spontaneously from a background thread completely managed by librdkafka. Take care to perform proper locking of application objects.
Warning
The application MUST NOT call rd_kafka_destroy() from the event callback.
See also
rd_kafka_queue_get_background
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
event_cb \FFI\CData|\Closure void (*)(rd_kafka_t *, rd_kafka_event_t *, void *)

rd_kafka_conf_set_dr_cb()

public static rd_kafka_conf_set_dr_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $dr_cb
 ): void
Deprecated:
See rd_kafka_conf_set_dr_msg_cb()
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
dr_cb \FFI\CData|\Closure void (*)(rd_kafka_t *, void *, size_t, rd_kafka_resp_err_t, void *, void *)

rd_kafka_conf_set_dr_msg_cb()

public static rd_kafka_conf_set_dr_msg_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $dr_msg_cb
 ): void

Producer: Set delivery report callback in provided conf object.

The delivery report callback will be called once for each message accepted by rd_kafka_produce() (et al.) with err set to indicate the result of the produce request.

The callback is called when a message is successfully produced or when librdkafka encountered a permanent failure. Delivery errors occur when the retry count is exceeded, when the message.timeout.ms timeout is exceeded, or when there is a permanent error like RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART.

An application must call rd_kafka_poll() at regular intervals to serve queued delivery report callbacks.

The broker-assigned offset can be retrieved with rkmessage->offset and the timestamp can be retrieved using rd_kafka_message_timestamp().

The dr_msg_cb opaque argument is the opaque set with rd_kafka_conf_set_opaque(). The per-message msg_opaque value is available in rd_kafka_message_t._private.

Remarks
The Idempotent Producer may return an invalid timestamp (RD_KAFKA_TIMESTAMP_NOT_AVAILABLE) and an invalid offset (RD_KAFKA_OFFSET_INVALID) for retried messages that were previously successfully delivered but not properly acknowledged.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
dr_msg_cb \FFI\CData|\Closure void (*)(rd_kafka_t *, const rd_kafka_message_t *, void *)
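
A delivery report callback can be registered as a PHP closure. A sketch, assuming ext-ffi, librdkafka, and a class (here called Library) that uses this trait:

```php
<?php
Library::rd_kafka_conf_set_dr_msg_cb($conf, function ($rk, $rkmessage, $opaque): void {
    // $rkmessage is a const rd_kafka_message_t*; PHP FFI dereferences
    // struct pointers transparently.
    if ($rkmessage->err !== 0 /* RD_KAFKA_RESP_ERR_NO_ERROR */) {
        printf("delivery failed: %s\n", Library::rd_kafka_err2str($rkmessage->err));
    } else {
        printf("delivered to partition %d at offset %d\n", $rkmessage->partition, $rkmessage->offset);
    }
});
// rd_kafka_poll() must be called regularly so queued reports are served.
```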

rd_kafka_conf_set_consume_cb()

public static rd_kafka_conf_set_consume_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $consume_cb
 ): void

Consumer: Set consume callback for use with rd_kafka_consumer_poll()

The consume_cb opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Parameters
conf \FFI\CData|null rd_kafka_conf_t*
consume_cb \FFI\CData|\Closure void (*)(rd_kafka_message_t *, void *)

rd_kafka_conf_set_rebalance_cb()

public static rd_kafka_conf_set_rebalance_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $rebalance_cb
 ): void

Consumer: Set rebalance callback for use with coordinated consumer group balancing.

The err field is set to either RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS or RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS and 'partitions' contains the full partition set that was either assigned or revoked.

Registering a rebalance_cb turns off librdkafka's automatic partition assignment/revocation and instead delegates that responsibility to the application's rebalance_cb.

The rebalance callback is responsible for updating librdkafka's assignment set based on the two events: RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS and RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS but should also be able to handle arbitrary rebalancing failures where err is neither of those.

Remarks
In this latter case (arbitrary error), the application must call rd_kafka_assign(rk, NULL) to synchronize state.

For eager/non-cooperative partition.assignment.strategy assignors, such as range and roundrobin, the application must use rd_kafka_assign() to set or clear the entire assignment. For the cooperative assignors, such as cooperative-sticky, the application must use rd_kafka_incremental_assign() for RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS and rd_kafka_incremental_unassign() for RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS.

Without a rebalance callback this is done automatically by librdkafka but registering a rebalance callback gives the application flexibility in performing other operations along with the assigning/revocation, such as fetching offsets from an alternate location (on assign) or manually committing offsets (on revoke).

rebalance_cb is always triggered exactly once when a rebalance completes with a new assignment, even if that assignment is empty. If an eager/non-cooperative assignor is configured, there will eventually be exactly one corresponding call to rebalance_cb to revoke these partitions (even if empty), whether this is due to a group rebalance or lost partitions. In the cooperative case, rebalance_cb will never be called if the set of partitions being revoked is empty (whether or not lost).

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Remarks
The partitions list is destroyed by librdkafka on return from the rebalance_cb and must not be freed or saved by the application.
Be careful when modifying the partitions list. Changing this list should only be done to change the initial offsets for each partition. A function like rd_kafka_position() might have unexpected effects, for instance when a consumer gets assigned a partition it used to consume at an earlier rebalance: the list of partitions will be updated with the old offset for that partition. In such cases it is generally better to pass a copy of the list (see rd_kafka_topic_partition_list_copy()). The result of rd_kafka_position() is typically outdated in RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS.
See also
rd_kafka_assign()
rd_kafka_incremental_assign()
rd_kafka_incremental_unassign()
rd_kafka_assignment_lost()
rd_kafka_rebalance_protocol()

The following example shows the application's responsibilities:

static void rebalance_cb (rd_kafka_t *rk, rd_kafka_resp_err_t err,
                          rd_kafka_topic_partition_list_t *partitions,
                          void *opaque) {

switch (err)
    {
      case RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS:
         // application may load offsets from arbitrary external
         // storage here and update partitions
         if (!strcmp(rd_kafka_rebalance_protocol(rk), "COOPERATIVE"))
                 rd_kafka_incremental_assign(rk, partitions);
         else // EAGER
                 rd_kafka_assign(rk, partitions);
         break;

      case RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS:
         if (manual_commits) // Optional explicit manual commit
             rd_kafka_commit(rk, partitions, 0); // sync commit

         if (!strcmp(rd_kafka_rebalance_protocol(rk), "COOPERATIVE"))
                 rd_kafka_incremental_unassign(rk, partitions);
         else // EAGER
                 rd_kafka_assign(rk, NULL);
         break;

      default:
         handle_unlikely_error(err);
         rd_kafka_assign(rk, NULL); // sync state
         break;
     }
}
Remarks
The above example lacks error handling for assign calls, see the examples/ directory.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
rebalance_cb \FFI\CData|\Closure void (*)(rd_kafka_t *, rd_kafka_resp_err_t, rd_kafka_topic_partition_list_t *, void *)

rd_kafka_conf_set_offset_commit_cb()

public static rd_kafka_conf_set_offset_commit_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $offset_commit_cb
 ): void

Consumer: Set offset commit callback for use with consumer groups.

The results of automatic or manual offset commits will be scheduled for this callback and served by rd_kafka_consumer_poll().

If no partitions had valid offsets to commit this callback will be called with err == RD_KAFKA_RESP_ERR__NO_OFFSET which is not to be considered an error.

The offsets list contains per-partition information:

  • offset: committed offset (attempted)
  • err: commit error

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Parameters
conf \FFI\CData|null rd_kafka_conf_t*
offset_commit_cb \FFI\CData|\Closure void (*)(rd_kafka_t *, rd_kafka_resp_err_t, rd_kafka_topic_partition_list_t *, void *)

rd_kafka_conf_set_error_cb()

public static rd_kafka_conf_set_error_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $error_cb
 ): void

Set error callback in provided conf object.

The error callback is used by librdkafka to signal warnings and errors back to the application.

These errors should generally be considered informational and non-permanent; the client will try to recover automatically from all types of errors. Provided that the client and cluster configuration is correct, the application should treat these as temporary errors.

error_cb will be triggered with err set to RD_KAFKA_RESP_ERR__FATAL if a fatal error has been raised; in this case use rd_kafka_fatal_error() to retrieve the fatal error code and error string, and then begin terminating the client instance.

If no error_cb is registered, or RD_KAFKA_EVENT_ERROR has not been set with rd_kafka_conf_set_events(), then the errors will be logged instead.

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Parameters
conf \FFI\CData|null rd_kafka_conf_t*
error_cb \FFI\CData|\Closure void(*)(rd_kafka_t*, int, const char*, void*)

rd_kafka_conf_set_throttle_cb()

public static rd_kafka_conf_set_throttle_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $throttle_cb
 ): void

Set throttle callback.

The throttle callback is used to forward broker throttle times to the application for Produce and Fetch (consume) requests.

Callbacks are triggered whenever a non-zero throttle time is returned by the broker, or when the throttle time drops back to zero.

An application must call rd_kafka_poll() or rd_kafka_consumer_poll() at regular intervals to serve queued callbacks.

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Remarks
Requires broker version 0.9.0 or later.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
throttle_cb \FFI\CData|\Closure void(*)(rd_kafka_t*, const char*, int32_t, int, void*)

rd_kafka_conf_set_log_cb()

public static rd_kafka_conf_set_log_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $log_cb
 ): void

Set logger callback.

The default is to print to stderr, but a syslog logger is also available; see rd_kafka_log_print() and rd_kafka_log_syslog() for the builtin alternatives. Alternatively, the application may provide its own logger callback, or pass log_cb as NULL to disable logging.

This is the configuration alternative to the deprecated rd_kafka_set_logger()

Remarks
The log_cb will be called spontaneously from librdkafka's internal threads unless logs have been forwarded to a poll queue through rd_kafka_set_log_queue(). An application MUST NOT call any librdkafka APIs or do any prolonged work in a non-forwarded log_cb.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
log_cb \FFI\CData|\Closure void(*)(const rd_kafka_t*, int, const char*, const char*)

rd_kafka_conf_set_stats_cb()

public static rd_kafka_conf_set_stats_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $stats_cb
 ): void

Set statistics callback in provided conf object.

The statistics callback is triggered from rd_kafka_poll() every statistics.interval.ms (needs to be configured separately). Function arguments:

  • rk - Kafka handle
  • json - String containing the statistics data in JSON format
  • json_len - Length of json string.
  • opaque - Application-provided opaque as set by rd_kafka_conf_set_opaque().

For more information on the format of json, see https://github.com/confluentinc/librdkafka/wiki/Statistics

If the application wishes to hold on to the json pointer and free it at a later time, it must return 1 from the stats_cb. If the application returns 0 from the stats_cb, then librdkafka will immediately free the json pointer.

See STATISTICS.md for a full definition of the JSON object.

Parameters
conf \FFI\CData|null rd_kafka_conf_t*
stats_cb \FFI\CData|\Closure int(*)(rd_kafka_t*, char*, size_t, void*)

rd_kafka_conf_set_socket_cb()

public static rd_kafka_conf_set_socket_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $socket_cb
 ): void

Set socket callback.

The socket callback is responsible for opening a socket according to the supplied domain, type and protocol. The socket shall be created with CLOEXEC set in a race-free fashion, if possible.

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Default:

  • on Linux: race-free CLOEXEC
  • others: non-race-free CLOEXEC
Remarks
The callback will be called from an internal librdkafka thread.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
socket_cb \FFI\CData|\Closure int(*)(int, int, int, void*)

rd_kafka_conf_set_connect_cb()

public static rd_kafka_conf_set_connect_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $connect_cb
 ): void

Set connect callback.

The connect callback is responsible for connecting socket sockfd to peer address addr. The id field contains the broker identifier.

connect_cb shall return 0 on success (socket connected) or an error number (errno) on error.

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Remarks
The callback will be called from an internal librdkafka thread.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
connect_cb \FFI\CData|\Closure int(*)(int, const struct sockaddr*, int, const char*, void*)

rd_kafka_conf_set_closesocket_cb()

public static rd_kafka_conf_set_closesocket_cb ( 
    \FFI\CData|null $conf, 
    \FFI\CData|\Closure $closesocket_cb
 ): void

Set close socket callback.

Close a socket (optionally opened with socket_cb()).

The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

Remarks
The callback will be called from an internal librdkafka thread.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
closesocket_cb \FFI\CData|\Closure int(*)(int, void*)

rd_kafka_conf_set_opaque()

public static rd_kafka_conf_set_opaque ( 
    \FFI\CData|null $conf, 
    \FFI\CData|object|string|null $opaque
 ): void

Sets the application's opaque pointer that will be passed to callbacks.

See also
rd_kafka_opaque()
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
opaque \FFI\CData|object|string|null void*

rd_kafka_opaque()

public static rd_kafka_opaque ( 
    \FFI\CData|null $rk
 ): \FFI\CData|object|string|null
Parameters
rk \FFI\CData|null const rd_kafka_t*
Returns
\FFI\CData|object|string|null void*

rd_kafka_conf_set_default_topic_conf()

public static rd_kafka_conf_set_default_topic_conf ( 
    \FFI\CData|null $conf, 
    \FFI\CData|null $tconf
 ): void

Sets the default topic configuration to use for automatically subscribed topics (e.g., through pattern-matched topics). The topic config object is not usable after this call.

Warning
Any topic configuration settings that have been set on the global rd_kafka_conf_t object will be overwritten by this call since the implicitly created default topic config object is replaced by the user-supplied one.
Deprecated:
Set default topic level configuration on the global rd_kafka_conf_t object instead.
Parameters
conf \FFI\CData|null rd_kafka_conf_t*
tconf \FFI\CData|null rd_kafka_topic_conf_t*

rd_kafka_conf_get()

public static rd_kafka_conf_get ( 
    \FFI\CData|null $conf, 
    string|null $name, 
    \FFI\CData|null $dest, 
    \FFI\CData|null $dest_size
 ): int

Retrieve configuration value for property name.

If dest is non-NULL the value will be written to dest with at most dest_size.

*dest_size is updated to the full length of the value; thus, if *dest_size initially is smaller than the full length, the application may reallocate dest to fit the returned *dest_size and try again.

If dest is NULL only the full length of the value is returned.

Fallthrough: Topic-level configuration properties from the default_topic_conf may be retrieved using this interface.

Parameters
conf \FFI\CData|null const rd_kafka_conf_t*
name string|null const char*
dest \FFI\CData|null char*
dest_size \FFI\CData|null size_t*
Returns
int rd_kafka_conf_res_t - RD_KAFKA_CONF_OK if the property name matched, else RD_KAFKA_CONF_UNKNOWN.

rd_kafka_topic_conf_get()

public static rd_kafka_topic_conf_get ( 
    \FFI\CData|null $conf, 
    string|null $name, 
    \FFI\CData|null $dest, 
    \FFI\CData|null $dest_size
 ): int

Retrieve topic configuration value for property name.

See also
rd_kafka_conf_get()
Parameters
conf \FFI\CData|null const rd_kafka_topic_conf_t*
name string|null const char*
dest \FFI\CData|null char*
dest_size \FFI\CData|null size_t*
Returns
int rd_kafka_conf_res_t

rd_kafka_conf_dump()

public static rd_kafka_conf_dump ( 
    \FFI\CData|null $conf, 
    \FFI\CData|null $cntp
 ): \FFI\CData|null

Dump the configuration properties and values of conf to an array with "key", "value" pairs.

The number of entries in the array is returned in *cntp.

The dump must be freed with rd_kafka_conf_dump_free().

Parameters
conf \FFI\CData|null rd_kafka_conf_t*
cntp \FFI\CData|null size_t*
Returns
\FFI\CData|null const char**

rd_kafka_topic_conf_dump()

public static rd_kafka_topic_conf_dump ( 
    \FFI\CData|null $conf, 
    \FFI\CData|null $cntp
 ): \FFI\CData|null

Dump the topic configuration properties and values of conf to an array with "key", "value" pairs.

The number of entries in the array is returned in *cntp.

The dump must be freed with rd_kafka_conf_dump_free().

Parameters
conf \FFI\CData|null rd_kafka_topic_conf_t*
cntp \FFI\CData|null size_t*
Returns
\FFI\CData|null const char**

rd_kafka_conf_dump_free()

public static rd_kafka_conf_dump_free ( 
    \FFI\CData|null $arr, 
    int|null $cnt
 ): void
Parameters
arr \FFI\CData|null const char**
cnt int|null size_t

rd_kafka_conf_properties_show()

public static rd_kafka_conf_properties_show ( 
    \FFI\CData|null $fp
 ): void

Prints a table to fp of all supported configuration properties, their default values as well as a description.

Remarks
All properties and values are shown, even those that have been disabled at build time due to missing dependencies.
Parameters
fp \FFI\CData|null FILE*

rd_kafka_topic_conf_new()

public static rd_kafka_topic_conf_new (  ): \FFI\CData|null

Create topic configuration object.

See also
Same semantics as for rd_kafka_conf_new().
Returns
\FFI\CData|null rd_kafka_topic_conf_t*

rd_kafka_topic_conf_dup()

public static rd_kafka_topic_conf_dup ( 
    \FFI\CData|null $conf
 ): \FFI\CData|null
Parameters
conf \FFI\CData|null const rd_kafka_topic_conf_t*
Returns
\FFI\CData|null rd_kafka_topic_conf_t*

rd_kafka_default_topic_conf_dup()

public static rd_kafka_default_topic_conf_dup ( 
    \FFI\CData|null $rk
 ): \FFI\CData|null
Parameters
rk \FFI\CData|null rd_kafka_t*
Returns
\FFI\CData|null rd_kafka_topic_conf_t*

rd_kafka_topic_conf_destroy()

public static rd_kafka_topic_conf_destroy ( 
    \FFI\CData|null $topic_conf
 ): void
Parameters
topic_conf \FFI\CData|null rd_kafka_topic_conf_t*

rd_kafka_topic_conf_set()

public static rd_kafka_topic_conf_set ( 
    \FFI\CData|null $conf, 
    string|null $name, 
    string|null $value, 
    \FFI\CData|null $errstr, 
    int|null $errstr_size
 ): int

Sets a single rd_kafka_topic_conf_t value by property name.

topic_conf should have been previously set up with rd_kafka_topic_conf_new().

Parameters
conf \FFI\CData|null rd_kafka_topic_conf_t*
name string|null const char*
value string|null const char*
errstr \FFI\CData|null char*
errstr_size int|null size_t
Returns
int rd_kafka_conf_res_t - rd_kafka_conf_res_t to indicate success or failure.

rd_kafka_topic_conf_set_opaque()

public static rd_kafka_topic_conf_set_opaque ( 
    \FFI\CData|null $conf, 
    \FFI\CData|object|string|null $opaque
 ): void

Sets the application's opaque pointer that will be passed to all topic callbacks as the rkt_opaque argument.

See also
rd_kafka_topic_opaque()
Parameters
conf \FFI\CData|null rd_kafka_topic_conf_t*
opaque \FFI\CData|object|string|null void*

rd_kafka_topic_conf_set_partitioner_cb()

public static rd_kafka_topic_conf_set_partitioner_cb ( 
    \FFI\CData|null $topic_conf, 
    \FFI\CData|\Closure $partitioner
 ): void

Producer: Set partitioner callback in provided topic conf object.

The partitioner may be called in any thread at any time; it may be called multiple times for the same message/key.

The callback's rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The callback's msg_opaque argument is the per-message opaque passed to produce().

Partitioner function constraints:

  • MUST NOT call any rd_kafka_*() functions except: rd_kafka_topic_partition_available()
  • MUST NOT block or execute for prolonged periods of time.
  • MUST return a value between 0 and partition_cnt-1, or the special RD_KAFKA_PARTITION_UA value if partitioning could not be performed.
Parameters
topic_conf \FFI\CData|null rd_kafka_topic_conf_t*
partitioner \FFI\CData|\Closure int32_t(*)(const rd_kafka_topic_t*, const void*, size_t, int32_t, void*, void*)

rd_kafka_topic_conf_set_msg_order_cmp()

public static rd_kafka_topic_conf_set_msg_order_cmp ( 
    \FFI\CData|null $topic_conf, 
    \FFI\CData|\Closure $msg_order_cmp
 ): void

Producer: Set message queueing order comparator callback.

The callback may be called in any thread at any time; it may be called multiple times for the same message.

Ordering comparator function constraints:

  • MUST be stable sort (same input gives same output).
  • MUST NOT call any rd_kafka_*() functions.
  • MUST NOT block or execute for prolonged periods of time.

The comparator shall compare the two messages and return:

  • < 0 if message a should be inserted before message b.
  • >=0 if message a should be inserted after message b.
Remarks
Insert sorting will be used to enqueue the message in the correct queue position; this comes at a cost of O(n).
If queuing.strategy=fifo new messages are enqueued to the tail of the queue regardless of msg_order_cmp, but retried messages are still affected by msg_order_cmp.
Warning
THIS IS AN EXPERIMENTAL API, SUBJECT TO CHANGE OR REMOVAL, DO NOT USE IN PRODUCTION.
Parameters
topic_conf \FFI\CData|null rd_kafka_topic_conf_t*
msg_order_cmp \FFI\CData|\Closure int(*)(const rd_kafka_message_t*, const rd_kafka_message_t*)

rd_kafka_topic_partition_available()

public static rd_kafka_topic_partition_available ( 
    \FFI\CData|null $rkt, 
    int|null $partition
 ): int|null

Check if partition is available (has a leader broker).

Warning
This function must only be called from inside a partitioner function.
Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
partition int|null int32_t
Returns
int|null int - 1 if the partition is available, else 0.

rd_kafka_msg_partitioner_random()

public static rd_kafka_msg_partitioner_random ( 
    \FFI\CData|null $rkt, 
    \FFI\CData|object|string|null $key, 
    int|null $keylen, 
    int|null $partition_cnt, 
    \FFI\CData|object|string|null $opaque, 
    \FFI\CData|object|string|null $msg_opaque
 ): int|null

Random partitioner.

Will try not to return unavailable partitions.

The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
key \FFI\CData|object|string|null const void*
keylen int|null size_t
partition_cnt int|null int32_t
opaque \FFI\CData|object|string|null void*
msg_opaque \FFI\CData|object|string|null void*
Returns
int|null int32_t - a random partition between 0 and partition_cnt - 1.

rd_kafka_msg_partitioner_consistent()

public static rd_kafka_msg_partitioner_consistent ( 
    \FFI\CData|null $rkt, 
    \FFI\CData|object|string|null $key, 
    int|null $keylen, 
    int|null $partition_cnt, 
    \FFI\CData|object|string|null $opaque, 
    \FFI\CData|object|string|null $msg_opaque
 ): int|null

Consistent partitioner.

Uses consistent hashing to map identical keys onto identical partitions.

The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
key \FFI\CData|object|string|null const void*
keylen int|null size_t
partition_cnt int|null int32_t
opaque \FFI\CData|object|string|null void*
msg_opaque \FFI\CData|object|string|null void*
Returns
int|null int32_t - a “random” partition between 0 and partition_cnt - 1 based on the CRC value of the key

rd_kafka_msg_partitioner_consistent_random()

public static rd_kafka_msg_partitioner_consistent_random ( 
    \FFI\CData|null $rkt, 
    \FFI\CData|object|string|null $key, 
    int|null $keylen, 
    int|null $partition_cnt, 
    \FFI\CData|object|string|null $opaque, 
    \FFI\CData|object|string|null $msg_opaque
 ): int|null

Consistent-Random partitioner.

This is the default partitioner. Uses consistent hashing to map identical keys onto identical partitions; messages without keys are assigned via the random partitioner.

The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
key \FFI\CData|object|string|null const void*
keylen int|null size_t
partition_cnt int|null int32_t
opaque \FFI\CData|object|string|null void*
msg_opaque \FFI\CData|object|string|null void*
Returns
int|null int32_t - a “random” partition between 0 and partition_cnt - 1 based on the CRC value of the key (if provided)

rd_kafka_msg_partitioner_murmur2()

public static rd_kafka_msg_partitioner_murmur2 ( 
    \FFI\CData|null $rkt, 
    \FFI\CData|object|string|null $key, 
    int|null $keylen, 
    int|null $partition_cnt, 
    \FFI\CData|object|string|null $rkt_opaque, 
    \FFI\CData|object|string|null $msg_opaque
 ): int|null

Murmur2 partitioner (Java compatible).

Uses consistent hashing to map identical keys onto identical partitions using Java-compatible Murmur2 hashing.

The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
key \FFI\CData|object|string|null const void*
keylen int|null size_t
partition_cnt int|null int32_t
rkt_opaque \FFI\CData|object|string|null void*
msg_opaque \FFI\CData|object|string|null void*
Returns
int|null int32_t - a partition between 0 and partition_cnt - 1.

rd_kafka_msg_partitioner_murmur2_random()

public static rd_kafka_msg_partitioner_murmur2_random ( 
    \FFI\CData|null $rkt, 
    \FFI\CData|object|string|null $key, 
    int|null $keylen, 
    int|null $partition_cnt, 
    \FFI\CData|object|string|null $rkt_opaque, 
    \FFI\CData|object|string|null $msg_opaque
 ): int|null

Consistent-Random Murmur2 partitioner (Java compatible).

Uses consistent hashing to map identical keys onto identical partitions using Java-compatible Murmur2 hashing. Messages without keys will be assigned via the random partitioner.

The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
key \FFI\CData|object|string|null const void*
keylen int|null size_t
partition_cnt int|null int32_t
rkt_opaque \FFI\CData|object|string|null void*
msg_opaque \FFI\CData|object|string|null void*
Returns
int|null int32_t - a partition between 0 and partition_cnt - 1.

rd_kafka_new()

public static rd_kafka_new ( 
    int $type, 
    \FFI\CData|null $conf, 
    \FFI\CData|null $errstr, 
    int|null $errstr_size
 ): \FFI\CData|null

Creates a new Kafka handle and starts its operation according to the specified type (RD_KAFKA_CONSUMER or RD_KAFKA_PRODUCER).

conf is an optional struct created with rd_kafka_conf_new() that will be used instead of the default configuration. The conf object is freed by this function on success and must not be used or destroyed by the application subsequently. See rd_kafka_conf_set() et al. for more information.

errstr must be a pointer to memory of at least size errstr_size, where rd_kafka_new() may write a human readable error message if the creation of a new handle fails; in that case the function returns NULL.

Remarks
RD_KAFKA_CONSUMER: When a new RD_KAFKA_CONSUMER rd_kafka_t handle is created it may operate either in the legacy simple consumer mode using the rd_kafka_consume_start() interface, or in the High-level KafkaConsumer API.
An application must only use one of these groups of APIs on a given rd_kafka_t RD_KAFKA_CONSUMER handle.
See also
To destroy the Kafka handle, use rd_kafka_destroy().
Parameters
type int rd_kafka_type_t
conf \FFI\CData|null rd_kafka_conf_t*
errstr \FFI\CData|null char*
errstr_size int|null size_t
Returns
\FFI\CData|null rd_kafka_t* - The Kafka handle on success or NULL on error (see errstr)

rd_kafka_destroy()

public static rd_kafka_destroy ( 
    \FFI\CData|null $rk
 ): void

Destroy Kafka handle.

Remarks
This is a blocking operation.
rd_kafka_consumer_close() will be called from this function if the instance type is RD_KAFKA_CONSUMER, a group.id was configured, and rd_kafka_consumer_close() was not explicitly called by the application. This in turn may trigger consumer callbacks, such as rebalance_cb. Use rd_kafka_destroy_flags() with RD_KAFKA_DESTROY_F_NO_CONSUMER_CLOSE to avoid this behaviour.
See also
rd_kafka_destroy_flags()
Parameters
rk \FFI\CData|null rd_kafka_t*

rd_kafka_destroy_flags()

public static rd_kafka_destroy_flags ( 
    \FFI\CData|null $rk, 
    int|null $flags
 ): void
Parameters
rk \FFI\CData|null rd_kafka_t*
flags int|null int

rd_kafka_name()

public static rd_kafka_name ( 
    \FFI\CData|null $rk
 ): string|null
Parameters
rk \FFI\CData|null const rd_kafka_t*
Returns
string|null const char*

rd_kafka_type()

public static rd_kafka_type ( 
    \FFI\CData|null $rk
 ): int
Parameters
rk \FFI\CData|null const rd_kafka_t*
Returns
int rd_kafka_type_t

rd_kafka_memberid()

public static rd_kafka_memberid ( 
    \FFI\CData|null $rk
 ): \FFI\CData|null

Returns this client's broker-assigned group member id.

Remarks
This currently requires the high-level KafkaConsumer
Parameters
rk \FFI\CData|null const rd_kafka_t*
Returns
\FFI\CData|null char* - An allocated string containing the current broker-assigned group member id, or NULL if not available. The application must free the string with free() or rd_kafka_mem_free()

rd_kafka_clusterid()

public static rd_kafka_clusterid ( 
    \FFI\CData|null $rk, 
    int|null $timeout_ms
 ): \FFI\CData|null

Returns the ClusterId as reported in broker metadata.

Remarks
Requires broker version >=0.10.0 and api.version.request=true.
The application must free the returned pointer using rd_kafka_mem_free().
Parameters
rk \FFI\CData|null rd_kafka_t* - Client instance.
timeout_ms int|null int - If there is no cached value from metadata retrieval then this specifies the maximum amount of time (in milliseconds) the call will block waiting for metadata to be retrieved. Use 0 for non-blocking calls.
Returns
\FFI\CData|null char* - a newly allocated string containing the ClusterId, or NULL if no ClusterId could be retrieved in the allotted timespan.

rd_kafka_controllerid()

public static rd_kafka_controllerid ( 
    \FFI\CData|null $rk, 
    int|null $timeout_ms
 ): int|null

Returns the current ControllerId as reported in broker metadata.

Remarks
Requires broker version >=0.10.0 and api.version.request=true.
Parameters
rk \FFI\CData|null rd_kafka_t* - Client instance.
timeout_ms int|null int - If there is no cached value from metadata retrieval then this specifies the maximum amount of time (in milliseconds) the call will block waiting for metadata to be retrieved. Use 0 for non-blocking calls.
Returns
int|null int32_t - the controller broker id (>= 0), or -1 if no ControllerId could be retrieved in the allotted timespan.

rd_kafka_topic_new()

public static rd_kafka_topic_new ( 
    \FFI\CData|null $rk, 
    string|null $topic, 
    \FFI\CData|null $conf
 ): \FFI\CData|null

Creates a new topic handle for topic named topic.

conf is an optional configuration for the topic created with rd_kafka_topic_conf_new() that will be used instead of the default topic configuration. The conf object is freed by this function and must not be used or destroyed by the application subsequently. See rd_kafka_topic_conf_set() et al. for more information.

Topic handles are refcounted internally and calling rd_kafka_topic_new() again with the same topic name will return the previous topic handle without updating the original handle's configuration. Applications must eventually call rd_kafka_topic_destroy() for each successful call to rd_kafka_topic_new() to clear up resources.

See also
rd_kafka_topic_destroy()
Parameters
rk \FFI\CData|null rd_kafka_t*
topic string|null const char*
conf \FFI\CData|null rd_kafka_topic_conf_t*
Returns
\FFI\CData|null rd_kafka_topic_t* - the new topic handle or NULL on error (use rd_kafka_errno2err() to convert the system errno to an rd_kafka_resp_err_t error code).

rd_kafka_topic_destroy()

public static rd_kafka_topic_destroy ( 
    \FFI\CData|null $rkt
 ): void

Lose the application's topic handle refcount as previously created with rd_kafka_topic_new().

Remarks
Since topic objects are refcounted (both internally and for the app) the topic object might not actually be destroyed by this call, but the application must consider the object destroyed.
Parameters
rkt \FFI\CData|null rd_kafka_topic_t*

rd_kafka_topic_name()

public static rd_kafka_topic_name ( 
    \FFI\CData|null $rkt
 ): string|null
Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
Returns
string|null const char*

rd_kafka_topic_opaque()

public static rd_kafka_topic_opaque ( 
    \FFI\CData|null $rkt
 ): \FFI\CData|object|string|null
Parameters
rkt \FFI\CData|null const rd_kafka_topic_t*
Returns
\FFI\CData|object|string|null void*

rd_kafka_poll()

public static rd_kafka_poll ( 
    \FFI\CData|null $rk, 
    int|null $timeout_ms
 ): int|null

Polls the provided kafka handle for events.

Events will cause application-provided callbacks to be called.

The timeout_ms argument specifies the maximum amount of time (in milliseconds) that the call will block waiting for events. For non-blocking calls, provide 0 as timeout_ms. To wait indefinitely for an event, provide -1.

Remarks
An application should make sure to call poll() at regular intervals to serve any queued callbacks waiting to be called.
If your producer doesn't have any callback set (in particular via rd_kafka_conf_set_dr_msg_cb or rd_kafka_conf_set_error_cb) you might choose not to call poll(), though this is not recommended.

Events:

  • delivery report callbacks (if dr_cb/dr_msg_cb is configured) [producer]
  • error callbacks (rd_kafka_conf_set_error_cb()) [all]
  • stats callbacks (rd_kafka_conf_set_stats_cb()) [all]
  • throttle callbacks (rd_kafka_conf_set_throttle_cb()) [all]
  • OAUTHBEARER token refresh callbacks (rd_kafka_conf_set_oauthbearer_token_refresh_cb()) [all]
Parameters
rk \FFI\CData|null rd_kafka_t*
timeout_ms int|null int
Returns
int|null int - the number of events served.

rd_kafka_yield()

public static rd_kafka_yield ( 
    \FFI\CData|null $rk
 ): void

Cancels the current callback dispatcher (rd_kafka_poll(), rd_kafka_consume_callback(), etc).

A callback may use this to force an immediate return to the calling code (caller of e.g. rd_kafka_poll()) without processing any further events.

Remarks
This function MUST ONLY be called from within a librdkafka callback.
Parameters
rk \FFI\CData|null rd_kafka_t*

rd_kafka_pause_partitions()

public static rd_kafka_pause_partitions ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $partitions
 ): int

Pause producing or consumption for the provided list of partitions.

Success or error is returned per-partition err in the partitions list.

Parameters
rk \FFI\CData|null rd_kafka_t*
partitions \FFI\CData|null rd_kafka_topic_partition_list_t*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR

rd_kafka_resume_partitions()

public static rd_kafka_resume_partitions ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $partitions
 ): int

Resume producing or consumption for the provided list of partitions.

Success or error is returned per-partition err in the partitions list.

Parameters
rk \FFI\CData|null rd_kafka_t*
partitions \FFI\CData|null rd_kafka_topic_partition_list_t*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR

rd_kafka_query_watermark_offsets()

public static rd_kafka_query_watermark_offsets ( 
    \FFI\CData|null $rk, 
    string|null $topic, 
    int|null $partition, 
    \FFI\CData|null $low, 
    \FFI\CData|null $high, 
    int|null $timeout_ms
 ): int

Query broker for low (oldest/beginning) and high (newest/end) offsets for partition.

Offsets are returned in *low and *high respectively.

Parameters
rk \FFI\CData|null rd_kafka_t*
topic string|null const char*
partition int|null int32_t
low \FFI\CData|null int64_t*
high \FFI\CData|null int64_t*
timeout_ms int|null int
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on failure.

rd_kafka_get_watermark_offsets()

public static rd_kafka_get_watermark_offsets ( 
    \FFI\CData|null $rk, 
    string|null $topic, 
    int|null $partition, 
    \FFI\CData|null $low, 
    \FFI\CData|null $high
 ): int

Get last known low (oldest/beginning) and high (newest/end) offsets for partition.

The low offset is updated periodically (if statistics.interval.ms is set) while the high offset is updated on each fetched message set from the broker.

If there is no cached offset (either low or high, or both) then RD_KAFKA_OFFSET_INVALID will be returned for the respective offset.

Offsets are returned in *low and *high respectively.

Remarks
Shall only be used with an active consumer instance.
Parameters
rk \FFI\CData|null rd_kafka_t*
topic string|null const char*
partition int|null int32_t
low \FFI\CData|null int64_t*
high \FFI\CData|null int64_t*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on failure.

rd_kafka_offsets_for_times()

public static rd_kafka_offsets_for_times ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $offsets, 
    int|null $timeout_ms
 ): int

Look up the offsets for the given partitions by timestamp.

The returned offset for each partition is the earliest offset whose timestamp is greater than or equal to the given timestamp in the corresponding partition.

The timestamps to query are represented as offset in offsets on input, and offset will contain the offset on output.

The function will block for at most timeout_ms milliseconds.

Remarks
Duplicate Topic+Partitions are not supported.
Per-partition errors may be returned in rd_kafka_topic_partition_t.err
Parameters
rk \FFI\CData|null rd_kafka_t*
offsets \FFI\CData|null rd_kafka_topic_partition_list_t*
timeout_ms int|null int
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR if offsets were queried (do note that per-partition errors might be set), RD_KAFKA_RESP_ERR__TIMED_OUT if not all offsets could be fetched within timeout_ms, RD_KAFKA_RESP_ERR__INVALID_ARG if the offsets list is empty, RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION if all partitions are unknown, or RD_KAFKA_RESP_ERR_LEADER_NOT_AVAILABLE if unable to query leaders for the given partitions.

rd_kafka_mem_free()

public static rd_kafka_mem_free ( 
    \FFI\CData|null $rk, 
    \FFI\CData|object|string|null $ptr
 ): void

Free pointer returned by librdkafka.

This is typically an abstraction for the free(3) call and makes sure the application can use the same memory allocator as librdkafka for freeing pointers returned by librdkafka.

In standard setups it is usually not necessary to use this interface rather than the free(3) function.

rk must be set for memory returned by APIs that take an rk argument, for other APIs pass NULL for rk.

Remarks
rd_kafka_mem_free() must only be used for pointers returned by APIs that explicitly mention using this function for freeing.
Parameters
rk \FFI\CData|null rd_kafka_t*
ptr \FFI\CData|object|string|null void*

rd_kafka_queue_new()

public static rd_kafka_queue_new ( 
    \FFI\CData|null $rk
 ): \FFI\CData|null

Create a new message queue.

See rd_kafka_consume_start_queue(), rd_kafka_consume_queue(), et.al.

Parameters
rk \FFI\CData|null rd_kafka_t* - )
Returns
\FFI\CData|null rd_kafka_queue_t*
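
A minimal usage sketch (hypothetical: it assumes the trait is exposed through a \RdKafka\FFI\Library class and that $rk is an existing rd_kafka_t* handle):

```php
use RdKafka\FFI\Library;

// Create a dedicated message queue on an existing rd_kafka_t* handle.
$queue = Library::rd_kafka_queue_new($rk);

// ... route partition messages here via rd_kafka_consume_start_queue() ...

// Destroying the queue purges any still-enqueued messages.
Library::rd_kafka_queue_destroy($queue);
```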

rd_kafka_queue_destroy()

public static rd_kafka_queue_destroy ( 
    \FFI\CData|null $rkqu
 ): void

Destroy a queue, purging all of its enqueued messages.

Parameters
rkqu \FFI\CData|null rd_kafka_queue_t* - )

rd_kafka_queue_get_main()

public static rd_kafka_queue_get_main ( 
    \FFI\CData|null $rk
 ): \FFI\CData|null

Use rd_kafka_queue_destroy() to lose the reference.

Parameters
rk \FFI\CData|null rd_kafka_t* - )
Returns
\FFI\CData|null rd_kafka_queue_t* - a reference to the main librdkafka event queue. This is the queue served by rd_kafka_poll().

rd_kafka_queue_get_consumer()

public static rd_kafka_queue_get_consumer ( 
    \FFI\CData|null $rk
 ): \FFI\CData|null

Use rd_kafka_queue_destroy() to lose the reference.

Remarks
rd_kafka_queue_destroy() MUST be called on this queue prior to calling rd_kafka_consumer_close().
Polling the returned queue counts as a consumer poll, and will reset the timer for max.poll.interval.ms. If this queue is forwarded to a "destq", polling destq also counts as a consumer poll (this works for any number of forwards). However, even if this queue is unforwarded or forwarded elsewhere, polling destq will continue to count as a consumer poll.
Parameters
rk \FFI\CData|null rd_kafka_t* - )
Returns
\FFI\CData|null rd_kafka_queue_t* - a reference to the librdkafka consumer queue. This is the queue served by rd_kafka_consumer_poll().

rd_kafka_queue_get_partition()

public static rd_kafka_queue_get_partition ( 
    \FFI\CData|null $rk, 
    string|null $topic, 
    int|null $partition
 ): \FFI\CData|null

Use rd_kafka_queue_destroy() to lose the reference.

Remarks
rd_kafka_queue_destroy() MUST be called on this queue
This function only works on consumers.
Parameters
rk \FFI\CData|null rd_kafka_t*
topic string|null const char*
partition int|null int32_t
Returns
\FFI\CData|null rd_kafka_queue_t* - a reference to the partition’s queue, or NULL if partition is invalid.

rd_kafka_queue_get_background()

public static rd_kafka_queue_get_background ( 
    \FFI\CData|null $rk
 ): \FFI\CData|null

The background thread queue provides the application with an automatically polled queue that triggers the event callback in a background thread. This background thread is completely managed by librdkafka.

The background thread queue is automatically created if a generic event handler callback is configured with rd_kafka_conf_set_background_event_cb() or if rd_kafka_queue_get_background() is called.

The background queue is polled and served by librdkafka and MUST NOT be polled, forwarded, or otherwise managed by the application; it may only be used as the destination queue passed to queue-enabled APIs, such as the Admin API.

Use rd_kafka_queue_destroy() to lose the reference.

Warning
The background queue MUST NOT be read from (polled, consumed, etc), or forwarded from.
Parameters
rk \FFI\CData|null rd_kafka_t* - )
Returns
\FFI\CData|null rd_kafka_queue_t* - a reference to the background thread queue, or NULL if the background queue is not enabled.

rd_kafka_queue_forward()

public static rd_kafka_queue_forward ( 
    \FFI\CData|null $src, 
    \FFI\CData|null $dst
 ): void

Forward/re-route queue src to dst. If dst is NULL the forwarding is removed.

The internal refcounts for both queues are increased.

Remarks
Regardless of whether dst is NULL or not, after calling this function src will not forward its fetch queue to the consumer queue.
Parameters
src \FFI\CData|null rd_kafka_queue_t*
dst \FFI\CData|null rd_kafka_queue_t*

rd_kafka_set_log_queue()

public static rd_kafka_set_log_queue ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $rkqu
 ): int

Forward librdkafka logs (and debug) to the specified queue for serving with one of the ..poll() calls.

This allows an application to serve log callbacks (log_cb) in its thread of choice.

Remarks
The configuration property log.queue MUST also be set to true.
librdkafka maintains its own reference to the provided queue.
Parameters
rk \FFI\CData|null rd_kafka_t* - Client instance.
rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to forward logs to. If the value is NULL the logs are forwarded to the main queue.
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on error, eg RD_KAFKA_RESP_ERR__NOT_CONFIGURED when log.queue is not set to true.

rd_kafka_queue_length()

public static rd_kafka_queue_length ( 
    \FFI\CData|null $rkqu
 ): int|null
Parameters
rkqu \FFI\CData|null rd_kafka_queue_t* - )
Returns
int|null size_t - the current number of elements in queue.

rd_kafka_queue_io_event_enable()

public static rd_kafka_queue_io_event_enable ( 
    \FFI\CData|null $rkqu, 
    int|null $fd, 
    \FFI\CData|object|string|null $payload, 
    int|null $size
 ): void

Enable IO event triggering for queue.

To ease integration with IO based polling loops this API allows an application to create a separate file-descriptor that librdkafka will write payload (of size size) to whenever a new element is enqueued on a previously empty queue.

To remove event triggering call with fd = -1.

librdkafka will maintain a copy of the payload.

Remarks
IO and callback event triggering are mutually exclusive.
When using forwarded queues the IO event must only be enabled on the final forwarded-to (destination) queue.
The file-descriptor/socket must be set to non-blocking.
Parameters
rkqu \FFI\CData|null rd_kafka_queue_t*
fd int|null int
payload \FFI\CData|object|string|null const void*
size int|null size_t

rd_kafka_queue_cb_event_enable()

public static rd_kafka_queue_cb_event_enable ( 
    \FFI\CData|null $rkqu, 
    \FFI\CData|\Closure $event_cb, 
    \FFI\CData|object|string|null $opaque
 ): void

Enable callback event triggering for queue.

The callback will be called from an internal librdkafka thread when a new element is enqueued on a previously empty queue.

To remove event triggering call with event_cb = NULL.

The qev_opaque is passed to the callback's qev_opaque argument.

Remarks
IO and callback event triggering are mutually exclusive.
Since the callback may be triggered from internal librdkafka threads, the application must not perform any prolonged work in the callback, or call any librdkafka APIs (for the same rd_kafka_t handle).
Parameters
rkqu \FFI\CData|null rd_kafka_queue_t*
event_cb \FFI\CData|\Closure void (*)(rd_kafka_t*, void*)
opaque \FFI\CData|object|string|null void*

rd_kafka_consume_start()

public static rd_kafka_consume_start ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $offset
 ): int|null

Start consuming messages for topic rkt and partition at offset offset which may either be an absolute (0..N) or one of the logical offsets:

  • RD_KAFKA_OFFSET_BEGINNING
  • RD_KAFKA_OFFSET_END
  • RD_KAFKA_OFFSET_STORED
  • RD_KAFKA_OFFSET_TAIL

rdkafka will attempt to keep queued.min.messages (config property) messages in the local queue by repeatedly fetching batches of messages from the broker until the threshold is reached.

The application shall use one of the rd_kafka_consume*() functions to consume messages from the local queue, each kafka message being represented as a rd_kafka_message_t * object.

rd_kafka_consume_start() must not be called multiple times for the same topic and partition without stopping consumption first with rd_kafka_consume_stop().

Use rd_kafka_errno2err() to convert system errno to rd_kafka_resp_err_t.

Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
offset int|null int64_t
Returns
int|null int - 0 on success or -1 on error in which case errno is set accordingly:
  • EBUSY - Conflicts with an existing or previous subscription (RD_KAFKA_RESP_ERR__CONFLICT)
  • EINVAL - Invalid offset, or incomplete configuration (lacking group.id) (RD_KAFKA_RESP_ERR__INVALID_ARG)
  • ESRCH - requested partition is invalid. (RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION)
  • ENOENT - topic is unknown in the Kafka cluster. (RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC)
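
A minimal error-handling sketch for the errno-style return above (hypothetical: it assumes a \RdKafka\FFI\Library class exposing this trait, an existing $rkt topic handle, and the generated RD_KAFKA_OFFSET_STORED constant):

```php
use RdKafka\FFI\Library;

// Start consuming partition 0 at the stored offset.
if (Library::rd_kafka_consume_start($rkt, 0, RD_KAFKA_OFFSET_STORED) === -1) {
    // Legacy API: fetch the thread-local error code and render it.
    $err = Library::rd_kafka_last_error();
    throw new \RuntimeException(Library::rd_kafka_err2str($err));
}
```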

rd_kafka_consume_start_queue()

public static rd_kafka_consume_start_queue ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $offset, 
    \FFI\CData|null $rkqu
 ): int|null

Same as rd_kafka_consume_start() but re-routes incoming messages to the provided queue rkqu (which must have been previously allocated with rd_kafka_queue_new()).

The application must use one of the rd_kafka_consume_*_queue() functions to receive fetched messages.

rd_kafka_consume_start_queue() must not be called multiple times for the same topic and partition without stopping consumption first with rd_kafka_consume_stop(). rd_kafka_consume_start() and rd_kafka_consume_start_queue() must not be combined for the same topic and partition.

Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
offset int|null int64_t
rkqu \FFI\CData|null rd_kafka_queue_t*
Returns
int|null int

rd_kafka_consume_stop()

public static rd_kafka_consume_stop ( 
    \FFI\CData|null $rkt, 
    int|null $partition
 ): int|null

Stop consuming messages for topic rkt and partition, purging all messages currently in the local queue.

NOTE: To enforce synchronisation this call will block until the internal fetcher has terminated and offsets are committed to the configured storage method.

The application needs to stop all consumers before calling rd_kafka_destroy() on the main object handle.

Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
Returns
int|null int - 0 on success or -1 on error (see errno).

rd_kafka_seek()

public static rd_kafka_seek ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $offset, 
    int|null $timeout_ms
 ): int

Seek consumer for topic+partition to offset which is either an absolute or logical offset.

If timeout_ms is specified (not 0) the seek call will wait this long for the consumer to update its fetcher state for the given partition with the new offset. This guarantees that no previously fetched messages for the old offset (or fetch position) will be passed to the application.

If the timeout is reached the internal state will be unknown to the caller and this function returns RD_KAFKA_RESP_ERR__TIMED_OUT.

If timeout_ms is 0 it will initiate the seek but return immediately without any error reporting (e.g., async).

This call will purge all pre-fetched messages for the given partition, which may be up to queued.max.message.kbytes in size. Repeated use of seek may thus lead to increased network usage as messages are re-fetched from the broker.

Remarks
Seek must only be performed for already assigned/consumed partitions; use rd_kafka_assign() (et.al) to set the initial starting offset for a new assignment.
Deprecated:
Use rd_kafka_seek_partitions().
Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
offset int|null int64_t
timeout_ms int|null int
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success else an error code.

rd_kafka_consume()

public static rd_kafka_consume ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $timeout_ms
 ): \FFI\CData|null

Consume a single message from topic rkt and partition.

timeout_ms is the maximum amount of time to wait for a message to be received. The consumer must have been previously started with rd_kafka_consume_start().

Errors (when returning NULL):

  • ETIMEDOUT - timeout_ms was reached with no new messages fetched.
  • ENOENT - rkt + partition is unknown. (no prior rd_kafka_consume_start() call)

NOTE: The returned message's ..->err must be checked for errors.
NOTE: ..->err == RD_KAFKA_RESP_ERR__PARTITION_EOF signals that the end of the partition has been reached, which should typically not be considered an error. The application should handle this case (e.g., ignore).

Remarks
on_consume() interceptors may be called from this function prior to passing message to application.
Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
timeout_ms int|null int
Returns
\FFI\CData|null rd_kafka_message_t* - a message object on success or NULL on error. The message object must be destroyed with rd_kafka_message_destroy() when the application is done with it.
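
A consume-loop sketch showing the mandated ->err check (hypothetical: it assumes a \RdKafka\FFI\Library class exposing this trait, an existing $rkt handle with consumption already started, and the generated error-code constants):

```php
use RdKafka\FFI\Library;

while (true) {
    $msg = Library::rd_kafka_consume($rkt, 0, 1000); // partition 0, 1 s timeout
    if ($msg === null) {
        continue; // ETIMEDOUT or ENOENT - check rd_kafka_last_error()
    }
    if ($msg->err === RD_KAFKA_RESP_ERR__PARTITION_EOF) {
        // end of partition reached - typically not an error, keep polling
    } elseif ($msg->err !== RD_KAFKA_RESP_ERR_NO_ERROR) {
        echo Library::rd_kafka_err2str($msg->err), PHP_EOL;
    } else {
        // proper message: $msg->payload ($msg->len bytes), $msg->offset, ...
    }
    Library::rd_kafka_message_destroy($msg); // always destroy when done
}
```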

rd_kafka_consume_batch()

public static rd_kafka_consume_batch ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $timeout_ms, 
    \FFI\CData|null $rkmessages, 
    int|null $rkmessages_size
 ): int|null

Consume up to rkmessages_size messages from topic rkt and partition, putting a pointer to each message in the application-provided array rkmessages (of size rkmessages_size entries).

rd_kafka_consume_batch() provides higher throughput performance than rd_kafka_consume().

timeout_ms is the maximum amount of time to wait for all of rkmessages_size messages to be put into rkmessages. If no messages were available within the timeout period this function returns 0 and rkmessages remains untouched. This differs somewhat from rd_kafka_consume().

The message objects must be destroyed with rd_kafka_message_destroy() when the application is done with them.

See also
rd_kafka_consume()
Remarks
on_consume() interceptors may be called from this function prior to passing message to application.
Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
timeout_ms int|null int
rkmessages \FFI\CData|null rd_kafka_message_t**
rkmessages_size int|null size_t
Returns
int|null ssize_t - the number of messages added to rkmessages, or -1 on error (same error codes as for rd_kafka_consume()).
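
A batch-consume sketch (hypothetical: it assumes a \RdKafka\FFI\Library class exposing this trait, and that rd_kafka_message_t is declared in the FFI scope returned by getFFI()):

```php
use RdKafka\FFI\Library;

$max = 100;
// Allocate the application-provided array of message pointers.
$rkmessages = Library::getFFI()->new("rd_kafka_message_t*[$max]");

// Wait up to 1000 ms for up to $max messages from partition 0.
$n = Library::rd_kafka_consume_batch($rkt, 0, 1000, $rkmessages, $max);
for ($i = 0; $i < $n; $i++) {
    // ... process $rkmessages[$i]->payload ...
    Library::rd_kafka_message_destroy($rkmessages[$i]); // destroy each one
}
```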

rd_kafka_consume_callback()

public static rd_kafka_consume_callback ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $timeout_ms, 
    \FFI\CData|\Closure $consume_cb, 
    \FFI\CData|object|string|null $opaque
 ): int|null

Consumes messages from topic rkt and partition, calling the provided callback for each consumed message.

rd_kafka_consume_callback() provides higher throughput performance than both rd_kafka_consume() and rd_kafka_consume_batch().

timeout_ms is the maximum amount of time to wait for one or more messages to arrive.

The provided consume_cb function is called for each message; the application MUST NOT call rd_kafka_message_destroy() on the provided rkmessage.

The opaque argument is passed to the consume_cb as opaque.

See also
rd_kafka_consume()
Remarks
on_consume() interceptors may be called from this function prior to passing message to application.
This function will return early if a transaction control message is received, these messages are not exposed to the application but still enqueued on the consumer queue to make sure their offsets are stored.
Deprecated:
This API is deprecated and subject for future removal. There is no new callback-based consume interface, use the poll/queue based alternatives.
Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
timeout_ms int|null int
consume_cb \FFI\CData|\Closure void (*)(rd_kafka_message_t*, void*)
opaque \FFI\CData|object|string|null void*
Returns
int|null int - the number of messages processed or -1 on error.

rd_kafka_consume_queue()

public static rd_kafka_consume_queue ( 
    \FFI\CData|null $rkqu, 
    int|null $timeout_ms
 ): \FFI\CData|null

Consume from queue.

See also
rd_kafka_consume()
Parameters
rkqu \FFI\CData|null rd_kafka_queue_t*
timeout_ms int|null int
Returns
\FFI\CData|null rd_kafka_message_t*

rd_kafka_consume_batch_queue()

public static rd_kafka_consume_batch_queue ( 
    \FFI\CData|null $rkqu, 
    int|null $timeout_ms, 
    \FFI\CData|null $rkmessages, 
    int|null $rkmessages_size
 ): int|null

Consume batch of messages from queue.

See also
rd_kafka_consume_batch()
Parameters
rkqu \FFI\CData|null rd_kafka_queue_t*
timeout_ms int|null int
rkmessages \FFI\CData|null rd_kafka_message_t**
rkmessages_size int|null size_t
Returns
int|null ssize_t

rd_kafka_consume_callback_queue()

public static rd_kafka_consume_callback_queue ( 
    \FFI\CData|null $rkqu, 
    int|null $timeout_ms, 
    \FFI\CData|\Closure $consume_cb, 
    \FFI\CData|object|string|null $opaque
 ): int|null

Consume multiple messages from queue with callback.

See also
rd_kafka_consume_callback()
Deprecated:
This API is deprecated and subject for future removal. There is no new callback-based consume interface, use the poll/queue based alternatives.
Parameters
rkqu \FFI\CData|null rd_kafka_queue_t*
timeout_ms int|null int
consume_cb \FFI\CData|\Closure void (*)(rd_kafka_message_t*, void*)
opaque \FFI\CData|object|string|null void*
Returns
int|null int

rd_kafka_offset_store()

public static rd_kafka_offset_store ( 
    \FFI\CData|null $rkt, 
    int|null $partition, 
    int|null $offset
 ): int

Store offset offset + 1 for topic rkt partition partition.

The offset + 1 will be committed (written) to broker (or file) according to auto.commit.interval.ms or manual offset-less commit().

Deprecated:
This API lacks support for partition leader epochs, which makes it at risk for unclean leader election log truncation issues. Use rd_kafka_offsets_store() and rd_kafka_offset_store_message() instead.
Warning
This method may only be called for partitions that are currently assigned. Non-assigned partitions will fail with RD_KAFKA_RESP_ERR__STATE. Since v1.9.0.
Avoid storing offsets after calling rd_kafka_seek() (et.al) as this may later interfere with resuming a paused partition; instead, store offsets prior to calling seek.
Remarks
enable.auto.offset.store must be set to "false" when using this API.
Parameters
rkt \FFI\CData|null rd_kafka_topic_t*
partition int|null int32_t
offset int|null int64_t
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on error.

rd_kafka_offsets_store()

public static rd_kafka_offsets_store ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $offsets
 ): int

Store offsets for next auto-commit for one or more partitions.

The offset will be committed (written) to the offset store according to auto.commit.interval.ms or manual offset-less commit().

Per-partition success/error status is propagated through each partition's .err field for all return values (even NO_ERROR), except INVALID_ARG.

Warning
This method may only be called for partitions that are currently assigned. Non-assigned partitions will fail with RD_KAFKA_RESP_ERR__STATE. Since v1.9.0.
Avoid storing offsets after calling rd_kafka_seek() (et.al) as this may later interfere with resuming a paused partition; instead, store offsets prior to calling seek.
Remarks
The .offset field is stored as is, it will NOT be + 1.
enable.auto.offset.store must be set to "false" when using this API.
The leader epoch, if set, will be used to fence outdated partition leaders. See rd_kafka_topic_partition_set_leader_epoch().
Parameters
rk \FFI\CData|null rd_kafka_t*
offsets \FFI\CData|null rd_kafka_topic_partition_list_t*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on (partial) success, or RD_KAFKA_RESP_ERR__INVALID_ARG if enable.auto.offset.store is true, or RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION or RD_KAFKA_RESP_ERR__STATE if none of the offsets could be stored.
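
Because the .offset field is stored as is (not + 1), the application must add 1 itself. A sketch, assuming enable.auto.offset.store=false, a hypothetical \RdKafka\FFI\Library class exposing this trait, and a just-processed message in $msg:

```php
use RdKafka\FFI\Library;

$offsets = Library::rd_kafka_topic_partition_list_new(1);
$tp = Library::rd_kafka_topic_partition_list_add($offsets, 'mytopic', 0);
// Store the position to resume from: last processed message + 1.
$tp->offset = $msg->offset + 1;

$err = Library::rd_kafka_offsets_store($rk, $offsets);
// Check both $err and each partition's ->err for per-partition status.
Library::rd_kafka_topic_partition_list_destroy($offsets);
```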

rd_kafka_subscribe()

public static rd_kafka_subscribe ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $topics
 ): int

Subscribe to topic set using balanced consumer groups.

Wildcard (regex) topics are supported: any topic name in the topics list that is prefixed with "^" will be regex-matched to the full list of topics in the cluster and matching topics will be added to the subscription list.

The full topic list is retrieved every topic.metadata.refresh.interval.ms to pick up new or deleted topics that match the subscription. If there is any change to the matched topics the consumer will immediately rejoin the group with the updated set of subscribed topics.

Regex and full topic names can be mixed in topics.

Remarks
Only the .topic field is used in the supplied topics list, all other fields are ignored.
subscribe() is an asynchronous method which returns immediately: background threads will (re)join the group, wait for group rebalance, issue any registered rebalance_cb, assign() the assigned partitions, and then start fetching messages. This cycle may take up to session.timeout.ms * 2 or more to complete.
After this call returns, a consumer error will be raised through rd_kafka_consumer_poll() (et.al.) for each unavailable topic in topics, with the rd_kafka_message_t.err field set to RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART for non-existent topics and RD_KAFKA_RESP_ERR_TOPIC_AUTHORIZATION_FAILED for unauthorized topics. The subscribe function itself is asynchronous and will not return an error on unavailable topics.
Parameters
rk \FFI\CData|null rd_kafka_t*
topics \FFI\CData|null const rd_kafka_topic_partition_list_t*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, RD_KAFKA_RESP_ERR__INVALID_ARG if the list is empty, contains invalid topics or regexes or duplicate entries, or RD_KAFKA_RESP_ERR__FATAL if the consumer has raised a fatal error.
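
A subscription sketch mixing a literal name and a regex (hypothetical: it assumes a \RdKafka\FFI\Library class exposing this trait and the generated RD_KAFKA_PARTITION_UA constant; topic names are illustrative):

```php
use RdKafka\FFI\Library;

// Mix a literal topic name and a "^"-prefixed regex in one subscription.
$topics = Library::rd_kafka_topic_partition_list_new(2);
Library::rd_kafka_topic_partition_list_add($topics, 'orders', RD_KAFKA_PARTITION_UA);
Library::rd_kafka_topic_partition_list_add($topics, '^telemetry\..*', RD_KAFKA_PARTITION_UA);

$err = Library::rd_kafka_subscribe($rk, $topics);
// Only the .topic field is used, so the list can be freed right away.
Library::rd_kafka_topic_partition_list_destroy($topics);
```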

rd_kafka_unsubscribe()

public static rd_kafka_unsubscribe ( 
    \FFI\CData|null $rk
 ): int

Unsubscribe from the current subscription set.

Parameters
rk \FFI\CData|null rd_kafka_t*
Returns
int rd_kafka_resp_err_t

rd_kafka_subscription()

public static rd_kafka_subscription ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $topics
 ): int

Returns the current topic subscription.

Remarks
The application is responsible for calling rd_kafka_topic_partition_list_destroy on the returned list.
Parameters
rk \FFI\CData|null rd_kafka_t*
topics \FFI\CData|null rd_kafka_topic_partition_list_t**
Returns
int rd_kafka_resp_err_t - An error code on failure, otherwise topics is updated to point to a newly allocated topic list (possibly empty).

rd_kafka_consumer_poll()

public static rd_kafka_consumer_poll ( 
    \FFI\CData|null $rk, 
    int|null $timeout_ms
 ): \FFI\CData|null

Poll the consumer for messages or events.

Will block for at most timeout_ms milliseconds.

Remarks
An application should make sure to call consumer_poll() at regular intervals, even if no messages are expected, to serve any queued callbacks waiting to be called. This is especially important when a rebalance_cb has been registered as it needs to be called and handled properly to synchronize internal consumer state.
on_consume() interceptors may be called from this function prior to passing message to application.
When subscribing to topics the application must call poll at least every max.poll.interval.ms to remain a member of the consumer group.

Noteworthy errors returned in ->err:

  • RD_KAFKA_RESP_ERR__MAX_POLL_EXCEEDED - application failed to call poll within max.poll.interval.ms.
See also
rd_kafka_message_t
Parameters
rk \FFI\CData|null rd_kafka_t*
timeout_ms int|null int
Returns
\FFI\CData|null rd_kafka_message_t* - A message object which is a proper message if ->err is RD_KAFKA_RESP_ERR_NO_ERROR, or an event or error for any other value.
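
A poll-loop sketch following the remarks above (hypothetical: it assumes a \RdKafka\FFI\Library class exposing this trait and the generated error-code constants):

```php
use RdKafka\FFI\Library;

$running = true;
while ($running) {
    $msg = Library::rd_kafka_consumer_poll($rk, 100); // block at most 100 ms
    if ($msg === null) {
        continue; // timeout: nothing to serve this round
    }
    if ($msg->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
        // proper message: $msg->payload, $msg->len, $msg->offset, ...
    } else {
        // event or error, e.g. RD_KAFKA_RESP_ERR__MAX_POLL_EXCEEDED
    }
    Library::rd_kafka_message_destroy($msg);
}
```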

rd_kafka_consumer_close()

public static rd_kafka_consumer_close ( 
    \FFI\CData|null $rk
 ): int

Close the consumer.

This call will block until the consumer has revoked its assignment, calling the rebalance_cb if it is configured, committed offsets to broker, and left the consumer group (if applicable). The maximum blocking time is roughly limited to session.timeout.ms.

Remarks
The application still needs to call rd_kafka_destroy() after this call finishes to clean up the underlying handle resources.
Parameters
rk \FFI\CData|null rd_kafka_t* - )
Returns
int rd_kafka_resp_err_t - An error code indicating if the consumer close was successful or not. RD_KAFKA_RESP_ERR__FATAL is returned if the consumer has raised a fatal error.

rd_kafka_assign()

public static rd_kafka_assign ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $partitions
 ): int

Atomic assignment of partitions to consume.

The new partitions will replace the existing assignment.

A zero-length partitions list will be treated as a valid, albeit empty, assignment, and maintain internal state, while a NULL value for partitions will reset and clear the internal state.

When used from a rebalance callback, the application should pass the partition list passed to the callback (or a copy of it) even if the list is empty (i.e. should not pass NULL in this case) so as to maintain internal join state. This is not strictly required - the application may adjust the assignment provided by the group. However, this is rarely useful in practice.

Parameters
rk \FFI\CData|null rd_kafka_t*
partitions \FFI\CData|null const rd_kafka_topic_partition_list_t*
Returns
int rd_kafka_resp_err_t - An error code indicating if the new assignment was applied or not. RD_KAFKA_RESP_ERR__FATAL is returned if the consumer has raised a fatal error.

rd_kafka_assignment()

public static rd_kafka_assignment ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $partitions
 ): int

Returns the current partition assignment as set by rd_kafka_assign() or rd_kafka_incremental_assign().

Remarks
The application is responsible for calling rd_kafka_topic_partition_list_destroy on the returned list.
This assignment represents the partitions assigned through the assign functions and not the partitions assigned to this consumer instance by the consumer group leader. They are usually the same following a rebalance but not necessarily since an application is free to assign any partitions.
Parameters
rk \FFI\CData|null rd_kafka_t*
partitions \FFI\CData|null rd_kafka_topic_partition_list_t**
Returns
int rd_kafka_resp_err_t - An error code on failure, otherwise partitions is updated to point to a newly allocated partition list (possibly empty).

rd_kafka_commit()

public static rd_kafka_commit ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $offsets, 
    int|null $async
 ): int

Commit offsets on broker for the provided list of partitions.

offsets should contain topic, partition, offset and possibly metadata. The offset should be the offset where consumption will resume, i.e., the last processed offset + 1. If offsets is NULL the current partition assignment will be used instead.

If async is false this operation will block until the broker offset commit is done, returning the resulting success or error code.

If a rd_kafka_conf_set_offset_commit_cb() offset commit callback has been configured the callback will be enqueued for a future call to rd_kafka_poll(), rd_kafka_consumer_poll() or similar.

Parameters
rk \FFI\CData|null rd_kafka_t*
offsets \FFI\CData|null const rd_kafka_topic_partition_list_t*
async int|null int
Returns
int rd_kafka_resp_err_t - An error code indicating if the commit was successful, or successfully scheduled if asynchronous, or failed. RD_KAFKA_RESP_ERR__FATAL is returned if the consumer has raised a fatal error.
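
A synchronous commit sketch illustrating the "last processed offset + 1" rule (hypothetical: it assumes a \RdKafka\FFI\Library class exposing this trait; topic name and $lastProcessedOffset are illustrative):

```php
use RdKafka\FFI\Library;

// Commit the position to resume from: last processed offset + 1.
$offsets = Library::rd_kafka_topic_partition_list_new(1);
$tp = Library::rd_kafka_topic_partition_list_add($offsets, 'orders', 0);
$tp->offset = $lastProcessedOffset + 1;

$err = Library::rd_kafka_commit($rk, $offsets, 0); // async=0: block until done
Library::rd_kafka_topic_partition_list_destroy($offsets);
```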

rd_kafka_commit_message()

public static rd_kafka_commit_message ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $rkmessage, 
    int|null $async
 ): int

Commit message's offset on broker for the message's partition. The committed offset is the message's offset + 1.

See also
rd_kafka_commit
Parameters
rk \FFI\CData|null rd_kafka_t*
rkmessage \FFI\CData|null const rd_kafka_message_t*
async int|null int
Returns
int rd_kafka_resp_err_t

rd_kafka_commit_queue()

public static rd_kafka_commit_queue ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $offsets, 
    \FFI\CData|null $rkqu, 
    \FFI\CData|\Closure $cb, 
    \FFI\CData|object|string|null $opaque
 ): int

Commit offsets on broker for the provided list of partitions.

See rd_kafka_commit for offsets semantics.

The result of the offset commit will be posted on the provided rkqu queue.

If the application uses one of the poll APIs (rd_kafka_poll(), rd_kafka_consumer_poll(), rd_kafka_queue_poll(), ..) to serve the queue the cb callback is required.

The opaque argument is passed to the callback as its commit_opaque argument, or if using the event API the callback is ignored and the offset commit result will be returned as an RD_KAFKA_EVENT_COMMIT event and the commit_opaque value will be available with rd_kafka_event_opaque().

If rkqu is NULL a temporary queue will be created and the callback will be served by this call.

See also
rd_kafka_commit()
rd_kafka_conf_set_offset_commit_cb()
Parameters
rk \FFI\CData|null rd_kafka_t*
offsets \FFI\CData|null const rd_kafka_topic_partition_list_t*
rkqu \FFI\CData|null rd_kafka_queue_t*
cb \FFI\CData|\Closure void (*)(rd_kafka_t*, rd_kafka_resp_err_t, rd_kafka_topic_partition_list_t*, void*)
opaque \FFI\CData|object|string|null void*
Returns
int rd_kafka_resp_err_t

rd_kafka_committed()

public static rd_kafka_committed ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $partitions, 
    int|null $timeout_ms
 ): int

Retrieve committed offsets for topics+partitions.

The offset field of each requested partition will either be set to the stored offset or to RD_KAFKA_OFFSET_INVALID in case there was no stored offset for that partition.

Committed offsets will be returned according to the isolation.level configuration property: if set to read_committed (default) then only stable offsets for fully committed transactions will be returned, while read_uncommitted may return offsets for not yet committed transactions.

Parameters
rk \FFI\CData|null rd_kafka_t*
partitions \FFI\CData|null rd_kafka_topic_partition_list_t*
timeout_ms int|null int
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, in which case the offset or err field of each partition's element is filled in with the stored offset or a partition-specific error. Else returns an error code.

rd_kafka_position()

public static rd_kafka_position ( 
    \FFI\CData|null $rk, 
    \FFI\CData|null $partitions
 ): int

Retrieve current positions (offsets) for topics+partitions.

The offset field of each requested partition will be set to the offset of the last consumed message + 1, or RD_KAFKA_OFFSET_INVALID in case there was no previous message.

Remarks
In this context the last consumed message is the offset consumed by the current librdkafka instance and, in case of rebalancing, not necessarily the last message fetched from the partition.
Parameters
rk \FFI\CData|null rd_kafka_t*
partitions \FFI\CData|null rd_kafka_topic_partition_list_t*
Returns
int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, in which case the offset or err field of each partition's element is filled in with the current position or a partition-specific error. Else returns an error code.

    rd_kafka_produce()

    public static rd_kafka_produce ( 
        \FFI\CData|null $rkt, 
        int|null $partition, 
        int|null $msgflags, 
        \FFI\CData|object|string|null $payload, 
        int|null $len, 
        \FFI\CData|object|string|null $key, 
        int|null $keylen, 
        \FFI\CData|object|string|null $msg_opaque
     ): int|null
    

    Produce and send a single message to broker.

    rkt is the target topic which must have been previously created with rd_kafka_topic_new().

    rd_kafka_produce() is an asynchronous non-blocking API. See rd_kafka_conf_set_dr_msg_cb on how to setup a callback to be called once the delivery status (success or failure) is known. The delivery report is triggered by the application calling rd_kafka_poll() (at regular intervals) or rd_kafka_flush() (at termination).

    Since producing is asynchronous, you should call rd_kafka_flush() before you destroy the producer. Otherwise, any outstanding messages will be silently discarded.

    When temporary errors occur, librdkafka automatically retries producing the messages. Retries are triggered after retry.backoff.ms and when the leader broker for the given partition is available. Otherwise, librdkafka falls back to polling the topic metadata to monitor when a new leader is elected (see the topic.metadata.refresh.fast.interval.ms and topic.metadata.refresh.interval.ms configuration properties) and then retries. A delivery error occurs if the message could not be produced within message.timeout.ms.

    See the "Message reliability" chapter in INTRODUCTION.md for more information.

    partition is the target partition, either:

    • RD_KAFKA_PARTITION_UA (unassigned) for automatic partitioning using the topic's partitioner function, or
    • a fixed partition (0..N)

    msgflags is zero or more of the following flags OR:ed together:

    • RD_KAFKA_MSG_F_BLOCK - block the produce*() call if queue.buffering.max.messages or queue.buffering.max.kbytes are exceeded. Messages are considered in-queue from the point they are accepted by produce() until their corresponding delivery report callback/event returns. It is thus a requirement to call rd_kafka_poll() (or equiv.) from a separate thread when F_BLOCK is used.
    • RD_KAFKA_MSG_F_FREE - rdkafka will free(3) the payload when it is done with it.
    • RD_KAFKA_MSG_F_COPY - the payload data will be copied and the payload pointer will not be used by rdkafka after the call returns.
    • RD_KAFKA_MSG_F_PARTITION - produce_batch() will honour the per-message partition, either set manually or by the configured partitioner.

    .._F_FREE and .._F_COPY are mutually exclusive. If neither of these are set, the caller must ensure that the memory backing payload remains valid and is not modified or reused until the delivery callback is invoked. Other buffers passed to rd_kafka_produce() don't have this restriction on reuse, i.e. the memory backing the key or the topic name may be reused as soon as rd_kafka_produce() returns.

    If the function returns -1 and RD_KAFKA_MSG_F_FREE was specified, then the memory associated with the payload is still the caller's responsibility.

    payload is the message payload of size len bytes.

    key is an optional message key of size keylen bytes, if non-NULL it will be passed to the topic partitioner as well as be sent with the message to the broker and passed on to the consumer.

    msg_opaque is an optional application-provided per-message opaque pointer that will be provided in the message's delivery report callback (dr_msg_cb or dr_cb) and the rd_kafka_message_t _private field.

    Remarks
    on_send() and on_acknowledgement() interceptors may be called from this function. on_acknowledgement() will only be called if the message fails partitioning.
    If the producer is transactional (transactional.id is configured) producing is only allowed during an on-going transaction, namely after rd_kafka_begin_transaction() has been called.
    See also
    Use rd_kafka_errno2err() to convert errno to rdkafka error code.
    Parameters
    rkt \FFI\CData|null rd_kafka_topic_t*
    partition int|null int32_t
    msgflags int|null int
    payload \FFI\CData|object|string|null void*
    len int|null size_t
    key \FFI\CData|object|string|null const void*
    keylen int|null size_t
    msg_opaque \FFI\CData|object|string|null void*
    Returns
    int|null int - 0 on success or -1 on error in which case errno is set accordingly:
    • ENOBUFS - maximum number of outstanding messages has been reached: "queue.buffering.max.messages" (RD_KAFKA_RESP_ERR__QUEUE_FULL)
    • EMSGSIZE - message is larger than configured max size: "message.max.bytes". (RD_KAFKA_RESP_ERR_MSG_SIZE_TOO_LARGE)
    • ESRCH - requested partition is unknown in the Kafka cluster. (RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION)
    • ENOENT - topic is unknown in the Kafka cluster. (RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC)
    • ECANCELED - fatal error has been raised on producer, see rd_kafka_fatal_error(), (RD_KAFKA_RESP_ERR__FATAL).
    • ENOEXEC - transactional state forbids producing (RD_KAFKA_RESP_ERR__STATE)
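
The asynchronous produce/poll cycle described above can be sketched in PHP. This is a hedged sketch: it assumes a wrapper class (hypothetically \RdKafka\FFI\Library) exposing the trait's static methods, a producer handle $rk, a topic handle $rkt from rd_kafka_topic_new(), and that the RD_KAFKA_* constants are defined by the binding.

```php
use RdKafka\FFI\Library;

$payload = 'hello';
$rc = Library::rd_kafka_produce(
    $rkt,
    RD_KAFKA_PARTITION_UA,  // let the topic partitioner choose
    RD_KAFKA_MSG_F_COPY,    // librdkafka copies the payload immediately
    $payload, strlen($payload),
    null, 0,                // no key
    null                    // no per-message opaque
);
if ($rc === -1) {
    // errno-style failure; translate via the legacy last-error API
    echo Library::rd_kafka_err2str(Library::rd_kafka_last_error()), PHP_EOL;
}

Library::rd_kafka_poll($rk, 0); // serve delivery report callbacks
```

Because delivery is asynchronous, rd_kafka_poll() (or rd_kafka_flush() at shutdown) must be called regularly or the delivery reports are never served.
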
      rd_kafka_producev()

      public static rd_kafka_producev ( 
          \FFI\CData|null $rk, 
          mixed $args
       ): int
      

      Produce and send a single message to broker.

      The message is defined by a va-arg list using rd_kafka_vtype_t tag tuples which must be terminated with a single RD_KAFKA_V_END.

      See also
      rd_kafka_produce, rd_kafka_produceva, RD_KAFKA_V_END
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      args mixed
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, else an error code as described in rd_kafka_produce(). RD_KAFKA_RESP_ERR__CONFLICT is returned if _V_HEADER and _V_HEADERS are mixed.

      rd_kafka_produce_batch()

      public static rd_kafka_produce_batch ( 
          \FFI\CData|null $rkt, 
          int|null $partition, 
          int|null $msgflags, 
          \FFI\CData|null $rkmessages, 
          int|null $message_cnt
       ): int|null
      

      Produce multiple messages.

      If partition is RD_KAFKA_PARTITION_UA the configured partitioner will be run for each message (slower), otherwise the messages will be enqueued to the specified partition directly (faster).

      The messages are provided in the array rkmessages of count message_cnt elements. The partition and msgflags are used for all provided messages.

      Honoured rkmessages[] fields are:

      • payload,len Message payload and length
      • key,key_len Optional message key
      • _private Message opaque pointer (msg_opaque)
      • err Will be set according to success or failure, see rd_kafka_produce() for possible error codes. Application only needs to check for errors if return value != message_cnt.
      Remarks
      If RD_KAFKA_MSG_F_PARTITION is set in msgflags, the .partition field of the rkmessages is used instead of partition.
      Remarks
      This interface does NOT support setting message headers on the provided rkmessages.
      Parameters
      rkt \FFI\CData|null rd_kafka_topic_t*
      partition int|null int32_t
      msgflags int|null int
      rkmessages \FFI\CData|null rd_kafka_message_t*
      message_cnt int|null int
      Returns
      int|null int - the number of messages successfully enqueued for producing.

      rd_kafka_flush()

      public static rd_kafka_flush ( 
          \FFI\CData|null $rk, 
          int|null $timeout_ms
       ): int
      

      Wait until all outstanding produce requests, et.al, are completed. This should typically be done prior to destroying a producer instance to make sure all queued and in-flight produce requests are completed before terminating.

      Remarks
      This function will call rd_kafka_poll() and thus trigger callbacks.
      The linger.ms time will be ignored for the duration of the call; queued messages will be sent to the broker as soon as possible.
      If RD_KAFKA_EVENT_DR has been enabled (through rd_kafka_conf_set_events()) this function will not call rd_kafka_poll() but instead wait for the librdkafka-handled message count to reach zero. This requires the application to serve the event queue in a separate thread. In this mode only messages are counted, not other types of queued events.
      See also
      rd_kafka_outq_len()
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      timeout_ms int|null int
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR__TIMED_OUT if timeout_ms was reached before all outstanding requests were completed, else RD_KAFKA_RESP_ERR_NO_ERROR
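
A typical producer shutdown combining flush and purge can be sketched as follows. This is a hedged sketch: it assumes a wrapper class (hypothetically \RdKafka\FFI\Library), an initialized producer handle $rk, and that the RD_KAFKA_PURGE_F_* constants are defined by the binding.

```php
use RdKafka\FFI\Library;

$err = Library::rd_kafka_flush($rk, 10000); // wait up to 10 s, serves callbacks
if ($err !== 0) {
    // RD_KAFKA_RESP_ERR__TIMED_OUT: messages are still queued or in flight.
    // Purge them, then serve the resulting _PURGE_* delivery reports.
    Library::rd_kafka_purge($rk, RD_KAFKA_PURGE_F_QUEUE | RD_KAFKA_PURGE_F_INFLIGHT);
    Library::rd_kafka_poll($rk, 100);
}
Library::rd_kafka_destroy($rk);
```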

      rd_kafka_purge()

      public static rd_kafka_purge ( 
          \FFI\CData|null $rk, 
          int|null $purge_flags
       ): int
      

      Purge messages currently handled by the producer instance.

      The application will need to call rd_kafka_poll() or rd_kafka_flush() afterwards to serve the delivery report callbacks of the purged messages.

      Messages purged from internal queues fail with the delivery report error code set to RD_KAFKA_RESP_ERR__PURGE_QUEUE, while purged messages that are in-flight to or from the broker will fail with the error code set to RD_KAFKA_RESP_ERR__PURGE_INFLIGHT.

      Warning
      Purging messages that are in-flight to or from the broker will ignore any subsequent acknowledgement for these messages received from the broker, effectively making it impossible for the application to know if the messages were successfully produced or not. This may result in duplicate messages if the application retries these messages at a later time.
      Remarks
      This call may block for a short time while background thread queues are purged.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      purge_flags int|null int - Tells which messages to purge and how.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, RD_KAFKA_RESP_ERR__INVALID_ARG if the purge flags are invalid or unknown, RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED if called on a non-producer client instance.

      rd_kafka_metadata()

      public static rd_kafka_metadata ( 
          \FFI\CData|null $rk, 
          int|null $all_topics, 
          \FFI\CData|null $only_rkt, 
          \FFI\CData|null $metadatap, 
          int|null $timeout_ms
       ): int
      

      Request Metadata from broker.

      Parameters:

      • all_topics if non-zero: request info about all topics in cluster, if zero: only request info about locally known topics.
      • only_rkt only request info about this topic
      • metadatap pointer to hold metadata result. The *metadatap pointer must be released with rd_kafka_metadata_destroy().
      • timeout_ms maximum response time before failing.
      Remarks
      Consumer: If all_topics is non-zero the Metadata response information may trigger a re-join if any subscribed topics have changed partition count or existence state.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      all_topics int|null int
      only_rkt \FFI\CData|null rd_kafka_topic_t*
      metadatap \FFI\CData|null const struct rd_kafka_metadata**
      timeout_ms int|null int
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success (in which case *metadatap will be set), else RD_KAFKA_RESP_ERR__TIMED_OUT on timeout or other error code on error.
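
Requesting and releasing metadata can be sketched as follows. This is a hedged sketch: it assumes a wrapper class (hypothetically \RdKafka\FFI\Library), a handle $rk, and that PHP's FFI accepts the pointer-to-pointer constructed this way.

```php
use RdKafka\FFI\Library;

// Out-parameter: metadatap must point at a metadata pointer.
$metadatap = Library::getFFI()->new('struct rd_kafka_metadata*');
$err = Library::rd_kafka_metadata(
    $rk,
    1,                      // all_topics: describe every topic in the cluster
    null,                   // only_rkt is ignored when all_topics is non-zero
    \FFI::addr($metadatap),
    5000                    // fail after 5 s without a response
);
if ($err === 0 /* RD_KAFKA_RESP_ERR_NO_ERROR */) {
    printf("brokers: %d, topics: %d\n",
        $metadatap[0]->broker_cnt, $metadatap[0]->topic_cnt);
    Library::rd_kafka_metadata_destroy($metadatap); // release *metadatap
}
```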

      rd_kafka_metadata_destroy()

      public static rd_kafka_metadata_destroy ( 
          \FFI\CData|\Closure $metadata
       ): void
      
      Parameters
      metadata \FFI\CData|\Closure const struct rd_kafka_metadata*

      rd_kafka_list_groups()

      public static rd_kafka_list_groups ( 
          \FFI\CData|null $rk, 
          string|null $group, 
          \FFI\CData|null $grplistp, 
          int|null $timeout_ms
       ): int
      

      List and describe client groups in cluster.

      group is an optional group name to describe, otherwise (NULL) all groups are returned.

      timeout_ms is the (approximate) maximum time to wait for response from brokers and must be a positive value.

      The grplistp remains untouched if any error code is returned, with the exception of RD_KAFKA_RESP_ERR__PARTIAL which behaves as RD_KAFKA_RESP_ERR_NO_ERROR (success) but with an incomplete group list.

      See also
      Use rd_kafka_group_list_destroy() to release list memory.
      Deprecated:
      Use rd_kafka_ListConsumerGroups() and rd_kafka_DescribeConsumerGroups() instead.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      group string|null const char*
      grplistp \FFI\CData|null const struct rd_kafka_group_list**
      timeout_ms int|null int
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success and grplistp is updated to point to a newly allocated list of groups. RD_KAFKA_RESP_ERR__PARTIAL if not all brokers responded in time but at least one group is returned in grplistp. RD_KAFKA_RESP_ERR__TIMED_OUT if no groups were returned in the given timeframe but not all brokers have yet responded, or if the list of brokers in the cluster could not be obtained within the given timeframe. RD_KAFKA_RESP_ERR__TRANSPORT if no brokers were found. Other error codes may also be returned from the request layer.

      rd_kafka_group_list_destroy()

      public static rd_kafka_group_list_destroy ( 
          \FFI\CData|null $grplist
       ): void
      
      Parameters
      grplist \FFI\CData|null const struct rd_kafka_group_list*

      rd_kafka_brokers_add()

      public static rd_kafka_brokers_add ( 
          \FFI\CData|null $rk, 
          string|null $brokerlist
       ): int|null
      

      Adds one or more brokers to the kafka handle's list of initial bootstrap brokers.

      Additional brokers will be discovered automatically as soon as rdkafka connects to a broker by querying the broker metadata.

      If a broker name resolves to multiple addresses (and possibly address families) all will be used for connection attempts in round-robin fashion.

      brokerlist is a comma-separated list of brokers in the format: <broker1>,<broker2>,.. where each broker is in either the host or URL based format:

      • <host>[:<port>]
      • <proto>://<host>[:port]

      <proto> is either PLAINTEXT, SSL, SASL, SASL_PLAINTEXT. The two formats can be mixed but ultimately the value of the security.protocol config property decides what brokers are allowed.

      Example: brokerlist = "broker1:10000,broker2" brokerlist = "SSL://broker3:9000,ssl://broker2"

      Remarks
      Brokers may also be defined with the metadata.broker.list or bootstrap.servers configuration property (preferred method).
      Deprecated:
      Set bootstrap servers with the bootstrap.servers configuration property.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      brokerlist string|null const char*
      Returns
      int|null int - the number of brokers successfully added.

      rd_kafka_set_logger()

      public static rd_kafka_set_logger ( 
          \FFI\CData|null $rk, 
          \FFI\CData|\Closure $func
       ): void
      

      Set logger function.

      The default is to print to stderr, but a syslog logger is also available, see rd_kafka_log_(print|syslog) for the builtin alternatives. Alternatively the application may provide its own logger callback. Or pass 'func' as NULL to disable logging.

      Deprecated:
      Use rd_kafka_conf_set_log_cb()
      Remarks
      rk may be passed as NULL in the callback.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      func \FFI\CData|\Closure void(*)(const rd_kafka_t*, int, const char*, const char*)

      rd_kafka_set_log_level()

      public static rd_kafka_set_log_level ( 
          \FFI\CData|null $rk, 
          int|null $level
       ): void
      

      Specifies the maximum logging level emitted by internal kafka logging and debugging.

      Deprecated:
      Set the "log_level" configuration property instead.
      Remarks
      If the "debug" configuration property is set the log level is automatically adjusted to LOG_DEBUG (7).
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      level int|null int

      rd_kafka_log_print()

      public static rd_kafka_log_print ( 
          \FFI\CData|null $rk, 
          int|null $level, 
          string|null $fac, 
          string|null $buf
       ): void
      
      Parameters
      rk \FFI\CData|null const rd_kafka_t*
      level int|null int
      fac string|null const char*
      buf string|null const char*

      rd_kafka_log_syslog()

      public static rd_kafka_log_syslog ( 
          \FFI\CData|null $rk, 
          int|null $level, 
          string|null $fac, 
          string|null $buf
       ): void
      

      Builtin log sink: print to syslog.

      Remarks
      This logger is only available if librdkafka was built with syslog support.
      Parameters
      rk \FFI\CData|null const rd_kafka_t*
      level int|null int
      fac string|null const char*
      buf string|null const char*

      rd_kafka_outq_len()

      public static rd_kafka_outq_len ( 
          \FFI\CData|null $rk
       ): int|null
      

      Returns the current out queue length.

      The out queue length is the sum of:

      • number of messages waiting to be sent to, or acknowledged by, the broker.
      • number of delivery reports (e.g., dr_msg_cb) waiting to be served by rd_kafka_poll() or rd_kafka_flush().
      • number of callbacks (e.g., error_cb, stats_cb, etc) waiting to be served by rd_kafka_poll(), rd_kafka_consumer_poll() or rd_kafka_flush().
      • number of events waiting to be served by background_event_cb() in the background queue (see rd_kafka_conf_set_background_event_cb).

      An application should wait for the return value of this function to reach zero before terminating to make sure outstanding messages, requests (such as offset commits), callbacks and events are fully processed. See rd_kafka_flush().

      See also
      rd_kafka_flush()
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      int|null int - number of messages and events waiting in queues.
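
The termination wait described above can be sketched as a simple drain loop. This is a hedged sketch assuming a wrapper class (hypothetically \RdKafka\FFI\Library) and an initialized handle $rk.

```php
use RdKafka\FFI\Library;

// Drain delivery reports, callbacks and events before destroying the handle.
while (Library::rd_kafka_outq_len($rk) > 0) {
    Library::rd_kafka_poll($rk, 100); // serve pending callbacks, block max 100 ms
}
Library::rd_kafka_destroy($rk);
```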

      rd_kafka_dump()

      public static rd_kafka_dump ( 
          \FFI\CData|null $fp, 
          \FFI\CData|null $rk
       ): void
      

      Dumps rdkafka's internal state for handle rk to stream fp.

      This is only useful for debugging rdkafka, showing state and statistics for brokers, topics, partitions, etc.

      Parameters
      fp \FFI\CData|null FILE*
      rk \FFI\CData|null rd_kafka_t*

      rd_kafka_thread_cnt()

      public static rd_kafka_thread_cnt (  ): int|null
      

      Retrieve the current number of threads in use by librdkafka.

      Used by regression tests.

      Returns
      int|null int - )

      rd_kafka_wait_destroyed()

      public static rd_kafka_wait_destroyed ( 
          int|null $timeout_ms
       ): int|null
      

      Wait for all rd_kafka_t objects to be destroyed.

      Returns 0 if all kafka objects are now destroyed, or -1 if the timeout was reached.

      Remarks
      This function is deprecated.
      Parameters
      timeout_ms int|null int - )
      Returns
      int|null int

      rd_kafka_unittest()

      public static rd_kafka_unittest (  ): int|null
      

      Run librdkafka's built-in unit-tests.

      Returns
      int|null int - ) - the number of failures, or 0 if all tests passed.

      rd_kafka_poll_set_consumer()

      public static rd_kafka_poll_set_consumer ( 
          \FFI\CData|null $rk
       ): int
      

      Redirect the main (rd_kafka_poll()) queue to the KafkaConsumer's queue (rd_kafka_consumer_poll()).

      Warning
      It is not permitted to call rd_kafka_poll() after directing the main queue with rd_kafka_poll_set_consumer().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_event_type()

      public static rd_kafka_event_type ( 
          \FFI\CData|null $rkev
       ): int|null
      
      Remarks
      As a convenience it is okay to pass rkev as NULL in which case RD_KAFKA_EVENT_NONE is returned.
      Parameters
      rkev \FFI\CData|null const rd_kafka_event_t* - )
      Returns
      int|null rd_kafka_event_type_t - the event type for the given event.

      rd_kafka_event_name()

      public static rd_kafka_event_name ( 
          \FFI\CData|null $rkev
       ): string|null
      
      Remarks
      As a convenience it is okay to pass rkev as NULL in which case the name for RD_KAFKA_EVENT_NONE is returned.
      Parameters
      rkev \FFI\CData|null const rd_kafka_event_t* - )
      Returns
      string|null const char* - the event type’s name for the given event.

      rd_kafka_event_destroy()

      public static rd_kafka_event_destroy ( 
          \FFI\CData|null $rkev
       ): void
      

      Destroy an event.

      Remarks
      Any references to this event, such as extracted messages, will not be usable after this call.
      As a convenience it is okay to pass rkev as NULL in which case no action is performed.
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )

      rd_kafka_event_message_next()

      public static rd_kafka_event_message_next ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Call repeatedly until it returns NULL.

      Event types:

      • RD_KAFKA_EVENT_FETCH (1 message)
      • RD_KAFKA_EVENT_DR (>=1 message(s))
      Remarks
      The returned message(s) MUST NOT be freed with rd_kafka_message_destroy().
      on_consume() interceptor may be called from this function prior to passing message to application.
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_message_t* - the next message from an event.
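
Draining all messages from a fetch or delivery-report event can be sketched as follows. This is a hedged sketch assuming a wrapper class (hypothetically \RdKafka\FFI\Library) and an event $rkev obtained from rd_kafka_queue_poll().

```php
use RdKafka\FFI\Library;

while (($msg = Library::rd_kafka_event_message_next($rkev)) !== null) {
    // $msg is owned by the event: never call rd_kafka_message_destroy() on it.
    var_dump($msg->partition, $msg->offset, $msg->err);
}
Library::rd_kafka_event_destroy($rkev); // also invalidates the messages above
```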

      rd_kafka_event_message_array()

      public static rd_kafka_event_message_array ( 
          \FFI\CData|null $rkev, 
          \FFI\CData|null $rkmessages, 
          int|null $size
       ): int|null
      

      Extracts size message(s) from the event into the pre-allocated array rkmessages.

      Event types:

      • RD_KAFKA_EVENT_FETCH (1 message)
      • RD_KAFKA_EVENT_DR (>=1 message(s))
      Remarks
      on_consume() interceptor may be called from this function prior to passing message to application.
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      rkmessages \FFI\CData|null const rd_kafka_message_t**
      size int|null size_t
      Returns
      int|null size_t - the number of messages extracted.

      rd_kafka_event_message_count()

      public static rd_kafka_event_message_count ( 
          \FFI\CData|null $rkev
       ): int|null
      

      Event types:

      • RD_KAFKA_EVENT_FETCH (1 message)
      • RD_KAFKA_EVENT_DR (>=1 message(s))
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      int|null size_t - the number of remaining messages in the event.

      rd_kafka_event_error()

      public static rd_kafka_event_error ( 
          \FFI\CData|null $rkev
       ): int
      

      Use rd_kafka_event_error_is_fatal() to detect if this is a fatal error.

      Event types:

      • all
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      int rd_kafka_resp_err_t - the error code for the event.

      rd_kafka_event_error_string()

      public static rd_kafka_event_error_string ( 
          \FFI\CData|null $rkev
       ): string|null
      

      Event types:

      • all
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      string|null const char* - the error string (if any). An application should check that rd_kafka_event_error() returns non-zero before calling this function.

      rd_kafka_event_error_is_fatal()

      public static rd_kafka_event_error_is_fatal ( 
          \FFI\CData|null $rkev
       ): int|null
      

      Event types:

      • RD_KAFKA_EVENT_ERROR
      See also
      rd_kafka_fatal_error()
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      int|null int - 1 if the error is a fatal error, else 0.

      rd_kafka_event_opaque()

      public static rd_kafka_event_opaque ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|object|string|null
      

      Event types:

      • RD_KAFKA_EVENT_OFFSET_COMMIT
      • RD_KAFKA_EVENT_CREATETOPICS_RESULT
      • RD_KAFKA_EVENT_DELETETOPICS_RESULT
      • RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT
      • RD_KAFKA_EVENT_CREATEACLS_RESULT
      • RD_KAFKA_EVENT_DESCRIBEACLS_RESULT
      • RD_KAFKA_EVENT_DELETEACLS_RESULT
      • RD_KAFKA_EVENT_ALTERCONFIGS_RESULT
      • RD_KAFKA_EVENT_INCREMENTAL_ALTERCONFIGS_RESULT
      • RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT
      • RD_KAFKA_EVENT_DELETEGROUPS_RESULT
      • RD_KAFKA_EVENT_DELETECONSUMERGROUPOFFSETS_RESULT
      • RD_KAFKA_EVENT_DELETERECORDS_RESULT
      • RD_KAFKA_EVENT_LISTCONSUMERGROUPS_RESULT
      • RD_KAFKA_EVENT_DESCRIBECONSUMERGROUPS_RESULT
      • RD_KAFKA_EVENT_LISTCONSUMERGROUPOFFSETS_RESULT
      • RD_KAFKA_EVENT_ALTERCONSUMERGROUPOFFSETS_RESULT
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|object|string|null void* - the event opaque (if any) as passed to rd_kafka_commit() (et.al) or rd_kafka_AdminOptions_set_opaque(), depending on event type.

      rd_kafka_event_log()

      public static rd_kafka_event_log ( 
          \FFI\CData|null $rkev, 
          \FFI\CData|null $fac, 
          \FFI\CData|null $str, 
          \FFI\CData|null $level
       ): int|null
      

      Extract log message from the event.

      Event types:

      • RD_KAFKA_EVENT_LOG
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      fac \FFI\CData|null const char**
      str \FFI\CData|null const char**
      level \FFI\CData|null int*
      Returns
      int|null int - 0 on success or -1 if unsupported event type.

      rd_kafka_event_stats()

      public static rd_kafka_event_stats ( 
          \FFI\CData|null $rkev
       ): string|null
      

      Extract stats from the event.

      Event types:

      • RD_KAFKA_EVENT_STATS
      Remarks
      the returned string will be freed automatically along with the event object
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      string|null const char* - stats json string.

      rd_kafka_event_topic_partition_list()

      public static rd_kafka_event_topic_partition_list ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      
      Remarks
      The list MUST NOT be freed with rd_kafka_topic_partition_list_destroy()

      Event types:

      • RD_KAFKA_EVENT_REBALANCE
      • RD_KAFKA_EVENT_OFFSET_COMMIT
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null rd_kafka_topic_partition_list_t* - the topic partition list from the event.

      rd_kafka_event_topic_partition()

      public static rd_kafka_event_topic_partition ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      
      Remarks
      The returned pointer MUST be freed with rd_kafka_topic_partition_destroy().

      Event types: RD_KAFKA_EVENT_ERROR (for partition level errors)

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null rd_kafka_topic_partition_t* - a newly allocated topic_partition container, if applicable for the event type, else NULL.

      rd_kafka_event_CreateTopics_result()

      public static rd_kafka_event_CreateTopics_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get CreateTopics result.

      Event types: RD_KAFKA_EVENT_CREATETOPICS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_CreateTopics_result_t* - the result of a CreateTopics request, or NULL if event is of different type.

      rd_kafka_event_DeleteTopics_result()

      public static rd_kafka_event_DeleteTopics_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DeleteTopics result.

      Event types: RD_KAFKA_EVENT_DELETETOPICS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_DeleteTopics_result_t* - the result of a DeleteTopics request, or NULL if event is of different type.

      rd_kafka_event_CreatePartitions_result()

      public static rd_kafka_event_CreatePartitions_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get CreatePartitions result.

      Event types: RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_CreatePartitions_result_t* - the result of a CreatePartitions request, or NULL if event is of different type.

      rd_kafka_event_AlterConfigs_result()

      public static rd_kafka_event_AlterConfigs_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get AlterConfigs result.

      Event types: RD_KAFKA_EVENT_ALTERCONFIGS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_AlterConfigs_result_t* - the result of a AlterConfigs request, or NULL if event is of different type.

      rd_kafka_event_DescribeConfigs_result()

      public static rd_kafka_event_DescribeConfigs_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DescribeConfigs result.

      Event types: RD_KAFKA_EVENT_DESCRIBECONFIGS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_DescribeConfigs_result_t* - the result of a DescribeConfigs request, or NULL if event is of different type.

      rd_kafka_queue_poll()

      public static rd_kafka_queue_poll ( 
          \FFI\CData|null $rkqu, 
          int|null $timeout_ms
       ): \FFI\CData|null
      

      Poll a queue for an event for max timeout_ms.

      Remarks
      Use rd_kafka_event_destroy() to free the event.
      See also
      rd_kafka_conf_set_background_event_cb()
      Parameters
      rkqu \FFI\CData|null rd_kafka_queue_t*
      timeout_ms int|null int
      Returns
      \FFI\CData|null rd_kafka_event_t* - an event, or NULL.
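
A generic event loop over a queue can be sketched as follows. This is a hedged sketch assuming a wrapper class (hypothetically \RdKafka\FFI\Library) and a queue handle $rkqu (e.g. from rd_kafka_queue_get_main()).

```php
use RdKafka\FFI\Library;

while (true) {
    $rkev = Library::rd_kafka_queue_poll($rkqu, 1000); // block up to 1 s
    if ($rkev === null) {
        continue; // timed out, no event pending
    }
    printf("event: %s (err: %s)\n",
        Library::rd_kafka_event_name($rkev),
        Library::rd_kafka_err2str(Library::rd_kafka_event_error($rkev)));
    Library::rd_kafka_event_destroy($rkev); // every returned event must be freed
}
```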

      rd_kafka_queue_poll_callback()

      public static rd_kafka_queue_poll_callback ( 
          \FFI\CData|null $rkqu, 
          int|null $timeout_ms
       ): int|null
      

      Poll a queue for events served through callbacks for max timeout_ms.

      Remarks
      This API must only be used for queues with callbacks registered for all expected event types. E.g., not a message queue.
      Also see rd_kafka_conf_set_background_event_cb() for triggering event callbacks from a librdkafka-managed background thread.
      See also
      rd_kafka_conf_set_background_event_cb()
      Parameters
      rkqu \FFI\CData|null rd_kafka_queue_t*
      timeout_ms int|null int
      Returns
      int|null int - the number of events served.

      rd_kafka_plugin_f_conf_init_t()

      public static rd_kafka_plugin_f_conf_init_t ( 
          \FFI\CData|null $conf, 
          \FFI\CData|object|string|null $plug_opaquep, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Plugin's configuration initializer method called each time the library is referenced from configuration (even if previously loaded by another client instance).

      Remarks
      This method MUST be implemented by plugins and have the symbol name conf_init
      Remarks
      A plugin may add an on_conf_destroy() interceptor to clean up plugin-specific resources created in the plugin's conf_init() method.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration set up to this point.
      plug_opaquep \FFI\CData|object|string|null void** - Plugin can set this pointer to a per-configuration opaque pointer.
      errstr \FFI\CData|null char* - String buffer of size errstr_size where the plugin must write a human readable error string in the case the initializer fails (returns non-zero).
      errstr_size int|null size_t - Maximum space (including \0) in errstr.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on error.
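
      For reference, a minimal C plugin entry point matching this signature could look as follows (a sketch; the exported symbol must be named conf_init, and my_on_new stands in for an interceptor of your own):

          #include <librdkafka/rdkafka.h>

          /* Sketch: plugin configuration initializer. librdkafka resolves the
           * symbol "conf_init" each time the plugin library is referenced
           * from configuration. */
          rd_kafka_resp_err_t conf_init (rd_kafka_conf_t *conf,
                                         void **plug_opaquep,
                                         char *errstr, size_t errstr_size) {
                  *plug_opaquep = NULL;  /* optional per-configuration opaque */
                  /* Register interceptors on conf here, e.g.:
                   * rd_kafka_conf_interceptor_add_on_new(conf, "my_plugin",
                   *                                      my_on_new, NULL); */
                  return RD_KAFKA_RESP_ERR_NO_ERROR;
          }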

      rd_kafka_interceptor_f_on_conf_set_t()

      public static rd_kafka_interceptor_f_on_conf_set_t ( 
          \FFI\CData|null $conf, 
          string|null $name, 
          string|null $val, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_conf_set() is called from rd_kafka_*_conf_set() in the order the interceptors were added.

      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      name string|null const char* - The configuration property to set.
      val string|null const char* - The configuration value to set, or NULL for reverting to default in which case the previous value should be freed.
      errstr \FFI\CData|null char* - A human readable error string in case the interceptor fails.
      errstr_size int|null size_t - Maximum space (including \0) in errstr.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_conf_res_t - RD_KAFKA_CONF_OK if the property was known and successfully handled by the interceptor, RD_KAFKA_CONF_INVALID if the property was handled by the interceptor but the value was invalid, or RD_KAFKA_CONF_UNKNOWN if the interceptor did not handle this property, in which case the property is passed on to the next interceptor in the chain, finally ending up at the built-in configuration handler.

      rd_kafka_interceptor_f_on_conf_dup_t()

      public static rd_kafka_interceptor_f_on_conf_dup_t ( 
          \FFI\CData|null $new_conf, 
          \FFI\CData|null $old_conf, 
          int|null $filter_cnt, 
          \FFI\CData|null $filter, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_conf_dup() is called from rd_kafka_conf_dup() in the order the interceptors were added and is used to let an interceptor re-register its conf interceptors with a new opaque value. The on_conf_dup() method is called prior to the configuration from old_conf being copied to new_conf.

      Remarks
      No on_conf_* interceptors are copied to the new configuration object on rd_kafka_conf_dup().
      Parameters
      new_conf \FFI\CData|null rd_kafka_conf_t* - New configuration object.
      old_conf \FFI\CData|null const rd_kafka_conf_t* - Old configuration object to copy properties from.
      filter_cnt int|null size_t - Number of property names to filter in filter.
      filter \FFI\CData|null const char** - Property names to filter out (ignore) when setting up new_conf.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on failure (which is logged but otherwise ignored).

      rd_kafka_interceptor_f_on_conf_destroy_t()

      public static rd_kafka_interceptor_f_on_conf_destroy_t ( 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_conf_destroy() is called from rd_kafka_*_conf_destroy() in the order the interceptors were added.

      Parameters
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_interceptor_f_on_new_t()

      public static rd_kafka_interceptor_f_on_new_t ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $conf, 
          \FFI\CData|object|string|null $ic_opaque, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      on_new() is called from rd_kafka_new() prior to returning the newly created client instance to the application.

      Warning
      The rk client instance will not be fully set up when this interceptor is called and the interceptor MUST NOT call any other rk-specific APIs than rd_kafka_interceptor_add..().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      conf \FFI\CData|null const rd_kafka_conf_t* - The client instance’s final configuration.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      errstr \FFI\CData|null char* - A human readable error string in case the interceptor fails.
      errstr_size int|null size_t - Maximum space (including \0) in errstr.
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_f_on_destroy_t()

      public static rd_kafka_interceptor_f_on_destroy_t ( 
          \FFI\CData|null $rk, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_destroy() is called from rd_kafka_destroy() (or from rd_kafka_new() if rd_kafka_new() fails during initialization).

      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_interceptor_f_on_send_t()

      public static rd_kafka_interceptor_f_on_send_t ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $rkmessage, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_send() is called from rd_kafka_produce*() (et al.) prior to the partitioner being called.

      Remarks
      This interceptor is only used by producer instances.
      The rkmessage object is NOT mutable and MUST NOT be modified by the interceptor.
      If the partitioner fails or an unknown partition was specified, the on_acknowledgement() interceptor chain will be called from within the rd_kafka_produce*() call to maintain send-acknowledgement symmetry.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      rkmessage \FFI\CData|null rd_kafka_message_t* - The message being produced. Immutable.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_f_on_acknowledgement_t()

      public static rd_kafka_interceptor_f_on_acknowledgement_t ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $rkmessage, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_acknowledgement() is called to inform interceptors that a message was successfully delivered or permanently failed delivery. The interceptor chain is called from internal librdkafka background threads, or rd_kafka_produce*() if the partitioner failed.

      Remarks
      This interceptor is only used by producer instances.
      The rkmessage object is NOT mutable and MUST NOT be modified by the interceptor.
      Warning
      The on_acknowledgement() method may be called from internal librdkafka threads. An on_acknowledgement() interceptor MUST NOT call any librdkafka API's associated with the rk, or perform any blocking or prolonged work.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      rkmessage \FFI\CData|null rd_kafka_message_t* - The message being produced. Immutable.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_f_on_consume_t()

      public static rd_kafka_interceptor_f_on_consume_t ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $rkmessage, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_consume() is called just prior to passing the message to the application in rd_kafka_consumer_poll(), rd_kafka_consume*(), the event interface, etc.

      Remarks
      This interceptor is only used by consumer instances.
      The rkmessage object is NOT mutable and MUST NOT be modified by the interceptor.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      rkmessage \FFI\CData|null rd_kafka_message_t* - The message being consumed. Immutable.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_f_on_commit_t()

      public static rd_kafka_interceptor_f_on_commit_t ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $offsets, 
          int $err, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_commit() is called on completed or failed offset commit. It is called from internal librdkafka threads.

      Remarks
      This interceptor is only used by consumer instances.
      Warning
      The on_commit() interceptor is called from internal librdkafka threads. An on_commit() interceptor MUST NOT call any librdkafka API's associated with the rk, or perform any blocking or prolonged work.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      offsets \FFI\CData|null const rd_kafka_topic_partition_list_t* - List of topic+partition+offset+error that were committed. The error field of each partition should be checked for per-partition errors.
      err int rd_kafka_resp_err_t - The commit error, if any.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_f_on_request_sent_t()

      public static rd_kafka_interceptor_f_on_request_sent_t ( 
          \FFI\CData|null $rk, 
          int|null $sockfd, 
          string|null $brokername, 
          int|null $brokerid, 
          int|null $ApiKey, 
          int|null $ApiVersion, 
          int|null $CorrId, 
          int|null $size, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_request_sent() is called when a request has been fully written to a broker TCP connection's socket.

      Warning
      The on_request_sent() interceptor is called from internal librdkafka broker threads. An on_request_sent() interceptor MUST NOT call any librdkafka API's associated with the rk, or perform any blocking or prolonged work.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      sockfd int|null int - Socket file descriptor.
      brokername string|null const char* - Name of the broker the request is being sent to.
      brokerid int|null int32_t - Id of the broker the request is being sent to.
      ApiKey int|null int16_t - Kafka protocol request type.
      ApiVersion int|null int16_t - Kafka protocol request type version.
      CorrId int|null int32_t - Kafka protocol request correlation id.
      size int|null size_t - Size of request.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_conf_interceptor_add_on_conf_set()

      public static rd_kafka_conf_interceptor_add_on_conf_set ( 
          \FFI\CData|null $conf, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_conf_set, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_conf_set() interceptor.

      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_conf_set \FFI\CData|\Closure rd_kafka_conf_res_t(rd_kafka_interceptor_f_on_conf_set_t*)(rd_kafka_conf_t*, const char*, const char*, char*, size_t, void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.
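
      As an illustration, a C interceptor matching rd_kafka_interceptor_f_on_conf_set_t that logs every property set and defers to the built-in handler (the names my_on_conf_set and "my_ic" are illustrative):

          #include <stdio.h>
          #include <librdkafka/rdkafka.h>

          static rd_kafka_conf_res_t my_on_conf_set (rd_kafka_conf_t *conf,
                                                     const char *name,
                                                     const char *val,
                                                     char *errstr,
                                                     size_t errstr_size,
                                                     void *ic_opaque) {
                  fprintf(stderr, "conf set: %s = %s\n",
                          name, val ? val : "(default)");
                  /* Not handled here: pass on to the next interceptor /
                   * built-in configuration handler. */
                  return RD_KAFKA_CONF_UNKNOWN;
          }

          /* Registration, e.g. from a plugin's conf_init():
           * rd_kafka_conf_interceptor_add_on_conf_set(conf, "my_ic",
           *                                           my_on_conf_set, NULL); */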

      rd_kafka_conf_interceptor_add_on_conf_dup()

      public static rd_kafka_conf_interceptor_add_on_conf_dup ( 
          \FFI\CData|null $conf, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_conf_dup, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_conf_dup() interceptor.

      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_conf_dup \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_conf_dup_t*)(rd_kafka_conf_t*, const rd_kafka_conf_t*, size_t, const char**, void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_conf_interceptor_add_on_conf_destroy()

      public static rd_kafka_conf_interceptor_add_on_conf_destroy ( 
          \FFI\CData|null $conf, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_conf_destroy, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_conf_destroy() interceptor.

      Remarks
      Multiple on_conf_destroy() interceptors are allowed to be added to the same configuration object.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_conf_destroy \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_conf_destroy_t*)(void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR

      rd_kafka_conf_interceptor_add_on_new()

      public static rd_kafka_conf_interceptor_add_on_new ( 
          \FFI\CData|null $conf, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_new, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_new() interceptor.

      Remarks
      Since the on_new() interceptor is added to the configuration object, it may be copied by rd_kafka_conf_dup(). An interceptor implementation must thus be able to handle the same (interceptor, ic_opaque) tuple being used by multiple client instances.
      An interceptor plugin should check the return value to make sure it has not already been added.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_new \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_new_t*)(rd_kafka_t*, const rd_kafka_conf_t*, void*, char*, size_t) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_interceptor_add_on_destroy()

      public static rd_kafka_interceptor_add_on_destroy ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_destroy, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_destroy() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_destroy \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_destroy_t*)(rd_kafka_t*, void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_interceptor_add_on_send()

      public static rd_kafka_interceptor_add_on_send ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_send, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_send() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_send \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_send_t*)(rd_kafka_t*, rd_kafka_message_t*, void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_interceptor_add_on_acknowledgement()

      public static rd_kafka_interceptor_add_on_acknowledgement ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_acknowledgement, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_acknowledgement() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_acknowledgement \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_acknowledgement_t*)(rd_kafka_t*, rd_kafka_message_t*, void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_interceptor_add_on_consume()

      public static rd_kafka_interceptor_add_on_consume ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_consume, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_consume() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_consume \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_consume_t*)(rd_kafka_t*, rd_kafka_message_t*, void*) - Function pointer.
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_interceptor_add_on_commit()

      public static rd_kafka_interceptor_add_on_commit ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_commit, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_commit() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_commit \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_commit_t*)(rd_kafka_t*, const rd_kafka_topic_partition_list_t*, rd_kafka_resp_err_t, void*)
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_interceptor_add_on_request_sent()

      public static rd_kafka_interceptor_add_on_request_sent ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_request_sent, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_request_sent() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_request_sent \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_request_sent_t*)(rd_kafka_t*, int, const char*, int32_t, int16_t, int16_t, int32_t, size_t, void*)
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_topic_result_error()

      public static rd_kafka_topic_result_error ( 
          \FFI\CData|null $topicres
       ): int
      

      Topic result provides per-topic operation result information.

      Parameters
      topicres \FFI\CData|null const rd_kafka_topic_result_t*
      Returns
      int rd_kafka_resp_err_t - the error code for the given topic result.

      rd_kafka_topic_result_error_string()

      public static rd_kafka_topic_result_error_string ( 
          \FFI\CData|null $topicres
       ): string|null
      
      Remarks
      lifetime of the returned string is the same as the topicres.
      Parameters
      topicres \FFI\CData|null const rd_kafka_topic_result_t*
      Returns
      string|null const char* - the human readable error string for the given topic result, or NULL if there was no error.

      rd_kafka_topic_result_name()

      public static rd_kafka_topic_result_name ( 
          \FFI\CData|null $topicres
       ): string|null
      
      Remarks
      lifetime of the returned string is the same as the topicres.
      Parameters
      topicres \FFI\CData|null const rd_kafka_topic_result_t*
      Returns
      string|null const char* - the name of the topic for the given topic result.
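
      Together, these three accessors are typically used to inspect per-topic outcomes of an Admin operation. A C sketch (assuming res is a const rd_kafka_CreateTopics_result_t* obtained from a result event):

          size_t cnt;
          const rd_kafka_topic_result_t **restopics =
                  rd_kafka_CreateTopics_result_topics(res, &cnt);
          for (size_t i = 0; i < cnt; i++) {
                  if (rd_kafka_topic_result_error(restopics[i]))
                          fprintf(stderr, "%s failed: %s\n",
                                  rd_kafka_topic_result_name(restopics[i]),
                                  rd_kafka_topic_result_error_string(restopics[i]));
          }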

      rd_kafka_AdminOptions_new()

      public static rd_kafka_AdminOptions_new ( 
          \FFI\CData|null $rk, 
          int $for_api
       ): \FFI\CData|null
      

      Create a new AdminOptions object.

      The options object is not modified by the Admin API request APIs (e.g. CreateTopics) and may be reused for multiple calls.
      
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      for_api int rd_kafka_admin_op_t - Specifies what Admin API this AdminOptions object will be used for, which will enforce what AdminOptions_set_..() calls may be used based on the API, causing unsupported set..() calls to fail. Specifying RD_KAFKA_ADMIN_OP_ANY disables the enforcement allowing any option to be set, even if the option is not used in a future call to an Admin API method.
      Returns
      \FFI\CData|null rd_kafka_AdminOptions_t* - a new AdminOptions object (which must be freed with rd_kafka_AdminOptions_destroy()), or NULL if for_api was set to an unknown API op type.
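
      A typical lifecycle in C (a sketch; assumes rk is a valid client instance and error handling is abbreviated):

          char errstr[512];
          rd_kafka_AdminOptions_t *options =
                  rd_kafka_AdminOptions_new(rk, RD_KAFKA_ADMIN_OP_CREATETOPICS);
          rd_kafka_AdminOptions_set_request_timeout(options, 10000 /* 10 s */,
                                                    errstr, sizeof(errstr));
          /* ... pass options to rd_kafka_CreateTopics(), then: */
          rd_kafka_AdminOptions_destroy(options);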

      rd_kafka_AdminOptions_destroy()

      public static rd_kafka_AdminOptions_destroy ( 
          \FFI\CData|null $options
       ): void
      
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t*

      rd_kafka_AdminOptions_set_request_timeout()

      public static rd_kafka_AdminOptions_set_request_timeout ( 
          \FFI\CData|null $options, 
          int|null $timeout_ms, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Sets the overall request timeout, including broker lookup, request transmission, operation time on broker, and response.

      Remarks
      This option is valid for all Admin API requests.
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t* - Admin options.
      timeout_ms int|null int - Timeout in milliseconds. Defaults to socket.timeout.ms.
      errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location, which must be of at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, or RD_KAFKA_RESP_ERR__INVALID_ARG if the timeout was out of range, in which case an error string will be written to errstr.

      rd_kafka_AdminOptions_set_operation_timeout()

      public static rd_kafka_AdminOptions_set_operation_timeout ( 
          \FFI\CData|null $options, 
          int|null $timeout_ms, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Sets the broker's operation timeout, such as the timeout for CreateTopics to complete the creation of topics on the controller before returning a result to the application.

      CreateTopics: values <= 0 will return immediately after triggering topic creation, while > 0 will wait this long for topic creation to propagate in the cluster. Default: 60 seconds.

      DeleteTopics: same semantics as CreateTopics. CreatePartitions: same semantics as CreateTopics.

      Remarks
      This option is valid for CreateTopics, DeleteTopics, CreatePartitions, and DeleteRecords.
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t* - Admin options.
      timeout_ms int|null int - Timeout in milliseconds.
      errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location, which must be of at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, or RD_KAFKA_RESP_ERR__INVALID_ARG if the timeout was out of range, in which case an error string will be written to errstr.

      rd_kafka_AdminOptions_set_validate_only()

      public static rd_kafka_AdminOptions_set_validate_only ( 
          \FFI\CData|null $options, 
          int|null $true_or_false, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Tell broker to only validate the request, without performing the requested operation (create topics, etc).

      Remarks
      This option is valid for CreateTopics, CreatePartitions, AlterConfigs.
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t* - Admin options.
      true_or_false int|null int - Defaults to false.
      errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location, which must be of at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on failure, in which case an error string will be written to errstr.

      rd_kafka_AdminOptions_set_broker()

      public static rd_kafka_AdminOptions_set_broker ( 
          \FFI\CData|null $options, 
          int|null $broker_id, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Override what broker the Admin request will be sent to.

      By default, Admin requests are sent to the controller broker, with the following exceptions:

      • AlterConfigs with a BROKER resource are sent to the broker id set as the resource name.
      • IncrementalAlterConfigs with a BROKER resource are sent to the broker id set as the resource name.
      • DescribeConfigs with a BROKER resource are sent to the broker id set as the resource name.
      Remarks
      This API should typically not be used, but serves as a workaround if new resource types are added to the broker that the client does not know where to send.
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t* - Admin Options.
      broker_id int|null int32_t - The broker to send the request to.
      errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location, which must be of at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or an error code on failure, in which case an error string will be written to errstr.

      rd_kafka_AdminOptions_set_opaque()

      public static rd_kafka_AdminOptions_set_opaque ( 
          \FFI\CData|null $options, 
          \FFI\CData|object|string|null $opaque
       ): void
      
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t*
      opaque \FFI\CData|object|string|null void*

      rd_kafka_NewTopic_new()

      public static rd_kafka_NewTopic_new ( 
          string|null $topic, 
          int|null $num_partitions, 
          int|null $replication_factor, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): \FFI\CData|null
      

      Create a new NewTopic object. This object is later passed to rd_kafka_CreateTopics().

      Parameters
      topic string|null const char* - Topic name to create.
      num_partitions int|null int - Number of partitions in topic, or -1 to use the broker’s default partition count (>= 2.4.0).
      replication_factor int|null int - Default replication factor for the topic’s partitions, or -1 to use the broker’s default replication factor (>= 2.4.0) or if set_replica_assignment() will be used.
      errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location, which must be of at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      \FFI\CData|null rd_kafka_NewTopic_t* - a new allocated NewTopic object, or NULL if the input parameters are invalid. Use rd_kafka_NewTopic_destroy() to free object when done.
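
      Putting it together, creating a NewTopic in C (a sketch; the topic name and configuration values are illustrative):

          char errstr[512];
          rd_kafka_NewTopic_t *new_topic =
                  rd_kafka_NewTopic_new("my-topic", 3 /* partitions */,
                                        1 /* replication factor */,
                                        errstr, sizeof(errstr));
          if (!new_topic) {
                  fprintf(stderr, "NewTopic_new failed: %s\n", errstr);
          } else {
                  rd_kafka_NewTopic_set_config(new_topic, "retention.ms", "86400000");
                  /* rd_kafka_CreateTopics(rk, &new_topic, 1, options, rkqu); */
                  rd_kafka_NewTopic_destroy(new_topic);  /* after the request call */
          }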

      rd_kafka_NewTopic_destroy()

      public static rd_kafka_NewTopic_destroy ( 
          \FFI\CData|null $new_topic
       ): void
      
      Parameters
      new_topic \FFI\CData|null rd_kafka_NewTopic_t*

      rd_kafka_NewTopic_destroy_array()

      public static rd_kafka_NewTopic_destroy_array ( 
          \FFI\CData|null $new_topics, 
          int|null $new_topic_cnt
       ): void
      
      Parameters
      new_topics \FFI\CData|null rd_kafka_NewTopic_t**
      new_topic_cnt int|null size_t

      rd_kafka_NewTopic_set_replica_assignment()

      public static rd_kafka_NewTopic_set_replica_assignment ( 
          \FFI\CData|null $new_topic, 
          int|null $partition, 
          \FFI\CData|null $broker_ids, 
          int|null $broker_id_cnt, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Set the replica (broker) assignment for partition to the replica set in broker_ids (of broker_id_cnt elements).

      Remarks
      When this method is used, rd_kafka_NewTopic_new() must have been called with a replication_factor of -1.
      An application must either set the replica assignment for all new partitions, or none.
      If called, this function must be called consecutively for each partition, starting at 0.
      Use rd_kafka_metadata() to retrieve the list of brokers in the cluster.
      See also
      rd_kafka_AdminOptions_set_validate_only()
      Parameters
      new_topic \FFI\CData|null rd_kafka_NewTopic_t*
      partition int|null int32_t
      broker_ids \FFI\CData|null int32_t*
      broker_id_cnt int|null size_t
      errstr \FFI\CData|null char*
      errstr_size int|null size_t
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, or an error code if the arguments were invalid.

      rd_kafka_NewTopic_set_config()

      public static rd_kafka_NewTopic_set_config ( 
          \FFI\CData|null $new_topic, 
          string|null $name, 
          string|null $value
       ): int
      

      Set (broker-side) topic configuration name/value pair.

      Remarks
      The name and value are not validated by the client, the validation takes place on the broker.
      See also
      rd_kafka_AdminOptions_set_validate_only()
      http://kafka.apache.org/documentation.html#topicconfigs
      Parameters
      new_topic \FFI\CData|null rd_kafka_NewTopic_t*
      name string|null const char*
      value string|null const char*
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, or an error code if the arguments were invalid.

      rd_kafka_CreateTopics()

      public static rd_kafka_CreateTopics ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $new_topics, 
          int|null $new_topic_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Create topics in cluster as specified by the new_topics array of size new_topic_cnt elements.

      Supported admin options:

      • rd_kafka_AdminOptions_set_validate_only() - default false
      • rd_kafka_AdminOptions_set_operation_timeout() - default 60 seconds
      • rd_kafka_AdminOptions_set_request_timeout() - default socket.timeout.ms
      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_CREATETOPICS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      new_topics \FFI\CData|null rd_kafka_NewTopic_t** - Array of new topics to create.
      new_topic_cnt int|null size_t - Number of elements in new_topics array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.
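
      Tying rd_kafka_NewTopic_new() and rd_kafka_CreateTopics() together, a minimal sketch might look as follows. It assumes the trait is exposed through a wrapper class (here called \RdKafka\FFI\Library) and that $rk is an existing rd_kafka_t* client handle; the wrapper name and handle setup are assumptions, not part of this API description.

      ```php
      <?php
      // Hedged sketch: create one topic through the admin API.
      use RdKafka\FFI\Library;

      $ffi    = Library::getFFI();
      $errstr = $ffi->new('char[512]');

      // 3 partitions, broker-default replication factor (-1).
      $newTopic = Library::rd_kafka_NewTopic_new('sketch-topic', 3, -1, $errstr, 512);
      if ($newTopic === null) {
          throw new RuntimeException('NewTopic_new: ' . FFI::string($errstr));
      }

      $newTopics    = $ffi->new('rd_kafka_NewTopic_t*[1]');
      $newTopics[0] = $newTopic;

      // Dedicated result queue; rd_kafka_queue_new()/destroy() are from the
      // wider librdkafka API, not from this section.
      $queue = Library::rd_kafka_queue_new($rk);
      Library::rd_kafka_CreateTopics($rk, $newTopics, 1, null, $queue);

      // Poll $queue until an RD_KAFKA_EVENT_CREATETOPICS_RESULT event arrives,
      // then inspect it with rd_kafka_CreateTopics_result_topics().

      Library::rd_kafka_NewTopic_destroy_array($newTopics, 1);
      Library::rd_kafka_queue_destroy($queue);
      ```

      Passing null for $options uses the admin option defaults listed above.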

      rd_kafka_CreateTopics_result_topics()

      public static rd_kafka_CreateTopics_result_topics ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of topic results from a CreateTopics result.

      The returned topics' lifetime is the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_CreateTopics_result_t* - Result to get topics from.
      cntp \FFI\CData|null size_t* - Updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_topic_result_t**
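
      Walking the per-topic results of a CREATETOPICS result event can be sketched as follows. $event is assumed to be an rd_kafka_event_t* of type RD_KAFKA_EVENT_CREATETOPICS_RESULT; rd_kafka_event_CreateTopics_result() and the rd_kafka_topic_result_* accessors are taken from the wider librdkafka API, not from this section.

      ```php
      <?php
      // Hedged sketch: report failed topics from a CreateTopics result event.
      use RdKafka\FFI\Library;

      $ffi    = Library::getFFI();
      $result = Library::rd_kafka_event_CreateTopics_result($event);

      $cnt    = $ffi->new('size_t');
      $topics = Library::rd_kafka_CreateTopics_result_topics($result, FFI::addr($cnt));

      for ($i = 0; $i < $cnt->cdata; $i++) {
          $err = Library::rd_kafka_topic_result_error($topics[$i]);
          if ($err !== 0) { // 0 = RD_KAFKA_RESP_ERR_NO_ERROR
              printf(
                  "topic %s: %s\n",
                  Library::rd_kafka_topic_result_name($topics[$i]),
                  Library::rd_kafka_topic_result_error_string($topics[$i])
              );
          }
      }
      ```

      The result pointers stay valid only as long as $event is not destroyed.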

      rd_kafka_DeleteTopic_new()

      public static rd_kafka_DeleteTopic_new ( 
          string|null $topic
       ): \FFI\CData|null
      

      Create a new DeleteTopic object. This object is later passed to rd_kafka_DeleteTopics().

      Parameters
      topic string|null const char* - ) - Topic name to delete.
      Returns
      \FFI\CData|null rd_kafka_DeleteTopic_t* - a newly allocated DeleteTopic object. Use rd_kafka_DeleteTopic_destroy() to free the object when done.

      rd_kafka_DeleteTopic_destroy()

      public static rd_kafka_DeleteTopic_destroy ( 
          \FFI\CData|null $del_topic
       ): void
      
      Parameters
      del_topic \FFI\CData|null rd_kafka_DeleteTopic_t*

      rd_kafka_DeleteTopic_destroy_array()

      public static rd_kafka_DeleteTopic_destroy_array ( 
          \FFI\CData|null $del_topics, 
          int|null $del_topic_cnt
       ): void
      
      Parameters
      del_topics \FFI\CData|null rd_kafka_DeleteTopic_t**
      del_topic_cnt int|null size_t

      rd_kafka_DeleteTopics()

      public static rd_kafka_DeleteTopics ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $del_topics, 
          int|null $del_topic_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Delete topics from the cluster as specified by the del_topics array of size del_topic_cnt elements.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DELETETOPICS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      del_topics \FFI\CData|null rd_kafka_DeleteTopic_t** - Array of topics to delete.
      del_topic_cnt int|null size_t - Number of elements in del_topics array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_DeleteTopics_result_topics()

      public static rd_kafka_DeleteTopics_result_topics ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of topic results from a DeleteTopics result.

      The returned topics' lifetime is the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_DeleteTopics_result_t* - Result to get topic results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_topic_result_t**

      rd_kafka_NewPartitions_new()

      public static rd_kafka_NewPartitions_new ( 
          string|null $topic, 
          int|null $new_total_cnt, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): \FFI\CData|null
      

      Create a new NewPartitions object. This object is later passed to rd_kafka_CreatePartitions() to increase the number of partitions to new_total_cnt for an existing topic.

      Parameters
      topic string|null const char* - Topic name to create more partitions for.
      new_total_cnt int|null size_t - Increase the topic’s partition count to this value.
      errstr \FFI\CData|null char* - A human readable error string (nul-terminated) is written to this location; it must be at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      \FFI\CData|null rd_kafka_NewPartitions_t* - a newly allocated NewPartitions object, or NULL if the input parameters are invalid. Use rd_kafka_NewPartitions_destroy() to free the object when done.

      rd_kafka_NewPartitions_destroy()

      public static rd_kafka_NewPartitions_destroy ( 
          \FFI\CData|null $new_parts
       ): void
      
      Parameters
      new_parts \FFI\CData|null rd_kafka_NewPartitions_t*

      rd_kafka_NewPartitions_destroy_array()

      public static rd_kafka_NewPartitions_destroy_array ( 
          \FFI\CData|null $new_parts, 
          int|null $new_parts_cnt
       ): void
      
      Parameters
      new_parts \FFI\CData|null rd_kafka_NewPartitions_t**
      new_parts_cnt int|null size_t

      rd_kafka_NewPartitions_set_replica_assignment()

      public static rd_kafka_NewPartitions_set_replica_assignment ( 
          \FFI\CData|null $new_parts, 
          int|null $new_partition_idx, 
          \FFI\CData|null $broker_ids, 
          int|null $broker_id_cnt, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Set the replica (broker id) assignment for new_partition_idx to the replica set in broker_ids (of broker_id_cnt elements).

      Remarks
      An application must either set the replica assignment for all new partitions, or none.
      If called, this function must be called consecutively for each new partition being created, where new_partition_idx 0 is the first new partition, 1 is the second, and so on.
      broker_id_cnt should match the topic's replication factor.
      Use rd_kafka_metadata() to retrieve the list of brokers in the cluster.
      See also
      rd_kafka_AdminOptions_set_validate_only()
      Parameters
      new_parts \FFI\CData|null rd_kafka_NewPartitions_t*
      new_partition_idx int|null int32_t
      broker_ids \FFI\CData|null int32_t*
      broker_id_cnt int|null size_t
      errstr \FFI\CData|null char*
      errstr_size int|null size_t
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, or an error code if the arguments were invalid.
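
      The consecutive-assignment rule above can be sketched as follows: a topic is grown to 5 partitions and the two new partitions are pinned to explicit broker ids. The topic name, broker ids, and replication factor of 2 are illustrative assumptions; fetch real broker ids with rd_kafka_metadata().

      ```php
      <?php
      // Hedged sketch: explicit replica assignment for new partitions.
      use RdKafka\FFI\Library;

      $ffi    = Library::getFFI();
      $errstr = $ffi->new('char[512]');

      // Grow 'sketch-topic' to 5 partitions total.
      $newParts = Library::rd_kafka_NewPartitions_new('sketch-topic', 5, $errstr, 512);

      // Assignments must be set consecutively for each NEW partition, index 0
      // first; broker_id_cnt should match the topic's replication factor (2).
      foreach ([[1, 2], [2, 3]] as $idx => $replicas) {
          $brokerIds    = $ffi->new('int32_t[2]');
          $brokerIds[0] = $replicas[0];
          $brokerIds[1] = $replicas[1];
          Library::rd_kafka_NewPartitions_set_replica_assignment(
              $newParts, $idx, $brokerIds, 2, $errstr, 512
          );
      }

      // ... pass $newParts to rd_kafka_CreatePartitions(), then:
      Library::rd_kafka_NewPartitions_destroy($newParts);
      ```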

      rd_kafka_CreatePartitions()

      public static rd_kafka_CreatePartitions ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $new_parts, 
          int|null $new_parts_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Create additional partitions for the given topics, as specified by the new_parts array of size new_parts_cnt elements.

      Supported admin options:

      • rd_kafka_AdminOptions_set_validate_only() - default false
      • rd_kafka_AdminOptions_set_operation_timeout() - default 60 seconds
      • rd_kafka_AdminOptions_set_request_timeout() - default socket.timeout.ms
      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_CREATEPARTITIONS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      new_parts \FFI\CData|null rd_kafka_NewPartitions_t** - Array of topics for which new partitions are to be created.
      new_parts_cnt int|null size_t - Number of elements in new_parts array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_CreatePartitions_result_topics()

      public static rd_kafka_CreatePartitions_result_topics ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of topic results from a CreatePartitions result.

      The returned topics' lifetime is the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_CreatePartitions_result_t* - Result to get topic results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_topic_result_t**

      rd_kafka_ConfigSource_name()

      public static rd_kafka_ConfigSource_name ( 
          int $confsource
       ): string|null
      
      Parameters
      confsource int rd_kafka_ConfigSource_t - )
      Returns
      string|null const char* - a string representation of the confsource.

      rd_kafka_ConfigEntry_name()

      public static rd_kafka_ConfigEntry_name ( 
          \FFI\CData|null $entry
       ): string|null
      
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      string|null const char* - the configuration property name

      rd_kafka_ConfigEntry_value()

      public static rd_kafka_ConfigEntry_value ( 
          \FFI\CData|null $entry
       ): string|null
      
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      string|null const char* - the configuration value, may be NULL for sensitive or unset properties.

      rd_kafka_ConfigEntry_source()

      public static rd_kafka_ConfigEntry_source ( 
          \FFI\CData|null $entry
       ): int
      
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      int rd_kafka_ConfigSource_t - the config source.

      rd_kafka_ConfigEntry_is_read_only()

      public static rd_kafka_ConfigEntry_is_read_only ( 
          \FFI\CData|null $entry
       ): int|null
      
      Remarks
      Shall only be used on a DescribeConfigs result, otherwise returns -1.
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      int|null int - 1 if the config property is read-only on the broker, else 0.

      rd_kafka_ConfigEntry_is_default()

      public static rd_kafka_ConfigEntry_is_default ( 
          \FFI\CData|null $entry
       ): int|null
      
      Remarks
      Shall only be used on a DescribeConfigs result, otherwise returns -1.
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      int|null int - 1 if the config property is set to its default value on the broker, else 0.

      rd_kafka_ConfigEntry_is_sensitive()

      public static rd_kafka_ConfigEntry_is_sensitive ( 
          \FFI\CData|null $entry
       ): int|null
      
      Remarks
      An application should take care not to include the value of sensitive configuration entries in its output.
      Shall only be used on a DescribeConfigs result, otherwise returns -1.
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      int|null int - 1 if the config property contains sensitive information (such as security configuration), else 0.

      rd_kafka_ConfigEntry_is_synonym()

      public static rd_kafka_ConfigEntry_is_synonym ( 
          \FFI\CData|null $entry
       ): int|null
      
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - )
      Returns
      int|null int - 1 if this entry is a synonym, else 0.

      rd_kafka_ConfigEntry_synonyms()

      public static rd_kafka_ConfigEntry_synonyms ( 
          \FFI\CData|null $entry, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      
      Remarks
      The lifetime of the returned entries is the same as the entry object.
      Shall only be used on a DescribeConfigs result, otherwise returns NULL.
      Parameters
      entry \FFI\CData|null const rd_kafka_ConfigEntry_t* - Entry to get synonyms for.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_ConfigEntry_t** - the synonym config entry array.

      rd_kafka_ResourceType_name()

      public static rd_kafka_ResourceType_name ( 
          int $restype
       ): string|null
      
      Parameters
      restype int rd_kafka_ResourceType_t - )
      Returns
      string|null const char* - a string representation of the restype

      rd_kafka_ConfigResource_new()

      public static rd_kafka_ConfigResource_new ( 
          int $restype, 
          string|null $resname
       ): \FFI\CData|null
      

      Create new ConfigResource object.

      Parameters
      restype int rd_kafka_ResourceType_t - The resource type (e.g., RD_KAFKA_RESOURCE_TOPIC)
      resname string|null const char* - The resource name (e.g., the topic name)
      Returns
      \FFI\CData|null rd_kafka_ConfigResource_t* - a newly allocated object

      rd_kafka_ConfigResource_destroy()

      public static rd_kafka_ConfigResource_destroy ( 
          \FFI\CData|null $config
       ): void
      
      Parameters
      config \FFI\CData|null rd_kafka_ConfigResource_t*

      rd_kafka_ConfigResource_destroy_array()

      public static rd_kafka_ConfigResource_destroy_array ( 
          \FFI\CData|null $config, 
          int|null $config_cnt
       ): void
      
      Parameters
      config \FFI\CData|null rd_kafka_ConfigResource_t**
      config_cnt int|null size_t

      rd_kafka_ConfigResource_set_config()

      public static rd_kafka_ConfigResource_set_config ( 
          \FFI\CData|null $config, 
          string|null $name, 
          string|null $value
       ): int
      

      Set a configuration name/value pair.

      This will overwrite the current value.

      Parameters
      config \FFI\CData|null rd_kafka_ConfigResource_t* - ConfigResource to set config property on.
      name string|null const char* - Configuration name, depends on resource type.
      value string|null const char* - Configuration value, depends on resource type and name. Set to NULL to revert configuration value to default.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR if config was added to resource, or RD_KAFKA_RESP_ERR__INVALID_ARG on invalid input.

      rd_kafka_ConfigResource_configs()

      public static rd_kafka_ConfigResource_configs ( 
          \FFI\CData|null $config, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of config entries from a ConfigResource object.

      The returned objects' lifetimes are the same as the config object's.

      Parameters
      config \FFI\CData|null const rd_kafka_ConfigResource_t* - ConfigResource to get configs from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_ConfigEntry_t**

      rd_kafka_ConfigResource_type()

      public static rd_kafka_ConfigResource_type ( 
          \FFI\CData|null $config
       ): int
      
      Parameters
      config \FFI\CData|null const rd_kafka_ConfigResource_t* - )
      Returns
      int rd_kafka_ResourceType_t - the ResourceType for config

      rd_kafka_ConfigResource_name()

      public static rd_kafka_ConfigResource_name ( 
          \FFI\CData|null $config
       ): string|null
      
      Parameters
      config \FFI\CData|null const rd_kafka_ConfigResource_t* - )
      Returns
      string|null const char* - the name for config

      rd_kafka_ConfigResource_error()

      public static rd_kafka_ConfigResource_error ( 
          \FFI\CData|null $config
       ): int
      
      Parameters
      config \FFI\CData|null const rd_kafka_ConfigResource_t* - )
      Returns
      int rd_kafka_resp_err_t - the error for this resource from an AlterConfigs request

      rd_kafka_ConfigResource_error_string()

      public static rd_kafka_ConfigResource_error_string ( 
          \FFI\CData|null $config
       ): string|null
      
      Parameters
      config \FFI\CData|null const rd_kafka_ConfigResource_t* - )
      Returns
      string|null const char* - the error string for this resource from an AlterConfigs request, or NULL if no error.

      rd_kafka_AlterConfigs()

      public static rd_kafka_AlterConfigs ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $configs, 
          int|null $config_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Update the configuration for the specified resources. Updates are not transactional, so they may succeed for a subset of the provided resources while the others fail. The configuration for a particular resource is updated atomically, replacing values using the provided ConfigEntry objects and reverting unspecified ConfigEntry objects to their default values.

      Remarks
      Requires broker version >=0.11.0.0
      Warning
      AlterConfigs will replace all existing configuration for the provided resources with the new configuration given, reverting all other configuration to their default values.
      Remarks
      Multiple resources and resource types may be set, but at most one resource of type RD_KAFKA_RESOURCE_BROKER is allowed per call since these resource requests must be sent to the broker specified in the resource.
      Deprecated:
      Use rd_kafka_IncrementalAlterConfigs().
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      configs \FFI\CData|null rd_kafka_ConfigResource_t**
      config_cnt int|null size_t
      options \FFI\CData|null const rd_kafka_AdminOptions_t*
      rkqu \FFI\CData|null rd_kafka_queue_t*
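
      An AlterConfigs round trip can be sketched as follows, assuming the trait is exposed via a wrapper class (here \RdKafka\FFI\Library) and that $rk is an existing rd_kafka_t* handle; topic name and value are illustrative.

      ```php
      <?php
      // Hedged sketch: replace retention.ms for a topic via AlterConfigs.
      // Heed the warning above: unspecified properties revert to defaults;
      // rd_kafka_IncrementalAlterConfigs() is the non-destructive alternative.
      use RdKafka\FFI\Library;

      $ffi = Library::getFFI();

      $res = Library::rd_kafka_ConfigResource_new(
          2, // RD_KAFKA_RESOURCE_TOPIC (rd_kafka_ResourceType_t enum value)
          'sketch-topic'
      );
      Library::rd_kafka_ConfigResource_set_config($res, 'retention.ms', '86400000');

      $configs    = $ffi->new('rd_kafka_ConfigResource_t*[1]');
      $configs[0] = $res;

      $queue = Library::rd_kafka_queue_new($rk);
      Library::rd_kafka_AlterConfigs($rk, $configs, 1, null, $queue);

      // Poll $queue for an RD_KAFKA_EVENT_ALTERCONFIGS_RESULT event, then
      // inspect each resource with rd_kafka_ConfigResource_error()/_error_string().

      Library::rd_kafka_ConfigResource_destroy_array($configs, 1);
      Library::rd_kafka_queue_destroy($queue);
      ```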

      rd_kafka_AlterConfigs_result_resources()

      public static rd_kafka_AlterConfigs_result_resources ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of resource results from an AlterConfigs result.

      Use rd_kafka_ConfigResource_error() and rd_kafka_ConfigResource_error_string() to extract per-resource error results on the returned array elements.

      The returned objects' lifetimes are the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_AlterConfigs_result_t* - Result object to get resource results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_ConfigResource_t** - an array of ConfigResource elements, or NULL if not available.

      rd_kafka_DescribeConfigs()

      public static rd_kafka_DescribeConfigs ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $configs, 
          int|null $config_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Get configuration for the specified resources in configs.

      The returned configuration includes default values and the rd_kafka_ConfigEntry_is_default() or rd_kafka_ConfigEntry_source() methods may be used to distinguish them from user supplied values.

      The value of config entries where rd_kafka_ConfigEntry_is_sensitive() is true will always be NULL to avoid disclosing sensitive information, such as security settings.

      Configuration entries where rd_kafka_ConfigEntry_is_read_only() is true can't be updated (with rd_kafka_AlterConfigs()).

      Synonym configuration entries are returned if the broker supports it (broker version >= 1.1.0). See rd_kafka_ConfigEntry_synonyms().

      Remarks
      Requires broker version >=0.11.0.0
      Multiple resources and resource types may be requested, but at most one resource of type RD_KAFKA_RESOURCE_BROKER is allowed per call since these resource requests must be sent to the broker specified in the resource.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      configs \FFI\CData|null rd_kafka_ConfigResource_t**
      config_cnt int|null size_t
      options \FFI\CData|null const rd_kafka_AdminOptions_t*
      rkqu \FFI\CData|null rd_kafka_queue_t*

      rd_kafka_DescribeConfigs_result_resources()

      public static rd_kafka_DescribeConfigs_result_resources ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of resource results from a DescribeConfigs result.

      The returned resources' lifetime is the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_DescribeConfigs_result_t* - Result object to get resource results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_ConfigResource_t**
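
      Combining this accessor with the ConfigEntry helpers described above, a DescribeConfigs result can be walked as sketched below. $result is assumed to be a const rd_kafka_DescribeConfigs_result_t* obtained from the result event.

      ```php
      <?php
      // Hedged sketch: print the non-default config entries of each resource.
      use RdKafka\FFI\Library;

      $ffi       = Library::getFFI();
      $rcnt      = $ffi->new('size_t');
      $resources = Library::rd_kafka_DescribeConfigs_result_resources($result, FFI::addr($rcnt));

      for ($i = 0; $i < $rcnt->cdata; $i++) {
          $ecnt    = $ffi->new('size_t');
          $entries = Library::rd_kafka_ConfigResource_configs($resources[$i], FFI::addr($ecnt));
          for ($j = 0; $j < $ecnt->cdata; $j++) {
              if (Library::rd_kafka_ConfigEntry_is_default($entries[$j]) === 1) {
                  continue; // skip broker defaults
              }
              printf(
                  "%s = %s\n",
                  Library::rd_kafka_ConfigEntry_name($entries[$j]),
                  // value is NULL for sensitive entries
                  Library::rd_kafka_ConfigEntry_value($entries[$j]) ?? '(sensitive)'
              );
          }
      }
      ```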

      rd_kafka_conf()

      public static rd_kafka_conf ( 
          \FFI\CData|null $rk
       ): \FFI\CData|null
      
      Remarks
      the returned object is read-only and its lifetime is the same as the rd_kafka_t object.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      \FFI\CData|null const rd_kafka_conf_t* - the configuration object used by an rd_kafka_t instance. For use with rd_kafka_conf_get(), et.al., to extract configuration properties from a running client.

      rd_kafka_conf_set_oauthbearer_token_refresh_cb()

      public static rd_kafka_conf_set_oauthbearer_token_refresh_cb ( 
          \FFI\CData|null $conf, 
          \FFI\CData|\Closure $oauthbearer_token_refresh_cb
       ): void
      

      Set SASL/OAUTHBEARER token refresh callback in provided conf object.

      The SASL/OAUTHBEARER token refresh callback is triggered via rd_kafka_poll() whenever OAUTHBEARER is the SASL mechanism and a token needs to be retrieved, typically based on the configuration defined in sasl.oauthbearer.config.

      The callback should invoke rd_kafka_oauthbearer_set_token() or rd_kafka_oauthbearer_set_token_failure() to indicate success or failure, respectively.

      The refresh operation is eventable and may be received via rd_kafka_queue_poll() with an event type of RD_KAFKA_EVENT_OAUTHBEARER_TOKEN_REFRESH.

      Note that before any SASL/OAUTHBEARER broker connection can succeed the application must call rd_kafka_oauthbearer_set_token() once – either directly or, more typically, by invoking rd_kafka_poll(), rd_kafka_consumer_poll(), rd_kafka_queue_poll(), etc. – in order to cause retrieval of an initial token to occur.

      Alternatively, the application can enable the SASL queue by calling rd_kafka_conf_enable_sasl_queue() on the configuration object prior to creating the client instance, get the SASL queue with rd_kafka_queue_get_sasl(), and either serve the queue manually by calling rd_kafka_queue_poll(), or redirecting the queue to the background thread to have the queue served automatically. For the latter case the SASL queue must be forwarded to the background queue with rd_kafka_queue_forward(). A convenience function is available to automatically forward the SASL queue to librdkafka's background thread, see rd_kafka_sasl_background_callbacks_enable().

      An unsecured JWT refresh handler is provided by librdkafka for development and testing purposes; it is enabled by setting the enable.sasl.oauthbearer.unsecure.jwt property to true and is mutually exclusive with using a refresh callback.

      See also
      rd_kafka_sasl_background_callbacks_enable()
      rd_kafka_queue_get_sasl()
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - the configuration to mutate.
      oauthbearer_token_refresh_cb \FFI\CData|\Closure void(*)(rd_kafka_t*, const char*, void*) - the callback to set; callback function arguments:
      rk - Kafka handle
      oauthbearer_config - Value of configuration property sasl.oauthbearer.config.
      opaque - Application-provided opaque set via rd_kafka_conf_set_opaque()
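
      A refresh callback can be registered as a PHP closure, as sketched below. The token value, principal name, and one-hour lifetime are placeholders, not real credentials; rd_kafka_oauthbearer_set_token_failure() is from the wider librdkafka API.

      ```php
      <?php
      // Hedged sketch: install a statically configured OAUTHBEARER token
      // from the refresh callback.
      use RdKafka\FFI\Library;

      Library::rd_kafka_conf_set_oauthbearer_token_refresh_cb(
          $conf,
          function ($rk, $oauthbearerConfig, $opaque): void {
              $ffi    = Library::getFFI();
              $errstr = $ffi->new('char[512]');
              $err = Library::rd_kafka_oauthbearer_set_token(
                  $rk,
                  'placeholder-jws-token',  // token_value (placeholder)
                  (time() + 3600) * 1000,   // md_lifetime_ms: ms since epoch
                  'sketch-principal',       // md_principal_name (placeholder)
                  null, 0,                  // no SASL extensions
                  $errstr, 512
              );
              if ($err !== 0) { // 0 = RD_KAFKA_RESP_ERR_NO_ERROR
                  Library::rd_kafka_oauthbearer_set_token_failure(
                      $rk, FFI::string($errstr)
                  );
              }
          }
      );
      ```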

      rd_kafka_conf_set_ssl_cert_verify_cb()

      public static rd_kafka_conf_set_ssl_cert_verify_cb ( 
          \FFI\CData|null $conf, 
          \FFI\CData|\Closure $ssl_cert_verify_cb
       ): int
      

      Sets the verification callback of the broker certificate.

      The verification callback is triggered from internal librdkafka threads upon connecting to a broker. On each connection attempt the callback will be called for each certificate in the broker's certificate chain, starting at the root certificate, as long as the application callback returns 1 (valid certificate). broker_name and broker_id correspond to the broker the connection is being made to. The x509_error argument indicates whether OpenSSL's verification of the certificate succeeded (0) or failed (an OpenSSL error code). The application may set the SSL context error code by returning 0 from the verify callback and providing a non-zero SSL context error code in x509_error. If the verify callback sets x509_error to 0, returns 1, and the original x509_error was non-zero, the error on the SSL context will be cleared. x509_error is always a valid pointer to an int.

      depth is the depth of the current certificate in the chain, starting at the root certificate.

      The certificate itself is passed in binary DER format in buf of size size.

      The callback must return 1 if verification succeeds, or 0 if verification fails, in which case it should write a human-readable error message to errstr (limited to errstr_size bytes, including the nul-terminator).

      The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

      Warning
      This callback will be called from internal librdkafka threads.
      Remarks
      See <openssl/x509_vfy.h> in the OpenSSL source distribution for a list of x509_error codes.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t*
      ssl_cert_verify_cb \FFI\CData|\Closure int(*)(rd_kafka_t*, const char*, int32_t, int*, int, const char*, size_t, char*, size_t, void*)
      Returns
      int rd_kafka_conf_res_t - RD_KAFKA_CONF_OK if SSL is supported in this build, else RD_KAFKA_CONF_INVALID.

      rd_kafka_conf_set_ssl_cert()

      public static rd_kafka_conf_set_ssl_cert ( 
          \FFI\CData|null $conf, 
          int $cert_type, 
          int $cert_enc, 
          \FFI\CData|object|string|null $buffer, 
          int|null $size, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Set certificate/key cert_type from the cert_enc encoded memory at buffer of size size bytes.

      Remarks
      Calling this method multiple times with the same cert_type will replace the previous value.
      Calling this method with buffer set to NULL will clear the configuration for cert_type.
      The private key may require a password, which must be specified with the ssl.key.password configuration property prior to calling this function.
      Private and public keys in PEM format may also be set with the ssl.key.pem and ssl.certificate.pem configuration properties.
      CA certificate in PEM format may also be set with the ssl.ca.pem configuration property.
      When librdkafka is linked to OpenSSL 3.0 and the certificate is encoded using an obsolete cipher, it might be necessary to set up an OpenSSL configuration file to load the "legacy" provider and set the OPENSSL_CONF environment variable. See https://github.com/openssl/openssl/blob/master/README-PROVIDERS.md for more information.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      cert_type int rd_kafka_cert_type_t - Certificate or key type to configure.
      cert_enc int rd_kafka_cert_enc_t - Buffer encoding type.
      buffer \FFI\CData|object|string|null const void* - Memory pointer to encoded certificate or key. The memory is not referenced after this function returns.
      size int|null size_t - Size of memory at buffer.
      errstr \FFI\CData|null char* - Memory where a human-readable error string will be written on failure.
      errstr_size int|null size_t - Size of errstr, including space for nul-terminator.
      Returns
      int rd_kafka_conf_res_t - RD_KAFKA_CONF_OK on success or RD_KAFKA_CONF_INVALID if the memory in buffer is of incorrect encoding, or if librdkafka was not built with SSL support.

      rd_kafka_event_config_string()

      public static rd_kafka_event_config_string ( 
          \FFI\CData|null $rkev
       ): string|null
      

      The returned memory is read-only and its lifetime is the same as the event object.

      Event types:

      • RD_KAFKA_EVENT_OAUTHBEARER_TOKEN_REFRESH: value of sasl.oauthbearer.config
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      string|null const char* - the associated configuration string for the event, or NULL if the configuration property is not set or if not applicable for the given event type.

      rd_kafka_oauthbearer_set_token()

      public static rd_kafka_oauthbearer_set_token ( 
          \FFI\CData|null $rk, 
          string|null $token_value, 
          int|null $md_lifetime_ms, 
          string|null $md_principal_name, 
          \FFI\CData|null $extensions, 
          int|null $extension_size, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): int
      

      Set SASL/OAUTHBEARER token and metadata.

      The SASL/OAUTHBEARER token refresh callback or event handler should invoke this method upon success. The extension keys must not include the reserved key "`auth`", and all extension keys and values must conform to the required format as per https://tools.ietf.org/html/rfc7628#section-3.1:

      key            = 1*(ALPHA)
      value          = *(VCHAR / SP / HTAB / CR / LF )
      
      See also
      rd_kafka_oauthbearer_set_token_failure
      rd_kafka_conf_set_oauthbearer_token_refresh_cb
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      token_value string|null const char* - the mandatory token value to set, often (but not necessarily) a JWS compact serialization as per https://tools.ietf.org/html/rfc7515#section-3.1.
      md_lifetime_ms int|null int64_t - when the token expires, in terms of the number of milliseconds since the epoch.
      md_principal_name string|null const char* - the mandatory Kafka principal name associated with the token.
      extensions \FFI\CData|null const char** - optional SASL extensions key-value array with extensions_size elements (number of keys * 2), where [i] is the key and [i+1] is the key’s value, to be communicated to the broker as additional key-value pairs during the initial client response as per https://tools.ietf.org/html/rfc7628#section-3.1. The key-value pairs are copied.
      extension_size int|null size_t - the number of SASL extension keys plus values, which must be a non-negative multiple of 2.
      errstr \FFI\CData|null char* - A human-readable error string (nul-terminated) is written to this location; it must be at least errstr_size bytes. The errstr is only written in case of error.
      errstr_size int|null size_t - Writable size in errstr.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, otherwise errstr set and:
      RD_KAFKA_RESP_ERR__INVALID_ARG if any of the arguments are invalid;
      RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED if SASL/OAUTHBEARER is not supported by this build;
      RD_KAFKA_RESP_ERR__STATE if SASL/OAUTHBEARER is supported but is not configured as the client’s authentication mechanism.
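
      As an illustrative PHP sketch only (assuming the trait is exposed through a class such as \RdKafka\FFI\Library, that the RD_KAFKA_* constants are defined, and that $rk and $jwsToken already exist), a token refresh handler could set the token like this:

      ```php
      <?php
      // Hypothetical sketch: $rk is an existing rd_kafka_t* client instance and
      // Library exposes the \RdKafka\FFI\Methods trait. Names are illustrative.
      use RdKafka\FFI\Library;

      $errstr     = \FFI::new('char[512]');              // buffer for a human-readable error
      $lifetimeMs = (time() + 3600) * 1000;              // token expiry, ms since the epoch

      $err = Library::rd_kafka_oauthbearer_set_token(
          $rk,
          $jwsToken,       // token value obtained from your OAuth provider
          $lifetimeMs,
          'my-principal',  // mandatory Kafka principal name
          null,            // no SASL extensions
          0,               // extension_size must be a multiple of 2
          $errstr,
          \FFI::sizeof($errstr)
      );

      if ($err !== RD_KAFKA_RESP_ERR_NO_ERROR) {
          // On failure, signal it so the client does not keep a stale token.
          Library::rd_kafka_oauthbearer_set_token_failure($rk, \FFI::string($errstr));
      }
      ```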

      rd_kafka_oauthbearer_set_token_failure()

      public static rd_kafka_oauthbearer_set_token_failure ( 
          \FFI\CData|null $rk, 
          string|null $errstr
       ): int
      

      SASL/OAUTHBEARER token refresh failure indicator.

      The SASL/OAUTHBEARER token refresh callback or event handler should invoke this method upon failure.

      See also
      rd_kafka_oauthbearer_set_token
      rd_kafka_conf_set_oauthbearer_token_refresh_cb
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      errstr string|null const char* - mandatory human readable error reason for failing to acquire a token.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success, otherwise:
      RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED if SASL/OAUTHBEARER is not supported by this build;
      RD_KAFKA_RESP_ERR__STATE if SASL/OAUTHBEARER is supported but is not configured as the client’s authentication mechanism,
      RD_KAFKA_RESP_ERR__INVALID_ARG if no error string is supplied.

      rd_kafka_interceptor_f_on_thread_start_t()

      public static rd_kafka_interceptor_f_on_thread_start_t ( 
          \FFI\CData|null $rk, 
          int $thread_type, 
          string|null $thread_name, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_thread_start() is called from a newly created librdkafka-managed thread.

      Warning
      The on_thread_start() interceptor is called from internal librdkafka threads. An on_thread_start() interceptor MUST NOT call any librdkafka APIs associated with the rk, or perform any blocking or prolonged work.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      thread_type int rd_kafka_thread_type_t - Thread type.
      thread_name string|null const char* - Human-readable thread name, may not be unique.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_f_on_thread_exit_t()

      public static rd_kafka_interceptor_f_on_thread_exit_t ( 
          \FFI\CData|null $rk, 
          int $thread_type, 
          string|null $thread_name, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_thread_exit() is called just prior to a librdkafka-managed thread exiting from the exiting thread itself.

      Remarks
      Depending on the thread type, librdkafka may execute additional code on the thread after on_thread_exit() returns.
      Warning
      The on_thread_exit() interceptor is called from internal librdkafka threads. An on_thread_exit() interceptor MUST NOT call any librdkafka APIs associated with the rk, or perform any blocking or prolonged work.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      thread_type int rd_kafka_thread_type_t - Thread type.
      thread_name string|null const char* - Human-readable thread name, may not be unique.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_add_on_thread_start()

      public static rd_kafka_interceptor_add_on_thread_start ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_thread_start, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_thread_start() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_thread_start \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_thread_start_t*)(rd_kafka_t*, rd_kafka_thread_type_t, const char*, void*)
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.
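
      A minimal sketch of registering an interceptor, assuming a class such as \RdKafka\FFI\Library exposes this trait and that PHP FFI marshals the callback arguments as CData (all names are illustrative):

      ```php
      <?php
      // Hypothetical sketch: $rk is an existing rd_kafka_t* handle.
      use RdKafka\FFI\Library;

      $err = Library::rd_kafka_interceptor_add_on_thread_start(
          $rk,
          'thread-logger', // interceptor name, used in logging
          function ($rk, $threadType, $threadName, $icOpaque) {
              // MUST NOT call librdkafka APIs on $rk or block here.
              // $threadName may arrive as CData; \FFI::string() converts a char*.
              error_log('librdkafka thread started: ' . \FFI::string($threadName));
              return RD_KAFKA_RESP_ERR_NO_ERROR;
          },
          null // ic_opaque
      );
      ```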

      rd_kafka_interceptor_add_on_thread_exit()

      public static rd_kafka_interceptor_add_on_thread_exit ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_thread_exit, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_thread_exit() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_thread_exit \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_thread_exit_t*)(rd_kafka_t*, rd_kafka_thread_type_t, const char*, void*)
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_mock_cluster_new()

      public static rd_kafka_mock_cluster_new ( 
          \FFI\CData|null $rk, 
          int|null $broker_cnt
       ): \FFI\CData|null
      
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      broker_cnt int|null int
      Returns
      \FFI\CData|null rd_kafka_mock_cluster_t*

      rd_kafka_mock_cluster_destroy()

      public static rd_kafka_mock_cluster_destroy ( 
          \FFI\CData|null $mcluster
       ): void
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*

      rd_kafka_mock_cluster_handle()

      public static rd_kafka_mock_cluster_handle ( 
          \FFI\CData|null $mcluster
       ): \FFI\CData|null
      
      Parameters
      mcluster \FFI\CData|null const rd_kafka_mock_cluster_t*
      Returns
      \FFI\CData|null rd_kafka_t*

      rd_kafka_mock_cluster_bootstraps()

      public static rd_kafka_mock_cluster_bootstraps ( 
          \FFI\CData|null $mcluster
       ): string|null
      
      Parameters
      mcluster \FFI\CData|null const rd_kafka_mock_cluster_t*
      Returns
      string|null const char*
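
      The mock cluster methods above combine into a typical test setup. A hedged sketch (assuming a class such as \RdKafka\FFI\Library exposes this trait and $rk is a previously created rd_kafka_t*; the constants and variadic forwarding are assumptions about the binding):

      ```php
      <?php
      // Hypothetical sketch: spin up a mock cluster for tests.
      use RdKafka\FFI\Library;

      $mcluster = Library::rd_kafka_mock_cluster_new($rk, 3); // 3 mock brokers

      // Point real clients at the mock cluster via its bootstrap list.
      $bootstraps = Library::rd_kafka_mock_cluster_bootstraps($mcluster);

      // Inject failures: the next two Produce requests (ApiKey 0) time out.
      Library::rd_kafka_mock_push_request_errors(
          $mcluster, 0, 2,
          RD_KAFKA_RESP_ERR__TIMED_OUT, RD_KAFKA_RESP_ERR__TIMED_OUT
      );

      // ... run the code under test against $bootstraps ...

      Library::rd_kafka_mock_cluster_destroy($mcluster);
      ```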

      rd_kafka_mock_push_request_errors()

      public static rd_kafka_mock_push_request_errors ( 
          \FFI\CData|null $mcluster, 
          int|null $ApiKey, 
          int|null $cnt, 
          mixed $args
       ): void
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      ApiKey int|null int16_t
      cnt int|null size_t
      args mixed

      rd_kafka_mock_topic_set_error()

      public static rd_kafka_mock_topic_set_error ( 
          \FFI\CData|null $mcluster, 
          string|null $topic, 
          int $err
       ): void
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      topic string|null const char*
      err int rd_kafka_resp_err_t

      rd_kafka_mock_partition_set_leader()

      public static rd_kafka_mock_partition_set_leader ( 
          \FFI\CData|null $mcluster, 
          string|null $topic, 
          int|null $partition, 
          int|null $broker_id
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      topic string|null const char*
      partition int|null int32_t
      broker_id int|null int32_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_partition_set_follower()

      public static rd_kafka_mock_partition_set_follower ( 
          \FFI\CData|null $mcluster, 
          string|null $topic, 
          int|null $partition, 
          int|null $broker_id
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      topic string|null const char*
      partition int|null int32_t
      broker_id int|null int32_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_partition_set_follower_wmarks()

      public static rd_kafka_mock_partition_set_follower_wmarks ( 
          \FFI\CData|null $mcluster, 
          string|null $topic, 
          int|null $partition, 
          int|null $lo, 
          int|null $hi
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      topic string|null const char*
      partition int|null int32_t
      lo int|null int64_t
      hi int|null int64_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_broker_set_rack()

      public static rd_kafka_mock_broker_set_rack ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id, 
          string|null $rack
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      rack string|null const char*
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_error_code()

      public static rd_kafka_error_code ( 
          \FFI\CData|null $error
       ): int
      
      Parameters
      error \FFI\CData|null const rd_kafka_error_t* - )
      Returns
      int rd_kafka_resp_err_t - the error code for error or RD_KAFKA_RESP_ERR_NO_ERROR if error is NULL.

      rd_kafka_error_name()

      public static rd_kafka_error_name ( 
          \FFI\CData|null $error
       ): string|null
      
      Remarks
      The lifetime of the returned pointer is the same as the error object.
      See also
      rd_kafka_err2name()
      Parameters
      error \FFI\CData|null const rd_kafka_error_t* - )
      Returns
      string|null const char* - the error code name for error, e.g., “ERR_UNKNOWN_MEMBER_ID”, or an empty string if error is NULL.

      rd_kafka_error_string()

      public static rd_kafka_error_string ( 
          \FFI\CData|null $error
       ): string|null
      
      Remarks
      The lifetime of the returned pointer is the same as the error object.
      Parameters
      error \FFI\CData|null const rd_kafka_error_t* - )
      Returns
      string|null const char* - a human readable error string for error, or an empty string if error is NULL.

      rd_kafka_error_is_fatal()

      public static rd_kafka_error_is_fatal ( 
          \FFI\CData|null $error
       ): int|null
      
      Parameters
      error \FFI\CData|null const rd_kafka_error_t* - )
      Returns
      int|null int - 1 if the error is a fatal error, indicating that the client instance is no longer usable, else 0 (also if error is NULL).

      rd_kafka_error_is_retriable()

      public static rd_kafka_error_is_retriable ( 
          \FFI\CData|null $error
       ): int|null
      
      Parameters
      error \FFI\CData|null const rd_kafka_error_t* - )
      Returns
      int|null int - 1 if the operation may be retried, else 0 (also if error is NULL).

      rd_kafka_error_txn_requires_abort()

      public static rd_kafka_error_txn_requires_abort ( 
          \FFI\CData|null $error
       ): int|null
      
      Remarks
      The return value of this method is only valid for errors returned by the transactional API.
      Parameters
      error \FFI\CData|null const rd_kafka_error_t* - )
      Returns
      int|null int - 1 if the error is an abortable transaction error in which case the application must call rd_kafka_abort_transaction() and start a new transaction with rd_kafka_begin_transaction() if it wishes to proceed with transactions. Else returns 0 (also if error is NULL).

      rd_kafka_error_destroy()

      public static rd_kafka_error_destroy ( 
          \FFI\CData|null $error
       ): void
      

      Free and destroy an error object.

      Remarks
      As a convenience it is permitted to pass a NULL error.
      Parameters
      error \FFI\CData|null rd_kafka_error_t* - )

      rd_kafka_error_new()

      public static rd_kafka_error_new ( 
          int $code, 
          string|null $fmt, 
          mixed $args
       ): \FFI\CData|null
      

      Create a new error object with error code and optional human readable error string in fmt.

      This method is mainly to be used for mocking errors in application test code.

      The returned object must be destroyed with rd_kafka_error_destroy().

      Parameters
      code int rd_kafka_resp_err_t
      fmt string|null const char*
      args mixed
      Returns
      \FFI\CData|null rd_kafka_error_t*
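
      Since rd_kafka_error_new() exists for mocking, it pairs naturally with the inspection methods above in test code. A hedged sketch (assuming a class such as \RdKafka\FFI\Library exposes this trait and the RD_KAFKA_* constants are defined):

      ```php
      <?php
      // Hypothetical sketch: fabricate an error object in test code and inspect it.
      use RdKafka\FFI\Library;

      $error = Library::rd_kafka_error_new(
          RD_KAFKA_RESP_ERR__TIMED_OUT,
          'simulated timeout for testing'
      );

      $name    = Library::rd_kafka_error_name($error);   // error code name (enum name)
      $message = Library::rd_kafka_error_string($error); // human-readable string
      $retry   = Library::rd_kafka_error_is_retriable($error) === 1;
      $fatal   = Library::rd_kafka_error_is_fatal($error) === 1;

      // Error objects created with rd_kafka_error_new() must be destroyed.
      Library::rd_kafka_error_destroy($error);
      ```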

      rd_kafka_msg_partitioner_fnv1a()

      public static rd_kafka_msg_partitioner_fnv1a ( 
          \FFI\CData|null $rkt, 
          \FFI\CData|object|string|null $key, 
          int|null $keylen, 
          int|null $partition_cnt, 
          \FFI\CData|object|string|null $rkt_opaque, 
          \FFI\CData|object|string|null $msg_opaque
       ): int|null
      

      FNV-1a partitioner.

      Uses consistent hashing to map identical keys onto identical partitions using FNV-1a hashing.

      The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

      Parameters
      rkt \FFI\CData|null const rd_kafka_topic_t*
      key \FFI\CData|object|string|null const void*
      keylen int|null size_t
      partition_cnt int|null int32_t
      rkt_opaque \FFI\CData|object|string|null void*
      msg_opaque \FFI\CData|object|string|null void*
      Returns
      int|null int32_t - a partition between 0 and partition_cnt - 1.

      rd_kafka_msg_partitioner_fnv1a_random()

      public static rd_kafka_msg_partitioner_fnv1a_random ( 
          \FFI\CData|null $rkt, 
          \FFI\CData|object|string|null $key, 
          int|null $keylen, 
          int|null $partition_cnt, 
          \FFI\CData|object|string|null $rkt_opaque, 
          \FFI\CData|object|string|null $msg_opaque
       ): int|null
      

      Consistent-Random FNV-1a partitioner.

      Uses consistent hashing to map identical keys onto identical partitions using FNV-1a hashing. Messages without keys will be assigned via the random partitioner.

      The rkt_opaque argument is the opaque set by rd_kafka_topic_conf_set_opaque(). The msg_opaque argument is the per-message opaque passed to produce().

      Parameters
      rkt \FFI\CData|null const rd_kafka_topic_t*
      key \FFI\CData|object|string|null const void*
      keylen int|null size_t
      partition_cnt int|null int32_t
      rkt_opaque \FFI\CData|object|string|null void*
      msg_opaque \FFI\CData|object|string|null void*
      Returns
      int|null int32_t - a partition between 0 and partition_cnt - 1.

      rd_kafka_consumer_group_metadata()

      public static rd_kafka_consumer_group_metadata ( 
          \FFI\CData|null $rk
       ): \FFI\CData|null
      
      Remarks
      The returned pointer must be freed by the application using rd_kafka_consumer_group_metadata_destroy().
      See also
      rd_kafka_send_offsets_to_transaction()
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      \FFI\CData|null rd_kafka_consumer_group_metadata_t* - the current consumer group metadata associated with this consumer, or NULL if rk is not a consumer configured with a group.id. This metadata object should be passed to the transactional producer’s rd_kafka_send_offsets_to_transaction() API.

      rd_kafka_consumer_group_metadata_new()

      public static rd_kafka_consumer_group_metadata_new ( 
          string|null $group_id
       ): \FFI\CData|null
      

      Create a new consumer group metadata object. This is typically only used for writing tests.

      Remarks
      The returned pointer must be freed by the application using rd_kafka_consumer_group_metadata_destroy().
      Parameters
      group_id string|null const char* - ) - The group id.
      Returns
      \FFI\CData|null rd_kafka_consumer_group_metadata_t*

      rd_kafka_consumer_group_metadata_destroy()

      public static rd_kafka_consumer_group_metadata_destroy ( 
          \FFI\CData|null $arg0
       ): void
      
      Parameters
      arg0 \FFI\CData|null rd_kafka_consumer_group_metadata_t*

      rd_kafka_consumer_group_metadata_write()

      public static rd_kafka_consumer_group_metadata_write ( 
          \FFI\CData|null $cgmd, 
          \FFI\CData|object|string|null $bufferp, 
          \FFI\CData|null $sizep
       ): \FFI\CData|null
      

      Serialize the consumer group metadata to a binary format. This is mainly for client binding use and not for application use.

      Remarks
      The serialized metadata format is private and is not compatible across different versions or even builds of librdkafka. It should only be used in the same process runtime and must only be passed to rd_kafka_consumer_group_metadata_read().
      See also
      rd_kafka_consumer_group_metadata_read()
      Parameters
      cgmd \FFI\CData|null const rd_kafka_consumer_group_metadata_t* - Metadata to be serialized.
      bufferp \FFI\CData|object|string|null void** - On success this pointer will be updated to point to an allocated buffer containing the serialized metadata. The buffer must be freed with rd_kafka_mem_free().
      sizep \FFI\CData|null size_t* - The pointed to size will be updated with the size of the serialized buffer.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure.

      rd_kafka_consumer_group_metadata_read()

      public static rd_kafka_consumer_group_metadata_read ( 
          \FFI\CData|null $cgmdp, 
          \FFI\CData|object|string|null $buffer, 
          int|null $size
       ): \FFI\CData|null
      

      Reads serialized consumer group metadata and returns a consumer group metadata object. This is mainly for client binding use and not for application use.

      Remarks
      The serialized metadata format is private and is not compatible across different versions or even builds of librdkafka. It should only be used in the same process runtime and must only be passed to rd_kafka_consumer_group_metadata_read().
      See also
      rd_kafka_consumer_group_metadata_write()
      Parameters
      cgmdp \FFI\CData|null rd_kafka_consumer_group_metadata_t** - On success this pointer will be updated to point to a new consumer group metadata object which must be freed with rd_kafka_consumer_group_metadata_destroy().
      buffer \FFI\CData|object|string|null const void* - Pointer to the serialized data.
      size int|null size_t - Size of the serialized data.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure.
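
      The write/read pair round-trips metadata within one process. A hedged sketch of the out-parameter handling (assuming a class such as \RdKafka\FFI\Library exposes this trait, $consumer is a rd_kafka_t* consumer handle, and that the librdkafka types are creatable through the binding's FFI instance; those details are glossed over here):

      ```php
      <?php
      // Hypothetical sketch: serialize consumer group metadata and read it back.
      use RdKafka\FFI\Library;

      $cgmd = Library::rd_kafka_consumer_group_metadata($consumer);

      // Out-parameters: a void* buffer pointer and its size.
      $bufferp = \FFI::new('void*');
      $sizep   = \FFI::new('size_t');

      $error = Library::rd_kafka_consumer_group_metadata_write(
          $cgmd, \FFI::addr($bufferp), \FFI::addr($sizep)
      );
      if ($error === null) {
          // $cgmdp must be a rd_kafka_consumer_group_metadata_t** out-parameter,
          // created via the binding's FFI instance (assumed here).
          $error = Library::rd_kafka_consumer_group_metadata_read(
              \FFI::addr($cgmdp), $bufferp, $sizep->cdata
          );
          // ... use $cgmdp, then free both resources:
          Library::rd_kafka_consumer_group_metadata_destroy($cgmdp);
          Library::rd_kafka_mem_free(null, $bufferp);
      }
      Library::rd_kafka_consumer_group_metadata_destroy($cgmd);
      ```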

      rd_kafka_init_transactions()

      public static rd_kafka_init_transactions ( 
          \FFI\CData|null $rk, 
          int|null $timeout_ms
       ): \FFI\CData|null
      

      Initialize transactions for the producer instance.

      This function ensures any transactions initiated by previous instances of the producer with the same transactional.id are completed. If the previous instance failed with a transaction in progress the previous transaction will be aborted. This function needs to be called before any other transactional or produce functions are called when the transactional.id is configured.

      If the last transaction had begun completion (following transaction commit) but not yet finished, this function will await the previous transaction's completion.

      When any previous transactions have been fenced this function will acquire the internal producer id and epoch, used in all future transactional messages issued by this producer instance.

      Remarks
      This function may block up to timeout_ms milliseconds.
      This call is resumable when a retriable timeout error is returned. Calling the function again will resume the operation that is progressing in the background.
      Remarks
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Producer instance.
      timeout_ms int|null int - The maximum time to block. On timeout the operation may continue in the background, depending on state, and it is okay to call init_transactions() again. If an infinite timeout (-1) is passed, the timeout will be adjusted to 2 * transaction.timeout.ms.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure. Check whether the returned error object permits retrying by calling rd_kafka_error_is_retriable(), or whether a fatal error has been raised by calling rd_kafka_error_is_fatal(). Error codes: RD_KAFKA_RESP_ERR__TIMED_OUT if the transaction coordinator could not be contacted within timeout_ms (retriable), RD_KAFKA_RESP_ERR_COORDINATOR_NOT_AVAILABLE if the transaction coordinator is not available (retriable), RD_KAFKA_RESP_ERR_CONCURRENT_TRANSACTIONS if a previous transaction would not complete within timeout_ms (retriable), RD_KAFKA_RESP_ERR__STATE if transactions have already been started or upon fatal error, RD_KAFKA_RESP_ERR__UNSUPPORTED_FEATURE if the broker(s) do not support transactions (<Apache Kafka 0.11), this also raises a fatal error, RD_KAFKA_RESP_ERR_INVALID_TRANSACTION_TIMEOUT if the configured transaction.timeout.ms is outside the broker-configured range, this also raises a fatal error, RD_KAFKA_RESP_ERR__NOT_CONFIGURED if transactions have not been configured for the producer instance, RD_KAFKA_RESP_ERR__INVALID_ARG if rk is not a producer instance, or timeout_ms is out of range. Other error codes not listed here may be returned, depending on broker version.

      rd_kafka_begin_transaction()

      public static rd_kafka_begin_transaction ( 
          \FFI\CData|null $rk
       ): \FFI\CData|null
      

      Begin a new transaction.

      rd_kafka_init_transactions() must have been called successfully (once) before this function is called.

      Upon successful return from this function the application has to perform at least one of the following operations within transaction.timeout.ms to avoid timing out the transaction on the broker:

      • rd_kafka_produce() (et.al)
      • rd_kafka_send_offsets_to_transaction()
      • rd_kafka_commit_transaction()
      • rd_kafka_abort_transaction()

      Any messages produced, offsets sent (rd_kafka_send_offsets_to_transaction()), etc, after the successful return of this function will be part of the transaction and committed or aborted atomically.

      Finish the transaction by calling rd_kafka_commit_transaction() or abort the transaction by calling rd_kafka_abort_transaction().

      Remarks
      With the transactional producer, rd_kafka_produce(), rd_kafka_producev(), et al., are only allowed during an ongoing transaction, as started with this function. Any produce call outside an ongoing transaction, or for a failed transaction, will fail.
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - ) - Producer instance.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure. Check whether a fatal error has been raised by calling rd_kafka_error_is_fatal(). Error codes: RD_KAFKA_RESP_ERR__STATE if a transaction is already in progress or upon fatal error, RD_KAFKA_RESP_ERR__NOT_CONFIGURED if transactions have not been configured for the producer instance, RD_KAFKA_RESP_ERR__INVALID_ARG if rk is not a producer instance. Other error codes not listed here may be returned, depending on broker version.

      rd_kafka_send_offsets_to_transaction()

      public static rd_kafka_send_offsets_to_transaction ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $offsets, 
          \FFI\CData|null $cgmetadata, 
          int|null $timeout_ms
       ): \FFI\CData|null
      

      Sends a list of topic partition offsets to the consumer group coordinator for cgmetadata, and marks the offsets as part of the current transaction. These offsets will be considered committed only if the transaction is committed successfully.

      The offsets should be the next message your application will consume, i.e., the last processed message's offset + 1 for each partition. Either track the offsets manually during processing or use rd_kafka_position() (on the consumer) to get the current offsets for the partitions assigned to the consumer.

      Use this method at the end of a consume-transform-produce loop prior to committing the transaction with rd_kafka_commit_transaction().

      Remarks
      This function must be called on the transactional producer instance, not the consumer.
      The consumer must disable auto commits (set enable.auto.commit to false on the consumer).
      Logical and invalid offsets (such as RD_KAFKA_OFFSET_INVALID) in offsets will be ignored; if there are no valid offsets in offsets, the function will return NULL and no action will be taken.
      This call is retriable but not resumable, which means a new request with a new set of provided offsets and group metadata will be sent to the transaction coordinator if the call is retried.
      It is highly recommended to retry the call (upon retriable error) with identical offsets and cgmetadata parameters. Failure to do so risks inconsistent state between what is actually included in the transaction and what the application thinks is included in the transaction.
      Remarks
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Producer instance.
      offsets \FFI\CData|null const rd_kafka_topic_partition_list_t* - List of offsets to commit to the consumer group upon successful commit of the transaction. Offsets should be the next message to consume, e.g., last processed message + 1.
      cgmetadata \FFI\CData|null const rd_kafka_consumer_group_metadata_t* - The current consumer group metadata as returned by rd_kafka_consumer_group_metadata() on the consumer instance the provided offsets were consumed from.
      timeout_ms int|null int - Maximum time allowed to register the offsets on the broker.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure. Check whether the returned error object permits retrying by calling rd_kafka_error_is_retriable(), or whether an abortable or fatal error has been raised by calling rd_kafka_error_txn_requires_abort() or rd_kafka_error_is_fatal() respectively. Error codes: RD_KAFKA_RESP_ERR__STATE if not currently in a transaction, RD_KAFKA_RESP_ERR_INVALID_PRODUCER_EPOCH if the current producer transaction has been fenced by a newer producer instance, RD_KAFKA_RESP_ERR_TRANSACTIONAL_ID_AUTHORIZATION_FAILED if the producer is no longer authorized to perform transactional operations, RD_KAFKA_RESP_ERR_GROUP_AUTHORIZATION_FAILED if the producer is not authorized to write the consumer offsets to the group coordinator, RD_KAFKA_RESP_ERR__NOT_CONFIGURED if transactions have not been configured for the producer instance, RD_KAFKA_RESP_ERR__INVALID_ARG if rk is not a producer instance, or if the consumer_group_id or offsets are empty. Other error codes not listed here may be returned, depending on broker version.

      rd_kafka_commit_transaction()

      public static rd_kafka_commit_transaction ( 
          \FFI\CData|null $rk, 
          int|null $timeout_ms
       ): \FFI\CData|null
      

      Commit the current transaction (as started with rd_kafka_begin_transaction()).

      Any outstanding messages will be flushed (delivered) before actually committing the transaction.

      If any of the outstanding messages fail permanently the current transaction will enter the abortable error state and this function will return an abortable error, in this case the application must call rd_kafka_abort_transaction() before attempting a new transaction with rd_kafka_begin_transaction().

      Remarks
      It is strongly recommended to always pass -1 (remaining transaction time) as the timeout_ms. Using other values risks internal state desynchronization in case any of the underlying protocol requests fail.
      This function will block until all outstanding messages are delivered and the transaction commit request has been successfully handled by the transaction coordinator, or until timeout_ms expires, whichever comes first. On timeout the application may call the function again.
      Will automatically call rd_kafka_flush() to ensure all queued messages are delivered before attempting to commit the transaction. If the application has enabled RD_KAFKA_EVENT_DR it must serve the event queue in a separate thread since rd_kafka_flush() will not serve delivery reports in this mode.
      This call is resumable when a retriable timeout error is returned. Calling the function again will resume the operation that is progressing in the background.
      Remarks
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Producer instance.
      timeout_ms int|null int - The maximum time to block. On timeout the operation may continue in the background, depending on state, and it is okay to call this function again. Pass -1 to use the remaining transaction timeout, this is the recommended use.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure. Check whether the returned error object permits retrying by calling rd_kafka_error_is_retriable(), or whether an abortable or fatal error has been raised by calling rd_kafka_error_txn_requires_abort() or rd_kafka_error_is_fatal() respectively. Error codes: RD_KAFKA_RESP_ERR__STATE if not currently in a transaction, RD_KAFKA_RESP_ERR__TIMED_OUT if the transaction could not be completely committed within timeout_ms, this is a retriable error as the commit continues in the background, RD_KAFKA_RESP_ERR_INVALID_PRODUCER_EPOCH if the current producer transaction has been fenced by a newer producer instance, RD_KAFKA_RESP_ERR_TRANSACTIONAL_ID_AUTHORIZATION_FAILED if the producer is no longer authorized to perform transactional operations, RD_KAFKA_RESP_ERR__NOT_CONFIGURED if transactions have not been configured for the producer instance, RD_KAFKA_RESP_ERR__INVALID_ARG if rk is not a producer instance. Other error codes not listed here may be returned, depending on broker version.
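
      The transactional methods combine into the usual init/begin/commit flow. A hedged PHP sketch (assuming a class such as \RdKafka\FFI\Library exposes this trait and $producer is a rd_kafka_t* configured with transactional.id; all names are illustrative):

      ```php
      <?php
      // Hypothetical sketch of a transactional produce flow.
      use RdKafka\FFI\Library;

      $fail = function ($error) {
          $msg = Library::rd_kafka_error_string($error);
          Library::rd_kafka_error_destroy($error);
          throw new RuntimeException($msg);
      };

      if ($error = Library::rd_kafka_init_transactions($producer, 10000)) {
          $fail($error);
      }
      if ($error = Library::rd_kafka_begin_transaction($producer)) {
          $fail($error);
      }

      // ... produce messages and/or send consumed offsets here ...

      $error = Library::rd_kafka_commit_transaction($producer, -1); // -1 is recommended
      if ($error !== null) {
          if (Library::rd_kafka_error_txn_requires_abort($error) === 1) {
              // Abortable: abort, then start over with a fresh transaction.
              Library::rd_kafka_error_destroy($error);
              Library::rd_kafka_abort_transaction($producer, -1);
          } else {
              $fail($error);
          }
      }
      ```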

      rd_kafka_abort_transaction()

      public static rd_kafka_abort_transaction ( 
          \FFI\CData|null $rk, 
          int|null $timeout_ms
       ): \FFI\CData|null
      

      Aborts the ongoing transaction.

      This function should also be used to recover from non-fatal abortable transaction errors.

      Any outstanding messages will be purged and fail with RD_KAFKA_RESP_ERR__PURGE_INFLIGHT or RD_KAFKA_RESP_ERR__PURGE_QUEUE. See rd_kafka_purge() for details.
      Remarks
      It is strongly recommended to always pass -1 (remaining transaction time) as the timeout_ms. Using other values risks internal state desynchronization in case any of the underlying protocol requests fail.
      This function will block until all outstanding messages are purged and the transaction abort request has been successfully handled by the transaction coordinator, or until timeout_ms expires, whichever comes first. On timeout the application may call the function again. If the application has enabled RD_KAFKA_EVENT_DR it must serve the event queue in a separate thread since rd_kafka_flush() will not serve delivery reports in this mode.
      This call is resumable when a retriable timeout error is returned. Calling the function again will resume the operation that is progressing in the background.
      Remarks
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Producer instance.
      timeout_ms int|null int - The maximum time to block. On timeout the operation may continue in the background, depending on state, and it is okay to call this function again. Pass -1 to use the remaining transaction timeout, this is the recommended use.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure. Check whether the returned error object permits retrying by calling rd_kafka_error_is_retriable(), or whether a fatal error has been raised by calling rd_kafka_error_is_fatal(). Error codes:
      • RD_KAFKA_RESP_ERR__STATE if not currently in a transaction,
      • RD_KAFKA_RESP_ERR__TIMED_OUT if the transaction could not be completely aborted within timeout_ms; this is a retriable error as the abort continues in the background,
      • RD_KAFKA_RESP_ERR_INVALID_PRODUCER_EPOCH if the current producer transaction has been fenced by a newer producer instance,
      • RD_KAFKA_RESP_ERR_TRANSACTIONAL_ID_AUTHORIZATION_FAILED if the producer is no longer authorized to perform transactional operations,
      • RD_KAFKA_RESP_ERR__NOT_CONFIGURED if transactions have not been configured for the producer instance,
      • RD_KAFKA_RESP_ERR__INVALID_ARG if rk is not a producer instance.
      Other error codes not listed here may be returned, depending on broker version.
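
      Together with rd_kafka_init_transactions() and rd_kafka_begin_transaction() (documented earlier in this trait), these two methods complete a transactional produce cycle. A minimal sketch follows; the Library class name is an assumption for whichever class uses this trait, and $rk is assumed to be an initialized transactional producer:

```php
<?php
use RdKafka\FFI\Library; // assumed entry point for this trait's methods

// Sketch: try to commit; recover via abort on an abortable error.
$error = Library::rd_kafka_commit_transaction($rk, -1); // -1 = remaining txn timeout
if ($error !== null) {
    if (Library::rd_kafka_error_txn_requires_abort($error)) {
        Library::rd_kafka_error_destroy($error);
        // Recover from the non-fatal abortable error.
        $error = Library::rd_kafka_abort_transaction($rk, -1);
    }
    if ($error !== null) {
        // Retriable: call commit/abort again; fatal: tear down the producer.
        Library::rd_kafka_error_destroy($error);
    }
}
```

      On a retriable timeout error, calling the same function again resumes the operation already progressing in the background.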

      rd_kafka_handle_mock_cluster()

      public static rd_kafka_handle_mock_cluster ( 
          \FFI\CData|null $rk
       ): \FFI\CData|null
      
      Parameters
      rk \FFI\CData|null const rd_kafka_t*
      Returns
      \FFI\CData|null rd_kafka_mock_cluster_t*
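
      librdkafka only attaches a mock cluster to clients created with the test.mock.num.brokers configuration property; for other clients this method returns NULL. A hedged sketch (the Library class name is an assumption for whichever class uses this trait):

```php
<?php
use RdKafka\FFI\Library; // assumed entry point for this trait's methods

// Sketch: fetch the mock cluster of a client configured with
// test.mock.num.brokers, then shape its behavior for a test.
$mcluster = Library::rd_kafka_handle_mock_cluster($rk);
if ($mcluster !== null) {
    Library::rd_kafka_mock_topic_create($mcluster, 'test-topic', 3, 1);
    Library::rd_kafka_mock_broker_set_down($mcluster, 1); // broker id 1 unreachable
}
```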

      rd_kafka_mock_topic_create()

      public static rd_kafka_mock_topic_create ( 
          \FFI\CData|null $mcluster, 
          string|null $topic, 
          int|null $partition_cnt, 
          int|null $replication_factor
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      topic string|null const char*
      partition_cnt int|null int
      replication_factor int|null int
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_broker_set_down()

      public static rd_kafka_mock_broker_set_down ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_broker_set_up()

      public static rd_kafka_mock_broker_set_up ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_coordinator_set()

      public static rd_kafka_mock_coordinator_set ( 
          \FFI\CData|null $mcluster, 
          string|null $key_type, 
          string|null $key, 
          int|null $broker_id
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      key_type string|null const char*
      key string|null const char*
      broker_id int|null int32_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_set_apiversion()

      public static rd_kafka_mock_set_apiversion ( 
          \FFI\CData|null $mcluster, 
          int|null $ApiKey, 
          int|null $MinVersion, 
          int|null $MaxVersion
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      ApiKey int|null int16_t
      MinVersion int|null int16_t
      MaxVersion int|null int16_t
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_mock_broker_set_rtt()

      public static rd_kafka_mock_broker_set_rtt ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id, 
          int|null $rtt_ms
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      rtt_ms int|null int
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_message_errstr()

      public static rd_kafka_message_errstr ( 
          \FFI\CData|null $rkmessage
       ): string|null
      

      Returns the error string for an errored rd_kafka_message_t or NULL if there was no error.

      Remarks
      This function MUST NOT be used with the producer.
      Parameters
      rkmessage \FFI\CData|null const rd_kafka_message_t*
      Returns
      string|null const char*

      rd_kafka_message_broker_id()

      public static rd_kafka_message_broker_id ( 
          \FFI\CData|null $rkmessage
       ): int|null
      

      Returns the broker id of the broker the message was produced to or fetched from.

      Parameters
      rkmessage \FFI\CData|null const rd_kafka_message_t*
      Returns
      int|null int32_t - a broker id if known, else -1.

      rd_kafka_produceva()

      public static rd_kafka_produceva ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $vus, 
          int|null $cnt
       ): \FFI\CData|null
      

      Produce and send a single message to broker.

      The message is defined by an array of rd_kafka_vu_t of count cnt.

      See also
      rd_kafka_produce, rd_kafka_producev, RD_KAFKA_V_END
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      vus \FFI\CData|null const rd_kafka_vu_t*
      cnt int|null size_t
      Returns
      \FFI\CData|null rd_kafka_error_t* - an error object on failure or NULL on success. See rd_kafka_producev() for specific error codes.

      rd_kafka_event_debug_contexts()

      public static rd_kafka_event_debug_contexts ( 
          \FFI\CData|null $rkev, 
          \FFI\CData|null $dst, 
          int|null $dstsize
       ): int|null
      

      Extract log debug context from event.

      Event types:

      • RD_KAFKA_EVENT_LOG
      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - the event to extract data from.
      dst \FFI\CData|null char* - destination string for comma separated list.
      dstsize int|null size_t - size of provided dst buffer.
      Returns
      int|null int - 0 on success or -1 if unsupported event type.
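
      Since dst is a raw char* destination, the caller must allocate the buffer, e.g. with \FFI::new(). A minimal sketch; the Library class name is an assumption and $rkev is assumed to be an RD_KAFKA_EVENT_LOG event:

```php
<?php
use RdKafka\FFI\Library; // assumed entry point for this trait's methods

$dst = \FFI::new('char[256]'); // caller-owned destination buffer
if (Library::rd_kafka_event_debug_contexts($rkev, $dst, 256) === 0) {
    $contexts = \FFI::string($dst); // comma separated debug contexts
}
```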

      rd_kafka_mock_broker_push_request_errors()

      public static rd_kafka_mock_broker_push_request_errors ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id, 
          int|null $ApiKey, 
          int|null $cnt, 
          mixed $args
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      ApiKey int|null int16_t
      cnt int|null size_t
      args mixed
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_conf_get_default_topic_conf()

      public static rd_kafka_conf_get_default_topic_conf ( 
          \FFI\CData|null $conf
       ): \FFI\CData|null
      

      Gets the default topic configuration as previously set with rd_kafka_conf_set_default_topic_conf() or that was implicitly created by configuring a topic-level property on the global conf object.

      Warning
      The returned topic configuration object is owned by the conf object. It may be modified but not destroyed and its lifetime is the same as the conf object or the next call to rd_kafka_conf_set_default_topic_conf().
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t*
      Returns
      \FFI\CData|null rd_kafka_topic_conf_t* - the conf’s default topic configuration (if any), or NULL.

      rd_kafka_queue_yield()

      public static rd_kafka_queue_yield ( 
          \FFI\CData|null $rkqu
       ): void
      

      Cancels the current rd_kafka_queue_poll() on rkqu.

      An application may use this from another thread to force an immediate return to the calling code (caller of rd_kafka_queue_poll()). Must not be used from signal handlers since that may cause deadlocks.

      Parameters
      rkqu \FFI\CData|null rd_kafka_queue_t*

      rd_kafka_seek_partitions()

      public static rd_kafka_seek_partitions ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $partitions, 
          int|null $timeout_ms
       ): \FFI\CData|null
      

      Seek consumer for partitions in partitions to the per-partition offset in the .offset field of partitions.

      The offset may be either absolute (>= 0) or a logical offset.

      If timeout_ms is specified (not 0) the seek call will wait this long for the consumer to update its fetcher state for the given partition with the new offset. This guarantees that no previously fetched messages for the old offset (or fetch position) will be passed to the application.

      If the timeout is reached the internal state will be unknown to the caller and this function returns RD_KAFKA_RESP_ERR__TIMED_OUT.

      If timeout_ms is 0 it will initiate the seek but return immediately without any error reporting (e.g., async).

      This call will purge all pre-fetched messages for the given partition, which may be up to queued.max.message.kbytes in size. Repeated use of seek may thus lead to increased network usage as messages are re-fetched from the broker.

      Individual partition errors are reported in the per-partition .err field of partitions.

      Remarks
      Seek must only be performed for already assigned/consumed partitions; use rd_kafka_assign() (et al.) to set the initial starting offset for a new assignment.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      partitions \FFI\CData|null rd_kafka_topic_partition_list_t*
      timeout_ms int|null int
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure.
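
      A hedged sketch of seeking two already-assigned partitions back to the start, using the topic partition list helpers documented elsewhere in this trait (the Library class name is an assumption):

```php
<?php
use RdKafka\FFI\Library; // assumed entry point for this trait's methods

// Sketch: rewind partitions 0 and 1 of "mytopic" to absolute offset 0.
$partitions = Library::rd_kafka_topic_partition_list_new(2);
Library::rd_kafka_topic_partition_list_add($partitions, 'mytopic', 0)->offset = 0;
Library::rd_kafka_topic_partition_list_add($partitions, 'mytopic', 1)->offset = 0;

$error = Library::rd_kafka_seek_partitions($rk, $partitions, 5000); // wait up to 5 s
if ($error === null) {
    // Also check the per-partition ->err fields of $partitions.
}
Library::rd_kafka_topic_partition_list_destroy($partitions);
```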

      rd_kafka_incremental_assign()

      public static rd_kafka_incremental_assign ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $partitions
       ): \FFI\CData|null
      

      Incrementally add partitions to the current assignment.

      If a COOPERATIVE assignor (i.e. incremental rebalancing) is being used, this method should be used in a rebalance callback to adjust the current assignment appropriately in the case where the rebalance type is RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS. The application must pass the partition list passed to the callback (or a copy of it), even if the list is empty. partitions must not be NULL. This method may also be used outside the context of a rebalance callback.

      Remarks
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      partitions \FFI\CData|null const rd_kafka_topic_partition_list_t*
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success, or an error object if the operation was unsuccessful.

      rd_kafka_incremental_unassign()

      public static rd_kafka_incremental_unassign ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $partitions
       ): \FFI\CData|null
      

      Incrementally remove partitions from the current assignment.

      If a COOPERATIVE assignor (i.e. incremental rebalancing) is being used, this method should be used in a rebalance callback to adjust the current assignment appropriately in the case where the rebalance type is RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS. The application must pass the partition list passed to the callback (or a copy of it), even if the list is empty. partitions must not be NULL. This method may also be used outside the context of a rebalance callback.

      Remarks
      The returned error object (if not NULL) must be destroyed with rd_kafka_error_destroy().
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      partitions \FFI\CData|null const rd_kafka_topic_partition_list_t*
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success, or an error object if the operation was unsuccessful.
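
      Inside a rebalance callback the two incremental methods pair up as in this sketch; the Library class name and the spelling under which the error constants are exposed are assumptions:

```php
<?php
use RdKafka\FFI\Library; // assumed entry point for this trait's methods

// Sketch of a rebalance callback body for a COOPERATIVE assignor.
// $err and $partitions are the values handed to the callback.
if ($err === RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS) { // constant spelling assumed
    $error = Library::rd_kafka_incremental_assign($rk, $partitions);
} else { // RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS
    $error = Library::rd_kafka_incremental_unassign($rk, $partitions);
}
if ($error !== null) {
    Library::rd_kafka_error_destroy($error);
}
```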

      rd_kafka_rebalance_protocol()

      public static rd_kafka_rebalance_protocol ( 
          \FFI\CData|null $rk
       ): string|null
      

      The rebalance protocol currently in use. This will be "NONE" if the consumer has not (yet) joined a group, else it will match the rebalance protocol ("EAGER", "COOPERATIVE") of the configured and selected assignor(s). All configured assignors must have the same protocol type, meaning online migration of a consumer group from one protocol to another (in particular upgrading from EAGER to COOPERATIVE) without a restart is not currently supported.

      Parameters
      rk \FFI\CData|null rd_kafka_t*
      Returns
      string|null const char* - NULL on error, or one of “NONE”, “EAGER”, “COOPERATIVE” on success.

      rd_kafka_assignment_lost()

      public static rd_kafka_assignment_lost ( 
          \FFI\CData|null $rk
       ): int|null
      

      Check whether the consumer considers the current assignment to have been lost involuntarily. This method is only applicable for use with a high level subscribing consumer. Assignments are revoked immediately when determined to have been lost, so this method is only useful when reacting to a RD_KAFKA_EVENT_REBALANCE event or from within a rebalance_cb. Partitions that have been lost may already be owned by other members in the group and therefore committing offsets, for example, may fail.

      Remarks
      Calling rd_kafka_assign(), rd_kafka_incremental_assign() or rd_kafka_incremental_unassign() resets this flag.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      Returns
      int|null int - Returns 1 if the current partition assignment is considered lost, 0 otherwise.

      rd_kafka_consumer_group_metadata_new_with_genid()

      public static rd_kafka_consumer_group_metadata_new_with_genid ( 
          string|null $group_id, 
          int|null $generation_id, 
          string|null $member_id, 
          string|null $group_instance_id
       ): \FFI\CData|null
      

      Create a new consumer group metadata object. This is typically only used for writing tests.

      Remarks
      The returned pointer must be freed by the application using rd_kafka_consumer_group_metadata_destroy().
      Parameters
      group_id string|null const char* - The group id.
      generation_id int|null int32_t - The group generation id.
      member_id string|null const char* - The group member id.
      group_instance_id string|null const char* - The group instance id (may be NULL).
      Returns
      \FFI\CData|null rd_kafka_consumer_group_metadata_t*

      rd_kafka_event_DeleteRecords_result()

      public static rd_kafka_event_DeleteRecords_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DeleteRecords result.

      Event types: RD_KAFKA_EVENT_DELETERECORDS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      Returns
      \FFI\CData|null const rd_kafka_DeleteRecords_result_t* - the result of a DeleteRecords request, or NULL if event is of different type.

      rd_kafka_event_DeleteGroups_result()

      public static rd_kafka_event_DeleteGroups_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DeleteGroups result.

      Event types: RD_KAFKA_EVENT_DELETEGROUPS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      Returns
      \FFI\CData|null const rd_kafka_DeleteGroups_result_t* - the result of a DeleteGroups request, or NULL if event is of different type.

      rd_kafka_event_DeleteConsumerGroupOffsets_result()

      public static rd_kafka_event_DeleteConsumerGroupOffsets_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DeleteConsumerGroupOffsets result.

      Event types: RD_KAFKA_EVENT_DELETECONSUMERGROUPOFFSETS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      Returns
      \FFI\CData|null const rd_kafka_DeleteConsumerGroupOffsets_result_t* - the result of a DeleteConsumerGroupOffsets request, or NULL if event is of different type.

      rd_kafka_group_result_error()

      public static rd_kafka_group_result_error ( 
          \FFI\CData|null $groupres
       ): \FFI\CData|null
      

      Group result provides per-group operation result information.

      Remarks
      The lifetime of the returned error is the same as the groupres.
      Parameters
      groupres \FFI\CData|null const rd_kafka_group_result_t*
      Returns
      \FFI\CData|null const rd_kafka_error_t* - the error for the given group result, or NULL on success.

      rd_kafka_group_result_name()

      public static rd_kafka_group_result_name ( 
          \FFI\CData|null $groupres
       ): string|null
      
      Remarks
      The lifetime of the returned string is the same as the groupres.
      Parameters
      groupres \FFI\CData|null const rd_kafka_group_result_t*
      Returns
      string|null const char* - the name of the group for the given group result.

      rd_kafka_group_result_partitions()

      public static rd_kafka_group_result_partitions ( 
          \FFI\CData|null $groupres
       ): \FFI\CData|null
      
      Remarks
      The lifetime of the returned list is the same as the groupres.
      Parameters
      groupres \FFI\CData|null const rd_kafka_group_result_t*
      Returns
      \FFI\CData|null const rd_kafka_topic_partition_list_t* - the partitions/offsets for the given group result, if applicable to the request type, else NULL.

      rd_kafka_DeleteRecords_new()

      public static rd_kafka_DeleteRecords_new ( 
          \FFI\CData|null $before_offsets
       ): \FFI\CData|null
      

      Create a new DeleteRecords object. This object is later passed to rd_kafka_DeleteRecords().

      Each entry in before_offsets must contain topic, partition, and offset; the offset is the offset before which the messages will be deleted (exclusive). Set offset to RD_KAFKA_OFFSET_END (high-watermark) in order to delete all data in the partition.

      Parameters
      before_offsets \FFI\CData|null const rd_kafka_topic_partition_list_t* - For each partition delete all messages up to but not including the specified offset.
      Returns
      \FFI\CData|null rd_kafka_DeleteRecords_t* - a new allocated DeleteRecords object. Use rd_kafka_DeleteRecords_destroy() to free object when done.

      rd_kafka_DeleteRecords_destroy()

      public static rd_kafka_DeleteRecords_destroy ( 
          \FFI\CData|null $del_records
       ): void
      
      Parameters
      del_records \FFI\CData|null rd_kafka_DeleteRecords_t*

      rd_kafka_DeleteRecords_destroy_array()

      public static rd_kafka_DeleteRecords_destroy_array ( 
          \FFI\CData|null $del_records, 
          int|null $del_record_cnt
       ): void
      
      Parameters
      del_records \FFI\CData|null rd_kafka_DeleteRecords_t**
      del_record_cnt int|null size_t

      rd_kafka_DeleteRecords()

      public static rd_kafka_DeleteRecords ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $del_records, 
          int|null $del_record_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Delete records (messages) in topic partitions older than the offsets provided.

      Supported admin options:

      • rd_kafka_AdminOptions_set_operation_timeout() - default 60 seconds. Controls how long the brokers will wait for records to be deleted.
      • rd_kafka_AdminOptions_set_request_timeout() - default socket.timeout.ms. Controls how long rdkafka will wait for the request to complete.
      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DELETERECORDS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      del_records \FFI\CData|null rd_kafka_DeleteRecords_t** - The offsets to delete (up to). Currently only one DeleteRecords_t (but containing multiple offsets) is supported.
      del_record_cnt int|null size_t - The number of elements in del_records, must be 1.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_DeleteRecords_result_offsets()

      public static rd_kafka_DeleteRecords_result_offsets ( 
          \FFI\CData|null $result
       ): \FFI\CData|null
      

      Get a list of topic and partition results from a DeleteRecords result. The returned objects will contain topic, partition, offset and err. offset will be set to the post-deletion low-watermark (smallest available offset of all live replicas). err will be set per-partition if deletion failed.

      The returned object's life-time is the same as the result object.

      Parameters
      result \FFI\CData|null const rd_kafka_DeleteRecords_result_t*
      Returns
      \FFI\CData|null const rd_kafka_topic_partition_list_t*
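
      The DeleteRecords entries above combine into the following hedged flow; the Library class name, the use of getFFI() to build the pointer array, and the 10 s poll are assumptions:

```php
<?php
use RdKafka\FFI\Library; // assumed entry point for this trait's methods

// Sketch: delete all messages before offset 100 in "mytopic" partition 0.
$offsets = Library::rd_kafka_topic_partition_list_new(1);
Library::rd_kafka_topic_partition_list_add($offsets, 'mytopic', 0)->offset = 100;

$delRecords = Library::getFFI()->new('rd_kafka_DeleteRecords_t*[1]');
$delRecords[0] = Library::rd_kafka_DeleteRecords_new($offsets);

$queue = Library::rd_kafka_queue_new($rk);
Library::rd_kafka_DeleteRecords($rk, $delRecords, 1, null, $queue);

$event = Library::rd_kafka_queue_poll($queue, 10000); // wait for the result event
$result = Library::rd_kafka_event_DeleteRecords_result($event);
if ($result !== null) {
    // Inspect per-partition err and offset (post-deletion low-watermark).
    $resultOffsets = Library::rd_kafka_DeleteRecords_result_offsets($result);
}

Library::rd_kafka_event_destroy($event);
Library::rd_kafka_queue_destroy($queue);
Library::rd_kafka_DeleteRecords_destroy_array($delRecords, 1);
Library::rd_kafka_topic_partition_list_destroy($offsets);
```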

      rd_kafka_DeleteGroup_new()

      public static rd_kafka_DeleteGroup_new ( 
          string|null $group
       ): \FFI\CData|null
      

      Create a new DeleteGroup object. This object is later passed to rd_kafka_DeleteGroups().

      Parameters
      group string|null const char* - Name of group to delete.
      Returns
      \FFI\CData|null rd_kafka_DeleteGroup_t* - a new allocated DeleteGroup object. Use rd_kafka_DeleteGroup_destroy() to free object when done.

      rd_kafka_DeleteGroup_destroy()

      public static rd_kafka_DeleteGroup_destroy ( 
          \FFI\CData|null $del_group
       ): void
      
      Parameters
      del_group \FFI\CData|null rd_kafka_DeleteGroup_t*

      rd_kafka_DeleteGroup_destroy_array()

      public static rd_kafka_DeleteGroup_destroy_array ( 
          \FFI\CData|null $del_groups, 
          int|null $del_group_cnt
       ): void
      
      Parameters
      del_groups \FFI\CData|null rd_kafka_DeleteGroup_t**
      del_group_cnt int|null size_t

      rd_kafka_DeleteGroups()

      public static rd_kafka_DeleteGroups ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $del_groups, 
          int|null $del_group_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Delete groups from the cluster as specified by the del_groups array of size del_group_cnt elements.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DELETEGROUPS_RESULT
      This function is called deleteConsumerGroups in the Java client.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      del_groups \FFI\CData|null rd_kafka_DeleteGroup_t** - Array of groups to delete.
      del_group_cnt int|null size_t - Number of elements in del_groups array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_DeleteGroups_result_groups()

      public static rd_kafka_DeleteGroups_result_groups ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of group results from a DeleteGroups result.

      The returned groups' lifetime is the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_DeleteGroups_result_t* - Result to get group results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_group_result_t**

      rd_kafka_DeleteConsumerGroupOffsets_new()

      public static rd_kafka_DeleteConsumerGroupOffsets_new ( 
          string|null $group, 
          \FFI\CData|null $partitions
       ): \FFI\CData|null
      

      Create a new DeleteConsumerGroupOffsets object. This object is later passed to rd_kafka_DeleteConsumerGroupOffsets().

      Parameters
      group string|null const char* - Consumer group id.
      partitions \FFI\CData|null const rd_kafka_topic_partition_list_t* - Partitions to delete committed offsets for. Only the topic and partition fields are used.
      Returns
      \FFI\CData|null rd_kafka_DeleteConsumerGroupOffsets_t* - a new allocated DeleteConsumerGroupOffsets object. Use rd_kafka_DeleteConsumerGroupOffsets_destroy() to free object when done.

      rd_kafka_DeleteConsumerGroupOffsets_destroy()

      public static rd_kafka_DeleteConsumerGroupOffsets_destroy ( 
          \FFI\CData|null $del_grpoffsets
       ): void
      
      Parameters
      del_grpoffsets \FFI\CData|null rd_kafka_DeleteConsumerGroupOffsets_t*

      rd_kafka_DeleteConsumerGroupOffsets_destroy_array()

      public static rd_kafka_DeleteConsumerGroupOffsets_destroy_array ( 
          \FFI\CData|null $del_grpoffsets, 
          int|null $del_grpoffset_cnt
       ): void
      
      Parameters
      del_grpoffsets \FFI\CData|null rd_kafka_DeleteConsumerGroupOffsets_t**
      del_grpoffset_cnt int|null size_t

      rd_kafka_DeleteConsumerGroupOffsets()

      public static rd_kafka_DeleteConsumerGroupOffsets ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $del_grpoffsets, 
          int|null $del_grpoffsets_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Delete committed offsets for a set of partitions in a consumer group. This will succeed at the partition level only if the group is not actively subscribed to the corresponding topic.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DELETECONSUMERGROUPOFFSETS_RESULT
      The current implementation only supports one group per invocation.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      del_grpoffsets \FFI\CData|null rd_kafka_DeleteConsumerGroupOffsets_t** - Array of group committed offsets to delete. MUST only be one single element.
      del_grpoffsets_cnt int|null size_t - Number of elements in del_grpoffsets array. MUST always be 1.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_DeleteConsumerGroupOffsets_result_groups()

      public static rd_kafka_DeleteConsumerGroupOffsets_result_groups ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of results from a DeleteConsumerGroupOffsets result.

      The returned groups' lifetime is the same as the result object's.

      Parameters
      result \FFI\CData|null const rd_kafka_DeleteConsumerGroupOffsets_result_t* - Result to get group results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_group_result_t**

      rd_kafka_mock_clear_request_errors()

      public static rd_kafka_mock_clear_request_errors ( 
          \FFI\CData|null $mcluster, 
          int|null $ApiKey
       ): void
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      ApiKey int|null int16_t

      rd_kafka_mock_push_request_errors_array()

      public static rd_kafka_mock_push_request_errors_array ( 
          \FFI\CData|null $mcluster, 
          int|null $ApiKey, 
          int|null $cnt, 
          \FFI\CData|null $errors
       ): void
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      ApiKey int|null int16_t
      cnt int|null size_t
      errors \FFI\CData|null const rd_kafka_resp_err_t*

      rd_kafka_interceptor_f_on_response_received_t()

      public static rd_kafka_interceptor_f_on_response_received_t ( 
          \FFI\CData|null $rk, 
          int|null $sockfd, 
          string|null $brokername, 
          int|null $brokerid, 
          int|null $ApiKey, 
          int|null $ApiVersion, 
          int|null $CorrId, 
          int|null $size, 
          int|null $rtt, 
          int $err, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_response_received() is called when a protocol response has been fully received from a broker TCP connection socket but before the response payload is parsed.

      Warning
      The on_response_received() interceptor is called from internal librdkafka broker threads. An on_response_received() interceptor MUST NOT call any librdkafka APIs associated with the rk, or perform any blocking or prolonged work.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      sockfd int|null int - Socket file descriptor (always -1).
      brokername string|null const char* - Broker response was received from, possibly empty string on error.
      brokerid int|null int32_t - Broker response was received from.
      ApiKey int|null int16_t - Kafka protocol request type or -1 on error.
      ApiVersion int|null int16_t - Kafka protocol request type version or -1 on error.
      CorrId int|null int32_t - Kafka protocol request correlation id, possibly -1 on error.
      size int|null size_t - Size of response, possibly 0 on error.
      rtt int|null int64_t - Request round-trip-time in microseconds, possibly -1 on error.
      err int rd_kafka_resp_err_t - Receive error.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_add_on_response_received()

      public static rd_kafka_interceptor_add_on_response_received ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_response_received, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_response_received() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_response_received \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_response_received_t*)(rd_kafka_t*, int, const char*, int32_t, int16_t, int16_t, int32_t, size_t, int64_t, rd_kafka_resp_err_t, void*)
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_conf_set_engine_callback_data()

      public static rd_kafka_conf_set_engine_callback_data ( 
          \FFI\CData|null $conf, 
          \FFI\CData|object|string|null $callback_data
       ): void
      

      Set callback_data for OpenSSL engine.

      Remarks
      The ssl.engine.location configuration must be set for this to have effect.
      The memory pointed to by value must remain valid for the lifetime of the configuration object and any Kafka clients that use it.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t* - Configuration object.
      callback_data \FFI\CData|object|string|null void* - passed to engine callbacks, e.g. ENGINE_load_ssl_client_cert.

      rd_kafka_mem_calloc()

      public static rd_kafka_mem_calloc ( 
          \FFI\CData|null $rk, 
          int|null $num, 
          int|null $size
       ): \FFI\CData|object|string|null
      

      Allocate and zero memory using the same allocator librdkafka uses.

      This is typically an abstraction for the calloc(3) call and makes sure the application can use the same memory allocator as librdkafka for allocating pointers that are used by librdkafka.

      rk can be set to allocate memory through a specific client instance; otherwise pass NULL for rk.

      Remarks
      Memory allocated by rd_kafka_mem_calloc() must be freed using rd_kafka_mem_free()
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      num int|null size_t
      size int|null size_t
      Returns
      \FFI\CData|object|string|null void*

      rd_kafka_mem_malloc()

      public static rd_kafka_mem_malloc ( 
          \FFI\CData|null $rk, 
          int|null $size
       ): \FFI\CData|object|string|null
      

      Allocate memory using the same allocator librdkafka uses.

      This is typically an abstraction for the malloc(3) call and makes sure the application can use the same memory allocator as librdkafka for allocating pointers that are used by librdkafka.

      rk can be set to allocate memory through a specific client instance; otherwise pass NULL for rk.

      Remarks
      Memory allocated by rd_kafka_mem_malloc() must be freed using rd_kafka_mem_free()
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      size int|null size_t
      Returns
      \FFI\CData|object|string|null void*
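
      A minimal sketch of the allocate/free pairing, assuming a class using the \RdKafka\FFI\Methods trait (here called Library) and that rd_kafka_mem_free() is bound alongside the methods above:

      ```php
      <?php
      use RdKafka\FFI\Library; // assumed consumer of the Methods trait

      // Pass NULL for rk to use the global librdkafka allocator.
      $buf = Library::rd_kafka_mem_malloc(null, 64);

      // ... hand $buf to a librdkafka API that expects librdkafka-allocated memory ...

      // Memory from rd_kafka_mem_malloc()/rd_kafka_mem_calloc() must be
      // released with rd_kafka_mem_free(), not with FFI::free().
      Library::rd_kafka_mem_free(null, $buf);
      ```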

      rd_kafka_mock_broker_push_request_error_rtts()

      public static rd_kafka_mock_broker_push_request_error_rtts ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id, 
          int|null $ApiKey, 
          int|null $cnt, 
          mixed $args
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      ApiKey int|null int16_t
      cnt int|null size_t
      args mixed
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_conf_enable_sasl_queue()

      public static rd_kafka_conf_enable_sasl_queue ( 
          \FFI\CData|null $conf, 
          int|null $enable
       ): void
      

      Enable/disable creation of a queue specific to SASL events and callbacks.

      For SASL mechanisms that trigger callbacks (currently OAUTHBEARER) this configuration API allows an application to get a dedicated queue for the SASL events/callbacks. After enabling the queue with this API the application can retrieve the queue by calling rd_kafka_queue_get_sasl() on the client instance. This queue may then be served directly by the application (with rd_kafka_queue_poll(), et al.) or forwarded to another queue, such as the background queue.

      A convenience function is available to automatically forward the SASL queue to librdkafka's background thread, see rd_kafka_sasl_background_callbacks_enable().

      By default (enable = 0) the main queue (as served by rd_kafka_poll(), et al.) is used for SASL callbacks.

      Remarks
      The SASL queue is currently only used by the SASL OAUTHBEARER mechanism's token_refresh_cb().
      See also
      rd_kafka_queue_get_sasl()
      rd_kafka_sasl_background_callbacks_enable()
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t*
      enable int|null int
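
      The enable/retrieve flow can be sketched as follows. This is a sketch only; it assumes a class using the \RdKafka\FFI\Methods trait (here called Library), an already-built $conf, and that rd_kafka_queue_poll()/rd_kafka_queue_destroy() are bound as in the C API:

      ```php
      <?php
      use RdKafka\FFI\Library; // assumed consumer of the Methods trait

      Library::rd_kafka_conf_enable_sasl_queue($conf, 1); // before creating the client

      // ... create the client instance $rk from $conf ...

      $saslQueue = Library::rd_kafka_queue_get_sasl($rk); // NULL unless OAUTHBEARER is configured
      if ($saslQueue !== null) {
          // Either serve the queue directly (rd_kafka_queue_poll(), et al.),
          // or hand it to librdkafka's background thread instead:
          // Library::rd_kafka_sasl_background_callbacks_enable($rk);
          Library::rd_kafka_queue_destroy($saslQueue); // release the reference
      }
      ```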

      rd_kafka_queue_get_sasl()

      public static rd_kafka_queue_get_sasl ( 
          \FFI\CData|null $rk
       ): \FFI\CData|null
      

      Use rd_kafka_queue_destroy() to lose the reference.

      See also
      rd_kafka_sasl_background_callbacks_enable()
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      \FFI\CData|null rd_kafka_queue_t* - a reference to the SASL callback queue, if a SASL mechanism with callbacks is configured (currently only OAUTHBEARER), else returns NULL.

      rd_kafka_sasl_background_callbacks_enable()

      public static rd_kafka_sasl_background_callbacks_enable ( 
          \FFI\CData|null $rk
       ): \FFI\CData|null
      

      Enable SASL OAUTHBEARER refresh callbacks on the librdkafka background thread.

      This serves as an alternative for applications that do not call rd_kafka_poll() (et al.) at regular intervals (or at all), as a means of automatically triggering the refresh callbacks, which are needed to initiate connections to the brokers when a custom OAUTHBEARER refresh callback is configured.

      See also
      rd_kafka_queue_get_sasl()
      rd_kafka_conf_set_oauthbearer_token_refresh_cb()
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on error.

      rd_kafka_consumer_close_queue()

      public static rd_kafka_consumer_close_queue ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $rkqu
       ): \FFI\CData|null
      

      Asynchronously close the consumer.

      Performs the same actions as rd_kafka_consumer_close() but in a background thread.

      Rebalance events/callbacks (etc) will be forwarded to the application-provided rkqu. The application must poll/serve this queue until rd_kafka_consumer_closed() returns true.

      Remarks
      Depending on consumer group join state there may or may not be rebalance events emitted on rkqu.
      See also
      rd_kafka_consumer_closed()
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      rkqu \FFI\CData|null rd_kafka_queue_t*
      Returns
      \FFI\CData|null rd_kafka_error_t* - an error object if the consumer close failed, else NULL.

      rd_kafka_consumer_closed()

      public static rd_kafka_consumer_closed ( 
          \FFI\CData|null $rk
       ): int|null
      

      Should be used in conjunction with rd_kafka_consumer_close_queue() to know when the consumer has been closed.

      See also
      rd_kafka_consumer_close_queue()
      Parameters
      rk \FFI\CData|null rd_kafka_t* - )
      Returns
      int|null int - 1 if the consumer is closed, else 0.
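
      Together the two calls form a poll-until-closed loop. A sketch, assuming a Library class using the Methods trait and that rd_kafka_queue_new(), rd_kafka_queue_poll() and rd_kafka_event_destroy() are bound as in the C API:

      ```php
      <?php
      use RdKafka\FFI\Library; // assumed consumer of the Methods trait

      $queue = Library::rd_kafka_queue_new($rk); // application-owned queue for rebalance events
      $error = Library::rd_kafka_consumer_close_queue($rk, $queue);

      if ($error === null) {
          // Serve the queue until the background close completes.
          while (Library::rd_kafka_consumer_closed($rk) !== 1) {
              $event = Library::rd_kafka_queue_poll($queue, 100);
              if ($event !== null) {
                  Library::rd_kafka_event_destroy($event);
              }
          }
      }
      Library::rd_kafka_queue_destroy($queue);
      ```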

      rd_kafka_event_CreateAcls_result()

      public static rd_kafka_event_CreateAcls_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Event types: RD_KAFKA_EVENT_CREATEACLS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_CreateAcls_result_t* - the result of a CreateAcls request, or NULL if event is of different type.

      rd_kafka_event_DescribeAcls_result()

      public static rd_kafka_event_DescribeAcls_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Event types: RD_KAFKA_EVENT_DESCRIBEACLS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_DescribeAcls_result_t* - the result of a DescribeAcls request, or NULL if event is of different type.

      rd_kafka_event_DeleteAcls_result()

      public static rd_kafka_event_DeleteAcls_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Event types: RD_KAFKA_EVENT_DELETEACLS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_DeleteAcls_result_t* - the result of a DeleteAcls request, or NULL if event is of different type.

      rd_kafka_ResourcePatternType_name()

      public static rd_kafka_ResourcePatternType_name ( 
          int $resource_pattern_type
       ): string|null
      
      Parameters
      resource_pattern_type int rd_kafka_ResourcePatternType_t - )
      Returns
      string|null const char* - a string representation of the resource_pattern_type

      rd_kafka_acl_result_error()

      public static rd_kafka_acl_result_error ( 
          \FFI\CData|null $aclres
       ): \FFI\CData|null
      
      Parameters
      aclres \FFI\CData|null const rd_kafka_acl_result_t* - )
      Returns
      \FFI\CData|null const rd_kafka_error_t* - the error object for the given acl result, or NULL on success.

      rd_kafka_AclOperation_name()

      public static rd_kafka_AclOperation_name ( 
          int $acl_operation
       ): string|null
      
      Parameters
      acl_operation int rd_kafka_AclOperation_t - )
      Returns
      string|null const char* - a string representation of the acl_operation

      rd_kafka_AclPermissionType_name()

      public static rd_kafka_AclPermissionType_name ( 
          int $acl_permission_type
       ): string|null
      
      Parameters
      acl_permission_type int rd_kafka_AclPermissionType_t - )
      Returns
      string|null const char* - a string representation of the acl_permission_type

      rd_kafka_AclBinding_new()

      public static rd_kafka_AclBinding_new ( 
          int $restype, 
          string|null $name, 
          int $resource_pattern_type, 
          string|null $principal, 
          string|null $host, 
          int $operation, 
          int $permission_type, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): \FFI\CData|null
      

      Create a new AclBinding object. This object is later passed to rd_kafka_CreateAcls().

      Parameters
      restype int rd_kafka_ResourceType_t - The ResourceType.
      name string|null const char* - The resource name.
      resource_pattern_type int rd_kafka_ResourcePatternType_t - The pattern type.
      principal string|null const char* - A principal, following the kafka specification.
      host string|null const char* - A hostname or IP.
      operation int rd_kafka_AclOperation_t - A Kafka operation.
      permission_type int rd_kafka_AclPermissionType_t - A Kafka permission type.
      errstr \FFI\CData|null char* - An error string for returning errors or NULL to not use it.
      errstr_size int|null size_t - The errstr size or 0 to not use it.
      Returns
      \FFI\CData|null rd_kafka_AclBinding_t* - a newly allocated AclBinding object, or NULL if the input parameters are invalid. Use rd_kafka_AclBinding_destroy() to free the object when done.

      rd_kafka_AclBindingFilter_new()

      public static rd_kafka_AclBindingFilter_new ( 
          int $restype, 
          string|null $name, 
          int $resource_pattern_type, 
          string|null $principal, 
          string|null $host, 
          int $operation, 
          int $permission_type, 
          \FFI\CData|null $errstr, 
          int|null $errstr_size
       ): \FFI\CData|null
      

      Create a new AclBindingFilter object. This object is later passed to rd_kafka_DescribeAcls() or rd_kafka_DeleteAcls() in order to filter the acls to retrieve or to delete. Use the same rd_kafka_AclBinding functions to query or destroy it.

      Parameters
      restype int rd_kafka_ResourceType_t - The ResourceType or RD_KAFKA_RESOURCE_ANY if not filtering by this field.
      name string|null const char* - The resource name or NULL if not filtering by this field.
      resource_pattern_type int rd_kafka_ResourcePatternType_t - The pattern type or RD_KAFKA_RESOURCE_PATTERN_ANY if not filtering by this field.
      principal string|null const char* - A principal or NULL if not filtering by this field.
      host string|null const char* - A hostname or IP, or NULL if not filtering by this field.
      operation int rd_kafka_AclOperation_t - A Kafka operation or RD_KAFKA_ACL_OPERATION_ANY if not filtering by this field.
      permission_type int rd_kafka_AclPermissionType_t - A Kafka permission type or RD_KAFKA_ACL_PERMISSION_TYPE_ANY if not filtering by this field.
      errstr \FFI\CData|null char* - An error string for returning errors or NULL to not use it.
      errstr_size int|null size_t - The errstr size or 0 to not use it.
      Returns
      \FFI\CData|null rd_kafka_AclBindingFilter_t* - a newly allocated AclBindingFilter object, or NULL if the input parameters are invalid. Use rd_kafka_AclBinding_destroy() to free the object when done.

      rd_kafka_AclBinding_restype()

      public static rd_kafka_AclBinding_restype ( 
          \FFI\CData|null $acl
       ): int
      
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      int rd_kafka_ResourceType_t - the resource type for the given acl binding.

      rd_kafka_AclBinding_name()

      public static rd_kafka_AclBinding_name ( 
          \FFI\CData|null $acl
       ): string|null
      
      Remarks
      lifetime of the returned string is the same as the acl.
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      string|null const char* - the resource name for the given acl binding.

      rd_kafka_AclBinding_principal()

      public static rd_kafka_AclBinding_principal ( 
          \FFI\CData|null $acl
       ): string|null
      
      Remarks
      lifetime of the returned string is the same as the acl.
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      string|null const char* - the principal for the given acl binding.

      rd_kafka_AclBinding_host()

      public static rd_kafka_AclBinding_host ( 
          \FFI\CData|null $acl
       ): string|null
      
      Remarks
      lifetime of the returned string is the same as the acl.
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      string|null const char* - the host for the given acl binding.

      rd_kafka_AclBinding_operation()

      public static rd_kafka_AclBinding_operation ( 
          \FFI\CData|null $acl
       ): int
      
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      int rd_kafka_AclOperation_t - the acl operation for the given acl binding.

      rd_kafka_AclBinding_permission_type()

      public static rd_kafka_AclBinding_permission_type ( 
          \FFI\CData|null $acl
       ): int
      
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      int rd_kafka_AclPermissionType_t - the permission type for the given acl binding.

      rd_kafka_AclBinding_resource_pattern_type()

      public static rd_kafka_AclBinding_resource_pattern_type ( 
          \FFI\CData|null $acl
       ): int
      
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      int rd_kafka_ResourcePatternType_t - the resource pattern type for the given acl binding.

      rd_kafka_AclBinding_error()

      public static rd_kafka_AclBinding_error ( 
          \FFI\CData|null $acl
       ): \FFI\CData|null
      
      Parameters
      acl \FFI\CData|null const rd_kafka_AclBinding_t* - )
      Returns
      \FFI\CData|null const rd_kafka_error_t* - the error object for the given acl binding, or NULL on success.

      rd_kafka_AclBinding_destroy()

      public static rd_kafka_AclBinding_destroy ( 
          \FFI\CData|null $acl_binding
       ): void
      
      Parameters
      acl_binding \FFI\CData|null rd_kafka_AclBinding_t*

      rd_kafka_AclBinding_destroy_array()

      public static rd_kafka_AclBinding_destroy_array ( 
          \FFI\CData|null $acl_bindings, 
          int|null $acl_bindings_cnt
       ): void
      
      Parameters
      acl_bindings \FFI\CData|null rd_kafka_AclBinding_t**
      acl_bindings_cnt int|null size_t

      rd_kafka_CreateAcls_result_acls()

      public static rd_kafka_CreateAcls_result_acls ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of acl results from a CreateAcls result.

      The returned acl result life-time is the same as the result object.

      Parameters
      result \FFI\CData|null const rd_kafka_CreateAcls_result_t* - CreateAcls result to get acl results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_acl_result_t**

      rd_kafka_CreateAcls()

      public static rd_kafka_CreateAcls ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $new_acls, 
          int|null $new_acls_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Create acls as specified by the new_acls array of size new_acls_cnt elements.

      Supported admin options:

      • rd_kafka_AdminOptions_set_request_timeout() - default socket.timeout.ms
      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_CREATEACLS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      new_acls \FFI\CData|null rd_kafka_AclBinding_t** - Array of new acls to create.
      new_acls_cnt int|null size_t - Number of elements in new_acls array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.
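
      End to end, creating an ACL combines rd_kafka_AclBinding_new() with this call. The following is a sketch only: it assumes a Library class using the Methods trait, enum constants named as in the C API, an admin-capable client $rk, and that rd_kafka_queue_new() is bound alongside the methods above:

      ```php
      <?php
      use RdKafka\FFI\Library; // assumed consumer of the Methods trait

      $ffi = Library::getFFI();
      $errstr = $ffi->new('char[512]');

      $acl = Library::rd_kafka_AclBinding_new(
          RD_KAFKA_RESOURCE_TOPIC,            // restype
          'payments',                         // resource name
          RD_KAFKA_RESOURCE_PATTERN_LITERAL,  // pattern type
          'User:alice',                       // principal
          '*',                                // host
          RD_KAFKA_ACL_OPERATION_READ,        // operation
          RD_KAFKA_ACL_PERMISSION_TYPE_ALLOW, // permission type
          $errstr,
          512
      );

      if ($acl !== null) {
          $acls = $ffi->new('rd_kafka_AclBinding_t*[1]');
          $acls[0] = $acl;

          $queue = Library::rd_kafka_queue_new($rk);
          Library::rd_kafka_CreateAcls($rk, $acls, 1, null, $queue);

          // Poll $queue for an RD_KAFKA_EVENT_CREATEACLS_RESULT event, then read it
          // with rd_kafka_event_CreateAcls_result() / rd_kafka_CreateAcls_result_acls().

          Library::rd_kafka_AclBinding_destroy_array($acls, 1);
      }
      ```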

      rd_kafka_DescribeAcls_result_acls()

      public static rd_kafka_DescribeAcls_result_acls ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of resource results from a DescribeAcls result.

      DescribeAcls - describe access control lists.

      The returned resources life-time is the same as the result object.

      Parameters
      result \FFI\CData|null const rd_kafka_DescribeAcls_result_t* - DescribeAcls result to get acls from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_AclBinding_t**

      rd_kafka_DescribeAcls()

      public static rd_kafka_DescribeAcls ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $acl_filter, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Describe acls matching the filter provided in acl_filter.

      Supported admin options:

      • rd_kafka_AdminOptions_set_operation_timeout() - default 0
      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DESCRIBEACLS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      acl_filter \FFI\CData|null rd_kafka_AclBindingFilter_t* - Filter for the returned acls.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_DeleteAcls_result_responses()

      public static rd_kafka_DeleteAcls_result_responses ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of DeleteAcls result responses from a DeleteAcls result.

      The returned responses life-time is the same as the result object.

      Parameters
      result \FFI\CData|null const rd_kafka_DeleteAcls_result_t* - DeleteAcls result to get responses from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_DeleteAcls_result_response_t**

      rd_kafka_DeleteAcls_result_response_error()

      public static rd_kafka_DeleteAcls_result_response_error ( 
          \FFI\CData|null $result_response
       ): \FFI\CData|null
      
      Parameters
      result_response \FFI\CData|null const rd_kafka_DeleteAcls_result_response_t* - )
      Returns
      \FFI\CData|null const rd_kafka_error_t* - the error object for the given DeleteAcls result response, or NULL on success.

      rd_kafka_DeleteAcls_result_response_matching_acls()

      public static rd_kafka_DeleteAcls_result_response_matching_acls ( 
          \FFI\CData|null $result_response, 
          \FFI\CData|null $matching_acls_cntp
       ): \FFI\CData|null
      
      Remarks
      lifetime of the returned acl bindings is the same as the result_response.
      Parameters
      result_response \FFI\CData|null const rd_kafka_DeleteAcls_result_response_t*
      matching_acls_cntp \FFI\CData|null size_t*
      Returns
      \FFI\CData|null const rd_kafka_AclBinding_t** - the matching acls array for the given DeleteAcls result response.

      rd_kafka_DeleteAcls()

      public static rd_kafka_DeleteAcls ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $del_acls, 
          int|null $del_acls_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Delete acls matching the filters provided in the del_acls array of size del_acls_cnt.

      Supported admin options:

      • rd_kafka_AdminOptions_set_operation_timeout() - default 0
      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DELETEACLS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      del_acls \FFI\CData|null rd_kafka_AclBindingFilter_t** - Filters for the acls to delete.
      del_acls_cnt int|null size_t - Number of elements in del_acls array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_conf_set_resolve_cb()

      public static rd_kafka_conf_set_resolve_cb ( 
          \FFI\CData|null $conf, 
          \FFI\CData|\Closure $resolve_cb
       ): void
      

      Set address resolution callback.

      The callback is responsible for resolving the hostname node and the service service into a list of socket addresses as getaddrinfo(3) would. The hints and res parameters function as they do for getaddrinfo(3). The callback's opaque argument is the opaque set with rd_kafka_conf_set_opaque().

      If the callback is invoked with a NULL node, service, and hints, the callback should instead free the addrinfo struct specified in res. In this case the callback must succeed; the return value will not be checked by the caller.

      The callback's return value is interpreted as the return value of getaddrinfo(3).

      Remarks
      The callback will be called from an internal librdkafka thread.
      Parameters
      conf \FFI\CData|null rd_kafka_conf_t*
      resolve_cb \FFI\CData|\Closure int(*)(const char*, const char*, const struct addrinfo*, struct addrinfo**, void*)

      rd_kafka_sasl_set_credentials()

      public static rd_kafka_sasl_set_credentials ( 
          \FFI\CData|null $rk, 
          string|null $username, 
          string|null $password
       ): \FFI\CData|null
      

      Sets SASL credentials used for SASL PLAIN and SCRAM mechanisms by this Kafka client.

      This function sets or resets the SASL username and password credentials used by this Kafka client. The new credentials will be used the next time this client needs to authenticate to a broker. This function will not disconnect existing connections that might have been made using the old credentials.

      Remarks
      This function only applies to the SASL PLAIN and SCRAM mechanisms.
      Parameters
      rk \FFI\CData|null rd_kafka_t*
      username string|null const char*
      password string|null const char*
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on error.
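
      A sketch of a runtime credential rotation, assuming a Library class using the Methods trait and that rd_kafka_error_string()/rd_kafka_error_destroy() are bound as in the C API:

      ```php
      <?php
      use RdKafka\FFI\Library; // assumed consumer of the Methods trait

      $error = Library::rd_kafka_sasl_set_credentials($rk, 'svc-user', 'new-secret');
      if ($error !== null) {
          // Inspect and release the returned error object.
          $message = Library::rd_kafka_error_string($error);
          Library::rd_kafka_error_destroy($error);
      }
      // Existing broker connections keep their old credentials until they reconnect.
      ```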

      rd_kafka_Node_id()

      public static rd_kafka_Node_id ( 
          \FFI\CData|null $node
       ): int|null
      

      Get the id of node.

      Parameters
      node \FFI\CData|null const rd_kafka_Node_t* - ) - The Node instance.
      Returns
      int|null int - The node id.

      rd_kafka_Node_host()

      public static rd_kafka_Node_host ( 
          \FFI\CData|null $node
       ): string|null
      

      Get the host of node.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the node object.
      Parameters
      node \FFI\CData|null const rd_kafka_Node_t* - ) - The Node instance.
      Returns
      string|null const char* - The node host.

      rd_kafka_Node_port()

      public static rd_kafka_Node_port ( 
          \FFI\CData|null $node
       ): int|null
      

      Get the port of node.

      Parameters
      node \FFI\CData|null const rd_kafka_Node_t* - ) - The Node instance.
      Returns
      int|null uint16_t - The node port.

      rd_kafka_consumer_group_state_name()

      public static rd_kafka_consumer_group_state_name ( 
          int $state
       ): string|null
      

      Returns a name for a state code.

      Parameters
      state int rd_kafka_consumer_group_state_t - ) - The state value.
      Returns
      string|null const char* - The group state name corresponding to the provided group state value.

      rd_kafka_consumer_group_state_code()

      public static rd_kafka_consumer_group_state_code ( 
          string|null $name
       ): int
      

      Returns a code for a state name.

      Parameters
      name string|null const char* - ) - The state name.
      Returns
      int rd_kafka_consumer_group_state_t - The group state value corresponding to the provided group state name.

      rd_kafka_event_ListConsumerGroups_result()

      public static rd_kafka_event_ListConsumerGroups_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get ListConsumerGroups result.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the rkev object.

      Event types: RD_KAFKA_EVENT_LISTCONSUMERGROUPS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_ListConsumerGroups_result_t* - the result of a ListConsumerGroups request, or NULL if event is of different type.

      rd_kafka_event_DescribeConsumerGroups_result()

      public static rd_kafka_event_DescribeConsumerGroups_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DescribeConsumerGroups result.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the rkev object.

      Event types: RD_KAFKA_EVENT_DESCRIBECONSUMERGROUPS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_DescribeConsumerGroups_result_t* - the result of a DescribeConsumerGroups request, or NULL if event is of different type.

      rd_kafka_event_AlterConsumerGroupOffsets_result()

      public static rd_kafka_event_AlterConsumerGroupOffsets_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get AlterConsumerGroupOffsets result.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the rkev object.

      Event types: RD_KAFKA_EVENT_ALTERCONSUMERGROUPOFFSETS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_AlterConsumerGroupOffsets_result_t* - the result of an AlterConsumerGroupOffsets request, or NULL if event is of different type.

      rd_kafka_event_ListConsumerGroupOffsets_result()

      public static rd_kafka_event_ListConsumerGroupOffsets_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get ListConsumerGroupOffsets result.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the rkev object.

      Event types: RD_KAFKA_EVENT_LISTCONSUMERGROUPOFFSETS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t* - )
      Returns
      \FFI\CData|null const rd_kafka_ListConsumerGroupOffsets_result_t* - the result of a ListConsumerGroupOffsets request, or NULL if event is of different type.

      rd_kafka_interceptor_f_on_broker_state_change_t()

      public static rd_kafka_interceptor_f_on_broker_state_change_t ( 
          \FFI\CData|null $rk, 
          int|null $broker_id, 
          string|null $secproto, 
          string|null $name, 
          int|null $port, 
          string|null $state, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      on_broker_state_change() is called just after a broker has been created or its state has been changed.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - The client instance.
      broker_id int|null int32_t - The broker id (-1 is used for bootstrap brokers).
      secproto string|null const char* - The security protocol.
      name string|null const char* - The original name of the broker.
      port int|null int - The port of the broker.
      state string|null const char* - Broker state name.
      ic_opaque \FFI\CData|object|string|null void* - The interceptor’s opaque pointer specified in ..add..().
      Returns
      int rd_kafka_resp_err_t - an error code on failure, the error is logged but otherwise ignored.

      rd_kafka_interceptor_add_on_broker_state_change()

      public static rd_kafka_interceptor_add_on_broker_state_change ( 
          \FFI\CData|null $rk, 
          string|null $ic_name, 
          \FFI\CData|\Closure $on_broker_state_change, 
          \FFI\CData|object|string|null $ic_opaque
       ): int
      

      Append an on_broker_state_change() interceptor.

      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      ic_name string|null const char* - Interceptor name, used in logging.
      on_broker_state_change \FFI\CData|\Closure rd_kafka_resp_err_t(rd_kafka_interceptor_f_on_broker_state_change_t*)(rd_kafka_t*, int32_t, const char*, const char*, int, const char*, void*)
      ic_opaque \FFI\CData|object|string|null void* - Opaque value that will be passed to the function.
      Returns
      int rd_kafka_resp_err_t - RD_KAFKA_RESP_ERR_NO_ERROR on success or RD_KAFKA_RESP_ERR__CONFLICT if an existing interceptor with the same ic_name and function has already been added to conf.

      rd_kafka_AdminOptions_set_require_stable_offsets()

      public static rd_kafka_AdminOptions_set_require_stable_offsets ( 
          \FFI\CData|null $options, 
          int|null $true_or_false
       ): \FFI\CData|null
      

      Whether the broker should return stable offsets (transaction-committed).

      Remarks
      This option is valid for ListConsumerGroupOffsets.
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t* - Admin options.
      true_or_false int|null int - Defaults to false.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success, or a new error instance that must be released with rd_kafka_error_destroy() in case of error.

      rd_kafka_AdminOptions_set_match_consumer_group_states()

      public static rd_kafka_AdminOptions_set_match_consumer_group_states ( 
          \FFI\CData|null $options, 
          \FFI\CData|null $consumer_group_states, 
          int|null $consumer_group_states_cnt
       ): \FFI\CData|null
      

      Set consumer groups states to query for.

      Remarks
      This option is valid for ListConsumerGroups.
      Parameters
      options \FFI\CData|null rd_kafka_AdminOptions_t* - Admin options.
      consumer_group_states \FFI\CData|null const rd_kafka_consumer_group_state_t* - Array of consumer group states.
      consumer_group_states_cnt int|null size_t - Size of the consumer_group_states array.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success, or a new error instance that must be released with rd_kafka_error_destroy() in case of error.

      rd_kafka_ListConsumerGroups()

      public static rd_kafka_ListConsumerGroups ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      List the consumer groups available in the cluster.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_LISTCONSUMERGROUPS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.
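
As a sketch of the event-based flow (assuming a class using this trait is exposed as `\RdKafka\FFI\Library` and `$rk` is an existing `rd_kafka_t*` client handle), listing groups and reading the result might look like:

```php
<?php
// Sketch only: Library and $rk are assumptions, not part of this reference.
use RdKafka\FFI\Library;

$queue = Library::rd_kafka_queue_new($rk);

// NULL options -> defaults; the result arrives as an event on $queue.
Library::rd_kafka_ListConsumerGroups($rk, null, $queue);

// Wait up to 10 s for the RD_KAFKA_EVENT_LISTCONSUMERGROUPS_RESULT event.
$event  = Library::rd_kafka_queue_poll($queue, 10000);
$result = Library::rd_kafka_event_ListConsumerGroups_result($event);

$cnt    = \FFI::new('size_t');
$groups = Library::rd_kafka_ListConsumerGroups_result_valid($result, \FFI::addr($cnt));
for ($i = 0; $i < $cnt->cdata; $i++) {
    echo Library::rd_kafka_ConsumerGroupListing_group_id($groups[$i]), PHP_EOL;
}

// $result is owned by $event; destroying the event releases it.
Library::rd_kafka_event_destroy($event);
Library::rd_kafka_queue_destroy($queue);
```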

      rd_kafka_ConsumerGroupListing_group_id()

      public static rd_kafka_ConsumerGroupListing_group_id ( 
          \FFI\CData|null $grplist
       ): string|null
      

      Gets the group id for the grplist group.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the grplist object.
      Parameters
      grplist \FFI\CData|null const rd_kafka_ConsumerGroupListing_t* - The group listing.
      Returns
      string|null const char* - The group id.

      rd_kafka_ConsumerGroupListing_is_simple_consumer_group()

      public static rd_kafka_ConsumerGroupListing_is_simple_consumer_group ( 
          \FFI\CData|null $grplist
       ): int|null
      

      Is the grplist group a simple consumer group.

      Parameters
      grplist \FFI\CData|null const rd_kafka_ConsumerGroupListing_t* - The group listing.
      Returns
      int|null int - 1 if the group is a simple consumer group, else 0.

      rd_kafka_ConsumerGroupListing_state()

      public static rd_kafka_ConsumerGroupListing_state ( 
          \FFI\CData|null $grplist
       ): int
      

      Gets state for the grplist group.

      Parameters
      grplist \FFI\CData|null const rd_kafka_ConsumerGroupListing_t* - The group listing.
      Returns
      int rd_kafka_consumer_group_state_t - A group state.

      rd_kafka_ListConsumerGroups_result_valid()

      public static rd_kafka_ListConsumerGroups_result_valid ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of valid list groups from a ListConsumerGroups result.

      The returned groups life-time is the same as the result object.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the result object.
      Parameters
      result \FFI\CData|null const rd_kafka_ListConsumerGroups_result_t* - Result to get group results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_ConsumerGroupListing_t**

      rd_kafka_ListConsumerGroups_result_errors()

      public static rd_kafka_ListConsumerGroups_result_errors ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of errors from a ListConsumerGroups call result.

      The returned errors life-time is the same as the result object.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the result object.
      Parameters
      result \FFI\CData|null const rd_kafka_ListConsumerGroups_result_t* - ListConsumerGroups result.
      cntp \FFI\CData|null size_t* - Is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_error_t** - Array of errors in result.

      rd_kafka_DescribeConsumerGroups()

      public static rd_kafka_DescribeConsumerGroups ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $groups, 
          int|null $groups_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Describe groups from cluster as specified by the groups array of size groups_cnt elements.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_DESCRIBECONSUMERGROUPS_RESULT
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      groups \FFI\CData|null const char** - Array of groups to describe.
      groups_cnt int|null size_t - Number of elements in groups array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_DescribeConsumerGroups_result_groups()

      public static rd_kafka_DescribeConsumerGroups_result_groups ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of group results from a DescribeConsumerGroups result.

      The returned groups life-time is the same as the result object.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the result object.
      Parameters
      result \FFI\CData|null const rd_kafka_DescribeConsumerGroups_result_t* - Result to get group results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t**

      rd_kafka_ConsumerGroupDescription_group_id()

      public static rd_kafka_ConsumerGroupDescription_group_id ( 
          \FFI\CData|null $grpdesc
       ): string|null
      

      Gets the group id for the grpdesc group.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the grpdesc object.
      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      string|null const char* - The group id.

      rd_kafka_ConsumerGroupDescription_error()

      public static rd_kafka_ConsumerGroupDescription_error ( 
          \FFI\CData|null $grpdesc
       ): \FFI\CData|null
      

      Gets the error for the grpdesc group.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the grpdesc object.
      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      \FFI\CData|null const rd_kafka_error_t* - The group description error.

      rd_kafka_ConsumerGroupDescription_is_simple_consumer_group()

      public static rd_kafka_ConsumerGroupDescription_is_simple_consumer_group ( 
          \FFI\CData|null $grpdesc
       ): int|null
      

      Is the grpdesc group a simple consumer group.

      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      int|null int - 1 if the group is a simple consumer group, else 0.

      rd_kafka_ConsumerGroupDescription_partition_assignor()

      public static rd_kafka_ConsumerGroupDescription_partition_assignor ( 
          \FFI\CData|null $grpdesc
       ): string|null
      

      Gets the partition assignor for the grpdesc group.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the grpdesc object.
      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      string|null const char* - The partition assignor.

      rd_kafka_ConsumerGroupDescription_state()

      public static rd_kafka_ConsumerGroupDescription_state ( 
          \FFI\CData|null $grpdesc
       ): int
      

      Gets state for the grpdesc group.

      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      int rd_kafka_consumer_group_state_t - A group state.

      rd_kafka_ConsumerGroupDescription_coordinator()

      public static rd_kafka_ConsumerGroupDescription_coordinator ( 
          \FFI\CData|null $grpdesc
       ): \FFI\CData|null
      

      Gets the coordinator for the grpdesc group.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the grpdesc object.
      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      \FFI\CData|null const rd_kafka_Node_t* - The group coordinator.

      rd_kafka_ConsumerGroupDescription_member_count()

      public static rd_kafka_ConsumerGroupDescription_member_count ( 
          \FFI\CData|null $grpdesc
       ): int|null
      

      Gets the member count of the grpdesc group.

      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      Returns
      int|null size_t - The member count.

      rd_kafka_ConsumerGroupDescription_member()

      public static rd_kafka_ConsumerGroupDescription_member ( 
          \FFI\CData|null $grpdesc, 
          int|null $idx
       ): \FFI\CData|null
      

      Gets a member of grpdesc group.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the grpdesc object.
      Parameters
      grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t* - The group description.
      idx int|null size_t - The member idx.
      Returns
      \FFI\CData|null const rd_kafka_MemberDescription_t* - A member at index idx, or NULL if idx is out of range.
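
Assuming `$grpdesc` is a `rd_kafka_ConsumerGroupDescription_t*` taken from a DescribeConsumerGroups result event, and that a class using this trait is exposed as `\RdKafka\FFI\Library` (an assumption, not part of this reference), the member accessors above compose like this:

```php
<?php
use RdKafka\FFI\Library;

// $grpdesc: const rd_kafka_ConsumerGroupDescription_t* from a result event.
$memberCount = Library::rd_kafka_ConsumerGroupDescription_member_count($grpdesc);
for ($i = 0; $i < $memberCount; $i++) {
    $member = Library::rd_kafka_ConsumerGroupDescription_member($grpdesc, $i);
    printf(
        "member %s (client %s) on %s\n",
        Library::rd_kafka_MemberDescription_consumer_id($member),
        Library::rd_kafka_MemberDescription_client_id($member),
        Library::rd_kafka_MemberDescription_host($member)
    );
}
```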

      rd_kafka_MemberDescription_client_id()

      public static rd_kafka_MemberDescription_client_id ( 
          \FFI\CData|null $member
       ): string|null
      

      Gets client id of member.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the member object.
      Parameters
      member \FFI\CData|null const rd_kafka_MemberDescription_t* - The group member.
      Returns
      string|null const char* - The client id.

      rd_kafka_MemberDescription_group_instance_id()

      public static rd_kafka_MemberDescription_group_instance_id ( 
          \FFI\CData|null $member
       ): string|null
      

      Gets group instance id of member.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the member object.
      Parameters
      member \FFI\CData|null const rd_kafka_MemberDescription_t* - The group member.
      Returns
      string|null const char* - The group instance id, or NULL if not available.

      rd_kafka_MemberDescription_consumer_id()

      public static rd_kafka_MemberDescription_consumer_id ( 
          \FFI\CData|null $member
       ): string|null
      

      Gets consumer id of member.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the member object.
      Parameters
      member \FFI\CData|null const rd_kafka_MemberDescription_t* - The group member.
      Returns
      string|null const char* - The consumer id.

      rd_kafka_MemberDescription_host()

      public static rd_kafka_MemberDescription_host ( 
          \FFI\CData|null $member
       ): string|null
      

      Gets host of member.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the member object.
      Parameters
      member \FFI\CData|null const rd_kafka_MemberDescription_t* - The group member.
      Returns
      string|null const char* - The host.

      rd_kafka_MemberDescription_assignment()

      public static rd_kafka_MemberDescription_assignment ( 
          \FFI\CData|null $member
       ): \FFI\CData|null
      

      Gets assignment of member.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the member object.
      Parameters
      member \FFI\CData|null const rd_kafka_MemberDescription_t* - The group member.
      Returns
      \FFI\CData|null const rd_kafka_MemberAssignment_t* - The member assignment.

      rd_kafka_MemberAssignment_partitions()

      public static rd_kafka_MemberAssignment_partitions ( 
          \FFI\CData|null $assignment
       ): \FFI\CData|null
      

      Gets assigned partitions of a member assignment.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the assignment object.
      Parameters
      assignment \FFI\CData|null const rd_kafka_MemberAssignment_t* - The group member assignment.
      Returns
      \FFI\CData|null const rd_kafka_topic_partition_list_t* - The assigned partitions.

      rd_kafka_ListConsumerGroupOffsets_new()

      public static rd_kafka_ListConsumerGroupOffsets_new ( 
          string|null $group_id, 
          \FFI\CData|null $partitions
       ): \FFI\CData|null
      

      Create a new ListConsumerGroupOffsets object. This object is later passed to rd_kafka_ListConsumerGroupOffsets().

      Parameters
      group_id string|null const char* - Consumer group id.
      partitions \FFI\CData|null const rd_kafka_topic_partition_list_t* - Partitions to list committed offsets for. Only the topic and partition fields are used.
      Returns
      \FFI\CData|null rd_kafka_ListConsumerGroupOffsets_t* - a newly allocated ListConsumerGroupOffsets object. Use rd_kafka_ListConsumerGroupOffsets_destroy() to free the object when done.

      rd_kafka_ListConsumerGroupOffsets_destroy()

      public static rd_kafka_ListConsumerGroupOffsets_destroy ( 
          \FFI\CData|null $list_grpoffsets
       ): void
      
      Parameters
      list_grpoffsets \FFI\CData|null rd_kafka_ListConsumerGroupOffsets_t*

      rd_kafka_ListConsumerGroupOffsets_destroy_array()

      public static rd_kafka_ListConsumerGroupOffsets_destroy_array ( 
          \FFI\CData|null $list_grpoffsets, 
          int|null $list_grpoffset_cnt
       ): void
      
      Parameters
      list_grpoffsets \FFI\CData|null rd_kafka_ListConsumerGroupOffsets_t**
      list_grpoffset_cnt int|null size_t

      rd_kafka_ListConsumerGroupOffsets()

      public static rd_kafka_ListConsumerGroupOffsets ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $list_grpoffsets, 
          int|null $list_grpoffsets_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      List committed offsets for a set of partitions in a consumer group.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_LISTCONSUMERGROUPOFFSETS_RESULT
      The current implementation only supports one group per invocation.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      list_grpoffsets \FFI\CData|null rd_kafka_ListConsumerGroupOffsets_t** - Array of group committed offsets to list. MUST only be one single element.
      list_grpoffsets_cnt int|null size_t - Number of elements in list_grpoffsets array. MUST always be 1.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.
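
A minimal sketch of a one-group request, assuming `$rk` and `$queue` already exist and that a class using this trait is exposed as `\RdKafka\FFI\Library` (both assumptions):

```php
<?php
use RdKafka\FFI\Library;

// Partitions of interest; only topic and partition fields are used.
$partitions = Library::rd_kafka_topic_partition_list_new(1);
Library::rd_kafka_topic_partition_list_add($partitions, 'my-topic', 0);

$request = Library::rd_kafka_ListConsumerGroupOffsets_new('my-group', $partitions);

// The call takes an array, but it MUST contain exactly one element.
$requests = Library::getFFI()->new('rd_kafka_ListConsumerGroupOffsets_t*[1]');
$requests[0] = $request;
Library::rd_kafka_ListConsumerGroupOffsets($rk, $requests, 1, null, $queue);

// ...poll $queue for RD_KAFKA_EVENT_LISTCONSUMERGROUPOFFSETS_RESULT...

Library::rd_kafka_ListConsumerGroupOffsets_destroy($request);
Library::rd_kafka_topic_partition_list_destroy($partitions);
```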

      rd_kafka_ListConsumerGroupOffsets_result_groups()

      public static rd_kafka_ListConsumerGroupOffsets_result_groups ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of results from a ListConsumerGroupOffsets result.

      The returned groups life-time is the same as the result object.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the result object.
      Parameters
      result \FFI\CData|null const rd_kafka_ListConsumerGroupOffsets_result_t* - Result to get group results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_group_result_t**

      rd_kafka_AlterConsumerGroupOffsets_new()

      public static rd_kafka_AlterConsumerGroupOffsets_new ( 
          string|null $group_id, 
          \FFI\CData|null $partitions
       ): \FFI\CData|null
      

      Create a new AlterConsumerGroupOffsets object. This object is later passed to rd_kafka_AlterConsumerGroupOffsets().

      Parameters
      group_id string|null const char* - Consumer group id.
      partitions \FFI\CData|null const rd_kafka_topic_partition_list_t* - Partitions to alter committed offsets for. Only the topic and partition fields are used.
      Returns
      \FFI\CData|null rd_kafka_AlterConsumerGroupOffsets_t* - a newly allocated AlterConsumerGroupOffsets object. Use rd_kafka_AlterConsumerGroupOffsets_destroy() to free the object when done.

      rd_kafka_AlterConsumerGroupOffsets_destroy()

      public static rd_kafka_AlterConsumerGroupOffsets_destroy ( 
          \FFI\CData|null $alter_grpoffsets
       ): void
      
      Parameters
      alter_grpoffsets \FFI\CData|null rd_kafka_AlterConsumerGroupOffsets_t*

      rd_kafka_AlterConsumerGroupOffsets_destroy_array()

      public static rd_kafka_AlterConsumerGroupOffsets_destroy_array ( 
          \FFI\CData|null $alter_grpoffsets, 
          int|null $alter_grpoffset_cnt
       ): void
      
      Parameters
      alter_grpoffsets \FFI\CData|null rd_kafka_AlterConsumerGroupOffsets_t**
      alter_grpoffset_cnt int|null size_t

      rd_kafka_AlterConsumerGroupOffsets()

      public static rd_kafka_AlterConsumerGroupOffsets ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $alter_grpoffsets, 
          int|null $alter_grpoffsets_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Alter committed offsets for a set of partitions in a consumer group. This will succeed at the partition level only if the group is not actively subscribed to the corresponding topic.

      Remarks
      The result event type emitted on the supplied queue is of type RD_KAFKA_EVENT_ALTERCONSUMERGROUPOFFSETS_RESULT
      The current implementation only supports one group per invocation.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      alter_grpoffsets \FFI\CData|null rd_kafka_AlterConsumerGroupOffsets_t** - Array of group committed offsets to alter. MUST only be one single element.
      alter_grpoffsets_cnt int|null size_t - Number of elements in alter_grpoffsets array. MUST always be 1.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

      rd_kafka_AlterConsumerGroupOffsets_result_groups()

      public static rd_kafka_AlterConsumerGroupOffsets_result_groups ( 
          \FFI\CData|null $result, 
          \FFI\CData|null $cntp
       ): \FFI\CData|null
      

      Get an array of results from an AlterConsumerGroupOffsets result.

      The returned groups life-time is the same as the result object.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the result object.
      Parameters
      result \FFI\CData|null const rd_kafka_AlterConsumerGroupOffsets_result_t* - Result to get group results from.
      cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
      Returns
      \FFI\CData|null const rd_kafka_group_result_t**

      rd_kafka_mock_broker_error_stack_cnt()

      public static rd_kafka_mock_broker_error_stack_cnt ( 
          \FFI\CData|null $mcluster, 
          int|null $broker_id, 
          int|null $ApiKey, 
          \FFI\CData|null $cntp
       ): int
      
      Parameters
      mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
      broker_id int|null int32_t
      ApiKey int|null int16_t
      cntp \FFI\CData|null size_t*
      Returns
      int rd_kafka_resp_err_t

      rd_kafka_topic_partition_set_leader_epoch()

      public static rd_kafka_topic_partition_set_leader_epoch ( 
          \FFI\CData|null $rktpar, 
          int|null $leader_epoch
       ): void
      

      Sets the offset leader epoch (use -1 to clear).

      Remarks
      See KIP-320 for more information.
      Parameters
      rktpar \FFI\CData|null rd_kafka_topic_partition_t* - Partition object.
      leader_epoch int|null int32_t - Offset leader epoch, use -1 to reset.

      rd_kafka_topic_partition_get_leader_epoch()

      public static rd_kafka_topic_partition_get_leader_epoch ( 
          \FFI\CData|null $rktpar
       ): int|null
      
      Remarks
      See KIP-320 for more information.
      Parameters
      rktpar \FFI\CData|null const rd_kafka_topic_partition_t* - Partition object.
      Returns
      int|null int32_t - the offset leader epoch, if relevant and known, else -1.

      rd_kafka_message_leader_epoch()

      public static rd_kafka_message_leader_epoch ( 
          \FFI\CData|null $rkmessage
       ): int|null
      
      Remarks
      This API must only be used on consumed messages without error.
      Requires broker version >= 2.1.0 (KIP-320).
      Parameters
      rkmessage \FFI\CData|null const rd_kafka_message_t*
      Returns
      int|null int32_t - The message’s partition leader epoch at the time the message was fetched, if known, else -1.

      rd_kafka_offset_store_message()

      public static rd_kafka_offset_store_message ( 
          \FFI\CData|null $rkmessage
       ): \FFI\CData|null
      

      Store offset +1 for the consumed message.

      The message offset + 1 will be committed to the broker according to auto.commit.interval.ms or by a manual offset-less commit().

      Warning
      This method may only be called for partitions that are currently assigned. Non-assigned partitions will fail with RD_KAFKA_RESP_ERR__STATE. Since v1.9.0.
      Avoid storing offsets after calling rd_kafka_seek() (et al.), as this may later interfere with resuming a paused partition; instead, store offsets prior to calling seek.
      Remarks
      enable.auto.offset.store must be set to "false" when using this API.
      Parameters
      rkmessage \FFI\CData|null rd_kafka_message_t*
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success or an error object on failure.
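
A sketch of manual offset storage, assuming enable.auto.offset.store=false on the consumer configuration, a class using this trait exposed as `\RdKafka\FFI\Library`, and `$rkmessage` a successfully consumed `rd_kafka_message_t*` (all assumptions):

```php
<?php
use RdKafka\FFI\Library;

$error = Library::rd_kafka_offset_store_message($rkmessage);
if ($error !== null) {
    // e.g. RD_KAFKA_RESP_ERR__STATE when the partition is no longer assigned
    echo Library::rd_kafka_error_string($error), PHP_EOL;
    Library::rd_kafka_error_destroy($error);
}
```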

      rd_kafka_event_IncrementalAlterConfigs_result()

      public static rd_kafka_event_IncrementalAlterConfigs_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get IncrementalAlterConfigs result.

      Event types: RD_KAFKA_EVENT_INCREMENTALALTERCONFIGS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      Returns
      \FFI\CData|null const rd_kafka_IncrementalAlterConfigs_result_t* - the result of an IncrementalAlterConfigs request, or NULL if event is of different type.

      rd_kafka_event_DescribeUserScramCredentials_result()

      public static rd_kafka_event_DescribeUserScramCredentials_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get DescribeUserScramCredentials result.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the rkev object.

      Event types: RD_KAFKA_EVENT_DESCRIBEUSERSCRAMCREDENTIALS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      Returns
      \FFI\CData|null const rd_kafka_DescribeUserScramCredentials_result_t* - the result of a DescribeUserScramCredentials request, or NULL if event is of different type.

      rd_kafka_event_AlterUserScramCredentials_result()

      public static rd_kafka_event_AlterUserScramCredentials_result ( 
          \FFI\CData|null $rkev
       ): \FFI\CData|null
      

      Get AlterUserScramCredentials result.

      Remarks
      The lifetime of the returned memory is the same as the lifetime of the rkev object.

      Event types: RD_KAFKA_EVENT_ALTERUSERSCRAMCREDENTIALS_RESULT

      Parameters
      rkev \FFI\CData|null rd_kafka_event_t*
      Returns
      \FFI\CData|null const rd_kafka_AlterUserScramCredentials_result_t* - the result of an AlterUserScramCredentials request, or NULL if event is of different type.

      rd_kafka_ConfigResource_add_incremental_config()

      public static rd_kafka_ConfigResource_add_incremental_config ( 
          \FFI\CData|null $config, 
          string|null $name, 
          int $op_type, 
          string|null $value
       ): \FFI\CData|null
      

      Add the value of the configuration entry for a subsequent incremental alter config operation. APPEND and SUBTRACT are possible for list-type configuration entries only.

      Parameters
      config \FFI\CData|null rd_kafka_ConfigResource_t* - ConfigResource to add config property to.
      name string|null const char* - Configuration name, depends on resource type.
      op_type int rd_kafka_AlterConfigOpType_t - Operation type, one of rd_kafka_AlterConfigOpType_t.
      value string|null const char* - Configuration value, depends on resource type and name. Set to NULL, only with op_type set to DELETE, to revert the configuration value to its default.
      Returns
      \FFI\CData|null rd_kafka_error_t* - NULL on success, or an rd_kafka_error_t* with the corresponding error code and string. Error ownership belongs to the caller. Possible error codes:
      • RD_KAFKA_RESP_ERR__INVALID_ARG on invalid input.

      rd_kafka_IncrementalAlterConfigs()

      public static rd_kafka_IncrementalAlterConfigs ( 
          \FFI\CData|null $rk, 
          \FFI\CData|null $configs, 
          int|null $config_cnt, 
          \FFI\CData|null $options, 
          \FFI\CData|null $rkqu
       ): void
      

      Incrementally update the configuration for the specified resources. Updates are not transactional, so they may succeed for some resources and fail for others. The configs for a particular resource are updated atomically, executing the corresponding incremental operations on the provided configurations.

      Remarks
      Requires broker version >= 2.3.0.
      Multiple resources and resource types may be set, but at most one resource of type RD_KAFKA_RESOURCE_BROKER is allowed per call, since these resource requests must be sent to the broker specified in the resource. The broker option will be ignored in this case.
      Parameters
      rk \FFI\CData|null rd_kafka_t* - Client instance.
      configs \FFI\CData|null rd_kafka_ConfigResource_t** - Array of config entries to alter.
      config_cnt int|null size_t - Number of elements in configs array.
      options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
      rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.
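
A sketch of appending a value to a list-type topic configuration entry, assuming `$rk` and `$queue` exist, a class using this trait is exposed as `\RdKafka\FFI\Library`, and the enum constants below (plain integers in the C enums) are defined by the binding (all assumptions):

```php
<?php
use RdKafka\FFI\Library;

$config = Library::rd_kafka_ConfigResource_new(RD_KAFKA_RESOURCE_TOPIC, 'my-topic');

$error = Library::rd_kafka_ConfigResource_add_incremental_config(
    $config,
    'cleanup.policy',
    RD_KAFKA_ALTER_CONFIG_OP_TYPE_APPEND, // APPEND is valid for list-type entries only
    'compact'
);
if ($error !== null) {
    echo Library::rd_kafka_error_string($error), PHP_EOL;
    Library::rd_kafka_error_destroy($error);
}

// Submit; the result is emitted on $queue as
// RD_KAFKA_EVENT_INCREMENTALALTERCONFIGS_RESULT.
$configs = Library::getFFI()->new('rd_kafka_ConfigResource_t*[1]');
$configs[0] = $config;
Library::rd_kafka_IncrementalAlterConfigs($rk, $configs, 1, null, $queue);
```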

        rd_kafka_IncrementalAlterConfigs_result_resources()

        public static rd_kafka_IncrementalAlterConfigs_result_resources ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        

        Get an array of resource results from an IncrementalAlterConfigs result.

        Use rd_kafka_ConfigResource_error() and rd_kafka_ConfigResource_error_string() to extract per-resource error results on the returned array elements.

        The returned object life-times are the same as the result object.

        Parameters
        result \FFI\CData|null const rd_kafka_IncrementalAlterConfigs_result_t* - Result object to get resource results from.
        cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
        Returns
        \FFI\CData|null const rd_kafka_ConfigResource_t** - an array of ConfigResource elements, or NULL if not available.

        rd_kafka_ScramCredentialInfo_mechanism()

        public static rd_kafka_ScramCredentialInfo_mechanism ( 
            \FFI\CData|null $scram_credential_info
         ): int
        
        Parameters
        scram_credential_info \FFI\CData|null const rd_kafka_ScramCredentialInfo_t*
        Returns
        int rd_kafka_ScramMechanism_t

        rd_kafka_ScramCredentialInfo_iterations()

        public static rd_kafka_ScramCredentialInfo_iterations ( 
            \FFI\CData|null $scram_credential_info
         ): int|null
        
        Parameters
        scram_credential_info \FFI\CData|null const rd_kafka_ScramCredentialInfo_t*
        Returns
        int|null int32_t

        rd_kafka_UserScramCredentialsDescription_user()

        public static rd_kafka_UserScramCredentialsDescription_user ( 
            \FFI\CData|null $description
         ): string|null
        
        Parameters
        description \FFI\CData|null const rd_kafka_UserScramCredentialsDescription_t*
        Returns
        string|null const char*

        rd_kafka_UserScramCredentialsDescription_error()

        public static rd_kafka_UserScramCredentialsDescription_error ( 
            \FFI\CData|null $description
         ): \FFI\CData|null
        
        Parameters
        description \FFI\CData|null const rd_kafka_UserScramCredentialsDescription_t*
        Returns
        \FFI\CData|null const rd_kafka_error_t*

        rd_kafka_UserScramCredentialsDescription_scramcredentialinfo_count()

        public static rd_kafka_UserScramCredentialsDescription_scramcredentialinfo_count ( 
            \FFI\CData|null $description
         ): int|null
        
        Parameters
        description \FFI\CData|null const rd_kafka_UserScramCredentialsDescription_t*
        Returns
        int|null size_t

        rd_kafka_UserScramCredentialsDescription_scramcredentialinfo()

        public static rd_kafka_UserScramCredentialsDescription_scramcredentialinfo ( 
            \FFI\CData|null $description, 
            int|null $idx
         ): \FFI\CData|null
        
        Parameters
        description \FFI\CData|null const rd_kafka_UserScramCredentialsDescription_t*
        idx int|null size_t
        Returns
        \FFI\CData|null const rd_kafka_ScramCredentialInfo_t*

        rd_kafka_DescribeUserScramCredentials_result_descriptions()

        public static rd_kafka_DescribeUserScramCredentials_result_descriptions ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        

        Get an array of descriptions from a DescribeUserScramCredentials result.

        The returned value life-time is the same as the result object.

        Parameters
        result \FFI\CData|null const rd_kafka_DescribeUserScramCredentials_result_t* - Result to get descriptions from.
        cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
        Returns
        \FFI\CData|null const rd_kafka_UserScramCredentialsDescription_t**

        rd_kafka_DescribeUserScramCredentials()

        public static rd_kafka_DescribeUserScramCredentials ( 
            \FFI\CData|null $rk, 
            \FFI\CData|null $users, 
            int|null $user_cnt, 
            \FFI\CData|null $options, 
            \FFI\CData|null $rkqu
         ): void
        

        Describe SASL/SCRAM credentials. This operation is supported by brokers with version 2.7.0 or higher.

        Parameters
        rk \FFI\CData|null rd_kafka_t* - Client instance.
        users \FFI\CData|null const char** - The users for which credentials are to be described. All users’ credentials are described if NULL.
        user_cnt int|null size_t - Number of elements in users array.
        options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
        rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.

        rd_kafka_UserScramCredentialUpsertion_new()

        public static rd_kafka_UserScramCredentialUpsertion_new ( 
            string|null $username, 
            int $mechanism, 
            int|null $iterations, 
            \FFI\CData|null $password, 
            int|null $password_size, 
            \FFI\CData|null $salt, 
            int|null $salt_size
         ): \FFI\CData|null
        

        Allocates a new UserScramCredentialUpsertion given its fields. If salt isn't given, a 64-byte salt is generated using OpenSSL RAND_priv_bytes, if available.

        Remarks
        A random salt is generated when NULL is passed, but only with OpenSSL >= 1.1.1; otherwise salt is a required parameter.
        Parameters
        username string|null const char* - The username (not empty).
        mechanism int rd_kafka_ScramMechanism_t - SASL/SCRAM mechanism.
        iterations int|null int32_t - SASL/SCRAM iterations.
        password \FFI\CData|null const unsigned char* - Password bytes (not empty).
        password_size int|null size_t - Size of password (greater than 0).
        salt \FFI\CData|null const unsigned char* - Salt bytes (optional).
        salt_size int|null size_t - Size of salt (optional).
        Returns
        \FFI\CData|null rd_kafka_UserScramCredentialAlteration_t* - A newly created instance of rd_kafka_UserScramCredentialAlteration_t. Ownership belongs to the caller, use rd_kafka_UserScramCredentialAlteration_destroy to destroy.

        rd_kafka_UserScramCredentialDeletion_new()

        public static rd_kafka_UserScramCredentialDeletion_new ( 
            string|null $username, 
            int $mechanism
         ): \FFI\CData|null
        

        Allocates a new UserScramCredentialDeletion given its fields.

        Parameters
        username string|null const char* - The username (not empty).
        mechanism int rd_kafka_ScramMechanism_t - SASL/SCRAM mechanism.
        Returns
        \FFI\CData|null rd_kafka_UserScramCredentialAlteration_t* - A newly created instance of rd_kafka_UserScramCredentialAlteration_t. Ownership belongs to the caller, use rd_kafka_UserScramCredentialAlteration_destroy to destroy.

        rd_kafka_UserScramCredentialAlteration_destroy()

        public static rd_kafka_UserScramCredentialAlteration_destroy ( 
            \FFI\CData|null $alteration
         ): void
        
        Parameters
        alteration \FFI\CData|null rd_kafka_UserScramCredentialAlteration_t*

        rd_kafka_UserScramCredentialAlteration_destroy_array()

        public static rd_kafka_UserScramCredentialAlteration_destroy_array ( 
            \FFI\CData|null $alterations, 
            int|null $alteration_cnt
         ): void
        
        Parameters
        alterations \FFI\CData|null rd_kafka_UserScramCredentialAlteration_t**
        alteration_cnt int|null size_t

        rd_kafka_AlterUserScramCredentials_result_response_user()

        public static rd_kafka_AlterUserScramCredentials_result_response_user ( 
            \FFI\CData|null $response
         ): string|null
        
        Parameters
        response \FFI\CData|null const rd_kafka_AlterUserScramCredentials_result_response_t*
        Returns
        string|null const char*

        rd_kafka_AlterUserScramCredentials_result_response_error()

        public static rd_kafka_AlterUserScramCredentials_result_response_error ( 
            \FFI\CData|null $response
         ): \FFI\CData|null
        
        Parameters
        response \FFI\CData|null const rd_kafka_AlterUserScramCredentials_result_response_t*
        Returns
        \FFI\CData|null const rd_kafka_error_t*

        rd_kafka_AlterUserScramCredentials_result_responses()

        public static rd_kafka_AlterUserScramCredentials_result_responses ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        

        Get an array of responses from an AlterUserScramCredentials result.

        The lifetime of the returned value is the same as that of the result object.

        Parameters
        result \FFI\CData|null const rd_kafka_AlterUserScramCredentials_result_t* - Result to get responses from.
        cntp \FFI\CData|null size_t* - is updated to the number of elements in the array.
        Returns
        \FFI\CData|null const rd_kafka_AlterUserScramCredentials_result_response_t**

        rd_kafka_AlterUserScramCredentials()

        public static rd_kafka_AlterUserScramCredentials ( 
            \FFI\CData|null $rk, 
            \FFI\CData|null $alterations, 
            int|null $alteration_cnt, 
            \FFI\CData|null $options, 
            \FFI\CData|null $rkqu
         ): void
        

        Alter SASL/SCRAM credentials. This operation is supported by brokers with version 2.7.0 or higher.

        Remarks
        For upsertions to be processed, librdkafka must be built with OpenSSL support, which is needed to calculate the HMAC.
        Parameters
        rk \FFI\CData|null rd_kafka_t* - Client instance.
        alterations \FFI\CData|null rd_kafka_UserScramCredentialAlteration_t** - The alterations to be applied.
        alteration_cnt int|null size_t - Number of elements in alterations array.
        options \FFI\CData|null const rd_kafka_AdminOptions_t* - Optional admin options, or NULL for defaults.
        rkqu \FFI\CData|null rd_kafka_queue_t* - Queue to emit result on.
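
        A minimal sketch of the alter flow, assuming a class (named Library here) that uses this trait, a client $rk, a result queue $queue, and a Library::new() helper for allocating C data — all hypothetical names, and string-to-CData marshalling depends on the binding:

            use RdKafka\FFI\Library;

            // Build one upsertion; with a NULL salt librdkafka generates a
            // 64-byte salt itself (requires OpenSSL >= 1.1.1).
            $password  = 'secret';
            $upsertion = Library::rd_kafka_UserScramCredentialUpsertion_new(
                'alice',
                RD_KAFKA_SCRAM_MECHANISM_SHA_256, // enum constant, exposure depends on the binding
                8192,                             // iterations
                $password, strlen($password),
                null, 0                           // salt, salt_size
            );

            // Submit the alteration, then clean up after the result arrives.
            $alterations    = Library::new('rd_kafka_UserScramCredentialAlteration_t*[1]');
            $alterations[0] = $upsertion;
            Library::rd_kafka_AlterUserScramCredentials($rk, $alterations, 1, null, $queue);
            // ...poll $queue for the result event, then:
            Library::rd_kafka_UserScramCredentialAlteration_destroy_array($alterations, 1);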

        rd_kafka_Uuid_base64str()

        public static rd_kafka_Uuid_base64str ( 
            \FFI\CData|null $uuid
         ): string|null
        
        Parameters
        uuid \FFI\CData|null const rd_kafka_Uuid_t*
        Returns
        string|null const char*

        rd_kafka_Uuid_least_significant_bits()

        public static rd_kafka_Uuid_least_significant_bits ( 
            \FFI\CData|null $uuid
         ): int|null
        
        Parameters
        uuid \FFI\CData|null const rd_kafka_Uuid_t*
        Returns
        int|null int64_t

        rd_kafka_Uuid_most_significant_bits()

        public static rd_kafka_Uuid_most_significant_bits ( 
            \FFI\CData|null $uuid
         ): int|null
        
        Parameters
        uuid \FFI\CData|null const rd_kafka_Uuid_t*
        Returns
        int|null int64_t

        rd_kafka_Uuid_new()

        public static rd_kafka_Uuid_new ( 
            int|null $most_significant_bits, 
            int|null $least_significant_bits
         ): \FFI\CData|null
        
        Parameters
        most_significant_bits int|null int64_t
        least_significant_bits int|null int64_t
        Returns
        \FFI\CData|null rd_kafka_Uuid_t*

        rd_kafka_Uuid_copy()

        public static rd_kafka_Uuid_copy ( 
            \FFI\CData|null $uuid
         ): \FFI\CData|null
        
        Parameters
        uuid \FFI\CData|null const rd_kafka_Uuid_t*
        Returns
        \FFI\CData|null rd_kafka_Uuid_t*

        rd_kafka_Uuid_destroy()

        public static rd_kafka_Uuid_destroy ( 
            \FFI\CData|null $uuid
         ): void
        
        Parameters
        uuid \FFI\CData|null rd_kafka_Uuid_t*
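
        The Uuid accessors above pair naturally with rd_kafka_Uuid_new() and rd_kafka_Uuid_destroy(). A small sketch, assuming a class (named Library here) that uses this trait — a hypothetical name:

            use RdKafka\FFI\Library;

            // Construct a UUID from its two 64-bit halves, inspect it, copy it,
            // and free both instances when done.
            $uuid = Library::rd_kafka_Uuid_new(0x0123456789ABCDEF, 0x1122334455667788);
            $msb  = Library::rd_kafka_Uuid_most_significant_bits($uuid);
            $b64  = Library::rd_kafka_Uuid_base64str($uuid); // base64 text form
            $copy = Library::rd_kafka_Uuid_copy($uuid);
            Library::rd_kafka_Uuid_destroy($copy);
            Library::rd_kafka_Uuid_destroy($uuid);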

        rd_kafka_Node_rack()

        public static rd_kafka_Node_rack ( 
            \FFI\CData|null $node
         ): string|null
        
        Parameters
        node \FFI\CData|null const rd_kafka_Node_t*
        Returns
        string|null const char*

        rd_kafka_event_DescribeTopics_result()

        public static rd_kafka_event_DescribeTopics_result ( 
            \FFI\CData|null $rkev
         ): \FFI\CData|null
        
        Parameters
        rkev \FFI\CData|null rd_kafka_event_t*
        Returns
        \FFI\CData|null const rd_kafka_DescribeTopics_result_t*

        rd_kafka_event_DescribeCluster_result()

        public static rd_kafka_event_DescribeCluster_result ( 
            \FFI\CData|null $rkev
         ): \FFI\CData|null
        
        Parameters
        rkev \FFI\CData|null rd_kafka_event_t*
        Returns
        \FFI\CData|null const rd_kafka_DescribeCluster_result_t*

        rd_kafka_event_ListOffsets_result()

        public static rd_kafka_event_ListOffsets_result ( 
            \FFI\CData|null $rkev
         ): \FFI\CData|null
        
        Parameters
        rkev \FFI\CData|null rd_kafka_event_t*
        Returns
        \FFI\CData|null const rd_kafka_ListOffsets_result_t*

        rd_kafka_AdminOptions_set_include_authorized_operations()

        public static rd_kafka_AdminOptions_set_include_authorized_operations ( 
            \FFI\CData|null $options, 
            int|null $true_or_false
         ): \FFI\CData|null
        
        Parameters
        options \FFI\CData|null rd_kafka_AdminOptions_t*
        true_or_false int|null int
        Returns
        \FFI\CData|null rd_kafka_error_t*

        rd_kafka_AdminOptions_set_isolation_level()

        public static rd_kafka_AdminOptions_set_isolation_level ( 
            \FFI\CData|null $options, 
            int $value
         ): \FFI\CData|null
        
        Parameters
        options \FFI\CData|null rd_kafka_AdminOptions_t*
        value int rd_kafka_IsolationLevel_t
        Returns
        \FFI\CData|null rd_kafka_error_t*

        rd_kafka_TopicCollection_of_topic_names()

        public static rd_kafka_TopicCollection_of_topic_names ( 
            \FFI\CData|null $topics, 
            int|null $topics_cnt
         ): \FFI\CData|null
        
        Parameters
        topics \FFI\CData|null const char**
        topics_cnt int|null size_t
        Returns
        \FFI\CData|null rd_kafka_TopicCollection_t*

        rd_kafka_TopicCollection_destroy()

        public static rd_kafka_TopicCollection_destroy ( 
            \FFI\CData|null $topics
         ): void
        
        Parameters
        topics \FFI\CData|null rd_kafka_TopicCollection_t*

        rd_kafka_DescribeTopics()

        public static rd_kafka_DescribeTopics ( 
            \FFI\CData|null $rk, 
            \FFI\CData|null $topics, 
            \FFI\CData|null $options, 
            \FFI\CData|null $rkqu
         ): void
        
        Parameters
        rk \FFI\CData|null rd_kafka_t*
        topics \FFI\CData|null const rd_kafka_TopicCollection_t*
        options \FFI\CData|null const rd_kafka_AdminOptions_t*
        rkqu \FFI\CData|null rd_kafka_queue_t*

        rd_kafka_DescribeTopics_result_topics()

        public static rd_kafka_DescribeTopics_result_topics ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        result \FFI\CData|null const rd_kafka_DescribeTopics_result_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_TopicDescription_t**

        rd_kafka_TopicDescription_partitions()

        public static rd_kafka_TopicDescription_partitions ( 
            \FFI\CData|null $topicdesc, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        topicdesc \FFI\CData|null const rd_kafka_TopicDescription_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_TopicPartitionInfo_t**

        rd_kafka_TopicPartitionInfo_partition()

        public static rd_kafka_TopicPartitionInfo_partition ( 
            \FFI\CData|null $partition
         ): int|null
        
        Parameters
        partition \FFI\CData|null const rd_kafka_TopicPartitionInfo_t*
        Returns
        int|null const int

        rd_kafka_TopicPartitionInfo_leader()

        public static rd_kafka_TopicPartitionInfo_leader ( 
            \FFI\CData|null $partition
         ): \FFI\CData|null
        
        Parameters
        partition \FFI\CData|null const rd_kafka_TopicPartitionInfo_t*
        Returns
        \FFI\CData|null const rd_kafka_Node_t*

        rd_kafka_TopicPartitionInfo_isr()

        public static rd_kafka_TopicPartitionInfo_isr ( 
            \FFI\CData|null $partition, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        partition \FFI\CData|null const rd_kafka_TopicPartitionInfo_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_Node_t**

        rd_kafka_TopicPartitionInfo_replicas()

        public static rd_kafka_TopicPartitionInfo_replicas ( 
            \FFI\CData|null $partition, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        partition \FFI\CData|null const rd_kafka_TopicPartitionInfo_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_Node_t**

        rd_kafka_TopicDescription_authorized_operations()

        public static rd_kafka_TopicDescription_authorized_operations ( 
            \FFI\CData|null $topicdesc, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        topicdesc \FFI\CData|null const rd_kafka_TopicDescription_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_AclOperation_t*

        rd_kafka_TopicDescription_name()

        public static rd_kafka_TopicDescription_name ( 
            \FFI\CData|null $topicdesc
         ): string|null
        
        Parameters
        topicdesc \FFI\CData|null const rd_kafka_TopicDescription_t*
        Returns
        string|null const char*

        rd_kafka_TopicDescription_topic_id()

        public static rd_kafka_TopicDescription_topic_id ( 
            \FFI\CData|null $topicdesc
         ): \FFI\CData|null
        
        Parameters
        topicdesc \FFI\CData|null const rd_kafka_TopicDescription_t*
        Returns
        \FFI\CData|null const rd_kafka_Uuid_t*

        rd_kafka_TopicDescription_is_internal()

        public static rd_kafka_TopicDescription_is_internal ( 
            \FFI\CData|null $topicdesc
         ): int|null
        
        Parameters
        topicdesc \FFI\CData|null const rd_kafka_TopicDescription_t*
        Returns
        int|null int

        rd_kafka_TopicDescription_error()

        public static rd_kafka_TopicDescription_error ( 
            \FFI\CData|null $topicdesc
         ): \FFI\CData|null
        
        Parameters
        topicdesc \FFI\CData|null const rd_kafka_TopicDescription_t*
        Returns
        \FFI\CData|null const rd_kafka_error_t*
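
        Putting the DescribeTopics accessors together — a hedged sketch assuming a class (named Library here) using this trait, a client $rk, a result queue $queue, and a Library::new() allocation helper; how a PHP array of names is marshalled to const char** depends on the binding:

            use RdKafka\FFI\Library;

            $topics = Library::rd_kafka_TopicCollection_of_topic_names(['my-topic'], 1);
            Library::rd_kafka_DescribeTopics($rk, $topics, null, $queue);

            // After polling the result event from $queue and extracting $result:
            $cntp  = Library::new('size_t');
            $descs = Library::rd_kafka_DescribeTopics_result_topics($result, \FFI::addr($cntp));
            for ($i = 0; $i < $cntp->cdata; $i++) {
                $name = Library::rd_kafka_TopicDescription_name($descs[$i]);
                $err  = Library::rd_kafka_TopicDescription_error($descs[$i]);
            }
            Library::rd_kafka_TopicCollection_destroy($topics);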

        rd_kafka_DescribeCluster()

        public static rd_kafka_DescribeCluster ( 
            \FFI\CData|null $rk, 
            \FFI\CData|null $options, 
            \FFI\CData|null $rkqu
         ): void
        
        Parameters
        rk \FFI\CData|null rd_kafka_t*
        options \FFI\CData|null const rd_kafka_AdminOptions_t*
        rkqu \FFI\CData|null rd_kafka_queue_t*

        rd_kafka_DescribeCluster_result_nodes()

        public static rd_kafka_DescribeCluster_result_nodes ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        result \FFI\CData|null const rd_kafka_DescribeCluster_result_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_Node_t**

        rd_kafka_DescribeCluster_result_authorized_operations()

        public static rd_kafka_DescribeCluster_result_authorized_operations ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        result \FFI\CData|null const rd_kafka_DescribeCluster_result_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_AclOperation_t*

        rd_kafka_DescribeCluster_result_controller()

        public static rd_kafka_DescribeCluster_result_controller ( 
            \FFI\CData|null $result
         ): \FFI\CData|null
        
        Parameters
        result \FFI\CData|null const rd_kafka_DescribeCluster_result_t*
        Returns
        \FFI\CData|null const rd_kafka_Node_t*

        rd_kafka_DescribeCluster_result_cluster_id()

        public static rd_kafka_DescribeCluster_result_cluster_id ( 
            \FFI\CData|null $result
         ): string|null
        
        Parameters
        result \FFI\CData|null const rd_kafka_DescribeCluster_result_t*
        Returns
        string|null const char*
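
        The DescribeCluster accessors follow the same pattern; a short sketch, assuming a class (named Library here) that uses this trait, plus a client $rk and result queue $queue — hypothetical names:

            use RdKafka\FFI\Library;

            Library::rd_kafka_DescribeCluster($rk, null, $queue);
            // After polling the result event from $queue and extracting $result:
            $clusterId  = Library::rd_kafka_DescribeCluster_result_cluster_id($result);
            $controller = Library::rd_kafka_DescribeCluster_result_controller($result);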

        rd_kafka_ConsumerGroupDescription_authorized_operations()

        public static rd_kafka_ConsumerGroupDescription_authorized_operations ( 
            \FFI\CData|null $grpdesc, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        grpdesc \FFI\CData|null const rd_kafka_ConsumerGroupDescription_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_AclOperation_t*

        rd_kafka_ListOffsetsResultInfo_topic_partition()

        public static rd_kafka_ListOffsetsResultInfo_topic_partition ( 
            \FFI\CData|null $result_info
         ): \FFI\CData|null
        
        Parameters
        result_info \FFI\CData|null const rd_kafka_ListOffsetsResultInfo_t*
        Returns
        \FFI\CData|null const rd_kafka_topic_partition_t*

        rd_kafka_ListOffsetsResultInfo_timestamp()

        public static rd_kafka_ListOffsetsResultInfo_timestamp ( 
            \FFI\CData|null $result_info
         ): int|null
        
        Parameters
        result_info \FFI\CData|null const rd_kafka_ListOffsetsResultInfo_t*
        Returns
        int|null int64_t

        rd_kafka_ListOffsets_result_infos()

        public static rd_kafka_ListOffsets_result_infos ( 
            \FFI\CData|null $result, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        result \FFI\CData|null const rd_kafka_ListOffsets_result_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null const rd_kafka_ListOffsetsResultInfo_t**

        rd_kafka_ListOffsets()

        public static rd_kafka_ListOffsets ( 
            \FFI\CData|null $rk, 
            \FFI\CData|null $topic_partitions, 
            \FFI\CData|null $options, 
            \FFI\CData|null $rkqu
         ): void
        
        Parameters
        rk \FFI\CData|null rd_kafka_t*
        topic_partitions \FFI\CData|null rd_kafka_topic_partition_list_t*
        options \FFI\CData|null const rd_kafka_AdminOptions_t*
        rkqu \FFI\CData|null rd_kafka_queue_t*
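
        A hedged sketch of the ListOffsets flow, assuming a class (named Library here) that uses this trait, a client $rk, a result queue $queue, an already-built partition list $partitions, and a Library::new() helper — all hypothetical names:

            use RdKafka\FFI\Library;

            // $partitions: rd_kafka_topic_partition_list_t* with each entry's
            // offset set to the offset spec to query.
            Library::rd_kafka_ListOffsets($rk, $partitions, null, $queue);

            // After polling the result event from $queue and extracting $result:
            $cntp  = Library::new('size_t');
            $infos = Library::rd_kafka_ListOffsets_result_infos($result, \FFI::addr($cntp));
            for ($i = 0; $i < $cntp->cdata; $i++) {
                $tp = Library::rd_kafka_ListOffsetsResultInfo_topic_partition($infos[$i]);
                $ts = Library::rd_kafka_ListOffsetsResultInfo_timestamp($infos[$i]);
            }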

        rd_kafka_mock_start_request_tracking()

        public static rd_kafka_mock_start_request_tracking ( 
            \FFI\CData|null $mcluster
         ): void
        
        Parameters
        mcluster \FFI\CData|null rd_kafka_mock_cluster_t*

        rd_kafka_mock_stop_request_tracking()

        public static rd_kafka_mock_stop_request_tracking ( 
            \FFI\CData|null $mcluster
         ): void
        
        Parameters
        mcluster \FFI\CData|null rd_kafka_mock_cluster_t*

        rd_kafka_mock_request_destroy()

        public static rd_kafka_mock_request_destroy ( 
            \FFI\CData|null $mreq
         ): void
        
        Parameters
        mreq \FFI\CData|null rd_kafka_mock_request_t*

        rd_kafka_mock_request_id()

        public static rd_kafka_mock_request_id ( 
            \FFI\CData|null $mreq
         ): int|null
        
        Parameters
        mreq \FFI\CData|null rd_kafka_mock_request_t*
        Returns
        int|null int32_t

        rd_kafka_mock_request_api_key()

        public static rd_kafka_mock_request_api_key ( 
            \FFI\CData|null $mreq
         ): int|null
        
        Parameters
        mreq \FFI\CData|null rd_kafka_mock_request_t*
        Returns
        int|null int16_t

        rd_kafka_mock_request_timestamp()

        public static rd_kafka_mock_request_timestamp ( 
            \FFI\CData|null $mreq
         ): int|null
        
        Parameters
        mreq \FFI\CData|null rd_kafka_mock_request_t*
        Returns
        int|null int64_t

        rd_kafka_mock_get_requests()

        public static rd_kafka_mock_get_requests ( 
            \FFI\CData|null $mcluster, 
            \FFI\CData|null $cntp
         ): \FFI\CData|null
        
        Parameters
        mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
        cntp \FFI\CData|null size_t*
        Returns
        \FFI\CData|null rd_kafka_mock_request_t**

        rd_kafka_mock_clear_requests()

        public static rd_kafka_mock_clear_requests ( 
            \FFI\CData|null $mcluster
         ): void
        
        Parameters
        mcluster \FFI\CData|null rd_kafka_mock_cluster_t*
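
        The mock request-tracking functions above are typically used together in tests. A sketch, assuming a class (named Library here) that uses this trait, an existing mock cluster $mcluster, and a Library::new() helper — all hypothetical names:

            use RdKafka\FFI\Library;

            Library::rd_kafka_mock_start_request_tracking($mcluster);
            // ...exercise the client against the mock cluster...
            $cntp = Library::new('size_t');
            $reqs = Library::rd_kafka_mock_get_requests($mcluster, \FFI::addr($cntp));
            for ($i = 0; $i < $cntp->cdata; $i++) {
                $apiKey = Library::rd_kafka_mock_request_api_key($reqs[$i]);
                Library::rd_kafka_mock_request_destroy($reqs[$i]);
            }
            Library::rd_kafka_mock_clear_requests($mcluster);
            Library::rd_kafka_mock_stop_request_tracking($mcluster);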

        Used by