The UnboundID Server SDK provides support for creating a number of different types of extensions for Ping Identity Server Products, including the Directory Server, Directory Proxy Server, Data Sync Server, and Data Governance Server. Some of those extensions include:
Access loggers may be used to record information about operations processed by the server. This includes information about connections that are established and closed, as well as whenever requests are received from clients or responses returned to clients. In the Directory Proxy Server, they may also be used to record information about requests forwarded to backend servers and their corresponding responses.
Access loggers often write their information to files, but they can also write to other targets, such as databases, message queues, or e-mail messages. The server's filtered logging framework is available for use so that each logger can be configured so that only connections, requests, and/or results matching a given set of criteria will be provided to the logger for processing.
When logging information about a new connection that has been established, loggers will be able to access information about that connection, including the connection ID, the IP address of that client, the protocol they are using to communicate with the server, and information about whether that connection is secure. For disconnects, the logger will have access to information about the connection as well as the reason the connection was closed. For requests, the logger will have access to information about the client connection and complete details of the request that was received. For results, the logger will have access to information about the client connection, as well as complete details of the request that was received and response that was sent. For operations passing through the Directory Proxy Server, the logger will also have access to information about the backend server used to process that operation.
Access loggers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Alert handlers may be used to convey alert notifications generated within the server to administrators so that they can take any appropriate action in response to them. Alert notifications report significant errors, warnings, or events occurring in the server that may be considered important enough to warrant immediate attention. For example, the server includes alert handlers that can make notifications available via e-mail messages, SNMP traps, and JMX notifications, but you may write your own alert handler to publish those alerts to other kinds of systems.
Alert notifications include an alert type, which has a name, severity, and OID. Each alert notification also includes a unique identifier as well as a message providing more specific information about the condition that triggered the alert.
Alert handlers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Error loggers may be used to record information about events occurring within the server, including warning and error conditions, informational messages, and some limited debugging information (although most debugging information will be made available through debug loggers rather than error loggers).
Error loggers often write messages to files, but they may also be used to write to other locations, including databases or message queues. Each log message includes a category and severity, and the logger may be configured to only be invoked for messages with a particular severity (both an overall severity, as well as specific severities for individual message categories if desired).
Error loggers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
HTTP operation loggers may be used to record information about communication performed by HTTP clients, including requests received and responses written. Arbitrary state information may be maintained across requests and responses, and the server will automatically provide access to elements like a unique identifier, request and response times, and the length of time required to process the operation.
HTTP operation loggers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
HTTP servlet extensions may be used to create servlets that perform custom processing in response to requests received from HTTP clients. The server includes a servlet container which supports the Java EE Servlet API version 2.5. Extensions should depend only on the standard Servlet API, and should not make any assumptions about the specific servlet engine used to implement that API.
HTTP servlet extensions may customize the paths for which they should be invoked, the set of initialization parameters, the initialization order, and an optional set of filters that may be used in conjunction with the servlet.
HTTP servlet extensions may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Key manager providers are used to provide access to Java key managers, which provide access to certificates that the server may need to present to another system. This includes cases in which the server is configured to accept connections from secure clients using SSL or StartTLS, and also when it needs to establish secure connections to other systems (e.g., as in the Directory Proxy Server or Data Sync Server connecting to a Directory Server instance) in which it should present its own certificate for client authentication.
Key manager providers may obtain access to key material through key store files of various forms, through PKCS#11 hardware tokens, or other forms. In some circumstances, it may also be useful to create a key manager provider that wraps another provider (e.g., to help select one of multiple certificates available in a key store).
Key manager providers may be created using a Java-based API (Javadoc, example source). At this time, no scripted API is available for creating custom key manager providers.
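As a point of reference, the following sketch shows how the standard Java API derives key managers from a key store. It uses an empty in-memory store purely so the example is self-contained; a real key manager provider would typically load a file-based key store or a PKCS#11 token as described above.

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;

public class KeyManagerSketch {
    // Obtain an array of Java KeyManagers from a key store. A real key
    // manager provider would load the store from a file or hardware token;
    // the empty in-memory store below is purely for illustration.
    public static KeyManager[] keyManagersFrom(KeyStore keyStore, char[] pin)
            throws Exception {
        KeyManagerFactory factory = KeyManagerFactory.getInstance(
                KeyManagerFactory.getDefaultAlgorithm());
        factory.init(keyStore, pin);
        return factory.getKeyManagers();
    }

    public static KeyManager[] emptyExample() throws Exception {
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null);  // initialize an empty in-memory store
        return keyManagersFrom(keyStore, new char[0]);
    }
}
```

A wrapping provider of the kind mentioned above would typically delegate to the key managers returned here while overriding alias selection.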
Unlike the other extension types, manage extension plugins inject custom processing at various points in the extension bundle installation process, and only while using the manage-extension tool. These plugins are not meant to be used by the server in any other way. Manage extension plugins may be invoked before and after an extension is installed or updated.
Manage extension plugins may be used to examine the installation state and perform any additional processing. For example, before install plugins may be used to perform additional qualification and/or dependency checks before an extension is installed. After update plugins may be used to migrate any configuration files used by the extension and warn users of any incompatibilities.
Manage extension plugins may only be created using the Java-based API (Javadoc, example source).
OAuth Token Handlers validate incoming SCIM requests using OAuth 2.0 bearer tokens for authentication. Implementations of this API are responsible for decoding the bearer token and checking it for authenticity and validity.
The access token provides an abstraction, replacing different authorization constructs (e.g., username and password, assertion) for a single token understood by the resource server. This abstraction enables issuing access tokens valid for a short time period, as well as removing the resource server's need to understand a wide range of authentication schemes. See "OAuth 2.0 Authorization Framework: Bearer Token Usage" (RFC 6750) for the full specification and details.
TLS security is required to use OAuth 2.0 bearer tokens, as specified in RFC 6750. A bearer token may be used by any party in possession of that token (the "bearer"), and thus needs to be protected when transmitted across the network. Implementations of this API should take special care to verify that the token came from a trusted source (using a secret key or some other signing mechanism to prove that the token is authentic). Please read "OAuth 2.0 Threat Model and Security Considerations" (RFC 6819) for a comprehensive list of security threats to consider when working with OAuth bearer tokens.
The OAuthTokenHandler is also responsible for extracting an authorization DN from the bearer token (or otherwise providing one), which will be used to apply access controls before returning a protected resource. There are also methods to extract the expiration date of the token as well as verify that the intended audience is the local server (to deal with token redirect).
OAuth Token Handlers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
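As the threat model discussion above notes, a handler should verify that a bearer token actually came from a trusted source. The following sketch shows one way to do that with an HMAC shared between the authorization server and the token handler; the secret and payload layout are illustrative assumptions, not part of any SCIM or OAuth API.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenSignatureSketch {
    // Compute an HMAC-SHA256 signature over the token payload using a
    // secret shared with the authorization server. The handler recomputes
    // this and compares it to the signature carried with the bearer token.
    public static byte[] sign(byte[] secret, String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
    }

    // MessageDigest.isEqual performs a constant-time comparison, which
    // avoids leaking information about the signature via timing.
    public static boolean isAuthentic(byte[] secret, String payload,
                                      byte[] signature) throws Exception {
        return MessageDigest.isEqual(sign(secret, payload), signature);
    }
}
```

A real handler would additionally check the token's expiration time and intended audience, as described above.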
Monitor providers are used to report information about the state of components within the server, and may be used for health checking purposes, real-time and historical monitoring, and debugging and troubleshooting. Each monitor provider instance may be used to generate a single monitor entry, generally with information about a single component within the server.
The information presented by monitor providers is obtained using an on-demand approach (in which the information is obtained only when the associated monitor entry is requested by a client), but some or all of the data may be collected in a background thread which will invoke a method in the monitor provider on a regular basis. This can be useful if the monitor provider should use a sampling mechanism to periodically update information that is not based on discrete events, or for which it is too expensive to update for each occurrence.
Monitor providers may be created using a Java-based API (Javadoc, example source). At this time, no scripted API is available for creating custom monitor providers.
Plugins are general-purpose extensions that may be used to inject custom processing at various points in the server life cycle or in interaction with clients, including pre-parse, pre-operation, and post-operation processing, LDIF import and export, and search result entry and reference handling.
Plugins may be used to alter some content before the server performs other processing with it. For example, pre-parse plugins may be used to alter the content of a request read from a client or reject that request with an error. Post-operation plugins may be used to alter the response that will be returned. LDIF import and export plugins may be used to alter the contents of entries before they are inserted into the database or written out to the LDIF file, and they may optionally suppress some or all of those entries. Search result entry plugins can also suppress or alter entries to be returned to the client, and search result references can do the same for the referral URLs in references. Pre-operation and post-operation plugins for search operations may also return entries (including entries constructed on the fly) that would not otherwise have been sent to the client, and pre-operation and post-operation plugins for most types of operations can cause intermediate response messages to be returned for those operations.
Plugins may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Trust manager providers are used to provide access to Java trust managers, which are used to determine whether to trust a certificate presented to the server. This includes cases in which a client using SSL or StartTLS presents its own certificate to the server, and also when it needs to establish secure connections to other systems (e.g., as in the Directory Proxy Server or Data Sync Server connecting to a Directory Server instance) in which that server presents its own certificate to the client.
Trust managers may make their decisions based on a number of factors. For example, many are based on the presence of the client certificate or one of its issuers in a trust store. Others may simply examine the validity dates or may even blindly accept any certificate without any validation.
Trust manager providers may be created using a Java-based API (Javadoc, example source). At this time, no scripted API is available for creating custom trust manager providers.
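To make the simpler strategies above concrete, the following sketch implements the standard Java trust manager interface so that it checks only the certificate validity dates. This is an illustration of the interface, not a recommended policy: a production trust manager would normally also verify the presented chain against a trust store.

```java
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// A minimal trust manager that accepts any certificate whose validity
// window includes the current time. Real deployments should also check
// the issuer chain against trusted certificates.
public class ValidityOnlyTrustManager implements X509TrustManager {
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        for (X509Certificate cert : chain) {
            cert.checkValidity();  // throws if expired or not yet valid
        }
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        checkClientTrusted(chain, authType);
    }

    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```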
Account status notification handlers provide a mechanism for invoking custom processing in response to certain password policy events. They are primarily intended to notify end users and/or administrators about problems or significant events that impact user accounts. Notifications may be generated for events like a user account being locked or unlocked, an account disabled or re-enabled, an account or password expired, or a password changed by a user or reset by an administrator.
Account status notification handlers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Backend initialization listeners may be used to perform custom processing just after a backend is brought online or just before a backend is taken offline. This makes it possible to react to the possibility of changes in the set of base DNs below which data exists in the server.
Backend initialization listeners may be created using a Java-based API (Javadoc).
Certificate mappers are used to associate a client certificate with a corresponding user entry in the directory. This is primarily used during the course of SASL EXTERNAL processing, in which the client uses a certificate to authenticate to the server. In this case, a trust manager is used to decide whether to trust the client certificate, and the certificate mapper is used to identify the user trying to authenticate.
Certificate mappers may use any information in the client certificate chain to make the determination. This includes content in the certificate subject, the certificate fingerprint, and any extensions it may have, as well as information from any of the issuer certificates. They will generally also need to perform internal operations in order to find entries within the server to be associated with the provided certificate chain.
Certificate mappers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
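A common mapping strategy uses a component of the certificate subject to locate the user. The following sketch extracts the CN from a subject DN using the standard Java naming API; the subsequent internal search described above (e.g., for an entry whose uid matches the CN) is omitted, and the sample DN is purely illustrative.

```java
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class SubjectCnMapperSketch {
    // Extract the CN from a certificate subject DN. A certificate mapper
    // would then perform an internal search to find the user entry with
    // a matching identifier; that step is not shown here.
    public static String commonName(String subjectDn) throws Exception {
        LdapName dn = new LdapName(subjectDn);
        for (Rdn rdn : dn.getRdns()) {
            if (rdn.getType().equalsIgnoreCase("cn")) {
                return rdn.getValue().toString();
            }
        }
        return null;  // no CN component in the subject
    }
}
```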
Change subscription handlers may be used to receive notifications of changes processed in the server which match a given set of criteria, and to perform custom processing in response to those changes. For example, this may be used to keep track of changes to a particular attribute so that additional processing (e.g., notifying another system of the change) may be performed.
Note that it is technically possible to achieve the same result with a plugin. However, if a change subscription handler provides all of the functionality that you need, then it offers a couple of advantages over a plugin. For example, change subscription handlers provide a unique sequence number to each change in the server so that you can more easily determine the relative order of changes being processed. In addition, multiple change subscriptions can be created in the server, and the change subscription handler will be provided with a set of all of the subscriptions matched by each change.
Change subscription handlers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Cipher stream providers make it possible for the server to obtain input and output streams for reading and writing encrypted data. This capability is primarily used for interacting with encrypted entries, but may also be used for other features.
The primary benefits of a cipher stream provider are that they make it possible to customize the cipher that will be used for encryption and decryption, and perhaps more importantly, that they make it possible to customize the manner in which the encryption keys are obtained. Customizing the source of the encryption key can provide an even greater level of security because it can help prevent an attacker with access to the underlying system from obtaining the keys needed to decrypt the data.
Cipher stream providers may be created using a Java-based API (Javadoc, example source).
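The shape of a cipher stream provider can be illustrated with the standard Java cryptography streams: a component that hands out paired encrypting and decrypting stream wrappers around an underlying stream. This sketch uses AES in ECB mode only to stay self-contained; a production provider would use an authenticated mode (e.g., AES/GCM) with per-stream initialization vectors, and, as noted above, would obtain the key from a hardened source.

```java
import java.io.InputStream;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CipherStreamSketch {
    private final SecretKey key;

    public CipherStreamSketch(SecretKey key) {
        this.key = key;
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(128);
        return generator.generateKey();
    }

    // Wrap an output stream so that anything written through it is
    // encrypted before reaching the underlying stream.
    public OutputStream encrypting(OutputStream out) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return new CipherOutputStream(out, cipher);
    }

    // Wrap an input stream so that encrypted data read through it is
    // transparently decrypted.
    public InputStream decrypting(InputStream in) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, key);
        return new CipherInputStream(in, cipher);
    }
}
```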
Connection criteria make it possible for the server to classify a client connection based on what the server knows about the connection (e.g., the client and server address, the communication protocol, security level, authentication state, etc.). Connection criteria may be used in various ways in the server, including in filtered logging and selecting client connection policies.
Connection criteria may be created using a Java-based API (Javadoc).
Extended operation handlers are used to provide the logic that should be invoked whenever the server receives a particular extended request from a client. Each extended operation handler may register for one or more extended operation OIDs, and that handler will be invoked for any extended requests received with one of those OIDs.
The extended operation handler is responsible for decoding the request value and encoding the response value, if applicable. In such cases, the ASN.1 support provided by the UnboundID LDAP SDK for Java should be used to perform the value encoding and decoding.
Extended operation handlers may be created using a Java-based API (Javadoc, example source). At this time, no scripted API is available for creating custom extended operation handlers.
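To show what the value encoding involves, the following standalone sketch DER-encodes and decodes an ASN.1 OCTET STRING by hand. It handles only the short-form length (values up to 127 bytes) to stay small; in practice the UnboundID LDAP SDK's ASN.1 classes should be used instead, as noted above.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class Asn1OctetStringSketch {
    // DER-encode an OCTET STRING (universal tag 0x04) using the
    // short-form length octet, which covers values up to 127 bytes.
    public static byte[] encode(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (bytes.length > 127) {
            throw new IllegalArgumentException("long-form length not sketched");
        }
        byte[] encoded = new byte[bytes.length + 2];
        encoded[0] = 0x04;                 // OCTET STRING tag
        encoded[1] = (byte) bytes.length;  // short-form length
        System.arraycopy(bytes, 0, encoded, 2, bytes.length);
        return encoded;
    }

    public static String decode(byte[] encoded) {
        if (encoded[0] != 0x04 || (encoded[1] & 0x80) != 0) {
            throw new IllegalArgumentException("not a short-form OCTET STRING");
        }
        return new String(Arrays.copyOfRange(encoded, 2, 2 + encoded[1]),
                StandardCharsets.UTF_8);
    }
}
```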
Identity mappers are used to associate a username or authorization identity with a user entry in the server. They are used in many places within the server, including in the course of SASL authentication processing with certain mechanisms, as well as the use of some controls like the proxied authorization control or the intermediate client control.
Identity mappers will generally need to process internal operations within the server in order to establish the mapping. They may or may not need to transform the given username in some way during the course of that processing.
Identity mappers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
One-time password delivery mechanisms are used to transmit one-time passwords (which may be used in conjunction with the UNBOUNDID-DELIVERED-OTP SASL mechanism to perform multifactor authentication) via some out-of-band mechanism, like SMS, e-mail, voice calls, etc.
One-time password delivery mechanisms may be created using a Java-based API (Javadoc).
Password generators are used to create new passwords for users during the course of processing for the password modify extended operation when the request does not explicitly specify a new password for the user. Note that passwords created by password generators will not be subject to checking by password validators, so it is recommended that any password generators which are enabled be able to generate sufficiently strong passwords.
Password generators may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
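The core of such a generator can be sketched in a few lines: draw characters from an alphabet using a cryptographically strong random source. The alphabet and length here are illustrative choices, not values mandated by the server.

```java
import java.security.SecureRandom;

public class RandomPasswordGenerator {
    private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a password of the requested length by drawing characters
    // uniformly from the alphabet with a cryptographically strong RNG.
    public static String generate(int length) {
        StringBuilder password = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            password.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return password.toString();
    }
}
```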
Password storage schemes are used to encode clear-text passwords so that they may be stored in the database in a secure manner, and to determine whether a provided clear-text password matches the encoded representation stored in the server. Password storage schemes may use either one-way digests (in which it is not possible to determine the original clear-text password from the encoded representation) or reversible encryption. They may also optionally provide support for the authentication password syntax as described in RFC 3112.
Password storage schemes may be created using a Java-based API (Javadoc, example source). At this time, no scripted API is available for creating custom password storage schemes.
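The one-way digest approach described above can be sketched with a salted SHA-256 scheme: encoding produces base64(digest || salt), and matching re-digests the candidate password with the stored salt. The layout mirrors the classic SSHA convention but is an illustrative assumption, not the server's own encoding.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class SaltedSha256Scheme {
    private static final int SALT_BYTES = 8;

    // Encode as base64(digest || salt). This is a one-way digest: the
    // clear-text password cannot be recovered from the encoded form.
    public static String encode(String clearPassword) throws Exception {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        byte[] digest = digest(clearPassword, salt);
        byte[] combined = new byte[digest.length + salt.length];
        System.arraycopy(digest, 0, combined, 0, digest.length);
        System.arraycopy(salt, 0, combined, digest.length, salt.length);
        return Base64.getEncoder().encodeToString(combined);
    }

    // Re-digest the candidate password with the stored salt and compare
    // in constant time.
    public static boolean matches(String clearPassword, String encoded)
            throws Exception {
        byte[] combined = Base64.getDecoder().decode(encoded);
        int digestLength = combined.length - SALT_BYTES;
        byte[] storedDigest = Arrays.copyOfRange(combined, 0, digestLength);
        byte[] salt = Arrays.copyOfRange(combined, digestLength, combined.length);
        return MessageDigest.isEqual(storedDigest, digest(clearPassword, salt));
    }

    private static byte[] digest(String password, byte[] salt) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(password.getBytes(StandardCharsets.UTF_8));
        sha.update(salt);
        return sha.digest();
    }
}
```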
Password validators are used to determine whether a proposed clear-text password is acceptable for use in the server. They are primarily used to determine whether a password is strong enough to resist attacks by malicious users attempting to guess the password. Password validators will have access to the full entry for the user with whom the password is associated, so it is possible to do things like ensuring the password doesn't match other content in the user's entry, and it may also have access to a clear-text version of the user's current password (e.g., to ensure that the new password is sufficiently different from the previous one).
Password validators may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
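A validator of the kind described above might enforce a minimum length and reject passwords containing values from the user's entry. This sketch stands in for that logic with a plain list of entry values; the length threshold is an illustrative choice.

```java
import java.util.List;
import java.util.Locale;

public class SimplePasswordValidator {
    // Reject passwords that are too short or that contain any attribute
    // value from the user's entry (e.g., uid or cn), case-insensitively.
    public static boolean isAcceptable(String proposedPassword,
                                       List<String> entryValues) {
        if (proposedPassword.length() < 8) {
            return false;
        }
        String lower = proposedPassword.toLowerCase(Locale.ROOT);
        for (String value : entryValues) {
            if (lower.contains(value.toLowerCase(Locale.ROOT))) {
                return false;
            }
        }
        return true;
    }
}
```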
Request criteria make it possible for the server to classify an operation request based on what the server knows about the request, including the content of that request and information about the client that issued it. Request criteria may be used in various ways in the server, including in filtered logging and indicating whether to invoke certain processing.
Request criteria may be created using a Java-based API (Javadoc).
Result criteria make it possible for the server to classify an operation result based on what the server knows about that result, including its content, the content of the associated request, and information about the client that issued the request. Result criteria may be used in various ways in the server, including in filtered logging and indicating whether to invoke certain processing.
Result criteria may be created using a Java-based API (Javadoc).
SASL mechanism handlers provide the ability for the server to support custom authentication mechanisms using the Simple Authentication and Security Layer as described in RFC 4422. This can be used to allow the server to offer support for types of authentication not available out of the box, or to integrate authentication with other kinds of systems.
SASL mechanism handlers may be created using a Java-based API (Javadoc, example source).
Search entry criteria make it possible for the server to classify a search result entry based on what the server knows about the entry, including the content of that entry and of the associated search request, and information about the client that issued the request. Search entry criteria may be used in various ways in the server, including in filtered logging and indicating whether to invoke certain processing.
Search result entry criteria may be created using a Java-based API (Javadoc).
Search reference criteria make it possible for the server to classify a search result reference based on what the server knows about the reference, including the content of that reference and of the associated search request, and information about the client that issued the request. Search reference criteria may be used in various ways in the server, including in filtered logging and indicating whether to invoke certain processing.
Search result reference criteria may be created using a Java-based API (Javadoc).
Tasks provide a mechanism for invoking custom processing on demand, either immediately or scheduled to be processed at a specified time in the future. Tasks are scheduled by adding a properly-formatted entry below "cn=Scheduled Tasks,cn=tasks", and support is included in the Commercial Edition of the UnboundID LDAP SDK for Java for creating, scheduling, and interacting with task entries. Tasks are generally used for administrative processing, but may be used for a wide range of purposes.
Tasks may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Note that the server must be configured as follows to allow creation of third-party tasks:
dsconfig set-global-configuration-prop \
     --add allowed-task:com.unboundid.directory.sdk.extensions.ThirdPartyTask
Uncached attribute criteria implementations can be used to determine on a per-attribute basis whether that attribute should be stored in cached or uncached form (i.e., in the id2entry database or the uncached-id2entry database). The entire entry will be available when making the determination, so the logic may be based on other aspects of the entry, like the presence or absence of other attributes (or attribute values), or the location of the entry in the DIT. Note that uncached attribute criteria will only be evaluated for entries that should not be completely uncached as determined by uncached entry criteria.
Uncached attribute criteria may be created using either a Java-based API (Javadoc) or as Groovy scripts (Javadoc).
Uncached entry criteria implementations can be used to determine whether a given entry should be stored in cached or uncached form (i.e., in the id2entry database or the uncached-id2entry database). Any entry that is completely uncached will not be evaluated against uncached attribute criteria. Entries that are not determined to be completely uncached may still be partially uncached if the configured uncached attribute criteria indicates that one or more of the attributes in the entry should be uncached.
Uncached entry criteria may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Virtual attribute providers may be used to provide attributes whose values are created on demand rather than stored in the database. If an entry already has one or more real values for the attribute, then those real values may be used in place of the virtual values, the virtual values may override the real values, or the real and virtual values may be merged and provided together.
Virtual attribute providers will have access to the rest of the entry in order to use its content in the course of generating the virtual values, and they may also perform internal operations (or potentially access data in external systems) in order to generate the values. The virtual attribute provider is only invoked to generate its values in the event that they are actually needed (e.g., for access control processing or to be returned to the client), so if virtual attributes are used for operational attribute types then they may not be constructed unless they are explicitly requested by the client. As a result, expensive processing required to generate virtual attributes may not have a significant impact on normal operation.
Virtual attribute providers may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
LDAP health checks are used to assess the availability of a backend server to be accessed through the Directory Proxy Server. The health will be assigned as a combination of a state (AVAILABLE, DEGRADED, or UNAVAILABLE) and a score (an integer between 0 and 10, with higher values being better). Health checks will be used by load-balancing algorithms in order to select an appropriate server to use when processing a given request.
Health checks are invoked in both proactive and reactive contexts. If all operations passing through the Directory Proxy Server are succeeding, then health checks will be invoked at regular intervals, which may help detect problems that are slowly building in order to potentially take a server out of service (and notify administrators about it) before it may impact client operations. However, in the event that an error is encountered when processing a request through the Directory Proxy Server, health checks may be immediately invoked to help quickly determine whether that backend server may be having a problem.
LDAP health checks may take any number of factors into account, including the result of attempts to process various operations, the length of time required to process those operations, the contents of entries in the server, or information obtained from external sources. Note that it may be a good idea to have different requirements for downgrading the health of a server than for upgrading it again. For example, if a health check is based on the length of time required to process an operation, then you may want to enforce more strict response time requirements for transitioning a server from DEGRADED to AVAILABLE than was originally required to downgrade it from AVAILABLE to DEGRADED. This can help avoid a ping-pong effect that could result from a server hovering near the border between two states.
LDAP health checks may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
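The ping-pong avoidance described above can be sketched as a small state machine with a stricter threshold for upgrading than for downgrading. The 500 ms and 200 ms thresholds are illustrative assumptions, as is reducing the health to a two-state model.

```java
public class ResponseTimeHealthCheck {
    public enum State { AVAILABLE, DEGRADED }

    // A server is downgraded once responses exceed 500 ms, but must get
    // back under 200 ms before it is upgraded again. The gap between the
    // two thresholds keeps a server hovering near one boundary from
    // flapping between states.
    private static final long DOWNGRADE_MILLIS = 500;
    private static final long UPGRADE_MILLIS = 200;

    private State state = State.AVAILABLE;

    public State observe(long responseTimeMillis) {
        if (state == State.AVAILABLE && responseTimeMillis > DOWNGRADE_MILLIS) {
            state = State.DEGRADED;
        } else if (state == State.DEGRADED && responseTimeMillis < UPGRADE_MILLIS) {
            state = State.AVAILABLE;
        }
        return state;
    }
}
```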
Placement algorithms are used in an entry-balancing environment to select the server set in which an entry should be placed when processing an add operation. Note that placement algorithms will be used only for entries located immediately below the balancing point; entries that are two or more levels below the balancing point will be placed in the same server set as their immediate parent.
Placement algorithms may take a number of factors into account when making the determination, including the content of the entry to be added and information about the backend sets available to be selected. For example, you may wish to perform placement based on a hash of the entry's DN or the value of a specified attribute, or you may want to select the backend set with the smallest number of entries. Alternately, it may be desirable to always add new entries to the same server set until it reaches a given size, and then always add to another set, so that it is easier to scale horizontally as new users are added.
Placement algorithms may be created using a Java-based API (Javadoc, example source). At this time, no scripted API is available for creating custom placement algorithms.
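The DN-hash strategy mentioned above can be sketched in a few lines: normalize the DN and hash it to pick a set index. The normalization here (trim and lowercase) is a simplification of real DN normalization, used only to make the example deterministic.

```java
public class DnHashPlacement {
    // Choose a backend set for a new entry by hashing its normalized DN.
    // Math.floorMod keeps the index non-negative even when the hash code
    // is negative, so the result is always a valid set index.
    public static int chooseSet(String entryDn, int numberOfSets) {
        String normalized = entryDn.trim().toLowerCase();
        return Math.floorMod(normalized.hashCode(), numberOfSets);
    }
}
```

Because the mapping is deterministic, the same DN always lands in the same set, which is essential for an entry-balancing deployment.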
Proxy transformations may be used to alter the contents of requests and responses as they pass through the Directory Proxy Server. Although you cannot change one type of request to a different type of request, you can alter any aspect of the request or alternately prevent the request from being forwarded. This may be used to provide functionality like renaming attributes or transforming values so that clients which expect a certain behavior can be satisfied even if the data in the backend servers doesn't match that client's expectations.
For search operations, proxy transformations may also be used to transform or suppress search result entries and references, and they may also inject new entries or references that would not have otherwise been returned. For all types of operations with responses, you can also transform, suppress, and/or inject intermediate response messages.
Proxy transformations may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
JDBC Sync Sources may be used as adapters in order to synchronize data out of relational database systems. Since the Data Sync Server is LDAP-centric, this API allows you to take database records and convert them into LDAP entries which can then be processed by the Data Sync Server.
There are facilities for detecting changes, fetching full database entries, acknowledging completed changes, persisting the state of synchronization, cleaning up the changelog or equivalent mechanism in the database, and for performing a resync operation. There is a lot of flexibility in the API and in what you can do with the script implementation, making it possible to support a wide variety of use cases.
JDBC Sync Sources may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
JDBC Sync Destinations may be used as adapters in order to synchronize data into relational database systems. Since the Data Sync Server is LDAP-centric, this API allows you to take LDAP entries from the Data Sync Server and convert them into database records which can then be applied to the database.
There are facilities for fetching existing database entries, inserting, updating, and deleting entries on the database, and for performing a resync operation. There is a lot of flexibility in the API and in what you can do with the script implementation, making it possible to support a wide variety of use cases.
JDBC Sync Destinations may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
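One recurring step in a JDBC Sync Destination is mapping a synchronized entry's attributes onto a table row. The sketch below shows that mapping for the insert case; the table and column names are examples only, and a real implementation would bind the values through a PreparedStatement.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of turning an entry's attribute names into a parameterized
// INSERT statement for the destination database.
public class InsertBuilderSketch {

    static String buildInsert(String table, List<String> columns) {
        String columnList = String.join(", ", columns);
        String placeholders = columns.stream()
                .map(c -> "?")
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table
                + " (" + columnList + ") VALUES (" + placeholders + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildInsert("accounts", List.of("uid", "cn", "mail")));
        // prints INSERT INTO accounts (uid, cn, mail) VALUES (?, ?, ?)
    }
}
```

Using placeholders rather than concatenated values keeps the statement safe against SQL injection regardless of what the source entry contains.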
LDAP Sync Source Plugins may be used either to prevent certain operations from being synchronized or to alter the source entry that is synchronized. An LDAP Sync Source Plugin has access to the SyncOperation, the source entry after it has been fetched, and an LDAP connection to the source server.
LDAP Sync Source Plugins may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
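The filtering decision such a plugin makes can be sketched as a simple predicate over the fetched source entry. The map-based entry model and the "accountDisabled" attribute are assumed examples, not standard schema elements or SDK types.

```java
import java.util.List;
import java.util.Map;

// Sketch of an LDAP Sync Source Plugin's filtering decision: inspect the
// fetched source entry and veto synchronization for entries that should
// not flow to the destination.
public class SourceFilterSketch {

    // Returns false to drop the operation, true to let it continue.
    static boolean continueSync(Map<String, List<String>> sourceEntry) {
        List<String> disabled = sourceEntry.get("accountDisabled");
        return disabled == null || !disabled.contains("true");
    }

    public static void main(String[] args) {
        Map<String, List<String>> active = Map.of("uid", List.of("jdoe"));
        Map<String, List<String>> disabled = Map.of(
                "uid", List.of("old"), "accountDisabled", List.of("true"));
        System.out.println(continueSync(active));    // prints true
        System.out.println(continueSync(disabled));  // prints false
    }
}
```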
LDAP Sync Destination Plugins may be used either to prevent certain operations from being synchronized or to alter how changes are applied at the destination. These plugins can be invoked before or after the destination entry is fetched, and before a create, modify, or delete synchronization operation is applied at the destination. Each of these plugin points has access to an LDAP connection to the destination server.
LDAP Sync Destination Plugins may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
Sync Sources may be used as generic adapters in order to synchronize data from an arbitrary endpoint. This covers use cases when an exact Sync Source implementation is not available for a platform. This API does not provide any protocol-specific connection management, and instead leaves it to the extension to define the interaction with the endpoint. This allows you to synchronize data from virtually any type of source, whether it be a flat file, web service, or a proprietary platform.
There are facilities for detecting changes, fetching existing entries, acknowledging completed changes back to the endpoint, and performing a resync operation. There is a lot of flexibility in the API and in what you can do with the implementation, making it possible to support a wide variety of use cases. For example, the extension could set up an HTTP listener and listen for changes from client applications, effectively making it a "push" model. Or, if the source endpoint doesn't provide logical separation between a "change record" and the actual "entry", you can easily skip the "fetch full entry" stage of processing and just synchronize the original change as if it were the full entry.
Sync Sources may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
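The "push" model described above can be sketched as a listener that enqueues change notifications as they arrive, with the sync source's polling method simply draining the queue. The method names and string-based change representation are illustrative assumptions, not the Server SDK's API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of a push-model Sync Source: an external listener (for example,
// an HTTP endpoint the extension sets up) enqueues changes, and the
// polling loop drains whatever has accumulated.
public class PushModelSketch {

    private final Queue<String> pendingChanges = new ArrayDeque<>();

    // Called by the listener thread when a client application pushes a change.
    synchronized void changeReceived(String change) {
        pendingChanges.add(change);
    }

    // Called by the polling loop; returns whatever has been pushed since
    // the last call.
    synchronized List<String> getNextBatchOfChanges() {
        List<String> batch = new ArrayList<>(pendingChanges);
        pendingChanges.clear();
        return batch;
    }

    public static void main(String[] args) {
        PushModelSketch source = new PushModelSketch();
        source.changeReceived("MODIFY uid=jdoe");
        source.changeReceived("DELETE uid=old");
        System.out.println(source.getNextBatchOfChanges().size()); // prints 2
        System.out.println(source.getNextBatchOfChanges().size()); // prints 0
    }
}
```

If each pushed change already carries the full entry, the "fetch full entry" stage can simply return the pushed data unmodified, as the text above notes.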
Sync Destinations may be used as generic adapters in order to synchronize data into an arbitrary endpoint. Typically these will be used when the destination is non-LDAP and non-JDBC, since there are already specific endpoint types for those environments. This API also supports one-way notifications from the Data Sync Server when a sync pipe is configured in notification mode.
There are facilities for fetching existing entries, creating, modifying, and deleting entries on the endpoint. There is a lot of flexibility in the API and in what you can do with the implementation, making it possible to support a wide variety of use cases. For example, changes can be pushed to clients via HTTP, added to a third-party JMS queue, or synchronized via some other protocol to a destination endpoint.
Sync Destinations may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
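The create/modify/delete/fetch facilities listed above can be sketched as a dispatch against the endpoint. Here an in-memory map stands in for the real endpoint (an HTTP service, a JMS queue, and so on), and the method names are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a generic Sync Destination's operations against its endpoint.
public class DestinationDispatchSketch {

    private final Map<String, String> endpoint = new HashMap<>();

    void createEntry(String id, String content) {
        endpoint.put(id, content);
    }

    void modifyEntry(String id, String content) {
        if (endpoint.containsKey(id)) {
            endpoint.put(id, content);
        }
    }

    void deleteEntry(String id) {
        endpoint.remove(id);
    }

    // The "fetch existing entries" facility, used to decide between
    // create and modify when synchronizing.
    String fetchEntry(String id) {
        return endpoint.get(id);
    }

    public static void main(String[] args) {
        DestinationDispatchSketch dest = new DestinationDispatchSketch();
        dest.createEntry("uid=jdoe", "cn=Jane Doe");
        dest.modifyEntry("uid=jdoe", "cn=Jane Q. Doe");
        System.out.println(dest.fetchEntry("uid=jdoe")); // prints cn=Jane Q. Doe
        dest.deleteEntry("uid=jdoe");
        System.out.println(dest.fetchEntry("uid=jdoe")); // prints null
    }
}
```

In notification mode, the fetch step is skipped entirely and each change is simply delivered to the endpoint one way.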
Sync Pipe Plugins have access to synchronization operations within the core of the synchronization processing. They can be used with any type of endpoint, but do not have access to endpoint-specific resources, such as an LDAP connection to a source server.
These extensions may be used to 1) filter out certain changes from being synchronized, 2) add and remove attributes that should be synchronized with the destination independent of whether they changed at the source or not, 3) manipulate the changes that are synchronized to ignore certain modified attributes or change the representation of modified attributes, or 4) skip certain steps in Sync Pipe processing, such as attribute and DN mapping.
Sync Pipe Plugins may be created using either a Java-based API (Javadoc, example source) or as Groovy scripts (Javadoc, example source).
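As an illustration of ignoring certain modified attributes, the sketch below drops selected attributes from the set of changes so they are never synchronized. The attribute names and the list-based change model are illustrative assumptions.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of a Sync Pipe Plugin that strips attributes from the set of
// changes before they reach the destination.
public class StripAttributesSketch {

    static List<String> removeIgnoredAttributes(List<String> modifiedAttributes,
                                                List<String> ignored) {
        return modifiedAttributes.stream()
                .filter(a -> ignored.stream().noneMatch(a::equalsIgnoreCase))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> changed = List.of("cn", "pwdChangedTime", "mail");
        List<String> kept =
                removeIgnoredAttributes(changed, List.of("pwdChangedTime"));
        System.out.println(kept); // prints [cn, mail]
    }
}
```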
Access Token Validators are used to validate access tokens submitted by client applications for access to protected resources of the Data Broker. The default Access Token Validator validates access tokens that are issued by the Data Broker itself. Access Token Validator extensions may be installed to allow the Data Broker to accept access tokens issued by other identity providers.
An Access Token Validator is responsible for decoding an incoming access token and returning token metadata that is similar in content to that specified by RFC 7662, including whether the token is valid and what scopes are granted to the token.
Access Token Validators may be created using a Java-based API (Javadoc, example source).
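The validator contract sketched below takes an opaque access token and returns RFC 7662-style introspection metadata (an active flag plus granted scopes). The hard-coded token store stands in for a real identity provider lookup, and the record and field names are assumptions rather than the SDK's types.

```java
import java.util.Map;
import java.util.Set;

// Sketch of an Access Token Validator: decode/look up a token and return
// introspection-style metadata similar in content to RFC 7662.
public class TokenValidatorSketch {

    record TokenMetadata(boolean active, Set<String> scopes) {}

    // Stand-in for tokens issued by an external identity provider.
    private static final Map<String, Set<String>> ISSUED_TOKENS =
            Map.of("tok-123", Set.of("profile", "email"));

    static TokenMetadata validate(String accessToken) {
        Set<String> scopes = ISSUED_TOKENS.get(accessToken);
        return scopes == null
                ? new TokenMetadata(false, Set.of())
                : new TokenMetadata(true, scopes);
    }

    public static void main(String[] args) {
        System.out.println(validate("tok-123").active()); // prints true
        System.out.println(validate("bogus").active());   // prints false
    }
}
```

A validator for a real provider would typically verify a signature or call the provider's introspection endpoint instead of consulting a local map.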
Identity Authenticators provide ways to authenticate a user or provide additional assurance about the identity of a user who is already authenticated. Each Identity Authenticator accepts a type of credential submitted by a user and either validates or rejects that credential.
Identity Authenticators are organized into Authentication Chains that define a sequence of events required for a user to authenticate. To enable an Identity Authenticator extension, add it to an active Authentication Chain.
Identity Authenticators may be created using a Java-based API (Javadoc, example source).
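A chain of authenticators can be sketched as a sequence of credential checks that must all succeed, which is one reading of the "sequence of events required for a user to authenticate" described above; predicates stand in for real Identity Authenticator extensions, and the example checks are invented.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of an Authentication Chain: each link either validates or rejects
// the submitted credential, and the chain succeeds only if every link does.
public class AuthChainSketch {

    static boolean authenticate(List<Predicate<String>> chain, String credential) {
        for (Predicate<String> authenticator : chain) {
            if (!authenticator.test(credential)) {
                return false;  // reject as soon as any link fails
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Two illustrative links: a length check and a known-credential check.
        List<Predicate<String>> chain = List.of(
                c -> c.length() >= 8,
                c -> c.equals("s3cret-code"));
        System.out.println(authenticate(chain, "s3cret-code")); // prints true
        System.out.println(authenticate(chain, "short"));       // prints false
    }
}
```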
Policy Information Providers may be used to retrieve externally stored data that is required during XACML policy evaluation.
Whenever a policy references a request attribute or content that cannot be found in the incoming request document, the Data Broker invokes the Policy Information Point (PIP) in an attempt to retrieve the value of the requested information. The PIP consists of one or more Policy Information Providers. For each incoming request, the PIP cycles through the configured Policy Information Providers until it finds one that can retrieve the requested content. Adding custom Policy Information Providers allows the Data Broker to be extended to include a variety of external data in its decision-making process.
When invoked to retrieve an attribute, custom Policy Information Providers are provided access to the current XACML request context. This allows them to interrogate other parts of the request, if needed, to help determine what value should be returned for a particular attribute.
Policy Information Providers may be created using a Java-based API (Javadoc, example source).
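The PIP's cycling behavior described above can be sketched as a loop that asks each configured provider in turn until one can supply the requested attribute. Providers are modeled as functions from an attribute ID to an optional value; the attribute names are examples only.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Sketch of the Policy Information Point lookup loop: try each provider
// until one can retrieve the requested content.
public class PipCycleSketch {

    static Optional<String> resolve(
            List<Function<String, Optional<String>>> providers,
            String attributeId) {
        for (Function<String, Optional<String>> provider : providers) {
            Optional<String> value = provider.apply(attributeId);
            if (value.isPresent()) {
                return value;  // first provider that can answer wins
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Two illustrative providers backed by fixed data.
        Function<String, Optional<String>> hrDirectory =
                id -> Optional.ofNullable(
                        Map.of("department", "engineering").get(id));
        Function<String, Optional<String>> riskService =
                id -> Optional.ofNullable(Map.of("riskScore", "low").get(id));

        System.out.println(
                resolve(List.of(hrDirectory, riskService), "riskScore").get());
        // prints low
    }
}
```

A real provider would also receive the XACML request context, letting it inspect other parts of the request when deciding what value to return.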
Store Adapters may be used as a native interface to a backend data store (such as an RDBMS or a web service). Store Adapters are aggregated into a SCIM Resource Type in the Data Governance Server, which provides a SCIM front-end that can be backed by any number and type of native data stores (via Store Adapters).
When using multiple Data Broker instances in a deployment, the native data store should be accessible from all instances. The Store Adapter API allows you to advertise the native schema for the underlying data store. The Administrative Console can then be used to create mappings between this native schema and the common SCIM Resource Type Schema which is exposed via the SCIM front-end.
Store Adapters may be created using a Java-based API (Javadoc, file-based store adapter example source).
Store Adapter Plugins may be used to perform processing on Store Adapter operations before and after those operations are processed by a Store Adapter. The Store Adapter Plugin API has pre-request methods to intercept and make changes to store adapter requests before they are processed by the Store Adapter, and corresponding post-request methods to intercept and make changes to the results returned by the Store Adapter.
Store Adapter Plugins may be created using a Java-based API (Javadoc, store adapter plugin example source).
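The pre-request/post-request pattern can be sketched as a pair of hooks around a retrieve operation: one adjusts the request before the Store Adapter sees it, the other adjusts the result afterward. The list-based request/result shapes and the "lastModified"/"internalId" attribute names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a Store Adapter Plugin's interception points around a
// retrieve operation.
public class StoreAdapterPluginSketch {

    // Pre-request hook: force the request to also fetch an attribute the
    // plugin needs, even if the caller did not ask for it.
    static List<String> preRetrieve(List<String> requestedAttributes) {
        List<String> adjusted = new ArrayList<>(requestedAttributes);
        if (!adjusted.contains("lastModified")) {
            adjusted.add("lastModified");
        }
        return adjusted;
    }

    // Post-request hook: strip an internal attribute from the result
    // before it is returned to the caller.
    static List<String> postRetrieve(List<String> resultAttributes) {
        List<String> adjusted = new ArrayList<>(resultAttributes);
        adjusted.remove("internalId");
        return adjusted;
    }

    public static void main(String[] args) {
        System.out.println(preRetrieve(List.of("uid", "cn")));
        // prints [uid, cn, lastModified]
        System.out.println(postRetrieve(List.of("uid", "cn", "internalId")));
        // prints [uid, cn]
    }
}
```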
Telephony Messaging Providers provide a way to deliver a textual message to a user by telephone. The message can be sent via an SMS text message, or by making a voice call and converting the message from text to spoken words. The language and locale of the message are also available to the messaging provider so that the message can be converted to spoken words in the appropriate language and locale.
The Telephony Delivered Code Identity Authenticator requires a Telephony Messaging Provider to deliver a verification code to a user. A voice call messaging provider implementation can decide how the verification code is to be represented in the message, and hence how the code is spoken.
Telephony Messaging Providers may be created using a Java-based API (Javadoc).
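One concrete form of the code-representation decision mentioned above is reading the verification code digit by digit. The sketch below formats a code with separating pauses so a text-to-speech service reads individual digits; this is one possible implementation choice, not the SDK's defined behavior.

```java
// Sketch of a voice-call messaging provider's code formatting: spell the
// verification code out digit by digit for clearer text-to-speech output.
public class SpokenCodeSketch {

    // "483912" -> "4, 8, 3, 9, 1, 2" so TTS reads each digit separately.
    static String spokenForm(String verificationCode) {
        StringBuilder spoken = new StringBuilder();
        for (int i = 0; i < verificationCode.length(); i++) {
            if (i > 0) {
                spoken.append(", ");
            }
            spoken.append(verificationCode.charAt(i));
        }
        return spoken.toString();
    }

    public static void main(String[] args) {
        System.out.println("Your verification code is " + spokenForm("483912"));
        // prints Your verification code is 4, 8, 3, 9, 1, 2
    }
}
```

A locale-aware provider would additionally localize the surrounding sentence using the language and locale supplied with the message.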