This is a short and sweet tutorial on how to deploy CAS via the WAR Overlay method.
This tutorial specifically requires and focuses on CAS 7.0.x.
Overlays are a strategy to combat repetitive code and/or resources. Rather than downloading the CAS codebase and building it from source, overlays allow you to download a pre-built vanilla CAS web application server provided by the project itself, override/insert specific behavior into it and then merge it all back together to produce the final (web application) artifact. You can find a lot more about how overlays work here.
Please note that a CAS WAR Overlay can also be generated on demand using the CAS Initializr.
The concept of the WAR Overlay is NOT a CAS invention. It’s specifically an Apache Maven feature and of course, there are techniques and plugins available to apply the same concept to Gradle-based builds as well. For this tutorial, the Gradle overlay we will be working with is available here. Be sure to check out the appropriate branch, that is 7.0.
The quickest way to generate a CAS WAR overlay starter template is via the following:
curl -k https://getcas.apereo.org/starter.tgz \
-d type=cas-overlay -d baseDir=overlay | tar -xzvf -
…if you prefer, you could always download and clone this repository.
Once you have forked and cloned the repository locally, or when you have generated the WAR overlay yourself using CAS Initializr, you’re ready to begin.
The master
branch of the repository applies to CAS 7.0.x
deployments. That may not necessarily remain true when you start your own deployment, so examine the branches and make sure you check out
the one matching your intended CAS version.
Similar to its Maven counterpart, a Gradle WAR overlay is composed of several facets, the most important of which are the build.gradle
and gradle.properties
files. These are build-descriptor files whose job is to teach Gradle how to obtain, build, configure (and in certain cases deploy) CAS artifacts.
The CAS Gradle Overlay is composed of several sections. The ones you need to worry about are the following.
In the gradle.properties
file, project settings and versions are specified:
cas.version=7.0.0
The gradle.properties
file describes what versions of CAS, Spring Boot, and Java are required for the deployment. You are in practice mostly concerned with the cas.version
setting and as new (maintenance) releases come out, it would be sufficient to simply update that version and re-run the build.
This might be a good time to review the CAS project’s Release Policy as well as Maintenance Policy.
You should do your best to stay current with CAS releases, particularly those that are issued as security or patch releases. Security releases are a critical minimal change on a release to address a serious confirmed security issue, and typically take on the format of X.Y.Z.1
, X.Y.Z.2
, etc. A patch release is a conservative incremental improvement that includes bug fixes and is absolutely backward compatible with previous patch releases and takes on the format of X.Y.1
, X.Y.2
, etc.
Upgrading to a security or patch release is STRONGLY recommended, and should be a drop-in replacement. To upgrade to such releases, all you should have to do is to adjust the cas.version
setting in your gradle.properties
file. For example, going from CAS 7.0.0
to 7.0.1
should be as easy as:
# cas.version=7.0.0
cas.version=7.0.1
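Since the bump is a single line, it can even be scripted. Here is a small sketch using a scratch copy of the file so it runs anywhere; in practice you would point sed at the real gradle.properties in your overlay:

```shell
# Create a scratch gradle.properties for illustration only
printf 'cas.version=7.0.0\n' > gradle.properties.example

# Bump the CAS version in place (GNU sed syntax)
sed -i 's/^cas\.version=.*/cas.version=7.0.1/' gradle.properties.example

cat gradle.properties.example
```

Re-running the build afterwards resolves and downloads the new release.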
The best way to stay current with CAS releases and receive release notifications and announcements is by subscribing to the GitHub repository and watching for new releases.
The next piece describes the dependencies of the overlay build. These are the set of components almost always provided by the CAS project that will be packaged up and put into the final web application artifact.
Here is an example:
dependencies {
/**
* CAS dependencies and modules may be listed here.
*
* There is no need to specify the version number for each dependency
* since versions are all resolved and controlled by the dependency management
* plugin via the CAS bom.
**/
}
Note that when you include dependencies in the CAS build, you do not need to specify the CAS version itself. Each release of CAS provides a curated list of dependencies it supports. In practice, you do not need to provide a version for any of these dependencies in your build configuration as the CAS distribution is managing that for you. When you upgrade CAS itself, these dependencies will be upgraded as well in a consistent way.
The curated list of dependencies contains a refined list of third-party libraries. The list is available as a standard Bill of Materials (BOM).
dependencies {
implementation enforcedPlatform("org.apereo.cas:cas-server-support-bom:${project.'cas.version'}")
implementation platform(org.springframework.boot.gradle.plugin.SpringBootPlugin.BOM_COORDINATES)
// Include the CAS reports module without its version
implementation "org.apereo.cas:cas-server-support-reports"
}
Including a CAS module/dependency in the build.gradle
simply advertises to CAS your intention of turning on a new feature or a variation of current behavior. Do NOT include something in your build just because it looks and sounds cool. Remember that the point of an overlay is to only keep track of things you need and care about, and no more.
Now that you have a basic understanding of the build descriptor, it’s time to run the build. A Gradle build is often executed by passing specific goals/commands to Gradle itself, aka gradlew
. So for instance in the terminal and once inside the project directory you could execute things like:
cd cas-overlay-template
./gradlew clean
The WAR Overlay project provides you with an embedded Gradle wrapper whose job is to first determine whether you have Gradle installed. If not, it will download and configure one for you based on the project’s needs. The gradlew tasks
command describes the set of available operations you may carry out with the build script.
The available commands and tasks are also listed in the project README
file, which the project does its best to keep up to date.
As an example, here’s what I see when I run the build command:
./gradlew clean copyCasConfiguration build
...
Starting a Gradle Daemon (subsequent builds will be faster)
Configuration on demand is an incubating feature.
BUILD SUCCESSFUL in 14s
2 actionable tasks: 2 executed
...
You can see that the build attempts to download, clean, compile and package all artifacts, and finally, it produces a build/libs/cas.war
which you can then use for actual deployments.
I am going to skip over the configuration of /etc/cas/config
and all that it deals with. If you need the reference, you may always use this guide to study various aspects of CAS configuration.
Suffice it to say that, quite simply, CAS deployment expects the main configuration file to be found under /etc/cas/config/cas.properties
. This is a key-value store that can dictate and alter the behavior of the running CAS software.
As an example, you might encounter something like:
cas.server.name=https://cas.example.org:8443
cas.server.prefix=${cas.server.name}/cas
logging.config=file:/etc/cas/config/log4j2.xml
…which at a minimum, identifies the CAS server’s URL and prefix and instructs the running server to locate the logging configuration at file:/etc/cas/config/log4j2.xml
. The overlay by default ships with a log4j2.xml
that you can use to customize logging locations, levels, etc. Note that the presence of all that is contained inside /etc/cas/config/
is optional. CAS will continue to fall back onto defaults if the directory and the files within are not found.
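To sketch this out, here is one way to bootstrap the configuration directory with a minimal cas.properties. It is shown under a scratch location so it can run anywhere; a real deployment would use /etc/cas/config, which typically requires elevated privileges:

```shell
# Scratch location standing in for /etc/cas/config
CONFIG_DIR="$(mktemp -d)/cas/config"
mkdir -p "$CONFIG_DIR"

# A minimal cas.properties, mirroring the example above
cat > "$CONFIG_DIR/cas.properties" <<'EOF'
cas.server.name=https://cas.example.org:8443
cas.server.prefix=${cas.server.name}/cas
logging.config=file:/etc/cas/config/log4j2.xml
EOF
```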
It is VERY IMPORTANT that you track and commit the entire overlay directory (save the obvious exclusions such as the build
directory) into some sort of source control system, such as git
. Treat your deployment just like any other project with tags, releases, and functional baselines.
CAS server logs are THE BEST RESOURCE for determining the root cause of a problem, provided you have configured the appropriate log levels. Specifically, you want to make sure DEBUG
or TRACE
levels are turned on for the relevant packages and components in your logging configuration. Know where the logging configuration is, become familiar with its syntax when changes are due and know where the output data is saved.
The CAS server web application by default ships with a log4j2.xml
file that provides sensible logging configuration and levels for basic use cases. This default is activated when no external logging configuration is provided by the CAS build or its configuration. In practice, the CAS build provides a dedicated setting by default to control the logging configuration:
logging.config=file:/etc/cas/config/log4j2.xml
The logging configuration is then expected to be found and loaded from /etc/cas/config/log4j2.xml
. If you deactivate or remove this setting, the default logging described earlier will begin to activate.
Log messages are routed to the console and to a cas.log
file under /tmp/logs
. Here are a few points about the default logging facility:

- The base directory for log files can be changed by passing -DbaseDir=/my/directory.
- Full stacktraces can be included in the log output by passing -Dlog.file.stacktraces=true for the runtime when you start or deploy CAS.
- The overall CAS log level can be adjusted by passing -Dcas.log.level=debug for the runtime when you start or deploy CAS. This will generally affect all log messages that would be submitted via components from the org.apereo.cas namespace, including all sub-packages and components.

If you prefer to control the logging levels a bit more forcefully and dynamically, you can define the log level for the package you prefer when you start and run CAS, particularly with an embedded servlet container:
java -jar build/libs/cas.war --logging.level.org.apereo.cas=debug
Or alternatively, you could define the same setting in your cas.properties
, though note that this technique only affects log messages once the CAS configuration file has been loaded and processed by the runtime:
logging.level.org.apereo.cas=debug
When troubleshooting, remember to raise the log level to debug
(or trace
for more verbose and thorough logging). This is the most effective insight you have into the running software and your best troubleshooting tool to determine what exactly the system might be doing, and why.
These options work for all packages and components, regardless of whether they’re owned or developed by CAS.
We need to first establish a primary mode of validating credentials by sticking with LDAP authentication. The strategy here, as indicated by the CAS documentation, is to declare the intention/module in the build script:
implementation "org.apereo.cas:cas-server-support-ldap"
…and then configure the relevant cas.authn.ldap[x]
settings for the directory server in use. Most commonly, that would translate into the following settings:
cas.authn.ldap[0].type=AUTHENTICATED
cas.authn.ldap[0].ldap-url=ldaps://ldap1.example.org
cas.authn.ldap[0].base-dn=dc=example,dc=org
cas.authn.ldap[0].search-filter=cn={user}
cas.authn.ldap[0].bind-dn=cn=Directory Manager,dc=example,dc=org
cas.authn.ldap[0].bind-credential=...
To resolve and fetch the needed attributes which will be used later by CAS for release, the simplest way would be to let LDAP authentication retrieve the attributes directly from the directory server. The following setting allows us to do just that:
cas.authn.ldap[0].principal-attribute-list=memberOf,cn,givenName,mail
Client applications that wish to use the CAS server for authentication must be registered with the server a priori. CAS provides several facilities to keep track of the registration records and you may choose any that fits your needs best. In more technical terms, CAS deals with service management using two types of components: individual implementations that support a form of a database, referred to as Service Registry components, of which there are many; and a parent component that sits on top of the configured service registry as more of an orchestrator, providing a generic facade and entry point for the rest of CAS without entangling all other operations and subsystems with the specifics of the storage technology.
In this tutorial, we are going to try to configure CAS with the JSON service registry.
First, ensure you have declared the appropriate module/intention in the build:
implementation "org.apereo.cas:cas-server-support-json-service-registry"
Next, you must teach CAS how to look up JSON files to read and write registration records. This is done in the cas.properties
file:
cas.service-registry.core.init-from-json=false
cas.service-registry.json.location=file:/etc/cas/services
…where a sample ApplicationName-1001.json
would then be placed inside /etc/cas/services
:
{
"@class" : "org.apereo.cas.services.CasRegisteredService",
"serviceId" : "https://app.example.org",
"name" : "ApplicationName",
"id" : 1001
}
Or perhaps a slightly more advanced version would be an application definition that allows for the release of certain attributes that we previously retrieved from LDAP as part of authentication:
{
"@class" : "org.apereo.cas.services.CasRegisteredService",
"serviceId" : "https://app.example.org",
"name" : "ApplicationName",
"id" : 1001,
"attributeReleasePolicy" : {
"@class" : "org.apereo.cas.services.ReturnAllowedAttributeReleasePolicy",
"allowedAttributes" : [ "java.util.ArrayList", [ "cn", "mail" ] ]
}
}
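Since a malformed definition file may be skipped by CAS at load time, it can pay off to validate the JSON before dropping it into /etc/cas/services. A quick sketch using python3’s built-in JSON tool (any JSON validator works just as well):

```shell
# Write the sample service definition to a scratch file
cat > ApplicationName-1001.json <<'EOF'
{
  "@class" : "org.apereo.cas.services.CasRegisteredService",
  "serviceId" : "https://app.example.org",
  "name" : "ApplicationName",
  "id" : 1001
}
EOF

# Fails with a parse error and a non-zero exit code if the JSON is invalid
python3 -m json.tool ApplicationName-1001.json > /dev/null && echo "valid JSON"
```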
A robust CAS deployment requires the presence and configuration of an internal database that is responsible for keeping track of tickets issued by CAS. CAS itself comes by default with a memory-based node-specific cache that is often more than sufficient for smaller deployments or certain variations of a clustered deployment. Just like the service management facility, a large variety of databases and storage options are supported by CAS under the facade of a Ticket Registry.
In this tutorial, we are going to configure CAS to use a Hazelcast Ticket Registry with the assumption that our deployment is going to be deployed in an AWS-sponsored environment. Hazelcast Ticket Registry is often a decent choice when deploying CAS in a cluster and can take advantage of AWS’s native support for Hazelcast to read node metadata properly and locate other CAS nodes in the same cluster to present a common, global and shared ticket registry. This is an ideal choice that requires very little manual work and/or troubleshooting, compared to using options such as Multicast or manually noting down the address and location of each CAS server in the cluster.
First, ensure you have declared the appropriate module/intention in the build:
implementation "org.apereo.cas:cas-server-support-hazelcast-ticket-registry"
Next, the AWS-specific configuration of Hazelcast would go into our cas.properties
:
cas.ticket.registry.hazelcast.cluster.discovery.enabled=true
cas.ticket.registry.hazelcast.cluster.discovery.aws.access-key=...
cas.ticket.registry.hazelcast.cluster.discovery.aws.secret-key=...
cas.ticket.registry.hazelcast.cluster.discovery.aws.region=us-east-1
cas.ticket.registry.hazelcast.cluster.discovery.aws.security-group-name=...
# cas.ticket.registry.hazelcast.cluster.discovery.aws.tag-key=
# cas.ticket.registry.hazelcast.cluster.discovery.aws.tag-value=
That should do it.
Of course, if you are working on a more modest CAS deployment in an environment that is more or less owned by you and you prefer more explicit control over CAS node registrations in your cluster, the following settings would be more ideal:
# cas.ticket.registry.hazelcast.cluster.instance-name=localhost
# cas.ticket.registry.hazelcast.cluster.network.port=5701
# cas.ticket.registry.hazelcast.cluster.network.port-auto-increment=true
cas.ticket.registry.hazelcast.cluster.network.members=123.321.123.321,223.621.123.521,...
CAS provides a facility for auditing authentication activity, allowing them to be recorded to a variety of storage services. Essentially, audited authentication events attempt to provide the who, what, when, how, along with any additional contextual information that might be useful to track activity. By default, auditable records are sent to the CAS log file and they may look like this:
WHO: casuser
WHAT: supplied credentials: ...
ACTION: AUTHENTICATION_SUCCESS
APPLICATION: CAS
WHEN: Mon Aug 26 12:35:59 IST 2013
CLIENT IP ADDRESS: 172.16.5.181
SERVER IP ADDRESS: 192.168.200.22
It’s often useful to track audit records in a relational database for future monitoring, data mining and querying features that may be done outside CAS. Here, we try to configure CAS to push audit data into a PostgreSQL database.
First, ensure you have declared the appropriate module/intention in the build:
dependencies {
implementation "org.apereo.cas:cas-server-support-audit-jdbc"
}
Then, put specific audit settings in cas.properties
:
cas.audit.jdbc.user=postgres
cas.audit.jdbc.password=password
cas.audit.jdbc.driver-class=org.postgresql.Driver
cas.audit.jdbc.url=jdbc:postgresql://localhost:5432/audit
cas.audit.jdbc.dialect=org.hibernate.dialect.PostgreSQL10Dialect
You may also note that the audit record includes a special field for Client IP Address, which typically notes the IP address of the end-user attempting to authenticate, etc. Deployments that are behind a proxy or a load balancer often tend to mask the real IP address by default, and expose it using a dedicated header, such as X-Forwarded-For
. This can be configured with CAS as well, so the correct IP is then recorded into the audit log:
cas.audit.engine.alternate-client-addr-header-name=X-Forwarded-For
As a rather common use case, the majority of CAS deployments that intend to turn on multifactor authentication support tend to do so via Duo Security. We will be going through the same exercise here where we let CAS trigger Duo Security for users who belong to the mfa-eligible
group, indicated by the memberOf
attribute on the LDAP user account.
First, ensure you have declared the appropriate module/intention in the build:
implementation "org.apereo.cas:cas-server-support-duo"
Then, put specific Duo Security settings in cas.properties
. Things such as the secret key, integration key, etc which should be provided by your Duo Security subscription:
cas.authn.mfa.duo[0].duo-secret-key=
cas.authn.mfa.duo[0].duo-integration-key=
cas.authn.mfa.duo[0].duo-api-host=
# cas.authn.mfa.duo[0].duo-application-key=
At this point, we have enabled Duo Security and we just need to find a way to instruct CAS to route the authentication flow over to Duo Security in the appropriate condition. Our task here is to build a special condition that activates multifactor authentication if any of the values assigned to the attribute memberOf
contain the value mfa-eligible
. This condition is placed in the cas.properties
file:
cas.authn.mfa.triggers.principal.global-principal-attribute-name-triggers=memberOf
cas.authn.mfa.triggers.principal.global-principal-attribute-value-regex=mfa-eligible
If the above condition holds true and CAS is to route to a multifactor authentication flow, that would be one supported and provided by Duo Security since that’s the only provider that is currently configured to CAS.
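Conceptually, CAS checks each value of the memberOf attribute against the configured regular expression and activates the multifactor flow on a match. That matching logic can be sketched with grep; the group DNs below are hypothetical:

```shell
# Hypothetical memberOf values resolved for a user from LDAP
printf '%s\n' \
  'cn=staff,ou=groups,dc=example,dc=org' \
  'cn=mfa-eligible,ou=groups,dc=example,dc=org' > memberof.txt

# Mirrors the global-principal-attribute-value-regex check
if grep -q 'mfa-eligible' memberof.txt; then
  echo "route to Duo Security"
fi
```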
We can also turn on support for the OpenID Connect protocol, allowing CAS to act as an OP (OpenID Connect Provider). OpenID Connect is a continuation of the OAuth protocol with some additional variations. If you enable OpenID Connect, you will have automatically enabled OAuth as well. “Two birds with one stone” sort of thing, though no disrespect to the avian community!
By turning on support for OpenID Connect, CAS begins to act as an authorization server, allowing client applications to verify the identity of the end-user and to obtain basic profile information in an interoperable and REST-like manner. For this tutorial, our focus is mainly on integrating web-based client applications using the Authorization Code flow of OpenID Connect, which is quite similar to the CAS protocol; you receive a code, you validate the code and receive an access token as well as an ID token.
First, ensure you have declared the appropriate module/intention in the build:
implementation "org.apereo.cas:cas-server-support-oidc"
Then, we teach CAS about specific aspects of the authorization server functionality:
cas.authn.oidc.core.issuer=https://sso.example.org/cas/oidc
cas.authn.oidc.jwks.file-system.jwks-file=file:///etc/cas/config/keystore.jwks
The JWKS resource is used by CAS to create (or use an existing) JSON web keystore composed of private and public keys that enable clients to validate a JSON Web Token (JWT) such as an id token, issued by CAS as an OpenID Connect Provider. Here, we define the global keystore as a path on the file system.
That should be all. Now, you can proceed to register your client web application with CAS similar to the approach described earlier:
{
"@class" : "org.apereo.cas.services.OidcRegisteredService",
"clientId": "my-client-id",
"clientSecret": "my-client-secret",
"serviceId" : "^https://my.application.com/oidc/.+",
"name": "OIDC",
"description": "A sample OIDC client application",
"id": 1
}
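With the service registered, a web client would begin the authorization code flow by sending the browser to CAS’s authorization endpoint, which CAS exposes under /oidc/oidcAuthorize. A sketch of that first leg, using the hypothetical client id from the definition above and an illustrative redirect URI (the URL is only assembled here; no request is made):

```shell
# Assemble the first leg of the authorization code flow
AUTHZ_URL="https://sso.example.org/cas/oidc/oidcAuthorize"
AUTHZ_URL="${AUTHZ_URL}?client_id=my-client-id"
AUTHZ_URL="${AUTHZ_URL}&redirect_uri=https%3A%2F%2Fmy.application.com%2Foidc%2Fcallback"
AUTHZ_URL="${AUTHZ_URL}&response_type=code&scope=openid%20profile"
echo "$AUTHZ_URL"
```

After authentication, CAS redirects back to the client with a code, which the client then exchanges for an access token and an ID token.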
We can also turn on support for the SAML2 protocol, allowing CAS to act as a SAML2 identity provider. By turning on support for SAML2, CAS begins to accept SAML2 authentication requests and will in the end produce SAML2 assertions and responses. In doing so, CAS will also generate its own SAML2 identity provider metadata along with other needed artifacts and certificates, all of which should immediately get you started with service provider registrations.
First, ensure you have declared the appropriate module/intention in the build:
implementation "org.apereo.cas:cas-server-support-saml-idp"
Then, we need to decide what our SAML2 entity id should be and where to keep our SAML2 metadata. To keep matters simple, we’ll choose the filesystem to track and store metadata and its artifacts:
cas.authn.saml-idp.core.entity-id=https://cas.apereo.org/saml/idp
cas.authn.saml-idp.metadata.file-system.location=file:///path/to/metadata/directory
An entity id is a globally unique name for your identity provider. It’s used to identify the IdP during SAML transactions. It is typically a URI, although it doesn’t have to point to an actual resource. It’s often set to the IdP’s base URL or a specific URL that describes the entity. For example, it could be something like https://cas.apereo.org/saml/idp
. This id is included in the metadata that CAS shares with its partners, and it’s used in SAML messages to indicate the sender or the intended recipient. It’s important that the entity id is unique to avoid confusion or conflicts.
Metadata is an XML document that contains information about a SAML entity, such as an Identity Provider (IdP) or a Service Provider (SP). This metadata is used to facilitate the exchange of information for SAML transactions.
The metadata typically includes the entity id, the entity’s service endpoints along with their supported SAML2 bindings (e.g. HTTP-Redirect, HTTP-POST), and the certificates used for signing and encryption.
The metadata is usually exchanged out-of-band (i.e., not through the SAML protocol itself) and is often made available at a publicly accessible URL. This allows partners to fetch and refresh the metadata as needed. For CAS, this typically would be: https://sso.example.org/cas/idp/metadata
Now, you can proceed to register your client web application with CAS similar to the approach described earlier:
{
"@class" : "org.apereo.cas.support.saml.services.SamlRegisteredService",
"serviceId" : "the-entity-id-for-saml2-service-provider",
"name" : "Sample",
"id" : 1,
"metadataLocation" : "https://saml2.example.org/sp/metadata",
"attributeReleasePolicy" : {
"@class" : "org.apereo.cas.services.ReturnAllAttributeReleasePolicy"
}
}
Many CAS deployments rely on the /status
endpoint for monitoring the health and activity of the CAS deployment. This endpoint is typically secured via an IP address, allowing external monitoring tools and load balancers to reach the endpoint and parse the output. In this quick exercise, we are going to accomplish that task, allowing the status
endpoint to be available over HTTP to localhost
.
First, ensure you have declared the appropriate module/intention in the build:
implementation "org.apereo.cas:cas-server-support-monitor"
To enable and expose the status
endpoint, the following settings should come in handy:
management.endpoints.web.base-path=/actuator
management.endpoints.web.exposure.include=status
management.endpoint.status.enabled=true
cas.monitor.endpoints.endpoint.status.access=IP_ADDRESS
cas.monitor.endpoints.endpoint.status.required-ip-addresses=127.0.0.1
Remember that the default path for endpoints exposed over the web is at /actuator
, such as /actuator/status
.
The build/libs
directory contains the results of the overlay process. Since I have not actually customized and overlaid anything yet, all configuration files simply match their defaults and are packaged as such. As an example, let’s grab the default message bundle and change the text associated with screen.welcome.instructions.
Do NOT make changes directly inside the build
directory. The changesets will be cleaned out and set back to defaults every time you do a build. Follow the overlay process to avoid surprises.
First, I will need to move the file to my project directory so that during the overlay process Gradle can use that instead of what is provided by default.
Here we go:
./gradlew getResource -PresourceName=messages.properties
Then I’ll leave everything in that file alone, except the line I want to change.
...
screen.welcome.instructions=Speak friend and enter.
...
Then I’ll package things up as usual.
./gradlew clean build
If I explode
the built web application again and look at build/cas/WEB-INF/classes/messages.properties
after the build, I should see that the overlay process has picked up my version of the file and overlaid it onto the default.
To modify the CAS HTML views, each file first needs to be brought over into the overlay. You can use the ./gradlew listTemplateViews
command to see what HTML views are available for customizations. Once chosen, simply use ./gradlew getResource -PresourceName=footer.html
to bring the view into your overlay. Once you have the footer.html
brought into the overlay, you can simply modify the file at src/main/resources/templates/fragments/footer.html
, and then repackage and run the build as usual.
You have several options when it comes to deploying the final cas.war
file. The easiest approach would be to simply use the ./gradlew run
command and have the overlay be deployed inside an embedded container. By default, the CAS web application expects to run on the secure port 8443
which requires that you create a keystore file at /etc/cas/
named thekeystore
.
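For local testing, a self-signed keystore can be generated with openssl. This is only a sketch: modern JVMs read PKCS12 keystores, CAS by default expects the keystore password to be changeit, and the hostname below is illustrative:

```shell
# Generate a self-signed certificate and private key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=cas.example.org"

# Bundle them into a PKCS12 keystore named thekeystore
openssl pkcs12 -export -inkey key.pem -in cert.pem \
  -name cas -out thekeystore -passout pass:changeit

# In a real deployment, move it into place, e.g. sudo mv thekeystore /etc/cas/
```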
Using the embedded Apache Tomcat container provided by CAS automatically is the recommended approach in almost all cases (The embedded bit; not the Apache Tomcat bit) as the container configuration is entirely automated by CAS and its version is guaranteed to be compatible with the running CAS deployment. Furthermore, updates and maintenance of the servlet container are handled at the CAS project level where you as the adopter are only tasked with making sure your deployment is running the latest available release to take advantage of such updates.
If you wish to run CAS via the embedded Apache Tomcat container behind a proxy or load balancer and have that entity terminate SSL, you will need to open up a communication channel between the proxy and CAS such that (as an example):

- The embedded Apache Tomcat container listens on an insecure HTTP proxy port (i.e. 8080) with SSL disabled.
- Requests forwarded by the proxy are marked as secure, carrying an https scheme.
The above task list translates to the following properties expected to be found in your cas.properties
:
server.port=8080
server.ssl.enabled=false
cas.server.tomcat.http.enabled=false
cas.server.tomcat.http-proxy.enabled=true
cas.server.tomcat.http-proxy.secure=true
cas.server.tomcat.http-proxy.scheme=https
The overlay embraces the Jib Gradle Plugin to provide easy-to-use out-of-the-box tooling for building CAS docker images. Jib is an open-source Java containerizer from Google that handles all the steps of packaging CAS into a container image. It does not require you to write a Dockerfile
and it is directly integrated into the overlay.
Building a CAS docker image via jib is as simple as:
./gradlew build jibDockerBuild
If you prefer a more traditional approach, there is always:
./gradlew build
docker-compose build
You may also build Docker images using the Spring Boot Gradle plugin.
If the WAR overlay is prepped with an embedded servlet container such as Apache Tomcat, then you may run the CAS web application directly and once built, using:
java -jar build/libs/cas.war
The choice of the embedded servlet container is noted by the appServer
property found in the gradle.properties
file:
# Use -tomcat, -jetty, -undertow for deployment to other embedded containers
# if the overlay application supports or provides the chosen type.
# You should set this to blank if you want to deploy to an external container.
# and want to set up, download, and manage the container (i.e. Apache Tomcat) yourself.
appServer=-tomcat
All servlet containers presented here, embedded or otherwise, aim to be production-ready. This means that CAS ships with useful defaults out of the box that may be overridden if necessary; by default, CAS configures everything for you from development to production. In terms of production quality, there is almost no difference between using an embedded container vs. an external one.
Unless there are specific, technical, and reasonable objections, choosing an embedded servlet container is almost always the better choice.
If you forget to specify the correct servlet container type and yet choose to run CAS directly, it is likely that you would receive the following error:
ERROR [org.springframework.boot.SpringApplication] - <Application run failed>
org.springframework.context.ApplicationContextException: Unable to start web server;
nested exception is org.springframework.context.ApplicationContextException:
Unable to start ServletWebServerApplicationContext due to missing ServletWebServerFactory bean.
The Gradle WAR overlay provides many additional commands that might prove helpful for troubleshooting purposes:
# Run the CAS web application in standalone executable mode
./gradlew executable
# Debug the CAS web application in embedded mode on port 5005
./gradlew debug
# Run the CAS web application in embedded container mode
./gradlew run
# Display the CAS version
./gradlew casVersion
# Export collection of CAS properties
./gradlew exportConfigMetadata
The exportConfigMetadata
task can be quite useful as it produces a comprehensive catalog of all CAS settings that one could potentially use, along with documentation for each setting, default values, and more.
If you have questions about the contents and the topic of this blog post, or if you need additional guidance and support, feel free to send us a note and ask about consulting and support services.
You must start simple and make changes one step at a time. Once you have a functional environment, you can gradually and slowly add customizations to move files around.
I hope this review was of some help to you and I am sure that both this post as well as the functionality it attempts to explain can be improved in any number of ways. Please feel free to engage and contribute as best as you can.