If you want to establish a connection to your Oracle database over the public Internet, it is recommended to use encrypted connections. The Oracle database tools allow users to connect via Transport Layer Security (TLS) or through SSH tunnels. In this blog post, I will show how to connect over SSH tunnels, how to set up your database to allow secure TLS connections, and how to use client certificates for user authentication. I will also explain the configuration needed for tools like SQLcl, SQL Developer, and the Visual Studio Code extension of SQL Developer.
Method | Advantages | Disadvantages |
---|---|---|
Connections over TLS | No OS access to the database host is required; client certificates can additionally be used for user authentication (mTLS) | Oracle Wallets and certificates have to be created and maintained on both server and client |
SSH tunnels | No certificate or wallet setup is needed; an SSH key pair is sufficient | OS access to the database host is required; tunnel settings cannot be saved in the tnsnames.ora file |
If you want to allow TLS encrypted connections to your database, you can create and use an Oracle Wallet to save your certificates. To create the Oracle Wallet, you can use the orapki utility which is part of each database installation. You can find the utility in the bin directory of the Oracle Home – $ORACLE_HOME/bin/orapki.
It is possible to create a server wallet and a client wallet, whereby the client wallet is optional. A client wallet is needed if the server certificate is not stored in the client’s certificate store, or if you want to authenticate the user by a client certificate (mTLS). In this blog post, I will show the configurations needed for this kind of user authentication.
Create Client Wallet: First, we will create the Oracle Wallet and a certificate signing request on the client side and let the Oracle Wallet on the database server sign the client certificate later on. Optionally, you may add one or multiple comma-separated Subject Alternative Names (SAN) to your client certificate signing request. Please adapt the option argument values as needed.
# create wallet
orapki wallet create -wallet {path/to/store/client/wallet} -pwd {password} -auto_login_local
# create and export client certificate signing request
orapki wallet add -wallet {path/to/your/client/wallet} -pwd {password} -dn "CN={yourClientCommonName}" -keysize 2048 -validity 1825 -addext_san DNS:{yourSubjectAlternativeName}
orapki wallet export -wallet {path/to/your/client/wallet} -pwd {password} -dn "CN={yourClientCommonName}" -request {path/to/save/your}/client_signing_request.csr
Create Server Wallet: Create the Oracle Wallet on server side (database). Afterwards, you can import the Client certificate signing request client_signing_request.csr to let your Server CA issue a new certificate for the client.
# create wallet
orapki wallet create -wallet {path/to/save/server/wallet} -pwd {password} -auto_login_local
# create self-signed cert
orapki wallet add -wallet {path/to/your/server/wallet} -pwd {password} -dn "CN={yourCACommonName}" -keysize 2048 -self_signed -validity 3650
# display all wallet certificates (it should show your server CA certificate)
orapki wallet display -wallet {path/to/your/server/wallet} -complete
# create cert by client certificate signing request
orapki cert create -wallet {path/to/your/server/wallet} -request {path/to/your}/client_signing_request.csr -cert {path/to/save}/client_certificate.crt -validity 1825 -pwd {password}
# export server certificate
orapki wallet export -wallet {path/to/your/server/wallet} -pwd {password} -dn "CN={yourServerCommonName}" -cert {path/to/save/your}/server_certificate.crt
Import certificates into Wallet: Import certificate client_certificate.crt as user certificate and server_certificate.crt as trusted certificate to your Client Oracle Wallet.
# add trusted CA server certificate
orapki wallet add -wallet {path/to/your/client/wallet} -pwd {password} -trusted_cert -cert {path/to/your}/server_certificate.crt
# import CA signed client certificate
orapki wallet add -wallet {path/to/your/client/wallet} -pwd {password} -user_cert -cert {path/to/your}/client_certificate.crt
# display all wallet certificates (it should show your imported certificates)
orapki wallet display -wallet {path/to/your/client/wallet} -complete
Configure sqlnet.ora: Next, we need to configure the sqlnet.ora files on the client and the server. Here we define the location of our Oracle Wallet as well as the parameter settings for TLS. In the client configuration, we define the following settings to ensure only strong ciphers are used.
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = {path/to/your/client/wallet})
)
)
SQLNET.AUTHENTICATION_SERVICES = (TCPS)
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_SERVER_DN_MATCH = ON
SSL_VERSION = 1.2
SSL_CIPHER_SUITES = (TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256)
On the database server, similar configurations can be set. In this case, SSL_CLIENT_AUTHENTICATION is set to TRUE to activate user authentication.
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = {path/to/your/server/wallet})
)
)
SQLNET.AUTHENTICATION_SERVICES = (TCPS)
SSL_CLIENT_AUTHENTICATION = TRUE
SSL_VERSION = 1.2
SSL_CIPHER_SUITES = (TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256)
Configure listener.ora: On the database server, we need to add the Oracle Wallet and the protocol information to make the database listen on the default port 2484 for TCPS. You may use any other port if you like.
WALLET_LOCATION =
(SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = {path/to/your/server/wallet})
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCPS)(HOST = {yourHostname})(PORT = 2484))
)
)
Configure tnsnames.ora: Finally, you can add the TCPS connection to your tnsnames.ora file on the client system. The parameter WALLET_LOCATION (MY_WALLET_DIRECTORY has been deprecated since Oracle 23ai) allows you to specify the Oracle Wallet location where your certificates are stored. In the example below, I am using YOURPDB as the network alias.
YOURPDB=
(DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCPS)
(HOST={yourHostname})
(PORT=2484)
)
(CONNECT_DATA=
(SERVICE_NAME={ServiceName})
)
(SECURITY=
(SSL_SERVER_CERT_DN="CN={yourServerCommonName}")
(SSL_SERVER_DN_MATCH=TRUE)
(WALLET_LOCATION={path/to/your/client/wallet})
)
)
That’s it! If you are using SQL Developer, you can set the TNSnames directory via Tools > Preferences > Database > Advanced > TNSnames Directory. Afterwards, you can select YOURPDB as the Network Alias in the connection settings. For SQLcl, you can set the environment variable TNS_ADMIN to point to the directory where your tnsnames.ora file is saved.
Optional – Activate TLS as an Authentication Service: In the Oracle database, it is possible to use TLS as an authentication service for user authentication. While remote OS authentication has been deprecated since Oracle 21c and removed in 23ai, authentication based on TLS certificates is still available. You can use it for a database schema by using the IDENTIFIED EXTERNALLY clause, specifying the Distinguished Name (DN) of the certificate when creating or altering a user, for example:
-- Create or alter user
CREATE USER MYUSER IDENTIFIED EXTERNALLY AS 'CN={yourClientCommonName}';
ALTER USER MYUSER IDENTIFIED EXTERNALLY AS 'CN={yourClientCommonName}';
-- Minimal grants
GRANT CONNECT, CREATE SESSION TO MYUSER;
To be able to connect to a schema in SQL Developer, you need to activate TCPS as an authentication service in the SQL Developer config file. Add the following line (or append the value TCPS to the available authentication services, separated by a comma) to your config in …/sqldeveloper/bin/sqldeveloper.conf:
....
AddVMOption -Doracle.net.authentication_services=(TCPS)
For the Visual Studio Code Extension of SQL Developer, you can add this option in the “Advanced” tab inside the connection settings. There you can set a new property “oracle.net.authentication_services” as Name and “TCPS” as its Value.
When using SQLcl, you need to set this configuration via the environment variable JAVA_TOOL_OPTIONS. The SET syntax below is for Windows; on Linux or macOS, use export instead.
SET JAVA_TOOL_OPTIONS=-Doracle.net.authentication_services=(TCPS)
SET TNS_ADMIN={path/to/your/tnsnames.ora}
sql {user}@{YOURPDB}
SQL> Connected.
....
If SQL Developer or SQLcl asks you for a schema password, you can leave it empty.
If you want to use SSH tunnels to securely connect to the database, you need access to the host system the database is running on. As public key authentication is required, you need to create an SSH key pair and save the public key in the authorized_keys file on the host system.
SQL Developer: When using SQL Developer, you first need to add the SSH panel to the GUI (menu View > SSH). You can add a new SSH host and specify hostname and username for the SSH session, as well as hostname and port for the database connection.
To use the SSH tunnel for a connection to the database, you need to set SSH as the Connection Type and select your previously created SSH tunnel name as Port Forward.
SQLcl: You need to start SQLcl without logging in. Afterwards, you can execute the sshtunnel command and connect using your tnsnames.ora network alias or the EZ Connect naming method. In my example, the remote database is running on localhost port 1521.
SET TNS_ADMIN={path/to/your/tnsnames.ora}
sql /nolog
SQL> sshtunnel oracle@example.tld:22 -i ~/.ssh/id_rsa -L 1521:localhost:1521
Using port: 22
SSH Tunnel connected
SQL> conn {user}@{YOURPDB}
SQL> Connected.
....
Note: Please be aware that you need to specify the host connection details in your SSH tunnel settings. Therefore, you cannot use the tnsnames.ora file to save any SSH tunnel credentials.
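Outside of the Oracle tools, the same tunnel can also be kept in the SSH client configuration. Below is a sketch of an ~/.ssh/config entry matching the tunnel example above; the host alias, host name, and key path are placeholders you would adapt:

```
Host oracle-tunnel
    HostName example.tld
    User oracle
    IdentityFile ~/.ssh/id_rsa
    LocalForward 1521 localhost:1521
```

With this in place, running ssh -N oracle-tunnel opens the port forward, and any database client can then connect to localhost:1521 as usual.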
In the previous blog post, we saw how to export the binary raw data of application and workspace files during an APEX application export. This post focuses on how to import files into APEX during the application import process.
As described in Part 1: Export APEX App with workspace and application files in uncoded binary raw data format, let’s imagine we have some files saved in directories named workspace_files and application_files inside /tmp/apex/application/shared_components/, and we want to upload them to the APEX instance during our deployment. To achieve that, we need to convert all raw file contents to a readable hexadecimal format and save these lines in an SQL script that can be executed in the database.
To do that, we will use the Bash script below, which converts the raw content of each file to its hexadecimal representation. It creates a customized install file that installs the APEX app and imports the files into the workspace afterwards.
Please note that (as of APEX 24.1) there is no documented PL/SQL API provided by Oracle to import files into an APEX application or workspace. Therefore, we will rely on the upload procedures for workspace and application files given in the internal APEX packages WWV_FLOW_API and WWV_FLOW_IMP_SHARED.
For identifying the MIME type for each file extension, we will use the Oracle XML DB resource API, accessible via the package DBMS_XDB. You may use any other function, or provide your own, to identify MIME types. The function we consume is named get_mime_type(), as shown below. To convert the file contents from hexadecimal back to raw format during the import, we use the Oracle-provided function hextoraw(). This conversion is done by the function varchar2_tab_to_blob() in our script.
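To see what this conversion looks like in isolation, here is a small round-trip sketch using a throwaway sample file: od produces the hexadecimal lines the script embeds, and xxd -r -p performs the same decoding on the file system that hextoraw() performs inside the database. The file paths are just examples.

```shell
# hex-encode a sample file the same way the script does (od, spaces and newlines stripped)
printf 'hello' > /tmp/sample.bin
od -t x1 -An -v /tmp/sample.bin | tr -d '\n ' > /tmp/sample.hex
cat /tmp/sample.hex          # 68656c6c6f

# decode it back to raw bytes (the file-system analog of hextoraw())
xxd -r -p /tmp/sample.hex > /tmp/sample.out
cmp /tmp/sample.bin /tmp/sample.out && echo "round trip OK"
```

The cmp check confirms that encoding and decoding are exact inverses, which is what the import relies on.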
You can display available options by providing the “-help” option when calling the script.
sh create_custom_install_and_file_upload_scripts.sh -help
#!/bin/bash
# create_custom_install_and_file_upload_scripts.sh
#
# This script saves encoded data
# of workspace and application
# files in an upload SQL file
# and creates a customized install
# script for the APEX application
## Global Constants and Settings
readonly IMPORT_APEX_BASE_DIR_LOC="/tmp"
readonly IMPORT_APEX_DIR="apex"
readonly IMPORT_APEX_SHARED_COMPONENTS_PATH="${IMPORT_APEX_BASE_DIR_LOC}/${IMPORT_APEX_DIR}/application/shared_components"
readonly IMPORT_APEX_WORKSPACE_FILES_DIR="workspace_files"
readonly IMPORT_APEX_APP_FILES_DIR="application_files"
readonly IMPORT_APEX_APP_UPLOAD_FILES_NAME="upload_files.sql"
readonly IMPORT_APEX_APP_INSTALL_FILE_NAME="install_apex_app_with_files.sql"
readonly FONT_YELLOW='\033[1;33m'
readonly FONT_NOCOLOR='\033[0m'
## Functions
# Help text displayed to user in case of invalid script calls or via -help option
printHelpText()
{
echo ""
echo "Usage: $0 -apexAppId=<APEX_APP_ID> [-apexAppAlias=<APEX_APP_ALIAS>] [-workspaceName=<WORKSPACE_NAME>] [-schemaName=<SCHEMA_NAME>]"
echo ""
echo "Available Options:"
echo -e "\t${FONT_YELLOW}-apexAppId${FONT_NOCOLOR} - APEX App Id"
echo -e "\t${FONT_YELLOW}-apexAppAlias${FONT_NOCOLOR} - APEX App Alias (optional)"
echo -e "\t${FONT_YELLOW}-workspaceName${FONT_NOCOLOR} - Workspace Name (optional)"
echo -e "\t${FONT_YELLOW}-schemaName${FONT_NOCOLOR} - Schema Name (optional)"
echo -e "\t${FONT_YELLOW}-help${FONT_NOCOLOR} - Prints this help text"
echo ""
}
# Assigns given option args to vars
assignOptsToVars()
{
while [ "$1" != "" ];
do
opt=`echo $1 | awk -F= '{print $1}'`
optarg=`echo $1 | awk -F= '{print $2}'`
case "$opt" in
-help ) printHelpText
exit ;;
-apexAppId ) APEX_APP_ID="$optarg" ;;
-apexAppAlias ) APEX_APP_ALIAS="$optarg" ;;
-workspaceName ) WORKSPACE_NAME="$optarg" ;;
-schemaName ) SCHEMA_NAME="$optarg" ;;
* ) echo "ERROR: Unknown option provided: \"$opt\"."
echo ""
printHelpText
exit 1 ;;
esac
shift
done
}
# Verifies given option args
verifyOptions()
{
if [ "$APEX_APP_ID" == "" ]; then
echo "ERROR: Please provide all necessary option arguments. Use -help to show a list of all options."
echo ""
exit 1
fi
}
# Prints Stage to output
printStage()
{
echo ""
echo "#"
echo "# $1"
echo "#"
echo ""
}
# Prints Stage Result to output
printStageResult()
{
echo ""
echo ".. $1"
echo ""
}
# Cleans up custom files in case of Errors
cleanup()
{
local UPLOAD_FILE_LOC="${IMPORT_APEX_BASE_DIR_LOC}/${IMPORT_APEX_DIR}/${IMPORT_APEX_APP_UPLOAD_FILES_NAME}"
local INSTALL_FILE_LOC="${IMPORT_APEX_BASE_DIR_LOC}/${IMPORT_APEX_DIR}/${IMPORT_APEX_APP_INSTALL_FILE_NAME}"
if [ -f "${UPLOAD_FILE_LOC}" ]; then rm "${UPLOAD_FILE_LOC}"; fi
if [ -f "${INSTALL_FILE_LOC}" ]; then rm "${INSTALL_FILE_LOC}"; fi
}
# exits script with error or success
exitScript()
{
local EXIT_WITH_ERROR="$1"
if [ "$EXIT_WITH_ERROR" == "Y" ]
then
echo "Some scripts failed."
exit 1
else
echo "Finished."
exit 0
fi
}
# generates files upload script content
generateFilesUploadScriptContent()
{
local DIR
local DIR_PATH_FILE_NAME
local FILE_NAME
local INDEX
echo "SET SERVEROUTPUT ON"
echo "DECLARE
TYPE VARCHAR2_TAB_T IS TABLE OF VARCHAR2(32767) INDEX BY BINARY_INTEGER;
l_apex_app_id NUMBER := ${APEX_APP_ID};
l_workspace_id NUMBER;
l_dir_path_file_name VARCHAR2(500);
l_file_name VARCHAR2(200);
l_mime_type VARCHAR2(128);
l_file_content VARCHAR2_TAB_T;
FUNCTION get_mime_type(
pi_file_name IN VARCHAR2
) RETURN VARCHAR2
IS
l_mime_type VARCHAR2(128);
BEGIN
SELECT mime_type
INTO l_mime_type
FROM XMLTABLE(
XMLNAMESPACES(
DEFAULT 'http://xmlns.oracle.com/xdb/xdbconfig.xsd'),
'//mime-mappings/mime-mapping'
PASSING xdb.dbms_xdb.cfg_get()
COLUMNS
extension VARCHAR2(50) PATH 'extension',
mime_type VARCHAR2(128) PATH 'mime-type'
)
WHERE LOWER(extension) = LOWER(SUBSTR(pi_file_name, INSTR(pi_file_name, '.', -1) + 1));
RETURN l_mime_type;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END get_mime_type;
FUNCTION varchar2_tab_to_blob(
pi_vc2_table IN VARCHAR2_TAB_T
) RETURN BLOB
IS
l_blob BLOB;
l_size NUMBER;
BEGIN
DBMS_LOB.createtemporary(l_blob, true, DBMS_LOB.session);
FOR i IN 1 .. pi_vc2_table.COUNT
LOOP
l_size := LENGTH(pi_vc2_table(i)) / 2;
DBMS_LOB.writeappend(l_blob, l_size, hextoraw(pi_vc2_table(i)));
END LOOP;
RETURN l_blob;
EXCEPTION
WHEN OTHERS THEN
IF DBMS_LOB.istemporary(lob_loc => l_blob) = 1
THEN
DBMS_LOB.freetemporary(lob_loc => l_blob);
END IF;
RAISE;
END varchar2_tab_to_blob;
"
echo "BEGIN
SELECT workspace_id
INTO l_workspace_id
FROM apex_applications
WHERE application_id = l_apex_app_id;
APEX_UTIL.set_security_group_id(p_security_group_id => l_workspace_id);"
DIR="${IMPORT_APEX_SHARED_COMPONENTS_PATH}/${IMPORT_APEX_WORKSPACE_FILES_DIR}"
for FILE in $(find "$DIR" -type f 2>/dev/null);
do
DIR_PATH_FILE_NAME=${FILE#*$DIR"/"}
FILE_NAME=$(basename -- "$FILE")
echo "
BEGIN
l_dir_path_file_name := '${DIR_PATH_FILE_NAME}';
l_file_name := '${FILE_NAME}';
l_mime_type := get_mime_type(pi_file_name => l_file_name);
"
echo " l_file_content := NEW VARCHAR2_TAB_T();"
INDEX=1
for LINE in $(od -t x1 -An -v "$FILE" | tr -d '\n ' | fold -w 200)
do
echo " l_file_content("$INDEX") := '"$LINE"';"
INDEX=$(($INDEX+1))
done
echo "
WWV_FLOW_API.create_workspace_static_file(p_file_name => l_dir_path_file_name,
p_mime_type => nvl(l_mime_type, 'application/octet-stream'),
p_file_charset => 'utf-8',
p_file_content => varchar2_tab_to_blob(pi_vc2_table => l_file_content)
);
DBMS_OUTPUT.put_line('INFO: Workspace File ' || l_dir_path_file_name || ' uploaded successfully.');
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20000, 'ERROR: Workspace File ' || l_dir_path_file_name || ' could not be uploaded - ' || SQLERRM || ' - ' || DBMS_UTILITY.format_error_backtrace);
END;"
done
DIR="${IMPORT_APEX_SHARED_COMPONENTS_PATH}/${IMPORT_APEX_APP_FILES_DIR}"
for FILE in $(find "$DIR" -type f 2>/dev/null);
do
DIR_PATH_FILE_NAME=${FILE#*$DIR"/"}
FILE_NAME=$(basename -- "$FILE")
echo "
BEGIN
l_dir_path_file_name := '${DIR_PATH_FILE_NAME}';
l_file_name := '${FILE_NAME}';
l_mime_type := get_mime_type(pi_file_name => l_file_name);
"
echo " l_file_content := NEW VARCHAR2_TAB_T();"
INDEX=1
for LINE in $(od -t x1 -An -v "$FILE" | tr -d '\n ' | fold -w 200)
do
echo " l_file_content("$INDEX") := '"$LINE"';"
INDEX=$(($INDEX+1))
done
echo "
WWV_FLOW_IMP_SHARED.create_app_static_file(p_flow_id => l_apex_app_id,
p_file_name => l_dir_path_file_name,
p_mime_type => nvl(l_mime_type, 'application/octet-stream'),
p_file_charset => 'utf-8',
p_file_content => varchar2_tab_to_blob(pi_vc2_table => l_file_content)
);
DBMS_OUTPUT.put_line('INFO: Application File ' || l_dir_path_file_name || ' uploaded successfully.');
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20000, 'ERROR: Application File ' || l_dir_path_file_name || ' could not be uploaded - ' || SQLERRM || ' - ' || DBMS_UTILITY.format_error_backtrace);
END;"
done
echo "COMMIT;"
echo "END;"
echo "/"
echo ""
}
# generates custom install script content
generateInstallScriptContent()
{
echo "BEGIN"
echo " APEX_APPLICATION_INSTALL.generate_offset();"
echo " APEX_APPLICATION_INSTALL.set_application_id(${APEX_APP_ID});"
if [ "$APEX_APP_ALIAS" != "" ]; then
echo " APEX_APPLICATION_INSTALL.set_application_alias('${APEX_APP_ALIAS}');"
fi
if [ "$WORKSPACE_NAME" != "" ]; then
echo " APEX_APPLICATION_INSTALL.set_workspace('${WORKSPACE_NAME}');"
fi
if [ "$SCHEMA_NAME" != "" ]; then
echo " APEX_APPLICATION_INSTALL.set_schema('${SCHEMA_NAME}');"
fi
echo "END;"
echo "/"
echo ""
echo "PROMPT # Install APEX application"
echo "@install.sql"
echo ""
echo "PROMPT # Upload static workspace and application files"
echo "@${IMPORT_APEX_APP_UPLOAD_FILES_NAME}"
}
# creates files upload script
createFilesUploadScript()
{
local UPLOAD_FILE_LOC="${IMPORT_APEX_DIR}/${IMPORT_APEX_APP_UPLOAD_FILES_NAME}"
printStage "Create files upload script"
generateFilesUploadScriptContent > "${UPLOAD_FILE_LOC}"
if [ -f "${UPLOAD_FILE_LOC}" ]
then
printStageResult "Done: ${UPLOAD_FILE_LOC}"
else
printStageResult "ERROR: Failed to create files upload script."
exitScript "Y"
fi
}
# creates custom install script
createInstallScript()
{
local INSTALL_FILE_LOC="${IMPORT_APEX_DIR}/${IMPORT_APEX_APP_INSTALL_FILE_NAME}"
printStage "Create custom install script"
generateInstallScriptContent > "${INSTALL_FILE_LOC}"
if [ -f "${INSTALL_FILE_LOC}" ]
then
printStageResult "Done: ${INSTALL_FILE_LOC}"
else
printStageResult "ERROR: Failed to create install script."
exitScript "Y"
fi
}
# entry point
main()
{
## Set Options to Vars
assignOptsToVars "$@"
## Verify given Options
verifyOptions
cd "$IMPORT_APEX_BASE_DIR_LOC" || exitScript "Y"
## Create files upload script
createFilesUploadScript
## Create custom install script
createInstallScript
exitScript "N"
}
main "$@"
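As a side note, assignOptsToVars splits each -name=value argument with awk on the = character. The split can be tried standalone; keep in mind that a value which itself contains a = character would be truncated by this approach:

```shell
# split an option string into name and value, as assignOptsToVars does
arg="-apexAppId=141"
opt=$(echo "$arg" | awk -F= '{print $1}')
optarg=$(echo "$arg" | awk -F= '{print $2}')
echo "$opt -> $optarg"       # -apexAppId -> 141
```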
To be able to execute the bash script, you need to make sure that it is executable for the deployment OS user.
chmod u+x create_custom_install_and_file_upload_scripts.sh
Before the installation of the APEX app can be started, the Bash script needs to be executed. It will create two customized SQL files: install_apex_app_with_files.sql and upload_files.sql inside the base directory of the APEX app.
The following screenshot shows an example of the contents of upload_files.sql generated by the Bash script.
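You can preview the generated assignment lines yourself by running the same od | tr | fold pipeline the script uses, here with a throwaway sample file:

```shell
# emit PL/SQL assignment lines for a sample file, as generateFilesUploadScriptContent does
printf 'demo' > /tmp/content.bin
INDEX=1
for LINE in $(od -t x1 -An -v /tmp/content.bin | tr -d '\n ' | fold -w 200)
do
    echo "l_file_content($INDEX) := '$LINE';"
    INDEX=$((INDEX+1))
done
```

For the four bytes of "demo" this prints a single line, l_file_content(1) := '64656d6f'; larger files produce one assignment per 200 hex characters.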
All you need to do is to execute the generated install_apex_app_with_files.sql in your database schema during your deployment process. It will install the APEX app first, followed by uploading the workspace and application files to the workspace. The file content can look like this, for example:
BEGIN
APEX_APPLICATION_INSTALL.generate_offset();
APEX_APPLICATION_INSTALL.set_application_id(141);
END;
/
PROMPT # Install APEX application
@install.sql
PROMPT # Upload static workspace and application files
@upload_files.sql
Oracle APEX allows you to export applications including all app components as SQL files. This includes application files, whose binary raw contents are converted into a hexadecimal string representation. This makes it difficult to add, edit or review files from outside an APEX workspace. In this blog post, I would like to show you how you can export an APEX application together with workspace and application files in their binary raw format.
When you export the APEX app via the workspace web interface or with SQLcl, the export will only include app-related objects, but not those of the workspace. Nevertheless, SQLcl provides a command to export workspace files too.
SQL> apex export -applicationid 141 -expOriginalIds -split
SQL> apex export -workspaceid 19274859372729475 -expOriginalIds -expFiles
The following shows an example of the content of an SQL file containing the data of an application file when the app was exported in “split” mode. Exported workspace files have a similar structure, but all file contents are merged into a single export file named like files_19274859372729475.sql
prompt --application/shared_components/files/icons_app_icon_32_png
begin
-- Manifest
-- APP STATIC FILES: 141
-- Manifest End
wwv_flow_imp.component_begin (
p_version_yyyy_mm_dd=>'2022.10.07'
,p_release=>'22.2.11'
,p_default_workspace_id=>19274859372729475
,p_default_application_id=>141
,p_default_id_offset=>16094782927472913
,p_default_owner=>'DATABASESCHEMA'
);
wwv_flow_imp.g_varchar2_table := wwv_flow_imp.empty_varchar2_table;
wwv_flow_imp.g_varchar2_table(1) := '89504E470D0A1A0A0000000D4948445200000020000000200806000000737A7AF4000000017352474200AECE1CE9000003DB4944415458479D57CB4E534118FE0E050A358458902890B444034408246014909562C2D2C4B54B8DCFE0C28589F804BAD0C4';
wwv_flow_imp.g_varchar2_table(2) := '8D4FA09B920862DC881A1A342A14D4588C1A958B9808A5B43D666EE7CCCC9939A7DA459BCE7FFBFEFB8C333037E3C2051C0770E1C0FB76C10E09919FD35FCA4B45D84790091797A1544FCC610497FF0A39618B02D00EC95F62DAB321191167029A0F8E51';
wwv_flow_imp.g_varchar2_table(3) := '849C0C9B9C3A20B0C4AF647190033079242BB71994704A91D261A95CBE670E1C16015FBD8555512E3C9542C7E82E53558D362F92048094794B3254537690664A30F0BE0B0E4901535F2D6E1B5F98BC89C6AA85A740ADE87F87C2C32F0740AE622530AA76';
wwv_flow_imp.g_varchar2_table(4) := '0A20605069415B01F925EF964AA8ECEC50C69A44024E6D6D50C8D049B4A5590AC2ABC09A7317A8ECEF61B02181743289DFBBBBF8F0731D9F630E9C78BCAA527106E666F9D430F337FFDAC6D1A626EAEEDAD606C68EB4FB4308C0C2C63A46BB7BF0757313';
wwv_flow_imp.g_varchar2_table(5) := 'C54A19E9D643C864B39848A5B0B0FD0B5F128950206A0D184A71B85CC6644F2F55329D5BC6D4C86945E195A74F30D4D5858F6B6B483434E0705B1BE6575771EFCC046E2F66F1A05854A79A90E68D1500A0C38D027069F6114E7477E3DBD6162A950A3A5B';
wwv_flow_imp.g_varchar2_table(6) := '5AF072650577CF9E6300F68BC608881AF50158C6ED90210232EB9D37AF514C1C407D1D293C177BFB65C477FEE072FF006E2D66F19003B036AFBA0BC472F1D9D5082C616A645CF168BBB887AB2FE631984AD3F357F94F983A3982E6FA3803A0A74073544A';
wwv_flow_imp.g_varchar2_table(7) := '81BE09991D1DC08D91716551119EFBEFDEE23DF7F4585D3D2E1EEFA3854A52202260AB4469129A59AC35C083241CBAFEFC195570EDD4A8A7484E81772856BFCB06341F440E5C3E9075F7A28A90CD539E3ACD07B908FD7DE02F2D2AEAEF82E8086472CBB8';
wwv_flow_imp.g_varchar2_table(8) := 'A9B4A16E59FD2F2210BA25061ECFB04168F9B4AEAF23954C526A7E731363ED1D8C93C7DEAC9C1197BEFFC0EAC166BBF2C032E2AC542937C052D04329D3B91CD21D9D8AC242A984D57C1E17FAFB51138B29B44C2E876CAC463AD31611A987401B0A76BE90';
wwv_flow_imp.g_varchar2_table(9) := '86CB254C76F7D23C934998EAE85403E60214C49A0EC24526B7A2010806237217F4158A38DFDFC723B0EC45409F5B85FD20083502C16491BB2A03C0AF52A66491355B2EECB29C1B3E1E104331C41A1BE97A0E1334A6C0D3651CCFE27E6BC5145A744ACF92';
wwv_flow_imp.g_varchar2_table(10) := '5920D631EB53A9BCF93FC531F90D20BF184C970DAA4A7D0B182F66D622A460A4EBA4D188790E88D3A8AB1DAF01F3C3C416C728A52A6C99DB2CA94C42DBDD35EA52FEBF7214AC7E29ADC6C3882A93DE8BE1DAFC36B4F5986586856D80B0688837AAEC809F';
wwv_flow_imp.g_varchar2_table(11) := '025B2547B95BB59CB91ED81C30B457985D45157F79EBFC7E27C8ED4D5EEA0E5CFA8E679FBF36F6013F40676AA90000000049454E44AE426082';
wwv_flow_imp_shared.create_app_static_file(
p_id=>wwv_flow_imp.id(50871026532675204)
,p_file_name=>'icons/app-icon-32.png'
,p_mime_type=>'image/png'
,p_file_charset=>'utf-8'
,p_file_content => wwv_flow_imp.varchar2_to_blob(wwv_flow_imp.g_varchar2_table)
);
wwv_flow_imp.component_end;
end;
/
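As an aside, the hex lines of such an export file can also be decoded back to the original binary outside the database, for example with sed and xxd. The sketch below uses a tiny self-made fragment in the same format; the file names are hypothetical:

```shell
# a tiny sample fragment in the export format (hypothetical content and path)
cat > /tmp/sample_export.sql <<'EOF'
wwv_flow_imp.g_varchar2_table := wwv_flow_imp.empty_varchar2_table;
wwv_flow_imp.g_varchar2_table(1) := '6865';
wwv_flow_imp.g_varchar2_table(2) := '6c6c6f';
EOF

# extract the quoted hex chunks and decode them back to the raw file content
sed -n "s/^wwv_flow_imp.g_varchar2_table([0-9]*) := '\(.*\)';$/\1/p" /tmp/sample_export.sql \
  | tr -d '\n' | xxd -r -p > /tmp/sample_export.bin
cat /tmp/sample_export.bin   # hello
```

This simply concatenates the hex chunks in order and decodes them, which mirrors what wwv_flow_imp.varchar2_to_blob does during an import.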
Now, let’s see how to export these files in raw format.
Note: For accessing workspace related data, the database user needs to be the owner of the workspace or have been granted the role APEX_ADMINISTRATOR_ROLE.
In the following, we will export the APEX application and all files into the /tmp/ directory. To avoid files being overwritten by simultaneous executions of the script, you can use a dynamic path instead.
For exporting the files in raw format, you have two options.
Option | Advantages | Disadvantages |
---|---|---|
stdout | No additional database objects or file system access on the database server required | Not stable if SQLcl warnings or database errors appear in the output, as they cannot be suppressed |
UTL_FILE | Stable output, as file contents are written directly on the database server | Requires a directory object and file system access on the database server |
In both scripts below, all workspace and application files are exported into the paths /tmp/apex/application/shared_components/workspace_files and application_files of the APEX application export folder. The directory path structure is retained if the files were uploaded into directories. If no workspace or application files exist, the corresponding directory will not be created. As workspace files are shared resources in a workspace and do not belong to just one application, the export of these files can be removed from the scripts if required.
Example (Static Application Files):
These files are exported in the following structure:
..shared_components
└── application_files
├── dir1
│ └── file1.txt
├── dir2
│ └── dir2_2
│ ├── file2.js
│ └── file2_2.png
└── file3.txt
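The scripts retain this structure by walking the tree with find and stripping the export base directory from each path with a shell parameter expansion. That step can be sketched on its own, using a hypothetical directory:

```shell
# rebuild a small tree and list files with their directory structure retained
DIR="/tmp/demo_shared/application_files"
mkdir -p "$DIR/dir1"
printf 'a' > "$DIR/dir1/file1.txt"
printf 'b' > "$DIR/file3.txt"

for FILE in $(find "$DIR" -type f 2>/dev/null)
do
    echo "${FILE#*$DIR/}"    # relative path, e.g. dir1/file1.txt
done
```

The ${FILE#*$DIR/} expansion removes everything up to and including the base directory, leaving only the relative path under which the file is re-created.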
One way to get the raw content of a file is to select the BLOB data from the database table where the files are stored and retrieve the raw data over standard output (stdout). Afterwards, you can save the raw content directly into a file on the file system using Unix commands such as xxd -r -p. This works similarly to the SPOOL command available in SQL*Plus and SQLcl. In this case, you need to make sure to disable all PL/SQL execution messages, column headings, and any kind of echo, so that only the raw data content of the files is returned by SQLcl. However, this method may not be stable if warning messages are generated by SQLcl itself, or general errors by the database, as they cannot be suppressed.
The script below can be used to export files using stdout. You can display available options by providing the “-help” option when calling the script.
sh export_apex_app_with_files_using_stdout.sh -help
#!/bin/bash
# export_apex_app_with_files_using_stdout.sh
#
# This script exports APEX apps with
# workspace and application files in
# their uncoded binary raw format
# using the standard output and xxd
## Global Constants and Settings
readonly EXPORT_APEX_BASE_DIR_LOC="/tmp"
readonly EXPORT_APEX_DIR="apex"
readonly EXPORT_APEX_SHARED_COMPONENTS_PATH="${EXPORT_APEX_BASE_DIR_LOC}/${EXPORT_APEX_DIR}/application/shared_components"
readonly EXPORT_APEX_WORKSPACE_FILES_PATH="${EXPORT_APEX_SHARED_COMPONENTS_PATH}/workspace_files"
readonly EXPORT_APEX_APP_FILES_PATH="${EXPORT_APEX_SHARED_COMPONENTS_PATH}/application_files"
readonly FONT_YELLOW='\033[1;33m'
readonly FONT_NOCOLOR='\033[0m'
# Settings script for returning the BLOB data
readonly SETTINGS_SCRIPT="
WHENEVER SQLERROR EXIT 1;
SET SERVEROUTPUT ON SIZE UNLIMITED
SET FEEDBACK OFF
SET HEADING OFF
SET ECHO OFF
SET PAGESIZE 0
SET LINESIZE 32767
SET TRIMSPOOL ON
SET LONG 900000
"
# Get Workspace Id script
readonly EXPORT_WORKSPACE_ID_SCRIPT="
$SETTINGS_SCRIPT
SELECT TO_CHAR(workspace_id)
FROM apex_applications
WHERE application_id = _APP_ID_;
QUIT;
"
# Get Workspace file contents script
readonly EXPORT_WORKSPACE_FILE_SCRIPT="
$SETTINGS_SCRIPT
SELECT file_content
FROM apex_workspace_static_files
WHERE workspace_id = _WORKSPACE_ID_
AND file_name = '_FILENAME_';
QUIT;
"
# Get Application file contents script
readonly EXPORT_APPLICATION_FILE_SCRIPT="
$SETTINGS_SCRIPT
SELECT file_content
FROM apex_application_static_files
WHERE application_id = _APP_ID_
AND file_name = '_FILENAME_';
QUIT;
"
## Functions
# Help text displayed to user in case of invalid script calls or via -help option
printHelpText()
{
echo ""
echo "Usage: $0 -sqlclExec=<\"PATH/TO/SQLCL/BIN/sql\"> -apexAppId=<APEX_APP_ID> -dbHost=<DB_HOST> -dbPort=<DB_PORT> -dbServiceName=<DB_SERVICE_NAME> -dbExecUser=<DB_EXEC_USER> [-dbExecUserPassword=<DB_EXEC_USER_PASSWORD>]"
echo ""
echo "Available Options:"
echo -e "\t${FONT_YELLOW}-sqlclExec${FONT_NOCOLOR} - Path to executables of SQLcl"
echo -e "\t${FONT_YELLOW}-apexAppId${FONT_NOCOLOR} - APEX App Id"
echo -e "\t${FONT_YELLOW}-dbHost${FONT_NOCOLOR} - Hostname"
echo -e "\t${FONT_YELLOW}-dbPort${FONT_NOCOLOR} - Port"
echo -e "\t${FONT_YELLOW}-dbServiceName${FONT_NOCOLOR} - Service Name"
echo -e "\t${FONT_YELLOW}-dbExecUser${FONT_NOCOLOR} - User"
echo -e "\t${FONT_YELLOW}-dbExecUserPassword${FONT_NOCOLOR} - Password (optional; password prompt will be displayed)"
echo -e "\t${FONT_YELLOW}-help${FONT_NOCOLOR} - Prints this help text"
echo ""
}
# Assigns given option args to vars
assignOptsToVars()
{
while [ "$1" != "" ];
do
opt=`echo $1 | awk -F= '{print $1}'`
optarg=`echo $1 | awk -F= '{print $2}'`
case "$opt" in
-help ) printHelpText
exit ;;
-sqlclExec ) SQLCL_EXEC="$optarg" ;;
-apexAppId ) APP_ID="$optarg" ;;
-dbHost ) DB_HOST="$optarg" ;;
-dbPort ) DB_PORT="$optarg" ;;
-dbServiceName ) DB_SERVICE="$optarg" ;;
-dbExecUser ) DB_USERNAME="$optarg" ;;
-dbExecUserPassword ) DB_PASSWORD="$optarg" ;;
* ) echo "ERROR: Unknown option provided: \"$opt\"."
echo ""
printHelpText
exit 1 ;;
esac
shift
done
}
# Verifies given option args
verifyOptions()
{
if [ "$DB_USERNAME" != "" ] && [ "$DB_PASSWORD" == "" ]; then
stty -echo
read -p "Enter Password of User ${DB_USERNAME}: " DB_PASSWORD
stty echo
echo ""
fi
if [ "$SQLCL_EXEC" == "" ] || [ "$APP_ID" == "" ] || [ "$DB_HOST" == "" ] || [ "$DB_PORT" == "" ] || [ "$DB_SERVICE" == "" ] || [ "$DB_USERNAME" == "" ] || [ "$DB_PASSWORD" == "" ]; then
echo "ERROR: Please provide all necessary option arguments. Use -help to show a list of all options."
echo ""
exit 1
fi
}
# Prints Stage to output
printStage()
{
echo ""
echo "#"
echo "# $1"
echo "#"
echo ""
}
# Prints Stage Result to output
printStageResult()
{
echo ""
echo ".. $1"
echo ""
}
# Prints Step to output
printStep()
{
echo ".... $1"
}
# Prints Step Result to output
printStepResult()
{
echo "........ $1"
}
# Cleans up dirs in case of Errors or script termination
cleanup()
{
local CLEANUP_ALL="$1"
if [[ -f "${EXPORT_APEX_BASE_DIR_LOC}/files_${WORKSPACE_ID}.sql" ]]; then rm "${EXPORT_APEX_BASE_DIR_LOC}/files_${WORKSPACE_ID}.sql"; fi
if [[ -d "${EXPORT_APEX_BASE_DIR_LOC}/f${APP_ID}" ]]; then rm -rf "${EXPORT_APEX_BASE_DIR_LOC}/f${APP_ID}"; fi
if [ "$CLEANUP_ALL" == "N" ]
then
if [[ -d "${EXPORT_APEX_SHARED_COMPONENTS_PATH}/files" ]]; then rm -rf "${EXPORT_APEX_SHARED_COMPONENTS_PATH}/files"; fi
else
if [[ -d "${EXPORT_APEX_BASE_DIR_LOC}/${EXPORT_APEX_DIR}" ]]; then rm -rf "${EXPORT_APEX_BASE_DIR_LOC}/${EXPORT_APEX_DIR}"; fi
fi
}
# exits script with error or success
exitScript()
{
local EXIT_WITH_ERROR="$1"
if [ "$EXIT_WITH_ERROR" == "Y" ]
then
echo "Some scripts failed."
exit 1
else
echo "Finished."
exit 0
fi
}
# Tests the database connection
testDatabaseConnection()
{
echo "exit" | "$SQLCL_EXEC" -L "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" | grep -q "ORA-" > /dev/null
if [ $? -eq 0 ]
then
echo "ERROR: The database connection test failed. Please check the user credentials."
exitScript "Y"
fi
}
# Determine workspace id
determineWorkspaceId()
{
local EXPORT_WORKSPACE_ID_SCRIPT_MOD="${EXPORT_WORKSPACE_ID_SCRIPT/_APP_ID_/$APP_ID}"
local SCRIPT_FAILED
printStage "Determine Workspace ID"
WORKSPACE_ID=$("$SQLCL_EXEC" -s "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "$EXPORT_WORKSPACE_ID_SCRIPT_MOD")
if [ $? -eq 0 ]
then
WORKSPACE_ID=$(echo "$WORKSPACE_ID" | grep -oP "\d*")
if [[ "$WORKSPACE_ID" =~ ^[[:digit:]]{6,}$ ]]
then
printStageResult "Done: ${WORKSPACE_ID}."
else
SCRIPT_FAILED="Y"
fi
else
SCRIPT_FAILED="Y"
fi
if [ "$SCRIPT_FAILED" == "Y" ]
then
printStageResult "ERROR: Failed to get Workspace ID."
cleanup "Y"
exitScript "Y"
fi
}
# Exports APEX application with application static files
exportApplicationWithEncodedFiles()
{
printStage "SQLcl: Export APEX Application with Static Application Files (hex encoded)"
"$SQLCL_EXEC" -s 2>&1 "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "apex export -applicationid ${APP_ID} -skipExportDate -expOriginalIds -split"
if [ -d "f${APP_ID}" ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to export APEX application."
cleanup "Y"
exitScript "Y"
fi
# Rename export app folder
mv "f${APP_ID}" "$EXPORT_APEX_DIR"
}
# Exports hex encoded workspace static files
exportEncodedWorkspaceFiles()
{
printStage "SQLcl: Export APEX Static Workspace Files (hex encoded)"
"$SQLCL_EXEC" -s 2>&1 "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "apex export -workspaceid ${WORKSPACE_ID} -expOriginalIds -expFiles"
if [ -f "files_${WORKSPACE_ID}.sql" ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to export APEX Static Workspace Files."
cleanup "Y"
exitScript "Y"
fi
}
# Exports binary raw workspace files
exportBinaryRawWorkspaceFiles()
{
local EXPORT_WORKSPACE_FILE_SCRIPT_MOD
local BLOB
local FILEPATH
printStage "SQLcl: Export APEX Static Workspace Files (binary raw)"
# Create directory for workspace files
mkdir -p "$EXPORT_APEX_WORKSPACE_FILES_PATH"
# loop through exported workspace static files script
grep -oPs "(?<=,p_file_name=>').*(?=')" "files_${WORKSPACE_ID}.sql" | while IFS= read -r FILENAME
do
printStep "Found Workspace File: ${FILENAME}"
EXPORT_WORKSPACE_FILE_SCRIPT_MOD="${EXPORT_WORKSPACE_FILE_SCRIPT/_WORKSPACE_ID_/$WORKSPACE_ID}"
EXPORT_WORKSPACE_FILE_SCRIPT_MOD="${EXPORT_WORKSPACE_FILE_SCRIPT_MOD/_FILENAME_/$FILENAME}"
BLOB=$("$SQLCL_EXEC" -s "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "$EXPORT_WORKSPACE_FILE_SCRIPT_MOD")
if [ $? -eq 0 ] && [[ $BLOB ]]
then
FILEPATH=$(dirname "${EXPORT_APEX_WORKSPACE_FILES_PATH}/${FILENAME}")
mkdir -p "$FILEPATH"
if ! xxd -r -p <<< "$BLOB" > "${EXPORT_APEX_WORKSPACE_FILES_PATH}/${FILENAME}"; then
printStepResult "ERROR: Workspace File ${FILENAME} could not be created."
exit 1
else
printStepResult "Workspace File ${FILENAME} created."
fi
else
printStepResult "ERROR: BLOB data of File ${FILENAME} could not be returned via standard output."
exit 1
fi
done
if [ $? -eq 0 ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to export APEX Static Workspace File BLOB data."
cleanup "Y"
exitScript "Y"
fi
}
# Exports binary raw application files
exportBinaryRawApplicationFiles()
{
local EXPORT_APPLICATION_FILE_SCRIPT_MOD
local BLOB
local FILEPATH
printStage "SQLcl: Export APEX Static Application Files (binary raw)"
# Create directory for application files
mkdir -p "$EXPORT_APEX_APP_FILES_PATH"
# loop through exported application static files scripts
for FILE in $(find "${EXPORT_APEX_SHARED_COMPONENTS_PATH}/files" -type f -name '*.sql');
do
grep -oPs "(?<=,p_file_name=>').*(?=')" "$FILE" | while IFS= read -r FILENAME
do
printStep "Found Application File: ${FILENAME}"
EXPORT_APPLICATION_FILE_SCRIPT_MOD="${EXPORT_APPLICATION_FILE_SCRIPT/_APP_ID_/$APP_ID}"
EXPORT_APPLICATION_FILE_SCRIPT_MOD="${EXPORT_APPLICATION_FILE_SCRIPT_MOD/_FILENAME_/$FILENAME}"
BLOB=$("$SQLCL_EXEC" -s "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "$EXPORT_APPLICATION_FILE_SCRIPT_MOD")
if [ $? -eq 0 ] && [[ $BLOB ]]
then
FILEPATH=$(dirname "${EXPORT_APEX_APP_FILES_PATH}/${FILENAME}")
mkdir -p "$FILEPATH"
if ! xxd -r -p <<< "$BLOB" > "${EXPORT_APEX_APP_FILES_PATH}/${FILENAME}"; then
printStepResult "ERROR: Application File ${FILENAME} could not be created."
exit 1
else
printStepResult "Application File ${FILENAME} created."
fi
else
printStepResult "ERROR: BLOB data of File ${FILENAME} could not be returned via standard output."
exit 1
fi
done
done
if [ $? -eq 0 ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to export APEX Static Application File BLOB data."
cleanup "Y"
exitScript "Y"
fi
}
# Adjusts install.sql file of APEX app export
adjustInstallFile()
{
printStage "Remove references to hex encoded files from install.sql script of APEX application export"
# Remove calling of single app files inside install.sql
find "${EXPORT_APEX_DIR}" -type f -name "install.sql" -exec sed -i -e '/^@@application\/shared_components\/files/d' {} +
if [ $? -eq 0 ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to remove file references."
cleanup "Y"
exitScript "Y"
fi
}
# entry point
main()
{
## Set Options to Vars
assignOptsToVars "$@"
## Verify given Options
verifyOptions
## Test the database connection
testDatabaseConnection
cd "$EXPORT_APEX_BASE_DIR_LOC"
## Clean up before
cleanup "Y"
## Determine Workspace Id
determineWorkspaceId
## Export APEX application with app files (encoded)
exportApplicationWithEncodedFiles
## Workspace files export (encoded)
exportEncodedWorkspaceFiles
## Export binary raw files
exportBinaryRawWorkspaceFiles
exportBinaryRawApplicationFiles
## Adjust APEX install.sql script
adjustInstallFile
cleanup "N"
exitScript "N"
}
main "$@"
Another approach is to export the files purely within a PL/SQL context. Since Oracle 7.3, the UTL_FILE package can be used to create, read and write files on the underlying file system. For this, you need a directory object, which you specify when calling UTL_FILE.fopen(). If you want to define the file locations dynamically, you need to create the directory objects during script execution. In this case, the database user needs the CREATE ANY DIRECTORY and DROP ANY DIRECTORY privileges, and read/write permissions must be granted on the directory objects.
CREATE DIRECTORY DIRECTORY_EXPORT_WORKSPACE_FILES AS '/tmp/apex/application/shared_components/workspace_files';
CREATE DIRECTORY DIRECTORY_EXPORT_APP_FILES AS '/tmp/apex/application/shared_components/application_files';
GRANT READ, WRITE ON DIRECTORY DIRECTORY_EXPORT_WORKSPACE_FILES TO {dbSchema};
GRANT READ, WRITE ON DIRECTORY DIRECTORY_EXPORT_APP_FILES TO {dbSchema};
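Before running the export, it can be useful to verify that the schema can actually write through one of the new directory objects. The following anonymous block is a minimal sketch of the UTL_FILE.fopen()/put_line()/fclose() pattern; the file name test.txt is an arbitrary example:

```sql
-- Minimal write test against one of the directory objects created above.
-- The file name "test.txt" is an arbitrary example.
DECLARE
    l_file UTL_FILE.FILE_TYPE;
BEGIN
    l_file := UTL_FILE.fopen('DIRECTORY_EXPORT_APP_FILES', 'test.txt', 'w');
    UTL_FILE.put_line(l_file, 'write access works');
    UTL_FILE.fclose(l_file);
END;
/
```

If this block fails with ORA-29283 (invalid file operation), check the OS permissions of the directory path and the grants on the directory object.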
The following bash script can be used to export the APEX application and files to /tmp/apex/ using the UTL_FILE package. Please note that, as mentioned above, this script requires a privileged DB user to create the directory objects dynamically. You can display the available options by calling the script with the “-help” option.
sh export_apex_app_with_files_using_utl_file.sh -help
#!/bin/bash
# export_apex_app_with_files_using_utl_file.sh
#
# This script exports APEX apps with
# workspace and application files in
# their unencoded binary raw format
# using the PL/SQL UTL_FILE package
## Global Constants and Settings
readonly EXPORT_APEX_BASE_DIR_LOC="/tmp"
readonly EXPORT_APEX_DIR="apex"
readonly EXPORT_APEX_SHARED_COMPONENTS_PATH="${EXPORT_APEX_BASE_DIR_LOC}/${EXPORT_APEX_DIR}/application/shared_components"
readonly EXPORT_APEX_WORKSPACE_FILES_DIR_NAME="workspace_files"
readonly EXPORT_APEX_APPLICATION_FILES_DIR_NAME="application_files"
readonly EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT="DIRECTORY_EXPORT_WORKSPACE_FILES"
readonly EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT="DIRECTORY_EXPORT_APP_FILES"
readonly FONT_YELLOW='\033[1;33m'
readonly FONT_NOCOLOR='\033[0m'
# Script to create directory objects for each dir
readonly CREATE_DRECTORY_OBJECTS_SCRIPT="
WHENEVER SQLERROR EXIT 1;
SET SERVEROUTPUT ON SIZE UNLIMITED
SET FEEDBACK OFF
DECLARE
c_directory CONSTANT VARCHAR2(50) := '_EXPORT_APEX_SHARED_COMPONENTS_PATH_/';
TYPE VARCHAR2_TAB_T IS TABLE OF VARCHAR2(4000);
l_workspace_id NUMBER;
l_directory_object VARCHAR2(50);
l_directory VARCHAR2(70);
l_directories VARCHAR2(1000);
l_dir_object_workspace_files VARCHAR2(50) := '_EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT_';
l_dir_object_app_files VARCHAR2(50) := '_EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT_';
l_dir_workspace_files VARCHAR2(70) := c_directory || '_EXPORT_APEX_WORKSPACE_FILES_DIR_NAME_';
l_dir_application_files VARCHAR2(70) := c_directory || '_EXPORT_APEX_APPLICATION_FILES_DIR_NAME_';
l_directory_objects_table VARCHAR2_TAB_T := NEW VARCHAR2_TAB_T();
l_apex_app_id NUMBER := _APEX_APP_ID_;
l_db_running_user VARCHAR2(50) := '_DB_RUNNING_USER_';
l_db_privileged_user VARCHAR2(50) := '_DB_PRIVILEGED_USER_';
BEGIN
SELECT workspace_id
INTO l_workspace_id
FROM apex_applications
WHERE application_id = l_apex_app_id;
-- Get all directory paths of files
FOR c_directories IN (SELECT flow_id,
file_path,
rtrim(upper(replace(file_path, '/', '_')), '_') as file_path_name
FROM (SELECT flow_id,
regexp_replace(file_name, '[^/]+$', '') as file_path
FROM (SELECT 0 as flow_id,
file_name
FROM apex_workspace_static_files
WHERE workspace_id = l_workspace_id
UNION ALL
SELECT application_id as flow_id,
file_name
FROM apex_application_static_files
WHERE application_id = l_apex_app_id
)
ORDER BY flow_id ASC
)
GROUP BY flow_id, file_path
) LOOP
BEGIN
IF c_directories.flow_id = 0
THEN
l_directory_object := l_dir_object_workspace_files;
l_directory := l_dir_workspace_files;
ELSE
l_directory_object := l_dir_object_app_files;
l_directory := l_dir_application_files;
END IF;
IF c_directories.file_path IS NOT NULL
THEN
l_directory_object := l_directory_object || '_' || c_directories.file_path_name;
l_directory := l_directory || '/' || c_directories.file_path;
END IF;
EXECUTE IMMEDIATE '
CREATE DIRECTORY
' || l_directory_object || '
AS ''' || l_directory || '''
';
-- Save directory names temporarily in case of errors
l_directory_objects_table.extend;
l_directory_objects_table(l_directory_objects_table.last) := l_directory_object;
l_directories := l_directories || ':' || l_directory;
IF upper(l_db_running_user) != upper(l_db_privileged_user)
THEN
EXECUTE IMMEDIATE '
GRANT READ, WRITE ON DIRECTORY
' || l_directory_object || '
TO ' || l_db_running_user || '
';
END IF;
EXCEPTION
WHEN OTHERS THEN
-- Drop all previously created directories
IF l_directory_objects_table.COUNT > 0
THEN
FOR c_dir in l_directory_objects_table.FIRST .. l_directory_objects_table.LAST
LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP DIRECTORY ' || l_directory_objects_table(c_dir);
EXCEPTION
WHEN OTHERS THEN
CONTINUE;
END;
END LOOP;
END IF;
RAISE;
END;
END LOOP;
DBMS_OUTPUT.put_line(ltrim(l_directories, ':'));
END;
/
QUIT;
"
# Script for cleanup directory objects
readonly DROP_DRECTORY_OBJECTS_SCRIPT="
SET FEEDBACK OFF
DECLARE
l_dir_object_workspace_files VARCHAR2(50) := '_EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT_';
l_dir_object_app_files VARCHAR2(50) := '_EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT_';
BEGIN
FOR c_directories IN (SELECT directory_name
FROM all_directories
WHERE ( directory_name LIKE l_dir_object_workspace_files || '%'
OR directory_name LIKE l_dir_object_app_files || '%'
)
) LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP DIRECTORY ' || c_directories.directory_name || '';
EXCEPTION
WHEN OTHERS THEN
CONTINUE;
END;
END LOOP;
END;
/
QUIT;
"
# Custom PL/SQL export script
readonly EXPORT_FILES_SCRIPT="
WHENEVER SQLERROR EXIT 1;
WHENEVER OSERROR EXIT 2;
SET FEEDBACK OFF
DECLARE
l_file UTL_FILE.FILE_TYPE;
l_buffer RAW(32767);
l_amount BINARY_INTEGER := 32767;
l_position NUMBER := 1;
l_blob_length NUMBER;
l_workspace_id NUMBER;
l_directory_object VARCHAR2(50);
l_dir_object_workspace_files VARCHAR2(50) := '_EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT_';
l_dir_object_app_files VARCHAR2(50) := '_EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT_';
l_apex_app_id NUMBER := _APEX_APP_ID_;
BEGIN
SELECT workspace_id
INTO l_workspace_id
FROM apex_applications
WHERE application_id = l_apex_app_id;
-- Get all files accessible in the workspace
/*
* Alternatively accessible over wwv_flow_files:
*
* You need to set the security group id first.
*
* APEX_UTIL.set_workspace('<WorkspaceName>');
*
* wwv_flow_files.flow_id:
* = 0 - workspace file
* > 0 - application file
*/
FOR c_files IN (SELECT flow_id,
file_content,
file_path,
rtrim(upper(replace(file_path, '/', '_')), '_') as file_path_name,
path_file_name,
regexp_substr(path_file_name, '[^/]*$') as file_name
FROM (SELECT flow_id,
file_content,
regexp_replace(file_name, '[^/]+$', '') as file_path,
file_name as path_file_name
FROM (SELECT 0 as flow_id,
file_content,
file_name
FROM apex_workspace_static_files
WHERE workspace_id = l_workspace_id
UNION ALL
SELECT application_id as flow_id,
file_content,
file_name
FROM apex_application_static_files
WHERE application_id = l_apex_app_id
)
)
ORDER BY flow_id ASC
) LOOP
BEGIN
IF c_files.flow_id = 0
THEN
l_directory_object := l_dir_object_workspace_files;
ELSE
l_directory_object := l_dir_object_app_files;
END IF;
IF c_files.file_path IS NOT NULL
THEN
l_directory_object := l_directory_object || '_' || c_files.file_path_name;
END IF;
-- Create and open file in write byte mode (binary) within directory
l_file := UTL_FILE.fopen(l_directory_object, c_files.file_name, 'wb', 32767);
-- Get length of BLOB, read bytes based on max chunk size and write it to file on OS
l_blob_length := DBMS_LOB.getlength(c_files.file_content);
WHILE l_position <= l_blob_length
LOOP
DBMS_LOB.read(c_files.file_content, l_amount, l_position, l_buffer);
UTL_FILE.put_raw(l_file, l_buffer, TRUE);
l_position := l_position + l_amount;
END LOOP;
l_position := 1;
UTL_FILE.fclose(l_file);
EXCEPTION
WHEN OTHERS THEN
IF UTL_FILE.is_open(l_file)
THEN
UTL_FILE.fclose(l_file);
END IF;
RAISE;
END;
END LOOP;
END;
/
QUIT;
"
## Functions
# Help text displayed to user in case of invalid script calls or via -help option
printHelpText()
{
echo ""
echo "Usage: $0 -sqlclExec=<\"PATH/TO/SQLCL/BIN/sql\"> -apexAppId=<APEX_APP_ID> -dbHost=<DB_HOST> -dbPort=<DB_PORT> -dbServiceName=<DB_SERVICE_NAME> [-dbPrivUser=<DB_PRIV_USER> -dbPrivUserPassword=<DB_PRIV_USER_PASSWORD>] -dbExecUser=<DB_EXEC_USER> [-dbExecUserPassword=<DB_EXEC_USER_PASSWORD>]"
echo ""
echo "Available Options:"
echo -e "\t${FONT_YELLOW}-sqlclExec${FONT_NOCOLOR} - Path to executables of SQLcl"
echo -e "\t${FONT_YELLOW}-apexAppId${FONT_NOCOLOR} - APEX App Id"
echo -e "\t${FONT_YELLOW}-dbHost${FONT_NOCOLOR} - Hostname"
echo -e "\t${FONT_YELLOW}-dbPort${FONT_NOCOLOR} - Port"
echo -e "\t${FONT_YELLOW}-dbServiceName${FONT_NOCOLOR} - Service Name"
echo -e "\t${FONT_YELLOW}-dbPrivUser${FONT_NOCOLOR} - User with privileges for maintaining Directory Objects (optional; if not specified, user defined by -dbExecUser will be used)"
echo -e "\t${FONT_YELLOW}-dbPrivUserPassword${FONT_NOCOLOR} - Password of privileged User (optional; password prompt will be displayed)"
echo -e "\t${FONT_YELLOW}-dbExecUser${FONT_NOCOLOR} - User"
echo -e "\t${FONT_YELLOW}-dbExecUserPassword${FONT_NOCOLOR} - Password (optional; password prompt will be displayed)"
echo -e "\t${FONT_YELLOW}-help${FONT_NOCOLOR} - Prints this help text"
echo ""
}
# Assigns given option args to vars
assignOptsToVars()
{
while [ "$1" != "" ];
do
opt=`echo $1 | awk -F= '{print $1}'`
optarg=`echo $1 | awk -F= '{print $2}'`
case "$opt" in
-help ) printHelpText
exit ;;
-sqlclExec ) SQLCL_EXEC="$optarg" ;;
-apexAppId ) APP_ID="$optarg" ;;
-dbHost ) DB_HOST="$optarg" ;;
-dbPort ) DB_PORT="$optarg" ;;
-dbServiceName ) DB_SERVICE="$optarg" ;;
-dbPrivUser ) DB_PRIV_USERNAME="$optarg" ;;
-dbPrivUserPassword ) DB_PRIV_PASSWORD="$optarg" ;;
-dbExecUser ) DB_USERNAME="$optarg" ;;
-dbExecUserPassword ) DB_PASSWORD="$optarg" ;;
* ) echo "ERROR: Unknown option provided: \"$opt\"."
echo ""
printHelpText
exit 1 ;;
esac
shift
done
}
# Verifies given option args
verifyOptions()
{
if [ "$DB_PRIV_USERNAME" != "" ] && [ "$DB_PRIV_PASSWORD" == "" ]; then
stty -echo
read -p "Enter Password of privileged User ${DB_PRIV_USERNAME}: " DB_PRIV_PASSWORD
stty echo
echo ""
fi
if [ "$DB_USERNAME" != "" ] && [ "$DB_PASSWORD" == "" ]; then
stty -echo
read -p "Enter Password of User ${DB_USERNAME}: " DB_PASSWORD
stty echo
echo ""
fi
if [ "$DB_USERNAME" != "" ] && [ "$DB_PRIV_USERNAME" == "" ]; then
DB_PRIV_USERNAME="$DB_USERNAME"
DB_PRIV_PASSWORD="$DB_PASSWORD"
fi
if [ "$SQLCL_EXEC" == "" ] || [ "$APP_ID" == "" ] || [ "$DB_HOST" == "" ] || [ "$DB_PORT" == "" ] || [ "$DB_SERVICE" == "" ] || [ "$DB_USERNAME" == "" ] || [ "$DB_PASSWORD" == "" ]; then
echo "ERROR: Please provide all necessary option arguments. Use -help to show a list of all options."
echo ""
exit 1
fi
}
# Prints Stage to output
printStage()
{
echo ""
echo "#"
echo "# $1"
echo "#"
echo ""
}
# Prints Stage Result to output
printStageResult()
{
echo ""
echo ".. $1"
echo ""
}
# Prints Step to output
printStep()
{
echo ".... $1"
}
# Prints Step Result to output
printStepResult()
{
echo "........ $1"
}
# Cleans up dirs and Directory Objects in case of Errors or script termination
cleanup()
{
local DROP_DRECTORY_OBJECTS_SCRIPT_MOD="${DROP_DRECTORY_OBJECTS_SCRIPT/_EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT_/$EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT}"
DROP_DRECTORY_OBJECTS_SCRIPT_MOD="${DROP_DRECTORY_OBJECTS_SCRIPT_MOD/_EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT_/$EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT}"
local CLEANUP_ALL="$1"
# Clean up directories
if [[ -d "${EXPORT_APEX_BASE_DIR_LOC}/f${APP_ID}" ]]; then rm -rf "${EXPORT_APEX_BASE_DIR_LOC}/f${APP_ID}"; fi
# Drop Directory Objects
"$SQLCL_EXEC" -s 2>&1 "${DB_PRIV_USERNAME}/${DB_PRIV_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "$DROP_DRECTORY_OBJECTS_SCRIPT_MOD"
if [ "$CLEANUP_ALL" == "N" ]
then
if [[ -d "${EXPORT_APEX_SHARED_COMPONENTS_PATH}/files" ]]; then rm -rf "${EXPORT_APEX_SHARED_COMPONENTS_PATH}/files"; fi
else
if [[ -d "${EXPORT_APEX_BASE_DIR_LOC}/${EXPORT_APEX_DIR}" ]]; then rm -rf "${EXPORT_APEX_BASE_DIR_LOC}/${EXPORT_APEX_DIR}"; fi
fi
}
# exits script with error or success
exitScript()
{
local EXIT_WITH_ERROR="$1"
if [ "$EXIT_WITH_ERROR" == "Y" ]
then
echo "Some scripts failed."
exit 1
else
echo "Finished."
exit 0
fi
}
# Tests the database connection
testDatabaseConnection()
{
echo "exit" | "$SQLCL_EXEC" -L "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" | grep -q "ORA-" > /dev/null
if [ $? -eq 0 ]
then
echo "ERROR: The database connection test failed for user ${DB_USERNAME}. Please check the user credentials."
exitScript "Y"
fi
if [ "$DB_PRIV_USERNAME" != "$DB_USERNAME" ]
then
echo "exit" | "$SQLCL_EXEC" -L "${DB_PRIV_USERNAME}/${DB_PRIV_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" | grep -q "ORA-" > /dev/null
if [ $? -eq 0 ]
then
echo "ERROR: The database connection test failed for user ${DB_PRIV_USERNAME}. Please check the user credentials."
exitScript "Y"
fi
fi
}
# Exports APEX application with application static files
exportApplicationWithEncodedFiles()
{
printStage "SQLcl: Export APEX Application with Static Application Files (hex encoded)"
"$SQLCL_EXEC" -s 2>&1 "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "apex export -applicationid ${APP_ID} -skipExportDate -expOriginalIds -split"
if [ -d "f${APP_ID}" ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to export APEX application."
cleanup "Y"
exitScript "Y"
fi
# Rename export app folder
mv "f${APP_ID}" "$EXPORT_APEX_DIR"
}
# Create Directory Objects and paths
createDirectories()
{
local CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT/_APEX_APP_ID_/$APP_ID}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_EXPORT_APEX_WORKSPACE_FILES_DIR_NAME_/$EXPORT_APEX_WORKSPACE_FILES_DIR_NAME}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_EXPORT_APEX_APPLICATION_FILES_DIR_NAME_/$EXPORT_APEX_APPLICATION_FILES_DIR_NAME}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_DB_RUNNING_USER_/$DB_USERNAME}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_DB_PRIVILEGED_USER_/$DB_PRIV_USERNAME}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_EXPORT_APEX_SHARED_COMPONENTS_PATH_/$EXPORT_APEX_SHARED_COMPONENTS_PATH}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT_/$EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT}"
CREATE_DRECTORY_OBJECTS_SCRIPT_MOD="${CREATE_DRECTORY_OBJECTS_SCRIPT_MOD/_EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT_/$EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT}"
local SCRIPT_OUTPUT
local PATHS
local SCRIPT_FAILED
printStage "SQLcl: Create Directories"
# Run custom PL/SQL script from above for creating dynamic directory objects
SCRIPT_OUTPUT=$("$SQLCL_EXEC" -s "${DB_PRIV_USERNAME}/${DB_PRIV_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "$CREATE_DRECTORY_OBJECTS_SCRIPT_MOD")
if [ $? -eq 0 ]
then
SCRIPT_OUTPUT=$(echo -n "$SCRIPT_OUTPUT" | tr -d '\n')
# Check if directory paths have no spaces
if [ "$SCRIPT_OUTPUT" == "${SCRIPT_OUTPUT%[[:space:]]*}" ]
then
# For each directory path returned from database, create directory path on OS file system
IFS=":" read -ra PATHS <<< "$SCRIPT_OUTPUT"
for i in "${PATHS[@]}"
do
printStep "Returned Path: ${i}"
mkdir -p "$i"
if [ $? -eq 0 ]
then
printStepResult "Path ${i} created."
else
printStepResult "ERROR: Path ${i} could not be created."
SCRIPT_FAILED="Y"
break
fi
done
else
SCRIPT_FAILED="Y"
fi
else
SCRIPT_FAILED="Y"
fi
if [ "$SCRIPT_FAILED" != "Y" ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to create directories."
cleanup "Y"
exitScript "Y"
fi
}
# Export binary raw files
exportBinaryRawFiles()
{
local EXPORT_FILES_SCRIPT_MOD="${EXPORT_FILES_SCRIPT/_APEX_APP_ID_/$APP_ID}"
EXPORT_FILES_SCRIPT_MOD="${EXPORT_FILES_SCRIPT_MOD/_EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT_/$EXPORT_APEX_WORKSPACE_FILES_DIR_OBJECT}"
EXPORT_FILES_SCRIPT_MOD="${EXPORT_FILES_SCRIPT_MOD/_EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT_/$EXPORT_APEX_APPLICATION_FILES_DIR_OBJECT}"
printStage "SQLcl: Export APEX Static Workspace and Application Files (binary raw)"
# Run custom PL/SQL script from above for exporting app and workspace files
"$SQLCL_EXEC" -s "${DB_USERNAME}/${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_SERVICE}" <<< "$EXPORT_FILES_SCRIPT_MOD"
if [ $? -eq 0 ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to export binary raw Files."
cleanup "Y"
exitScript "Y"
fi
}
# Adjusts install.sql file of APEX app export
adjustInstallFile()
{
printStage "Remove references to hex encoded files from install.sql script of APEX application export"
# Remove calling of single app files inside install.sql
find "${EXPORT_APEX_DIR}" -type f -name "install.sql" -exec sed -i -e '/^@@application\/shared_components\/files/d' {} +
if [ $? -eq 0 ]
then
printStageResult "Done."
else
printStageResult "ERROR: Failed to remove file references."
cleanup "Y"
exitScript "Y"
fi
}
# entry point
main()
{
## Set Options to Vars
assignOptsToVars "$@"
## Verify given Options
verifyOptions
## Test database connection
testDatabaseConnection
cd "$EXPORT_APEX_BASE_DIR_LOC"
## Clean up before
cleanup "Y"
## Export APEX application with app files (encoded)
exportApplicationWithEncodedFiles
## Create Directory Objects and paths
createDirectories
## Export binary raw files
exportBinaryRawFiles
## Adjust APEX install.sql script
adjustInstallFile
cleanup "N"
exitScript "N"
}
main "$@"
To be able to execute the bash scripts, make sure the script you are using is executable for your OS user.
chmod u+x export_apex_app_with_files_using_stdout.sh
chmod u+x export_apex_app_with_files_using_utl_file.sh
You now have the APEX app and all application and workspace files exported to /tmp/apex/ and can store them in your version control system.
In the following blog post Part 2: Import binary raw data files during APEX App Import, I will show how to import the binary raw files during the app import.
When exporting an APEX app split into separate files with a tool like SQLcl, each SQL file includes arguments inside the exported metadata that can differ between APEX instances. Depending on the number of generated files, this may cause a huge number of merge conflicts, which have to be resolved manually, when versioning the exported app in a version control system like Git or Apache SVN. In this blog post, I would like to show an example of how to reduce the number of conflicts.
If you are using a tool for exporting the apps, you may include a script which removes information that is not needed (or can be replaced) during an import. Most of the merge conflicts are caused by arguments like “p_default_id_offset” as well as “Manifest” information. While the offset depends on the APEX instance from which you have exported, the Manifest contains various pieces of information about a single application component and therefore usually remains identical. However, if different Application IDs are used on multiple APEX instances for the same application, some of this information (which relates to these IDs) may also differ between exports.
Depending on the components your APEX app consists of, the folder structure can look different. The application folder always exists and stores all application resources, including pages, build options, files and other related logic. Your export could also include other folders like workspace; this is the case if you are using workspace related resources inside your application logic (e.g. Web Credentials).
The Bash script below performs two adjustments for all SQL files inside the given subfolders: it deletes every Manifest block and replaces any value of p_default_id_offset with 0.
#!/bin/bash
# Go into the apex folder of application
# this is where the install.sql file is located
# Then run below commands for all sql files in specified subfolders
# Adjustments:
# Manifest: Delete every Manifest information
# Offset Id: Replace any value of p_default_id_offset with 0
APEX_APP_BASE_FOLDER="apex"
APEX_APP_SEARCH_SUBFOLDERS=("application" "workspace")
for subfolder in "${APEX_APP_SEARCH_SUBFOLDERS[@]}"
do
if [ -d ./"$APEX_APP_BASE_FOLDER"/"$subfolder" ]; then
find ./"$APEX_APP_BASE_FOLDER"/"$subfolder" -type f -name "*.sql" -exec sed -i -e '/-- Manifest/,/-- Manifest End/d' -i -e 's/p_default_id_offset.*/p_default_id_offset=>0/g' {} +
fi
done
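To see the effect of the two sed expressions in isolation, you can run them against a fabricated sample file. The snippet below is self-contained; the sample content only mimics the structure of an exported page file and does not use real APEX export API names:

```shell
# Self-contained demo of the two sed edits on a fabricated sample file
sample=$(mktemp)
cat > "$sample" <<'EOF'
-- Manifest
--   PAGE: 00001
-- Manifest End
begin
some_import_call(
 p_default_id_offset=>124029988531146
);
end;
EOF
# Same expressions as in the find/sed call above
sed -i -e '/-- Manifest/,/-- Manifest End/d' \
    -e 's/p_default_id_offset.*/p_default_id_offset=>0/g' "$sample"
cat "$sample"
```

After the run, the Manifest block is removed and the offset line reads p_default_id_offset=>0, so the file content no longer depends on the exporting instance.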
The Manifest contains information about which resource the component belongs to (e.g. the App Id). This information is not needed for the import. The Offset Id is always linked to an APEX instance and ensures that the metadata of the Oracle APEX application definition does not collide with other metadata on the instance. This means that the offset value should always differ, especially if you are importing the same application (or components) into several workspaces on the same APEX instance. If you set the Offset Id to 0, you need to make sure that you set an offset value – or, even better, let APEX generate an appropriate one for you – before the APEX app is imported. The offset needs to be a positive integer. The package wwv_flow_application_install provides two procedures and one function for generating, setting and getting the offset value. They are accessible through the synonym APEX_APPLICATION_INSTALL.
-- Generate a new Offset value before application import
BEGIN
APEX_APPLICATION_INSTALL.generate_offset;
END;
/
-- Set the Offset value manually
DECLARE
l_offset_id NUMBER;
BEGIN
l_offset_id := trunc(DBMS_RANDOM.value(100000000000, 999999999999)); -- may cause collisions
APEX_APPLICATION_INSTALL.set_offset(p_offset => l_offset_id);
END;
/
-- Get the current Offset value
DECLARE
l_offset_id NUMBER;
BEGIN
l_offset_id := APEX_APPLICATION_INSTALL.get_offset;
END;
/
When using an Interactive Grid with a large amount of data and multiple columns, it may be helpful for the user to be able to filter the results on specific values. For this purpose, APEX offers the native filter functionality of the Interactive Grid, where the user can add filter options on single columns. In this case, the filter options must be set one by one.
There are some existing Oracle PL/SQL procedures like APEX_IR.ADD_FILTER which allow adding filter operations on columns on the database side. However, this is only available for Interactive Reports and requires the APEX page to be reloaded so that the filters can be set. Alternatively, the Faceted Search and Smart Filters page items have been added to APEX recently, but (as of APEX 22.1) they can only be used in combination with a Classic Report, not with an Interactive Grid or an Interactive Report.
In this blog post, I would like to show how the methods “addFilter” and “deleteFilter” of the IG can be used to add several filters at the same time using Dynamic Actions and JavaScript, without adding them manually via the “Actions” menu or rerendering the page. Please note that these methods are not documented on the Oracle documentation pages and are therefore not supported. They will probably be subject to change and may not work the same way in future versions of APEX.
Available operators include EQ (equals), IN, GT (greater than), BETWEEN, N (is null) and S (starts with), all of which are used in the example below.
The following image shows an example of how to use page items for filtering on columns of an Interactive Grid:
The following JavaScript code, used in a Dynamic Action, can filter and refresh the columns of the Interactive Grid with region id “employees_ig” shown above. First, all page items are checked for entered values. Some characters, like the separator “:” of the page item P1_FILTER_JOB (Popup LOV), need to be replaced by Unicode representations. To search on a date value, you may need to convert it to a specific date format like “YYYYMMDD000000” (e.g. 20220902000000). When using the BETWEEN operator, you can combine the two values with a tilde (~).
//an array with all filters we want to add when pressing Search
var arrayFields = [];
//counter controller to check whether it should save and refresh IG
var cont = 1;
//adding job filter
if ($v('P1_FILTER_JOB')) {
var searchStrings = $v('P1_FILTER_JOB').replace(/:/g, '\u0001');
arrayFields.push(
{
type: 'column',
columnType: 'column',
columnName: 'Job',
operator: 'IN',
value: searchStrings,
isCaseSensitive: false
});
}
//adding hire date filter
if ($v('P1_FILTER_HIREDATE')) {
arrayFields.push(
{
type: 'column',
columnType: 'column',
columnName: 'Hire Date',
operator: 'GT',
value: $v('P1_FILTER_HIREDATE') + '000000',
isCaseSensitive: false
});
}
//adding salary filter
if ($v('P1_FILTER_SALARY_FROM') && $v('P1_FILTER_SALARY_TO')) {
arrayFields.push(
{
type: 'column',
columnType: 'column',
columnName: 'Salary',
operator: 'BETWEEN',
value: $v('P1_FILTER_SALARY_FROM') + '~' + $v('P1_FILTER_SALARY_TO'),
isCaseSensitive: false
});
}
//adding on leave filter
if ($v('P1_FILTER_ON_LEAVE')) {
arrayFields.push(
{
type: 'column',
columnType: 'column',
columnName: 'On Leave',
operator: 'EQ',
value: $v('P1_FILTER_ON_LEAVE'),
isCaseSensitive: false
});
}
//adding no manager filter
if ($v('P1_CHECKBOX_NO_MANAGER') == 'Y') {
arrayFields.push(
{
type: 'column',
columnType: 'column',
columnName: 'Manager',
operator: 'N',
value: 'null',
isCaseSensitive: false
} );
}
//adding starts with filter
if ($v('P1_FILTER_STARTS_WITH_COLUMN') && $v('P1_FILTER_STARTS_WITH_EXPRESSION')) {
arrayFields.push(
{
type: 'column',
columnType: 'column',
columnName: $v('P1_FILTER_STARTS_WITH_COLUMN'),
operator: 'S',
value: $v('P1_FILTER_STARTS_WITH_EXPRESSION'),
isCaseSensitive: false
} );
}
//delete all existing filters
var filters = apex.region('employees_ig').call("getFilters");
for (var i in filters) {
    var filterId = filters[i].id;
    apex.region('employees_ig').call("deleteFilter", filterId);
}
//iterate through the filters; save and refresh the IG only once, with the last filter
for (var filterObj of arrayFields) {
    if (filterObj.value) {
        apex.region('employees_ig').widget().interactiveGrid("addFilter", filterObj, {
            refreshData: cont === arrayFields.length,
            save: cont === arrayFields.length
        });
    }
    cont++;
}
Today I would like to show how to identify, in PL/SQL, the dates of all occurrences of a given weekday within a period of time.
For a project, an APEX form was needed on which the operator can select one or more weekdays (Monday, Tuesday, etc.) within a period of time and specify a time of day and a coach. The aim was to calculate, with PL/SQL, the dates of all selected weekdays between the start and end date and to store them in the database table “appointments”. End users can then book a support slot on these days via a separate order form.
Below you can find the specification and body of the PL/SQL package I have used:
CREATE OR REPLACE PACKAGE booking_dates_pkg AS
/* Loops through all weekdays and calls procedure save_dates
* @param pi_coach specifies the coach name
* @param pi_period_from specifies the start date in format DD.MM.YYYY
* @param pi_weekdays specifies the colon-concatenated numbers of the weekdays, e.g. 0:4 = Monday and Friday
* @param pi_period_time specifies the time of day
* @param pi_period_to specifies the end date in format DD.MM.YYYY
*/
PROCEDURE loop_through_weekdays (
pi_coach IN VARCHAR2
, pi_period_from IN DATE
, pi_weekdays IN VARCHAR2
, pi_period_time IN VARCHAR2
, pi_period_to IN DATE
);
/* Identifies each date of a passed weekday within the period
* @param pi_coach specifies the coach name
* @param pi_period_from specifies the start date in format DD.MM.YYYY
* @param pi_weekday specifies the number of the weekday, e.g. 0 = Monday
* @param pi_period_time specifies the time of day
* @param pi_period_to specifies the end date in format DD.MM.YYYY
*/
PROCEDURE save_dates (
pi_coach IN VARCHAR2
, pi_period_from IN DATE
, pi_weekday IN NUMBER
, pi_period_time IN VARCHAR2
, pi_period_to IN DATE
);
END booking_dates_pkg;
/
CREATE OR REPLACE PACKAGE BODY booking_dates_pkg
AS
PROCEDURE loop_through_weekdays (
pi_coach IN VARCHAR2
, pi_period_from IN DATE
, pi_weekdays IN VARCHAR2
, pi_period_time IN VARCHAR2
, pi_period_to IN DATE
)
AS
BEGIN
FOR i IN (SELECT TO_NUMBER(REGEXP_SUBSTR(pi_weekdays, '[^:]+', 1, LEVEL)) AS day
FROM dual
CONNECT BY TO_NUMBER(REGEXP_SUBSTR(pi_weekdays, '[^:]+', 1, LEVEL)) IS NOT NULL)
LOOP
save_dates(
pi_coach => pi_coach
, pi_period_from => pi_period_from
, pi_weekday => i.day
, pi_period_time => pi_period_time
, pi_period_to => pi_period_to
);
END LOOP;
END loop_through_weekdays;
PROCEDURE save_dates (
pi_coach IN VARCHAR2
, pi_period_from IN DATE
, pi_weekday IN NUMBER
, pi_period_time IN VARCHAR2
, pi_period_to IN DATE
)
AS
l_day DATE;
l_weekday_name VARCHAR2(30);
l_period_from_year NUMBER;
l_period_to_year NUMBER;
l_count_days NUMBER;
l_count_holidays NUMBER;
BEGIN
l_period_from_year := EXTRACT(year FROM pi_period_from);
l_period_to_year := EXTRACT(year FROM pi_period_to);
/* count all occurrences of a single weekday (e.g. Monday) within the period
* function next_day() depends on the invoking session's NLS_DATE_LANGUAGE and does not allow specifying NLS values to make it independent
* workaround: use trunc(sysdate, 'IW') to get the first day of the ISO week (Monday),
* then add pi_weekday days to identify the name of the selected weekday uniquely
*/
l_weekday_name := TO_CHAR(TRUNC(sysdate, 'IW') + pi_weekday, 'Day');
l_count_days := ( ( NEXT_DAY ( pi_period_to - 7, l_weekday_name )
- NEXT_DAY ( pi_period_from - 1, l_weekday_name )
) / 7
) + 1;
FOR rec_days IN 0..l_count_days-1
LOOP
/* identify the weekday (subtract 1 day to include the start date as well) and all following occurrences within the period by calculating weekday+7*rec_days */
l_day := ( NEXT_DAY ( pi_period_from - 1, l_weekday_name ) + 7 * rec_days );
SELECT COUNT(holiday_date) INTO l_count_holidays
FROM TABLE(get_german_holidays(l_period_from_year, l_period_to_year))
WHERE holiday_date = l_day;
IF l_count_holidays = 0 THEN
INSERT INTO appointments(appointment_date, booked, coach)
VALUES (TO_DATE(TO_CHAR(l_day, 'DD.MM.YYYY') || ' ' || pi_period_time, 'DD.MM.YYYY HH24:MI'), 0, pi_coach);
END IF;
END LOOP;
END save_dates;
END booking_dates_pkg;
/
The database function NEXT_DAY() returns the date of the first occurrence of the weekday named in the second parameter that follows the date in the first parameter. For example, NEXT_DAY('23-AUG-2020', 'SUNDAY') returns the following Sunday (August 30, 2020), even though August 23 is itself a Sunday.
The weekday parameter depends on the NLS_DATE_LANGUAGE of the calling session, while the function does not allow specifying NLS values directly. For this reason, calling NEXT_DAY('23-AUG-2020', 'SUNDAY') may raise the error “ORA-01846: not a valid day of the week” if a language other than English is set for NLS_DATE_LANGUAGE in the session. To use the function independently of the session language, we use TRUNC(sysdate, 'IW') as a workaround to get the first day of the week according to the ISO standard (Monday). To identify which weekdays were selected on the website, colon-separated values from 0 (= Monday) to 4 (= Friday) are passed as a varchar from the APEX page to the procedure loop_through_weekdays (e.g. '0:1:3' for Monday, Tuesday and Thursday); these offsets are added on top of Monday. Via TO_CHAR(TRUNC(sysdate, 'IW') + pi_weekday, 'Day') we then get the name of the selected weekday in the correct language of the session. After determining the first date of the weekday, all following occurrences can be identified with simple arithmetic in a FOR loop. In order for the start day itself to be taken into account, we pass the day before the start date: NEXT_DAY(pi_period_from - 1, l_weekday_name). The calculation NEXT_DAY(pi_period_from - 1, l_weekday_name) + 7 * rec_days then yields each further occurrence of the weekday with every run of the loop.
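The same date arithmetic can be sketched outside the database. The following JavaScript mirrors the logic of save_dates under the blog's weekday convention (0 = Monday … 6 = Sunday); the function names are illustrative, not part of any API:

```javascript
// Returns the first date strictly after `d` that falls on weekday `wd`
// (0 = Monday ... 6 = Sunday), analogous to NEXT_DAY().
function nextDay(d, wd) {
    const jsWd = (wd + 1) % 7;  // convert to JavaScript's 0 = Sunday
    const r = new Date(d);
    do { r.setUTCDate(r.getUTCDate() + 1); } while (r.getUTCDay() !== jsWd);
    return r;
}

// All occurrences of weekday `wd` between `from` and `to` inclusive:
// start from the day before `from` so the start date itself can match,
// then step forward in increments of 7 days.
function weekdayDates(from, to, wd) {
    const dates = [];
    const dayBefore = new Date(from);
    dayBefore.setUTCDate(dayBefore.getUTCDate() - 1);
    for (let d = nextDay(dayBefore, wd); d <= to; d.setUTCDate(d.getUTCDate() + 7)) {
        dates.push(new Date(d));
    }
    return dates;
}
```

For example, weekdayDates for weekday 0 (Monday) over August 2020 yields the five Mondays August 3, 10, 17, 24 and 31.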
Dates of public holidays should not be saved in the database. To check whether a date represents a public holiday, the function get_german_holidays() described in the previous blog post is used at this point.
Related posts:
How to Find the Next Business Day and Add or Subtract N Working Days with SQL (blogs.oracle.com)
Make NEXT_DAY() independent of the invoking session’s NLS_DATE_LANGUAGE (community.oracle.com)
Add optional NLS_DATE_LANGUAGE parameter to NEXT_DAY() (community.oracle.com)
In this blog post I would like to show how to calculate public holidays in Germany within a period of time using a table function in the Oracle database.
The calculation of the dates for the public holidays is based on the date of Easter (Easter Sunday). I have taken the following PL/SQL code from a post by Thomas Uhren on his blog “Oracle SQL und PL/SQL” and added and adapted dates to include further public holidays of the federal state of North Rhine-Westphalia in Germany.
create or replace function get_easter_day
(
pi_year in integer
)
return date
as
v_k integer;
v_m integer;
v_s integer;
v_a integer;
v_d integer;
v_r integer;
v_og integer;
v_sz integer;
v_oe integer;
v_os integer;
v_day integer;
v_month integer;
begin
v_k := floor(pi_year / 100);
v_m := 15 + floor((3 * v_k + 3) / 4) - floor((8 * v_k + 13) / 25);
v_s := 2 - floor((3 * v_k + 3) / 4);
v_a := mod(pi_year, 19);
v_d := mod((19 * v_a + v_m), 30);
v_r := floor(v_d / 29) + (floor(v_d / 28) - floor(v_d / 29)) * floor(v_a / 11);
v_og := 21 + v_d - v_r;
v_sz := 7 - mod((pi_year + floor(pi_year / 4) + v_s), 7);
v_oe := 7 - mod(v_og - v_sz, 7);
v_os := v_og + v_oe;
if (v_os <= 31) then
v_day := v_os;
v_month := 3;
else
v_day := v_os - 31;
v_month := 4;
end if;
return to_date(v_day || '.' || v_month || '.' || pi_year, 'DD.MM.YYYY');
end get_easter_day;
/
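For readers who want to follow the Gauss Easter computation step by step, here is a sketch of the same algorithm in JavaScript, kept term-by-term identical to get_easter_day above:

```javascript
// Gauss Easter algorithm, mirroring get_easter_day term by term.
function getEasterDay(year) {
    const k = Math.floor(year / 100);
    const m = 15 + Math.floor((3 * k + 3) / 4) - Math.floor((8 * k + 13) / 25);
    const s = 2 - Math.floor((3 * k + 3) / 4);
    const a = year % 19;
    const d = (19 * a + m) % 30;
    const r = Math.floor(d / 29) + (Math.floor(d / 28) - Math.floor(d / 29)) * Math.floor(a / 11);
    const og = 21 + d - r;                                   // spring full moon as a day number in March
    const sz = 7 - ((year + Math.floor(year / 4) + s) % 7);  // first Sunday in March
    const oe = 7 - ((og - sz) % 7);                          // offset from full moon to Easter Sunday
    const os = og + oe;                                      // Easter Sunday as a day number in March
    return os <= 31
        ? { day: os, month: 3 }
        : { day: os - 31, month: 4 };
}
```

As a quick check, this yields April 12 for 2020 and March 31 for 2024, matching the known Easter dates.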
Afterwards, the object types can be created for the table function:
create type holiday_t as object
(
holiday_date date,
holiday_name varchar2(30),
holiday_only_in varchar2(100),
holiday_description varchar2(100)
);
/
create type holiday_tab as table of holiday_t;
/
After that, the table function can be created:
create or replace function get_german_holidays
(
pi_year_start in integer,
pi_year_end in integer
)
return holiday_tab pipelined
as
v_easter_day date;
begin
for y in pi_year_start .. pi_year_end
loop
v_easter_day := get_easter_day(y);
pipe row (
holiday_t(
to_date('01.01.' || y, 'DD.MM.YYYY'),
'Neujahrstag',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
v_easter_day - interval '48' day,
'Rosenmontag',
'',
'Frei'));
pipe row (
holiday_t(
v_easter_day - interval '2' day,
'Karfreitag',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
v_easter_day + interval '1' day,
'Ostermontag',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
to_date('01.05.' || y, 'DD.MM.YYYY'),
'Tag der Arbeit',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
v_easter_day + interval '39' day,
'Christi Himmelfahrt',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
v_easter_day + interval '50' day,
'Pfingstmontag',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
v_easter_day + interval '60' day,
'Fronleichnam',
'BW, BY, HE, NW, RP',
'Feiertag in BW, BY, HE, NW, RP'));
pipe row (
holiday_t(
to_date('03.10.' || y, 'DD.MM.YYYY'),
'Tag der Deutschen Einheit',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
to_date('01.11.' || y, 'DD.MM.YYYY'),
'Allerheiligen',
'BW, BY, NW, RP und SL',
'Feiertag in BW, BY, NW, RP und SL'));
pipe row (
holiday_t(
to_date('24.12.' || y, 'DD.MM.YYYY'),
'Heiligabend',
'',
'Frei'));
pipe row (
holiday_t(
to_date('25.12.' || y, 'DD.MM.YYYY'),
'1. Weihnachtsfeiertag',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
to_date('26.12.' || y, 'DD.MM.YYYY'),
'2. Weihnachtsfeiertag',
'',
'Gesetzlicher Feiertag'));
pipe row (
holiday_t(
to_date('31.12.' || y, 'DD.MM.YYYY'),
'Silvester',
'',
'Frei'));
end loop;
return;
end get_german_holidays;
/
This script is also used in my next blog post PL/SQL: Identify dates of weekdays within time period using NEXT_DAY() function. There I explain how to use PL/SQL to dynamically determine days of the week.
Related posts:
Ermittlung von Feiertagen per Table Function (ora-sql-plsql.blogspot.com)
Welcome to my blog!
Here I write about topics in programming that interest me, mostly around web design and web development with HTML, CSS and JavaScript as well as Oracle APEX and PL/SQL.
Have fun reading. 🙂