
Documentum – Silent Install – D2


In previous blogs, we silently installed the Documentum binaries, a docbroker (plus licenses if needed) as well as several repositories. In this one, we will see how to install D2 for a predefined list of docbases/repositories (on the Content Server side) and you will see that, here, the process is quite different.

D2 has supported silent installation for quite some time now and it is pretty easy to do. At the end of the D2 GUI installer, there is a screen asking whether you want to generate a silent properties (response) file containing the information that was entered in the GUI installer. This is one way to start working with the silent installation, or you can just read this blog ;).

So, let’s start with the preparation of a template file. I will use a lot of placeholders in the template and replace their values with sed commands, as a quick look at how you can script a silent installation with a template configuration file and some properties prepared beforehand.

[dmadmin@content_server_01 ~]$ vi /tmp/dctm_install/D2_template.xml
[dmadmin@content_server_01 ~]$ cat /tmp/dctm_install/D2_template.xml
<?xml version="1.0" encoding="UTF-8"?>
<AutomatedInstallation langpack="eng">
  <com.izforge.izpack.panels.HTMLHelloPanel id="welcome"/>
  <com.izforge.izpack.panels.UserInputPanel id="SelectInstallOrMergeConfig">
    <userInput>
      <entry key="InstallD2" value="true"/>
      <entry key="MergeConfigs" value="false"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.HTMLInfoPanel id="readme"/>
  <com.izforge.izpack.panels.PacksPanel id="UNKNOWN (com.izforge.izpack.panels.PacksPanel)">
    <pack index="0" name="Installer files" selected="true"/>
    <pack index="1" name="D2" selected="###WAR_REQUIRED###"/>
    <pack index="2" name="D2-Config" selected="###WAR_REQUIRED###"/>
    <pack index="3" name="D2-API for Content Server/JMS" selected="true"/>
    <pack index="4" name="D2-API for BPM" selected="###BPM_REQUIRED###"/>
    <pack index="5" name="DAR" selected="###DAR_REQUIRED###"/>
  </com.izforge.izpack.panels.PacksPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.0">
    <userInput>
      <entry key="jboss5XCompliant" value="false"/>
      <entry key="webappsDir" value="###DOCUMENTUM###/D2-Install/war"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.2">
    <userInput>
      <entry key="pluginInstaller" value="###PLUGIN_LIST###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.3">
    <userInput>
      <entry key="csDir" value="###DOCUMENTUM###/D2-Install/D2-API"/>
      <entry key="bpmDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/bpm.ear"/>
      <entry key="jmsDir" value="###JMS_HOME###/server/DctmServer_MethodServer/deployments/ServerApps.ear"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.4">
    <userInput>
      <entry key="installationDir" value="###DOCUMENTUM###/D2-Install/DAR"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.5">
    <userInput>
      <entry key="dfsDir" value="/tmp/###DFS_SDK_PACKAGE###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.7">
    <userInput>
      <entry key="COMMON.USER_ACCOUNT" value="###INSTALL_OWNER###"/>
      <entry key="install.owner.password" value="###INSTALL_OWNER_PASSWD###"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.8">
    <userInput>
      <entry key="SERVER.REPOSITORIES.NAMES" value="###DOCBASE_LIST###"/>
      <entry key="setReturnRepeatingValue" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UserInputPanel.9">
    <userInput>
      <entry key="securityRadioSelection" value="true"/>
    </userInput>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPD2ConfigOrClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseUsetheSameDFC">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableDFCSessionPool">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDFCKeyStoreInfo">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetD2ConfigLanguage">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableD2BOCS">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetHideDomainforConfig">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTemporaryMaxFiles">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="10">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="11">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPChooseReferenceDFCForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPDocbrokerInfoForClient">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="12">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="13">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="14">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="15">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="16">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="17">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="18">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="19">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="20">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="21">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="22">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPSetTransferMode">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="24">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="25">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPEnableAuditing">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPchooseWebAppServer">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskWebappsDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.UserInputPanel id="UIPAskNewWarDir">
    <userInput/>
  </com.izforge.izpack.panels.UserInputPanel>
  <com.izforge.izpack.panels.InstallPanel id="UNKNOWN (com.izforge.izpack.panels.InstallPanel)"/>
  <com.izforge.izpack.panels.XInfoPanel id="UNKNOWN (com.izforge.izpack.panels.XInfoPanel)"/>
  <com.izforge.izpack.panels.FinishPanel id="UNKNOWN (com.izforge.izpack.panels.FinishPanel)"/>
</AutomatedInstallation>

[dmadmin@content_server_01 ~]$

 

As you probably understood by looking at the above file, I’m using “/tmp/” for the input elements needed by D2, like the DFS package, the D2 installer or the D2+Pack plugins, and I’m using “$DOCUMENTUM/D2-Install” as the output folder where D2 generates its files.

Once you have the template ready, you can replace the placeholders as follows (this is just an example of configuration based on the other silent blogs I wrote so far):

[dmadmin@content_server_01 ~]$ export d2_install_file=/tmp/dctm_install/D2.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cp /tmp/dctm_install/D2_template.xml ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###WAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###BPM_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###DAR_REQUIRED###,true," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DOCUMENTUM###,$DOCUMENTUM," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###PLUGIN_LIST###,/tmp/D2_pluspack_4.7.0.P18/Plugins/C2-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/D2-Bin-Install-4.7.0.jar;/tmp/D2_pluspack_4.7.0.P18/Plugins/O2-Install-4.7.0.jar;," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###JMS_HOME###,$DOCUMENTUM_SHARED/wildfly9.0.1," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s,###DFS_SDK_PACKAGE###,emc-dfs-sdk-7.3," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "  ----> Please enter the Install Owner's password: " dm_pw; echo; echo
  ----> Please enter the Install Owner's password: <TYPE HERE THE PASSWORD>
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER###,dmadmin," ${d2_install_file}
[dmadmin@content_server_01 ~]$ sed -i "s,###INSTALL_OWNER_PASSWD###,${dm_pw}," ${d2_install_file}
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i "s/###DOCBASE_LIST###/Docbase1/" ${d2_install_file}
[dmadmin@content_server_01 ~]$
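Before moving on, it can be worth checking that no placeholder was left behind in the generated file. This is just a small sanity check of my own, not something the installer requires:

[dmadmin@content_server_01 ~]$ grep "###" ${d2_install_file} && echo "ERROR: some placeholders are still present" || echo "OK: all placeholders replaced"
[dmadmin@content_server_01 ~]$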

 

A short description of these properties as well as some notes on the values used above:

  • langpack: The language you are usually using for running the installers… English is fine if you use this template
  • entry key=”InstallD2″: Whether or not you want to install D2
  • entry key=”MergeConfigs”: Whether or not you want to merge the actual configuration/installation with the new one. I’m always restarting a D2 installation from scratch (removing the D2 hidden files for that) so I always set this to false
  • pack index=”0″ name=”Installer files”: Always set this to true to install D2 on a CS
  • pack index=”1″ name=”D2″: Whether or not you want to generate the D2 WAR file. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • pack index=”2″ name=”D2-Config”: Same as above but for the D2-Config WAR file
  • pack index=”3″ name=”D2-API for Content Server/JMS”: Whether or not you want the D2 Installer to put the D2-specific libraries into the JMS lib folder (path defined in: entry key=”jmsDir”). Even if you set this to true, you will still need to manually put a lot of D2 libs into the JMS lib folder because D2 only puts a few of them but many more are required to run D2 properly (see the documentation for the full list)
  • pack index=”4″ name=”D2-API for BPM”: Same as above but for the BPM this time (path defined in: entry key=”bpmDir”)
  • pack index=”5″ name=”DAR”: Whether or not you want to generate the DARs. This is usually true for a “Primary” Content Server and can be set to false for other “Remote” CSs
  • entry key=”jboss5XCompliant”: I guess this is for the JBoss 5 support so if you are on Dctm 7.x, leave this as false
  • entry key=”webappsDir”: The path where the D2 Installer will put the generated WAR files. In this example, I set it to “$DOCUMENTUM/D2-Install/war”, so this folder MUST exist before running the installer in silent mode (see the preparation sketch after this list)
  • entry key=”pluginInstaller”: This one is a little bit trickier… It’s a semicolon-separated list of all the D2+Pack plugins you would like to install in addition to D2. In the above, I’m using the C2, D2-Bin as well as O2 plugins. The D2+Pack package must obviously be extracted BEFORE running the installer in silent mode and all the paths MUST exist (you will need to extract the plugin jar from each plugin zip file). I opened a few bug & enhancement requests for these so if you are facing an issue, let me know, I might be able to help you
  • entry key=”csDir”: The path the D2 Installer will put the generated libraries into. In this example, I set it to “$DOCUMENTUM/D2-Install/D2-API” so this folder MUST exist before running the installer in silent
  • entry key=”bpmDir”: The path the D2 Installer will put a few of the D2 libraries into for the BPM (it doesn’t cover all the needed JARs, and this parameter is obviously not needed if you set ###BPM_REQUIRED### to false)
  • entry key=”jmsDir”: Same as above but for the JMS this time
  • entry key=”installationDir”: The path the D2 Installer will put the generated DAR files into. In this example, I set it to “$DOCUMENTUM/D2-Install/DAR” so this folder MUST exist before running the installer in silent
  • entry key=”dfsDir”: The path where the DFS SDK can be found. The DFS SDK package MUST be extracted in this folder before running the installer in silent
  • entry key=”COMMON.USER_ACCOUNT”: The name of the Documentum Installation Owner
  • entry key=”install.owner.password”: The password of the Documentum Installation Owner. I used above a “read -s” command so it doesn’t appear on the command line, but it will be put in clear text in the xml file…
  • entry key=”SERVER.REPOSITORIES.NAMES”: A comma-separated list (without spaces) of all docbases/repositories that need to be configured for D2. The DARs will be installed automatically on these docbases/repositories and, if you want to do it properly, it mustn’t contain the GR. You could potentially add the GR in this parameter but all D2 DARs would then be installed into the GR and this isn’t needed… Only “D2-DAR.dar” and “Collaboration_Services.dar” need to be installed on the GR, so I only add normal docbases/repositories in this parameter and, once D2 is installed, I manually deploy these two DARs into the GR (I wrote a blog about deploying DARs easily to a docbase a few years ago if you are interested). So, here I have a value of “Docbase1” but if you had two, you could set it to “Docbase1,Docbase2”
  • entry key=”setReturnRepeatingValue”: Whether or not you want the repeating values. A value of true should set the “return_top_results_row_based=false” in the server.ini
  • entry key=”securityRadioSelection”: A value of true means that D2 has to apply Security Rules to content BEFORE applying AutoLink, and a value of false means that D2 can only do it AFTER
  • That’s the end of this file because I’m using D2 4.7 and, in D2 4.7, there is no Lockbox anymore! If you are using a previous D2 version, you will need to add additional parameters for the D2 Lockbox generation, location, password, and so on
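As mentioned in the descriptions above, the installer does not create the output folders for you. A small preparation step like the following (assuming the example paths used in this blog, including the temporary folder referenced by the install command later on) avoids a failed run:

[dmadmin@content_server_01 ~]$ mkdir -p $DOCUMENTUM/D2-Install/war $DOCUMENTUM/D2-Install/D2-API $DOCUMENTUM/D2-Install/DAR $DOCUMENTUM/D2-Install/tmp
[dmadmin@content_server_01 ~]$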

 

Once the properties file is ready, you can launch the D2 silent installation using the following command:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -DTRACE=true -DDEBUG=true -Djava.io.tmpdir=$DOCUMENTUM/D2-Install/tmp -jar /tmp/D2_4.7.0_P18/D2-Installer-4.7.0.jar ${d2_install_file}
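Once the installer completes (and once the two DARs have been deployed manually on the GR, as mentioned above), a possible sanity check is to list the DAR records present in the target docbase/repository. This is just a quick verification I like to run, assuming dmadmin can connect to Docbase1:

[dmadmin@content_server_01 ~]$ idql Docbase1 -Udmadmin -P${dm_pw} <<EOF
select object_name, r_creation_date from dmc_dar order by object_name;
go
EOF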

 

You now know how to install D2 on a Content Server using the silent installation provided by D2. As you saw above, it is quite different compared to the silent installation of the other Documentum components, but it works, so… Maybe at some point in the future, D2 will switch to the same kind of properties file as Documentum.

 



Documentum – Silent Install – Remote Docbases/Repositories (HA)


In previous blogs, we silently installed the Documentum binaries, a docbroker (plus licenses if needed), several repositories and finally D2. In this one, we will see how to install remote docbases/repositories in order to have a High Availability environment for the docbases/repositories that we already installed.

As mentioned in the first blog of this series, there is a utility under “$DM_HOME/install/silent/silenttool” that can be used to generate a skeleton for a CFS/Remote CS, but some parameters are still missing, so I will describe them in this blog.

In this blog, I will also configure the Global Repository (GR) in HA so that you have it available even if the first node fails… This is particularly important if, like me, you prefer to set the GR as the crypto repository (so it is the repository used for encryption/decryption).

 

1. Documentum Remote Global Registry repository installation

The properties file for a Remote GR installation is as follows (it assumes that you already have the binaries and a docbroker installed on this Remote CS):

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_GR.properties
### Silent installation response file for a Remote Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.PRIMARY_SERVER_CONFIG_NAME=gr_docbase
CFS_SERVER_CONFIG_NAME=content_server_02_gr_docbase
SERVER.DOCBASE_SERVICE_NAME=gr_docbase
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED=true
SERVER.PROJECTED_DOCBROKER_HOST_OTHER=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT_OTHER=1489
SERVER.GLOBAL_REGISTRY_REPOSITORY=gr_docbase
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$

 

Just like in the previous blog, I will let you set the DATA and SHARE folders as you see fit.

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.COMPONENT_ACTION: The action to be executed; it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment silently even if the source version doesn’t support silent installation/upgrade, as long as the target version (CS 7.3, CS 16.4, …) does
  • common.aek.passphrase.password: The password used for the AEK on the Primary CS
  • common.aek.key.name: The name of the AEK key used on the Primary CS. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you used a Lockbox to protect the AEK key on the Primary CS. If set to true, the lockbox will be downloaded from the Primary CS, that’s why you don’t need the “common.use.existing.aek.lockbox” property
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox used on the Primary CS. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password used for the Lockbox on the Primary CS
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository created on the Primary CS (dm_docbase_config.object_name)
  • SERVER.PRIMARY_SERVER_CONFIG_NAME: The name of the dm_server_config object created on the Primary CS
  • CFS_SERVER_CONFIG_NAME: The name of dm_server_config object to be created for this Remote CS
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used
  • SERVER.REPOSITORY_USERNAME: The name of the Installation Owner. I believe it can be any superuser account but I didn’t test it
  • SERVER.SECURE.REPOSITORY_PASSWORD: The password of the above account
  • SERVER.REPOSITORY_USER_DOMAIN: The domain of the above account. If using an inline user like the Installation Owner, you should keep it empty
  • SERVER.REPOSITORY_USERNAME_WITH_DOMAIN: Same value as the REPOSITORY_USERNAME if the USER_DOMAIN is kept empty
  • SERVER.REPOSITORY_HOSTNAME: The Fully Qualified Domain Name of the Primary CS
  • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository (it goes with the SERVER.CONNECT_MODE). If you set this to true, you will have to add the usual additional parameters, just like for the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_HOST: The Fully Qualified Domain Name of the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_PORT: The port used by the docbroker/connection broker on the Primary CS
  • SERVER.PROJECTED_CONNECTION_BROKER_HOST: The hostname to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to by default
  • SERVER.PROJECTED_CONNECTION_BROKER_PORT: The port to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to by default
  • SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED: Whether or not you want to validate the GR on the Primary CS. I always set this to true for the first docbase/repository installed on the Remote CS (in other words: for the GR installation). If you set this to true, you will have to provide some additional parameters:
    • SERVER.PROJECTED_DOCBROKER_HOST_OTHER: The Fully Qualified Domain Name of the docbroker/connection broker that the GR on the Primary CS projects to so this is usually the Primary CS…
    • SERVER.PROJECTED_DOCBROKER_PORT_OTHER: The port of the docbroker/connection broker that the GR on the Primary CS projects to
    • SERVER.GLOBAL_REGISTRY_REPOSITORY: The name of the GR repository
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account created on the Primary CS inside the GR repository. This is usually something like “dm_bof_registry”
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password used by the BOF Registry account

 

Once the properties file is ready, first make sure the gr_docbase is running on the “Primary” CS (content_server_01) and then start the CFS installer using the following commands:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap gr_docbase
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase gr_docbase has 1 server:
--------------------------------------------
  server name         :  gr_docbase
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  12345
  last ckpt time      :  6/12/2018 14:23:35
  next ckpt time      :  6/12/2018 14:28:35
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010101
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_GR.properties

 

Don’t forget to check the logs once done to make sure it went without issue!
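Once the installation is done, the same dmqdocbroker command used before can also be pointed at the local docbroker/connection broker (the one the new server projects to) to confirm that gr_docbase now shows the new content_server_02_gr_docbase server as well:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_02.dbi-services.com -p 1489 -c getservermap gr_docbase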

 

2. Other Remote repository installation

Once you have the Remote Global Registry repository installed, you can install the Remote repository that will be used by the end users (which therefore isn’t a GR). The properties file for an additional remote repository is as follows:

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_Other.properties
### Silent installation response file for a Remote Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.PRIMARY_SERVER_CONFIG_NAME=Docbase1
CFS_SERVER_CONFIG_NAME=content_server_02_Docbase1
SERVER.DOCBASE_SERVICE_NAME=Docbase1
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$

 

I won’t list all these parameters again because, as you can see above, it is exactly the same except for the docbase/repository name; only the last section regarding the GR validation isn’t needed anymore. Once the properties file is ready, you can install the additional remote repository in the same way:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap Docbase1
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase Docbase1 has 1 server:
--------------------------------------------
  server name         :  Docbase1
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  23456
  last ckpt time      :  6/12/2018 14:46:42
  next ckpt time      :  6/12/2018 14:51:42
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010102
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_Other.properties

 

At this point, you will have the dm_server_config object created for the second docbase/repository, but that’s pretty much all you get… For a correct/working HA solution, you will still need to configure the jobs for HA support (is_restartable, method_verb, …), maybe change the checkpoint_interval, configure the projections, trust the needed DFC clients (JMS applications), and so on. A rough sketch of what the job part could look like is shown below.
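As an illustration only (the exact jobs and values depend on your HA design), this kind of job configuration can be done through DQL. Here I take dm_ConsistencyChecker as an arbitrary example and allow it to run on any available server by blanking its target_server:

[dmadmin@content_server_02 ~]$ idql Docbase1 -Udmadmin -Pdm4dm1nP4ssw0rd <<EOF
select object_name, target_server, is_restartable from dm_job where object_name like 'dm_%' order by object_name;
go
update dm_job objects set is_restartable = 1 where object_name = 'dm_ConsistencyChecker';
go
update dm_job objects set target_server = ' ' where object_name = 'dm_ConsistencyChecker';
go
EOF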

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Remote” Content Server (CFS) using the silent installation provided by Documentum.

 


Documentum – Silent Install – xPlore binaries & Dsearch


In previous blogs, we silently installed the Documentum binaries (CS), a docbroker (plus licenses if needed), several repositories (here and here) and finally D2. I believe I only have 2 blogs left and they are both related to xPlore. In this one, we will see how to install the xPlore binaries as well as configure a first instance (a Dsearch here) on top of them.

Just like other Documentum components, you can find some silent installation files or at least a template for the xPlore part. On the Full Text side, it is actually easier to find these silent files because they are included directly into the tar installation package so you will be able to find the following files as soon as you extract the package (xPlore 1.6):

  • installXplore.properties: Contains the template to install the FT binaries
  • configXplore.properties: Contains the template to install a FT Dsearch (primary, secondary) or a CPS only
  • configIA.properties: Contains the template to install a FT IndexAgent

 

In addition to that, and contrary to most Documentum components, you can actually find documentation about most of the xPlore silent parameters, so if you have questions, you can check the documentation.

 

1. Documentum xPlore binaries installation

The properties file for the xPlore binaries installation is really simple:

[xplore@full_text_server_01 ~]$ cd /tmp/xplore_install/
[xplore@full_text_server_01 xplore_install]$ tar -xvf xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ chmod 750 setup.bin
[xplore@full_text_server_01 xplore_install]$ rm xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ ls *.properties
configIA.properties  configXplore.properties  installXplore.properties
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ vi FT_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Installation.properties
### Silent installation response file for FT binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
SMTP_HOST=localhost
ADMINISTRATOR_EMAIL_ADDRESS=xplore@full_text_server_01.dbi-services.com

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you want to install xPlore on. This will be the base folder under which the binaries will be installed. I put here /opt/xPlore but you can use whatever you want
  • SMTP_HOST: The host to target for the SMTP (emails)
  • ADMINISTRATOR_EMAIL_ADDRESS: The email address to be used for the watchdog. If you do not specify the SMTP_HOST and ADMINISTRATOR_EMAIL_ADDRESS properties, the watchdog configuration will end up with a non-fatal error, meaning that the binaries installation will still work without issue but you will have to add these values manually for the watchdog if you want to use it. If you don’t want to use it, you can go ahead without: the Dsearch and IndexAgents will work properly, but obviously you are losing the features that the watchdog brings

 

Once the properties file is ready, you can install the Documentum xPlore binaries in silent mode using the following command:

[xplore@full_text_server_01 xplore_install]$ ./setup.bin -f FT_Installation.properties

 

2. Documentum xPlore Dsearch installation

I will use the word “Dsearch” a lot below, but this section can actually be used to install any instance type: Primary Dsearch, Secondary Dsearch or even a CPS only. Once you have the binaries installed, you can install a first Dsearch (usually named PrimaryDsearch or PrimaryEss) that will be used for the Full Text indexing. The properties file for this component is as follows:

[xplore@full_text_server_01 xplore_install]$ vi FT_Dsearch_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Dsearch_Installation.properties
### Silent installation response file for Dsearch
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
ess.configMode.primary=1
ess.configMode.secondary=0
ess.configMode.upgrade=0
ess.configMode.delete=0
ess.configMode.cpsonly=0

### Other configurations
ess.primary=true
ess.sparenode=0

ess.data_dir=/opt/xPlore/data
ess.config_dir=/opt/xPlore/config

ess.primary_host=full_text_server_01.dbi-services.com
ess.primary_port=9300
ess.xdb-primary-listener-host=full_text_server_01.dbi-services.com
ess.xdb-primary-listener-port=9330
ess.transaction_log_dir=/opt/xPlore/config/wal/primary

ess.name=PrimaryDsearch
ess.FQDN=full_text_server_01.dbi-services.com

ess.instance.password=ds34rchAdm1nP4ssw0rd
ess.instance.port=9300

ess.ess.active=true
ess.cps.active=false
ess.essAdmin.active=true

ess.xdb-listener-port=9330
ess.admin-rmi-port=9331
ess.cps-daemon-port=9321
ess.cps-daemon-local-port=9322

common.installOwner.password=ds34rchAdm1nP4ssw0rd
admin.username=admin
admin.password=ds34rchAdm1nP4ssw0rd

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether or not the system supports a 64 bits architecture
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • ess.configMode.primary: Whether or not you want to install a Primary Dsearch (binary value)
  • ess.configMode.secondary: Whether or not you want to install a Secondary Dsearch (binary value)
  • ess.configMode.upgrade: Whether or not you want to upgrade an instance (binary value)
  • ess.configMode.delete: Whether or not you want to delete an instance (binary value)
  • ess.configMode.cpsonly: Whether or not you want to install a CPS only and not a Primary/Secondary Dsearch (binary value)
  • ess.primary: Whether or not this instance is a primary instance (set this to true if installing a primary instance)
  • ess.sparenode: Whether or not the secondary instance is to be used as a spare node. This should be set to 1 only if “ess.configMode.secondary=1″ and you want it to be a spare node only
  • ess.data_dir: The path to be used to contain the instance data. For a single-node, this is usually /opt/xPlore/data and for a multi-node, it needs to be a shared folder between the different nodes of the multi-node
  • ess.config_dir: Same as “ess.data_dir” but for the config folder
  • ess.primary_host: The Fully Qualified Domain Name of the primary Dsearch this new instance will be linked to. Here we are installing a Primary Dsearch so it is the local host
  • ess.primary_port: The port that the primary Dsearch is/will be using
  • ess.xdb-primary-listener-host: The Fully Qualified Domain Name of the host where the xDB has been installed on for the primary Dsearch. This is usually the same value as “ess.primary_host”
  • ess.xdb-primary-listener-port: The port that the xDB is/will be using for the primary Dsearch. This is usually the value of “ess.primary_port” + 30
  • ess.transaction_log_dir: The path to be used to store the xDB transaction logs. This is usually under the “ess.config_dir” folder (E.g.: /opt/xPlore/config/wal/primary)
  • ess.name: The name of the instance to be installed. For a primary Dsearch, it is usually something like PrimaryDsearch
  • ess.FQDN: The Fully Qualified Domain Name of the current host the instance is being installed on
  • ess.instance.password: The password to be used for the new instance (xDB Administrator & superuser). Using the GUI installer, you can only set 1 password and it will be used for everything (JBoss admin, xDB Administrator, xDB superuser). In silent, you can separate them a little bit, if you want to
  • ess.instance.port: The port of the instance to be installed. For a primary Dsearch, it is usually 9300
  • ess.ess.active: Whether or not you want to enable/deploy the Dsearch (set this to true if installing a primary or secondary instance)
  • ess.cps.active: Whether or not you want to enable/deploy the CPS (already included in the Dsearch so set this to true only if installing a CPS Only)
  • ess.essAdmin.active: Whether or not you want to enable/deploy the Dsearch Admin
  • ess.xdb-listener-port: The port to be used by the xDB for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 30
  • ess.admin-rmi-port: The port to be used by the RMI for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 31
  • ess.cps-daemon-port: I’m not sure what this is used for because the correct port for the CPS daemon0 (on a primary Dsearch) is the next parameter but I know that the default value for this is usually “ess.instance.port” + 21. It is possible that this parameter is only used in case the new instance is a CPS Only because this port (instance port + 21) is used on a CPS Only host as Daemon0 so it would make sense… To be confirmed!
  • ess.cps-daemon-local-port: The port to be used by the CPS daemon0 for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 22. You need a few ports available after this one in case you are going to have several CPS daemons (9322, 9323, 9324, …)
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because, on Linux, I always set a dummy password and there is no issue
  • admin.username: The name of the JBoss instance admin account to be created
  • admin.password: The password of the above-mentioned account

 

Once the properties file is ready, you can install the Documentum xPlore instance in silent mode using the following command:

[xplore@full_text_server_01 xplore_install]$ /opt/xPlore/setup/dsearch/dsearchConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f FT_Dsearch_Installation.properties
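Once the configuration is done and the PrimaryDsearch has been started, a quick reachability check of the Dsearch Admin can be done with curl. This assumes the default HTTP listener on the instance port used above; adapt the protocol and port if you secured it:

[xplore@full_text_server_01 xplore_install]$ curl -s -o /dev/null -w "%{http_code}\n" http://full_text_server_01.dbi-services.com:9300/dsearchadmin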

 

You now know how to install the Full Text binaries and a first instance on top of it using the silent installation provided by Documentum.

 


Documentum – Silent Install – xPlore IndexAgent


In previous blogs, we silently installed the Documentum binaries (CS), a docbroker (plus licenses if needed), several repositories (here and here), D2 and finally the xPlore binaries & Dsearch. This blog will be the last one of this series about silent installation on Documentum and it will show how to install an xPlore IndexAgent for the existing docbase/repository created previously.

So let’s start, as always, with the preparation of the properties file:

[xplore@full_text_server_01 ~]$ vi /tmp/xplore_install/FT_IA_Installation.properties
[xplore@full_text_server_01 ~]$ cat /tmp/xplore_install/FT_IA_Installation.properties
### Silent installation response file for Indexagent
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
indexagent.configMode.create=1
indexagent.configMode.upgrade=0
indexagent.configMode.delete=0
indexagent.configMode.create.migration=0

### Other configurations
indexagent.ess.host=full_text_server_01.dbi-services.com
indexagent.ess.port=9300

indexagent.name=Indexagent_Docbase1
indexagent.FQDN=full_text_server_01.dbi-services.com
indexagent.instance.port=9200
indexagent.instance.password=ind3x4g3ntAdm1nP4ssw0rd

indexagent.docbase.name=Docbase1
indexagent.docbase.user=dmadmin
indexagent.docbase.password=dm4dm1nP4ssw0rd

indexagent.connectionBroker.host=content_server_01.dbi-services.com
indexagent.connectionBroker.port=1489

indexagent.globalRegistryRepository.name=gr_docbase
indexagent.globalRegistryRepository.user=dm_bof_registry
indexagent.globalRegistryRepository.password=dm_b0f_reg1s7ryP4ssw0rd

indexagent.storage.name=default
indexagent.local_content_area=/opt/xPlore/wildfly9.0.1/server/DctmServer_Indexagent_Docbase1/data/Indexagent_Docbase1/export

common.installOwner.password=ind3x4g3ntAdm1nP4ssw0rd

[xplore@full_text_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether the Java mentioned below is a 32- or 64-bit version
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • indexagent.configMode.create: Whether or not you want to install an IndexAgent (binary value)
  • indexagent.configMode.upgrade: Whether or not you want to upgrade an IndexAgent (binary value)
  • indexagent.configMode.delete: Whether or not you want to delete an IndexAgent (binary value)
  • indexagent.configMode.create.migration: This isn’t used anymore in recent installer versions but I still don’t know what its purpose was before… In any case, set this to 0 ;)
  • indexagent.ess.host: The Fully Qualified Domain Name of the primary Dsearch this new IndexAgent will be linked to
  • indexagent.ess.port: The port that the primary Dsearch is using
  • indexagent.name: The name of the IndexAgent to be installed. The default name is usually Indexagent_<docbase_name>
  • indexagent.FQDN: The Fully Qualified Domain Name of the current host the IndexAgent is being installed on
  • indexagent.instance.port: The port that the IndexAgent is/will be using (HTTP)
  • indexagent.instance.password: The password to be used for the new IndexAgent JBoss admin
  • indexagent.docbase.name: The name of the docbase/repository that this IndexAgent is being installed for
  • indexagent.docbase.user: The name of an account on the target docbase/repository to be used to configure the objects (updating the dm_server_config, dm_ftindex_agent_config, aso…) and that has the needed permissions for that
  • indexagent.docbase.password: The password of the above-mentioned account
  • indexagent.connectionBroker.host: The Fully Qualified Domain Name of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.connectionBroker.port: The port of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.globalRegistryRepository.name: The name of the GR repository
  • indexagent.globalRegistryRepository.user: The name of the BOF Registry account created on the CS inside the GR repository. This is usually something like “dm_bof_registry”
  • indexagent.globalRegistryRepository.password: The password used by the BOF Registry account
  • indexagent.storage.name: The name of the storage location to be created. The default one is “default”. If you intend to create new collections, you might want to give it a more meaningful name
  • indexagent.local_content_area: The path to be used to store the content temporarily on the file system. The value I used above is the default one but you can put it wherever you want. If you are using a multi-node, this path needs to be accessible from all nodes of the multi-node so you can put it under the “ess.data_dir” folder for example
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because, on Linux, I always set a dummy password and there is no issue

 

Once the properties file is ready, make sure that the Dsearch this IndexAgent is linked to is currently running (http(s)://<indexagent.ess.host>:<indexagent.ess.port>/dsearchadmin), make sure that the Global Registry repository (gr_docbase) as well as the target repository (Docbase1) are running, and then you can install the Documentum IndexAgent in silent mode using the following command:

[xplore@full_text_server_01 ~]$ /opt/xPlore/setup/indexagent/iaConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f /tmp/xplore_install/FT_IA_Installation.properties
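After the installation (and once the IndexAgent has been started), a similar quick check can be done against the IndexAgent UI, assuming the default HTTP listener on the port configured above:

[xplore@full_text_server_01 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://full_text_server_01.dbi-services.com:9200/IndexAgent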

 

This now concludes the series about Documentum silent installation. There are other components that support silent installation, like the Process Engine for example, but they usually require only a few parameters (or even none), which is why I’m not including them here.

 


Documentum – Checking warnings & errors from an xPlore full re-index


When working with xPlore as a Full Text Server (indexing), there are a few ways to perform a full re-index. You can potentially do it from the IndexAgent UI, from the Dsearch UI, from the file system (with an ids.txt file for example, which is usually for a “small” number of r_object_ids, so that’s probably not an ideal way) or from the docbase (mass-queue, which isn’t really a good way to do it either). Performing a full re-index from the xPlore Server directly will be faster because you remove a few layers where the Content Server asks for an index (the index queues) and expects an answer/result. That’s why, in this blog, I will only talk about a full re-index performed from the xPlore Server directly, and below I will use a full re-index from the IndexAgent UI. In each of these cases, there might be a few warnings or errors along the re-index, some of which might be normal (password protected file) and some others might not (timeout because xPlore is heavily loaded).

The whole purpose of this blog is to show you how you can check these warnings/errors because there is no information about them directly displayed in the UI; you need to go find that information manually. These warnings/errors aren’t shown in the index queues since they weren’t triggered from the docbase but from the xPlore Server directly.

So first of all, you need to trigger a re-index using the IndexAgent:

  • Open the IndexAgent UI (https://<hostname>:<ia_port>/IndexAgent)
  • Login with the installation owner’s account
  • Stop the IndexAgent if it is currently running in Normal mode and then launch a re-index operation

It should look like that (for xPlore 1.6):
[Screenshot: IndexAgent UI showing the re-index progress (xPlore 1.6)]

In the above screenshot, the green represents the success count and the blue the filtered count. Once completed and as shown above, you might have a few warnings/errors but, as I mentioned previously, you don’t have any information about them. To narrow down and facilitate the check of the warnings/errors, you need to know (approximately) the start and end time of the re-index operation: 2018-06-12 11:55 UTC to 2018-06-12 12:05 UTC for the above example. From that point, the analysis of the warnings/errors can be done in two main ways:

 

1. Using the Dsearch Admin

I will start with the way that most of you probably already know: using the Dsearch reports to see the errors/warnings. That’s not the fastest way, and clearly not the most fun either, but it is an easy way for sure…

Accessing the reports from the Dsearch Admin:

  • Open the Dsearch Admin UI (https://<hostname>:<ds_port>/dsearchadmin)
  • Login with the admin account (or any other valid account with xPlore 1.6+)
  • Navigate to: Home > Diagnostic and Utilities > Reports
  • Select the “Document Processing Error Summary” report and set the following:
    • Start from: 2018-06-12 11:55
    • To: 2018-06-12 12:05
    • Domain name (optional): leave empty if you only have one IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
  • Click on Run to get the report

At this point, you will have a report with the number of warnings/errors per type, meaning that you do not have any information about the documents yet; you only know the number of errors for each of the pre-defined error types (= error codes). For the above example, I had 8 warnings once the re-index was completed and I could see them all (seven warnings for ‘777’ and one warning for ‘770’):
[Screenshot: Document Processing Error Summary report showing the warning counts per Error Code]

Based on the information from this “Document Processing Error Summary” report, you can go deeper and find the details about the documents, but you can only do it for one type, one Error Code, at a time. Therefore, you will have to loop over all the Error Codes returned:

  • For each Error Code:
    • Select the “Document Processing Error Detail” report and set the following:
      • Start from: 2018-06-12 11:55
      • To: 2018-06-12 12:05
      • Domain name (optional): leave empty if you only have 1 IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
      • Processing Error Code: Select the Error Code you want to see (either 777 or 770 in my case)
      • Number of Results to Display: Set here the number of items you want to display, 10, 20, …
    • Click on Run to get the report

And there you finally have the details about the documents with warnings/errors that weren’t indexed properly because of the Error Code you chose. In my case, I selected 770 so I have only 1 document:
[Screenshot: Document Processing Error Detail report showing the document(s) for Error Code 770]

You can export this list to Excel if you want, to do some processing on these items for example, but you will need to do it for all Error Codes and then merge the results.

 

2. Using the logs

In the above example, I used the IndexAgent to perform the re-index, so I will use the IndexAgent logs to find out what happened exactly. This section is really the main purpose of this blog because I assume that most people are already using the Dsearch Admin reports but probably not the logs! If you want to script the check of warnings/errors after a re-index, or just if you want to play and have fun while doing your job, then this is what you need ;).

So let’s start simple: listing all errors and warnings and keeping only the lines that contain an r_object_id.

[xplore@full_text_server_01 ~]$ cd $JBOSS_HOME/server/DctmServer_Indexagent_DocBase1/logs/
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

Indexagent_DocBase1.log:2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGNT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete btch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

As you can see above, there is also one queue item (1b0f1234501327f0) listed because I kept everything that is 16 characters long with 0-9 or a-z. If you want, you can instead select only the r_object_id starting with 09 to get all dm_documents (using this: “[ (<]09[0-9a-z]{14}[>) ]”, as shown below) or you can just remove the r_object_id starting with 1b, which are the queue items.
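For example, a variant keeping only the dm_document objects (r_object_id starting with 09) might look like this:

egrep -i "err|warn" Indexagent_*.log* \
  | egrep --color "[ (<]09[0-9a-z]{14}[>) ]"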

In the above example, all the results are in the timeframe I expected them to be, but it is possible that there are older or newer warnings/errors, so you might want to apply another filter on the date. Since I want everything from 11:55 to 12:05 on the 12-Jun-2018, this is how I can do it (removing the log file name too) using a time regex:

[xplore@full_text_server_01 logs]$ time_regex="2018-06-12 11:5[5-9]|2018-06-12 12:0[0-5]"
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGNT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete btch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the messages for each of these warnings/errors:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,^[^]]*],,' \
                                   | sort -u

[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the r_object_id (to resubmit them via the ids.txt for example):

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b"

090f12345007f40e
090f1234500aa9f6
090f1234500aaa97
090f1234500aaa98
090f1234500aaa99
090f1234500aaa9a
090f1234500aaa9b
090f1234500aaa9d
[xplore@full_text_server_01 logs]$
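If you want to keep these IDs for a later resubmission (through the IndexAgent ids file, for example), a simple option is to redirect the same pipeline to a file; a minimal sketch, the target path being an assumption:

egrep -i "err|warn" Indexagent_*.log* \
  | sed 's,^[^:]*:,,' \
  | egrep "${time_regex}" \
  | egrep "[ (<][0-9a-z]{16}[>) ]" \
  | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
  | sort -u \
  | grep -v "^1b" > /tmp/ids.txt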

 

If you want to generate the iapi commands to resubmit them all:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b" \
                                   | sed 's/.*/queue,c,&,dm_fulltext_index_user/'

queue,c,090f12345007f40e,dm_fulltext_index_user
queue,c,090f1234500aa9f6,dm_fulltext_index_user
queue,c,090f1234500aaa97,dm_fulltext_index_user
queue,c,090f1234500aaa98,dm_fulltext_index_user
queue,c,090f1234500aaa99,dm_fulltext_index_user
queue,c,090f1234500aaa9a,dm_fulltext_index_user
queue,c,090f1234500aaa9b,dm_fulltext_index_user
queue,c,090f1234500aaa9d,dm_fulltext_index_user
[xplore@full_text_server_01 logs]$
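To actually execute these generated commands, one approach (a sketch only: the file path, repository name and credentials are assumptions, and iapi is usually run from a Content Server host) is to append '> /tmp/resubmit.api' to the previous command to store its output in a file, and then replay that file with iapi:

iapi DocBase1 -Udmadmin -P<dmadmin_password> -R/tmp/resubmit.api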

 

Finally, to group the warnings/errors per types:

[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `egrep -i "err|warn" Indexagent_*.log* \
                                     | sed 's,^[^:]*:,,' \
                                     | egrep "${time_regex}" \
                                     | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                     | sed 's,^[^]]*],,' \
                                     | sort -u \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     egrep -i "err|warn" Indexagent_*.log* \
                                       | sed 's,^[^:]*:,,' \
                                       | egrep "${time_regex}" \
                                       | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                       | sed 's,^[^]]*],,' \
                                       | sort -u \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ # Or to shorten a little bit the loop command:
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed 's,^[^:]*:,,'
                                   | egrep "${time_regex}"
                                   | egrep "[ (<][0-9a-z]{16}[>) ]"
                                   | sed 's,^[^]]*],,'
                                   | sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$

 

So the above was a very simple example where a full reindex took only a few minutes because it is a very small repository. But what about a full reindex that takes days because there are several million documents? Well, the truth is that checking the logs might actually surprise you because it is usually more accurate than checking the Dsearch Admin. Yes, I said more accurate!

 

3. Accuracy of the Dsearch Admin vs the Logs

Let’s take another example with a repository containing a few TB of documents. A full re-index took 2.5 days to complete and in the commands below, I will check the status of the indexing for the 1st day: from 2018-09-19 07:00:00 UTC to 2018-09-20 06:59:59 UTC. Here is what the Dsearch Admin is giving you:

(screenshot: Dsearch Admin error summary report)

So based on this, you would expect 1 230 + 63 + 51 = 1 344 warnings/errors. So what about the logs then? I included below the DM_INDEX_AGENT_REINDEX_BATCH entries, which are the “1b” object_id (item_id) I was talking about earlier, but these aren't document indexing entries, they are just batches:

[xplore@full_text_server_01 logs]$ time_regex="2018-09-19 0[7-9]|2018-09-19 [1-2][0-9]|2018-09-20 0[0-6]"
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed 's,^[^:]*:,,'
                                   | egrep "${time_regex}"
                                   | egrep "[ (<][0-9a-z]{16}[>) ]"
                                   | sed 's,^[^]]*],,'
                                   | sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Number of warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}" \
                                       | wc -l;
                                     echo;
                                   done

  --  Number of warnings/errors with the following messages: [Corrupt file]
51

  --  Number of warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
293

  --  Number of warnings/errors with the following messages: [DM_STORAGE_E_BAD_TICKET]
7

  --  Number of warnings/errors with the following messages: [Password-protected or encrypted file]
63

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction]
5

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 18, native msg: unknown error)]
1

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 257, native msg: handle is invalid)]
1053

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 30, native msg: out of memory)]
14

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 65534, native msg: unknown error)]
157

[xplore@full_text_server_01 logs]$

 

As you can see above, there is more granularity regarding the types of errors from the logs. Here are some key points in the comparison between the logs and the Dsearch Admin:

  1. In the Dsearch Admin, all messages that start with “Unknown error during text extraction” are considered as a single error type (N° 1023). Therefore, from the logs, you can add all of them up: 5 + 1 + 1 053 + 14 + 157 = 1 230 to find the same number that was mentioned in the Dsearch Admin. You cannot separate them in the Dsearch Admin Error Summary report; it is only in the Error Details report that you will see the full message and can then separate them, kind of…
  2. You find exactly the same number of “Password-protected or encrypted file” (63) and “Corrupt file” (51) warnings/errors in the logs and in the Dsearch Admin, so no differences here
  3. You can see 7 “DM_STORAGE_E_BAD_TICKET” warnings/errors in the logs but none in the Dsearch Admin… Why is that? That's because the Dsearch Admin does not have any Error Code for them, so these errors aren't shown (see the quick check below)!
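Since the command variable defined just above still holds the filtered log extraction, a quick way to count these from the logs might be:

# count the warnings/errors that the Dsearch Admin cannot report
eval ${command} | grep -c "DM_STORAGE_E_BAD_TICKET"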

So like I was saying at the beginning of this blog, using the Dsearch Admin is very easy but it's not much fun and you might actually miss some information, while checking the logs is fun and you are sure that you won't miss anything (these 7 DM_STORAGE_E_BAD_TICKET errors, for example)!

 

You could just as easily do the same thing in perl or using awk, that's just a question of preference… Anyway, you get the idea: working with the logs lets you do pretty much whatever you want, but it obviously requires some Linux/scripting knowledge, while working with the Dsearch Admin is simple and easy but you have to work with what OTX gives you, restrictions included.
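For instance, a rough awk equivalent of the r_object_id extraction shown earlier (a sketch, assuming GNU awk and the same log layout; the time filter is left out for brevity) could be:

awk 'tolower($0) ~ /err|warn/ {
       if (match($0, /[ (<][0-9a-z]{16}[>) ]/)) {
         id = substr($0, RSTART + 1, 16);    # strip the surrounding delimiters
         if (id !~ /^1b/) print id;          # skip the queue items
       }
     }' Indexagent_*.log* | sort -u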

 

 

The article Documentum – Checking warnings&errors from an xPlore full re-index first appeared on the dbi services Blog.

Documentum 7+ internal error during installation or upgrade DBTestResult7092863812136784595.tmp


This blog will go straight to the topic. When upgrading/installing your content server to 7+, you may experience an internal error with a popup telling you to look into a file called something like: DBTestResult7092863812136784595.tmp

In fact, the installation process failed to test the database connection, even though it managed to find your schema earlier. In the file you'll find something like:

 Last SQL statement executed by DB was:

#0  0x00000033b440f33e in waitpid () from /lib64/libpthread.so.0
#1  0x00000000004835db in dmExceptionManager::WalkStack(dmException*, int, siginfo*, void*) ()
#2  0x0000000000483998 in dmExceptionHandlerProc ()
#3  <signal handler called>
#4  0x00007f3d8c0e7d85 in ber_flush2 () from /dctm/product/7.3/bin/liblber-2.4.so.2
#5  0x00007f3d8bebb00b in ldap_int_flush_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#6  0x00007f3d8bebb808 in ldap_send_server_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#7  0x00007f3d8bebbb30 in ldap_send_initial_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#8  0x00007f3d8beab828 in ldap_search () from /dctm/product/7.3/bin/libldap-2.4.so.2
#9  0x00007f3d8beab952 in ldap_search_st () from /dctm/product/7.3/bin/libldap-2.4.so.2
#10 0x00007f3d898f93b2 in nnflqbf () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#11 0x00007f3d898ef124 in nnflrne1 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#12 0x00007f3d898fe5b6 in nnfln2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#13 0x00007f3d886cffc0 in nnfgrne () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#14 0x00007f3d887f4274 in nlolgobj () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#15 0x00007f3d886ce43f in nnfun2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#16 0x00007f3d886ce213 in nnfsn2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#17 0x00007f3d8875f7f1 in niqname () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#18 0x00007f3d88612d06 in kpplcSetServerType () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#19 0x00007f3d8861387b in kpuatch () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#20 0x00007f3d893e9dc1 in kpulon2 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#21 0x00007f3d892e15f2 in OCILogon2 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#22 0x0000000000555232 in DBConnection::Connect(DBString const*, DBString const*, DBString const*) ()
#23 0x00000000005555e4 in DBConnection::DBConnection(DBString const&, DBString const&, DBString const&, DBString const&, DBStats*, dmListHead*, int, int volatile*) ()
#24 0x000000000055f6ff in DBDataBaseImp::DBDataBaseImp(DBString const&, DBString const&, DBString const&, DBString const&, DBStats*, DBDataBase*, dmListHead*, int, int volatile*) ()
#25 0x0000000000545aaf in DBDataBase::DBDataBase(DBStats*, DBString const&, DBString const&, DBString const&, DBString const&, dmListHead*, int, int volatile*) ()
#26 0x0000000000466bd8 in dmServer_Dbtest(int, char**) ()
#27 0x00000033b3c1ed1d in __libc_start_main () from /lib64/libc.so.6
#28 0x0000000000455209 in _start ()
Tue Jan  8 16:18:15 2019 Documentum Internal Error: Assertion failure at line: 1459 in file: dmexcept.cxx

Not very precise, right?

In fact, it's pretty simple: the installer failed to use your tnsnames.ora file because LDAP name resolution is set with a higher priority. For those who don't know, the tnsnames.ora holds your database connection information; Documentum won't be able to connect to the database without it, as Documentum will try to locate it.

Sometimes, depending on how the DBA installed the Oracle client on the machine, LDAP name resolution may be set with a higher priority than TNSNAMES. So you have two possibilities:

  • Edit sqlnet.ora to set TNSNAMES before LDAP.
  • Rename ldap.ora to something else so that the Oracle client doesn't find it and falls back to TNSNAMES. I recommend this option because if the DBA patches the client, the sqlnet.ora may be set back with LDAP first.

For info, these files are located in $ORACLE_HOME/network/admin; by default they are owned by the Oracle installation owner, so to edit them you must be root or ask the DBAs to do it for you.
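For illustration, a minimal sketch of both options, assuming a default Oracle client layout (file names and the exact NAMES.DIRECTORY_PATH line depend on your installation):

cd $ORACLE_HOME/network/admin

# Option 1: make sure TNSNAMES is tried before LDAP in sqlnet.ora, e.g.:
#   NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)

# Option 2: rename ldap.ora so the client falls back to the tnsnames.ora
mv ldap.ora ldap.ora.disabled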

The article Documentum 7+ internal error during installation or upgrade DBTestResult7092863812136784595.tmp first appeared on the dbi services Blog.

Documentum CS 7.* – 777 permission on jobs log


A few weeks ago at a customer, our team was involved in a security control.
We tracked files with 777 permissions and detected that the logs generated by Documentum jobs have 777 permissions.

Security before anything else, that’s why this topic was my top priority!

First of all, I checked the logs on some Content Servers, and I had the same issue everywhere.

[dmadmin@vmCS1 ~]$ cd $DOCUMENTUM/dba/log/Repo1/sysadmin
[dmadmin@vmCS1 sysadmin]$ ls -rtl
total 192
-rwxrwxrwx. 1 dmadmin dmadmin   1561 Oct 25 10:12 DataDictionaryPublisherDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   5172 Oct 28 08:02 DMCleanDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   6701 Oct 28 08:17 DMFilescanDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin  14546 Nov  2 00:01 ConsistencyCheckerDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   2969 Nov  2 00:09 ContentWarningDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin    596 Nov  2 00:12 DBWarningDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin 102765 Nov  2 00:17 FileReportDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   3830 Nov  2 00:25 LogPurgeDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin    527 Nov  2 00:28 QueueMgtDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin  15932 Nov  2 00:31 StateOfDocbaseDoc.txt

I verified the umask at operating system level:

[dmadmin@vmCS1 ~]$ umask
0027

umask has the expected value!
For more information regarding the umask : https://en.wikipedia.org/wiki/Umask

Check if a different value of umask is set in the server.ini file ([SERVER_STARTUP] section):

[dmadmin@vmCS1 ~]$ cd $DOCUMENTUM/dba/config/Repo1
[dmadmin@vmCS1 ~]$ grep umask server.ini
[dmadmin@vmCS1 ~]$ 

No result.
If it had been set, the umask setting in the server.ini would override the one set at operating system level.
This umask value is intended to control the permissions of files associated with documents stored in the repository, and their enclosing folders.
In my case, these files and folders have the correct permissions.

Well, why do only these logs have different permissions? I checked some servers again and saw that not all job logs have 777 permissions, strange:

[dmadmin@vmCS2 sysadmin]$ ls -rtl
total 108
-rwxrwxrwx. 1 dmadmin dmadmin   601  Oct 18 07:12 DMFilescanDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   138  Oct 20 21:37 UpdateStatsDoc.txt
-rw-r-----. 1 dmadmin dmadmin   1832 Oct 24 13:45 FTCreateEventsDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   1251 Oct 25 11:55 DataDictionaryPublisherDoc.txt
-rwxrwxrwx. 1 dmadmin dmadmin   442  Oct 28 07:12 DMCleanDoc.txt

In fact, the common point between the logs with 777 permissions is that they are generated by dmbasic methods. These logs are not controlled by the umask set at the operating system level or in the server.ini.

The system umask value is overridden in the docbase start script, and set to 0. This value is then inherited by dmbasic methods!

[dmadmin@vmCS1 sysadmin]$ grep umask $DOCUMENTUM/dba/dm_start_Repo1
umask 0

I feel better now :D

So, to resolve this issue I had to:

  • Change the umask to 027 instead of 0 in the docbase start script
  • Stop the docbase
  • Change the permission of logs already generated
  • Start the docbase
  • Check the logs after a job execution

To make it quick and easy, you can use the steps below:
The commands below take the High Availability case into account, don't worry about that ;)

  1. To change on one docbase
    Define the docbase name
    		export DCTM_DOCBASE_NAME="DOCBASENAME"

    Check if it is a HA environment or not, and set the DCTM_DOCBASE_GLOBAL_NAME accordingly:

    		cd $DOCUMENTUM/dba
    		export DCTM_DOCBASE_SERVER_CONFIG=$(grep server_config_name config/${DCTM_DOCBASE_NAME}/server.ini | cut -d \  -f 3) ;
    		if [ ${DCTM_DOCBASE_SERVER_CONFIG} == ${DCTM_DOCBASE_NAME} ]
    		then
    			export DCTM_DOCBASE_GLOBAL_NAME=${DCTM_DOCBASE_NAME}
    		else
    			export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${DCTM_DOCBASE_NAME}/server.ini | cut -d \  -f 3) ;
    			export DCTM_DOCBASE_GLOBAL_NAME=${DCTM_DOCBASE_NAME}"_"${DCTM_DOCBASE_SERVICE_NAME}
    		fi

    Change the umask value in the start script

    		cp -p dm_start_${DCTM_DOCBASE_GLOBAL_NAME} dm_start_${DCTM_DOCBASE_GLOBAL_NAME}_bck_$(date +%Y%m%d-%H%M%S)
    		echo "Docbase ${DCTM_DOCBASE_NAME} : Start script has been saved"
    		sed -i 's,umask 0,umask 027,' dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
    		echo "Docbase ${DCTM_DOCBASE_NAME} : Umask changed"

    Stop the docbases using the following command:

    		./dm_shutdown_${DCTM_DOCBASE_GLOBAL_NAME}

    Check if the docbase has been stopped:

    		ps -ef | grep ${DCTM_DOCBASE_NAME}

    Change the permission of existing files:

    		DCTM_DOCBASE_ID_DEC=$(grep docbase_id config/${DCTM_DOCBASE_NAME}/server.ini | cut -d \  -f 3)
    		DCTM_DOCBASE_ID_HEX=$(printf "%x\n" $DCTM_DOCBASE_ID_DEC)
    		chmod 640 log/*${DCTM_DOCBASE_ID_HEX}/sysadmin/*

    Start the docbase using the following command:

    		./dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
  2. To change on all docbases
    Check if it is a HA environment or not (check done one docbase only), and set the DCTM_DOCBASE_GLOBAL_NAME accordingly, then change the umask value in the start script.

    		cd $DOCUMENTUM/dba
    		export FIRST_DOCBASE_NAME=$(ls config | head -1)
    		export DCTM_DOCBASE_SERVER_CONFIG=$(grep server_config_name config/${FIRST_DOCBASE_NAME}/server.ini | cut -d \  -f 3)
    		if [ ${FIRST_DOCBASE_NAME} == ${DCTM_DOCBASE_SERVER_CONFIG} ]
    		then
    			export HA_ENV="NO"
    		else
    			export HA_ENV="YES"
    		fi
    		
    		for i in `ls config`; do 
    			if [ ${HA_ENV} == "NO" ]
    			then
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}
    			else
    				export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${i}/server.ini | cut -d \  -f 3)
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}"_"${DCTM_DOCBASE_SERVICE_NAME}
    			fi
    			cp -p dm_start_${DCTM_DOCBASE_GLOBAL_NAME} dm_start_${DCTM_DOCBASE_GLOBAL_NAME}_bck_$(date +%Y%m%d-%H%M%S)
    			echo "Docbase ${i} : Start script has been saved"
    			sed -i 's,umask 0,umask 027,' dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
    			echo "Docbase ${i} : Umask changed"
    		done

    Stop the docbases using the following command:

    		for i in `ls config`; do 
    			if [ ${HA_ENV} == "NO" ]
    			then
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}
    			else
    				export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${i}/server.ini | cut -d \  -f 3)
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}"_"${DCTM_DOCBASE_SERVICE_NAME}
    			fi
    			echo "Stopping docbase ${i}"
    			./dm_shutdown_${DCTM_DOCBASE_GLOBAL_NAME}
    			echo "The docbase ${i} has been stopped"
    		done

    Check that all docbases are stopped

    		ps -ef | grep dmadmin

    Change permission on log files

    chmod 640 log/*/sysadmin/*

    Start the docbases using the following commands:

    
    		for i in `ls config`; do 
    			if [ ${HA_ENV} == "NO" ]
    			then
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}
    			else
    				export DCTM_DOCBASE_SERVICE_NAME=$(grep 'service =' config/${i}/server.ini | cut -d \  -f 3)
    				export DCTM_DOCBASE_GLOBAL_NAME=${i}"_"${DCTM_DOCBASE_SERVICE_NAME}
    			fi
    			echo "Starting docbase ${i}" 
    			./dm_start_${DCTM_DOCBASE_GLOBAL_NAME}
    			echo "The docbase ${i} has been started" 
    		done

    Check that all docbases are started

    		ps -ef | grep dmadmin

I was able to sleep peacefully that night ;) and now you know how to resolve this security issue.

The article Documentum CS 7.* – 777 permission on jobs log first appeared on the dbi services Blog.

Documentum – MigrationUtil – 1 – Change Docbase ID


This blog is the first one of a series that I will publish in the next few days/weeks regarding how to change a Docbase ID, Docbase name, and so on, in Documentum CS.
So, let’s dig in with the first one: Docbase ID. I did it on Documentum CS 16.4 with Oracle database on a freshly installed docbase.

We will work on the docbase repo1, changing its docbase ID from 101066 (18aca) to 101077 (18ad5).

1. Migration tool overview and preparation

The tool we will use here is MigrationUtil, and the concerned folder is:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil
total 108
-rwxr-xr-x 1 dmadmin dmadmin 99513 Oct 28 23:55 MigrationUtil.jar
-rwxr-xr-x 1 dmadmin dmadmin   156 Jan 19 11:09 MigrationUtil.sh
-rwxr-xr-x 1 dmadmin dmadmin  2033 Jan 19 11:15 config.xml

The default content of MigrationUtil.sh:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to extend the CLASSPATH only during the migration. That was my case: I had to add the Oracle JDBC driver path to the $CLASSPATH, because I received the error below (an example of the modified script follows the error output):

...
ERROR...oracle.jdbc.driver.OracleDriver
ERROR...Database connection failed.
Skipping changes for docbase: repo1
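A modified MigrationUtil.sh could then look roughly like this (the JDBC driver path below is an assumption; use the jar shipped with your Oracle client):

#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar:${ORACLE_HOME}/jdbc/lib/ojdbc7.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil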

To make the blog more readable, I will not show the whole content of config.xml; below is the updated part used to change the Docbase ID:

...
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repo1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

<entry key="ChangeDocbaseID">yes</entry> <!-- To change docbase ID or not -->
<entry key="Docbase_name">repo1</entry> <!-- has to match with DocbaseName.1 -->
<entry key="NewDocbaseID">101077</entry> <!-- New docbase ID -->
...

Set all other entries to no.
The tool will use the information above, and load more from the server.ini file.

Before you start the migration script, you have to adapt the maximum open cursors in the database. In my case, with a freshly installed docbase, I had to set open_cursors value to 1000 (instead of 300):

alter system set open_cursors = 1000
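To check the current value first, something like this (run with a privileged database account) can be used:

-- check the current value before changing it
select value from v$parameter where name = 'open_cursors';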

See with your DB Administrator before any change.

Otherwise, I got below error:

...
Changing Docbase ID...
Database owner password is read from config.xml
java.sql.SQLException: ORA-01000: maximum open cursors exceeded
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1150)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
	at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4875)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
	at SQLUtilHelper.setSQL(SQLUtilHelper.java:129)
	at SQLUtilHelper.processColumns(SQLUtilHelper.java:543)
	at SQLUtilHelper.processTables(SQLUtilHelper.java:478)
	at SQLUtilHelper.updateDocbaseId(SQLUtilHelper.java:333)
	at DocbaseIDUtil.(DocbaseIDUtil.java:61)
	at MigrationUtil.main(MigrationUtil.java:25)
...

2. Before the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repo1
Docbase id          : 101066
Docbase description : repo1 repository
...

Create a document in the docbase
Create an empty file

touch /home/dmadmin/DCTMChangeDocbaseExample.docx

Create document in the repository using idql

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.docx',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.docx' with CONTENT_FORMAT= 'msw12';

Result:

object_created  
----------------
09018aca8000111b
(1 row affected)

note the r_object_id
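If you want to double-check the document right away, a quick dump through iapi might look like this (the credentials are placeholders):

iapi repo1 -Udmadmin -P<dmadmin_password> <<EOF
dump,c,09018aca8000111b
EOF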

3. Execute the migration

Before you execute the migration you have to stop the docbase and the docbroker.

$DOCUMENTUM/dba/dm_shutdown_repo1
$DOCUMENTUM/dba/dm_stop_DocBroker

Now, you can execute the migration script:

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseIdChange.log
Changing Docbase ID...
Database owner password is read from config.xml
Finished changing Docbase ID...

Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...
Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

No Error, sounds good ;) All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseIdChange.log
Reading config.xml from path: config.xmlReading server.ini parameters

Retrieving server.ini path for docbase: repo1
Found path: /app/dctm/product/16.4/dba/config/repo1/server.ini
Set the following properties:

Docbase Name:repo1
Docbase ID:101066
New Docbase ID:101077
DBMS: oracle
DatabaseName: DCTMDB
SchemaOwner: repo1
ServerName: vmtestdctm01
PortNumber: 1521
DatabaseOwner: repo1
-------- Oracle JDBC Connection Testing ------
jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB
Connected to database
Utility is going to modify Objects with new docbase ID
Sun Jan 27 19:08:58 CET 2019
-----------------------------------------------------------
Processing tables containing r_object_id column
-----------------------------------------------------------
-------- Oracle JDBC Connection Testing ------
jdbc:oracle:thin:@vmtestdctm01:1521:DCTMDB
Connected to database
...
...
-----------------------------------------------------------
Update the object IDs of the Table: DMC_ACT_GROUP_INSTANCE_R with new docbase ID:18ad5
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------

Processing ID columns in each documentum table

Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID

Processing ID columns in each documentum table

Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID
...
...
-----------------------------------------------------------
Update the object IDs of the Table: DM_XML_ZONE_S with new docbase ID:18ad5
-----------------------------------------------------------
Processing objectID columns
-----------------------------------------------------------
Getting all ID columns from database
-----------------------------------------------------------
Processing ID columns in each documentum table
Column Name: R_OBJECT_ID
Update the ObjectId columns of the Table: with new docbase ID
-----------------------------------------------------------
Updating r_docbase_id of dm_docbase_config_s and dm_docbaseid_map_s...
update dm_docbase_config_s set r_docbase_id = 101077 where r_docbase_id = 101066
update dm_docbaseid_map_s set r_docbase_id = 101077 where r_docbase_id = 101066
Finished updating database values...
-----------------------------------------------------------
-----------------------------------------------------------
Updating the new DocbaseID value in dmi_vstamp_s table
...
...
Updating Data folder...
select file_system_path from dm_location_s where r_object_id in (select r_object_id from dm_sysobject_s where r_object_type = 'dm_location' and object_name in (select root from dm_filestore_s))
Renamed '/app/dctm/product/16.4/data/repo1/replica_content_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/replica_content_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/replicate_temp_store/00018aca' to '/app/dctm/product/16.4/data/repo1/replicate_temp_store/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/streaming_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/streaming_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/content_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/content_storage_01/00018ad5
Renamed '/app/dctm/product/16.4/data/repo1/thumbnail_storage_01/00018aca' to '/app/dctm/product/16.4/data/repo1/thumbnail_storage_01/00018ad5
select file_system_path from dm_location_s where r_object_id in (select r_object_id from dm_sysobject_s where r_object_type = 'dm_location' and object_name in (select log_location from dm_server_config_s))
Renamed '/app/dctm/product/16.4/dba/log/00018aca' to '/app/dctm/product/16.4/dba/log/00018ad5
select r_object_id from dm_ldap_config_s
Finished updating folders...
-----------------------------------------------------------
-----------------------------------------------------------
Updating the server.ini with new docbase ID
-----------------------------------------------------------
Retrieving server.ini path for docbase: repo1
Found path: /app/dctm/product/16.4/dba/config/repo1/server.ini
Backed up '/app/dctm/product/16.4/dba/config/repo1/server.ini' to '/app/dctm/product/16.4/dba/config/repo1/server.ini_docbaseid_backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repo1/server.ini
Docbase ID Migration Utility completed!!!
Sun Jan 27 19:09:52 CET 2019

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repo1

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repo1
Docbase id          : 101077
Docbase description : repo1 repository
...

Check the document created before the migration:
Adapt the r_object_id with the new docbase id : 09018ad58000111b

API> dump,c,09018ad58000111b    
...
USER ATTRIBUTES
  object_name                     : DCTMChangeDocbaseExample.docx
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...
  r_object_id                     : 09018ad58000111b
...
  i_folder_id                  [0]: 0c018ad580000105
  i_contents_id                   : 06018ad58000050c
  i_cabinet_id                    : 0c018ad580000105
  i_antecedent_id                 : 0000000000000000
  i_chronicle_id                  : 09018ad58000111b

5. Conclusion

After a lot of tests on my VMs, I can say that changing docbase id is reliable on a freshly installed docbase. On the other hand, each time I tried it on a “used” Docbase, I got errors like:

Changing Docbase ID...
Database owner password is read from config.xml
java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (GREPO5.D_1F00272480000139) violated

	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
	at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
	at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
	at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
	at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
	at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
	at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
	at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1150)
	at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
	at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4875)
	at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
	at SQLUtilHelper.setSQL(SQLUtilHelper.java:129)
	at SQLUtilHelper.processColumns(SQLUtilHelper.java:543)
	at SQLUtilHelper.processTables(SQLUtilHelper.java:478)
	at SQLUtilHelper.updateDocbaseId(SQLUtilHelper.java:333)
	at DocbaseIDUtil.(DocbaseIDUtil.java:61)
	at MigrationUtil.main(MigrationUtil.java:25)

I didn't investigate this error enough; it deserves more time but it wasn't my priority. Anyway, the tool performed a correct rollback.

Now, it is your turn to practice; don't hesitate to comment on this blog to share your own experience and opinion :)
In the next blog, I will try to change the docbase name.

The article Documentum – MigrationUtil – 1 – Change Docbase ID first appeared on the dbi services Blog.


Documentum – Process Builder Installation Fails


A couple of weeks ago, at a customer, I received an incident from the application team regarding an error that occurred when installing Process Builder. The error message was:
The Process Engine license has not been enabled or is invalid in the ‘RADEV’ repository.
The Process Engine license must be enabled to use the Process Builder.
Please see your system administrator.”

The error appears when selecting the repository:

Before investigating this incident I had to learn more about the Process Builder, as it is usually managed by the application team.
In fact, the Documentum Process Builder is software for creating business process templates, used to formalize the steps required to complete a business process such as an approval process, so the goal is to extend the basic functionality of Documentum Workflow Manager.
It is a client application that can be installed on any computer, but before installing Process Builder you need to prepare your content server and repository by installing the Process Engine, because the CS handles the check in, check out, versioning, archiving, and all processes created are saved in the repository… Hmmm, so maybe the issue is that my content server or repository is not well configured?

To rule out the client side, I asked the application team to confirm the docbroker host and port configured in C:\Documentum\Config\dfc.properties.
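For reference, the docbroker settings in dfc.properties look like the lines below (the host and port are placeholders and must match a docbroker projected to by the Content Server):

dfc.docbroker.host[0]=<content_server_host>
dfc.docbroker.port[0]=1489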

From the Content Server side, we used the Process Engine installer, which installs the Process Engine on all repositories served by the Content Server, deploys the bpm.ear file on the Java Method Server and installs the DAR files on each repository.

So let’s check the installation:

1. The BPM url http://Server:9080/bpm/modules.jsp is reachable:

2. No error in the bpm log file $JBOSS_HOME/server/DctmServer_MethodServer/logs/bpm-runtime.log.

3. BPM and XCP DARs are correctly installed in the repository:

select r_object_id, object_name, r_creation_date from dmc_dar where object_name in ('BPM', 'xcp');
080f42a480026d98 BPM 8/29/2018 10:43:35
080f42a48002697d xcp 8/29/2018 10:42:11

4. The Process Engine module is missing in the docbase configuration:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3

We know the root cause of this incident now :D
To resolve the issue, add the Process Engine module to the docbase config:

API>fetch,c,docbaseconfig
API>append,c,l,r_module_name
Process Engine
API>append,c,l,r_module_mode
3
API>save,c,l

Check after update:

	API> retrieve,c,dm_docbase_config
	...
	3c0f42a480000103
	API> dump,c,l
	...
	USER ATTRIBUTES

		object_name                : RADEV
		title                      : RADEV Repository
	...
	SYSTEM ATTRIBUTES

		r_object_id                : 3c0f42a480000103
		r_object_type              : dm_docbase_config
		...
		r_module_name           [0]: Snaplock
								[1]: Archive Service
								[2]: CASCADING_AUTO_DELEGATE
								[3]: MAX_AUTO_DELEGATE
								[4]: Collaboration
								[5]: Process Engine
		r_module_mode           [0]: 0
								[1]: 0
								[2]: 0
								[3]: 1
								[4]: 3
								[5]: 3
		...

Then I asked the application team to retry the installation, and the issue was resolved.

No manual docbase configuration is mentioned in the Process Engine Installation Guide; I guess the Process Engine installer should do it automatically.
I will install a new environment in the next few days/weeks and will keep you informed if there is any news ;)

The article Documentum – Process Builder Installation Fails first appeared on the dbi services Blog.

Documentum – MigrationUtil – 2 – Change Docbase Name


This is the second episode of the MigrationUtil series; today we will change the Docbase Name. If you missed the first one, you can find it here. I did this change on Documentum CS 16.4 with an Oracle database, on the same docbase I already used to change the docbase ID.
My goal is to do both changes on the same docbase because that’s what I will need in the future.

So, we will work on the docbase RepoTemplate, to change its name to repository1.

1. Migration preparation

I will not give the overview of the MigrationUtil, as I already did in the previous blog.
1.a Update the config.xml file
Below is the updated version of config.xml file to change the Docbase Name:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/product/16.4/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">RepoTemplate</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>
...
<entry key="ChangeDocbaseName">yes</entry>
<entry key="NewDocbaseName.1">repository1</entry>
...

Set all other entries to no.
The tool will use the information above, and load more from the server.ini file.

2. Before the migration (optional)

– Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : RepoTemplate
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

– Create a document in the docbase:
Create an empty file

touch /home/dmadmin/DCTMChangeDocbaseExample.docx

Create document in the repository using idql

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.docx',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.docx' with CONTENT_FORMAT= 'msw12';

Result:

object_created  
----------------
090f449880001125
(1 row affected)

Note the r_object_id.

3. Execute the migration

3.a Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

3.b Update the database name in the server.ini file
This is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and puts “/” in the URL instead of “:”. The best workaround I found is to update the database_conn value in the server.ini file and put the service name instead of the database name.
Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )
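
If you want to confirm which service names the listener actually knows, you can also query it directly. Below is a minimal sketch, assuming the Oracle binaries are available to this account:

[dmadmin@vmtestdctm01 ~]$ lsnrctl status | grep -i "dctmdb"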

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/RepoTemplate/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = RepoTemplate
server_config_name = RepoTemplate
database_conn = dctmdb.local
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/RepoTemplate/dbpasswd.txt
service = RepoTemplate
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin
...

Don’t worry, we will roll back this change before starting the docbase ;)
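
If you prefer to script this temporary edit (and keep a copy of the original file), a simple sed can do it. Below is a minimal sketch, assuming the values shown above:

[dmadmin@vmtestdctm01 ~]$ sed -i.orig 's/^database_conn = DCTMDB$/database_conn = dctmdb.local/' $DOCUMENTUM/dba/config/RepoTemplate/server.ini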

3.c Execute the MigrationUtil script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...
Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...

Changing Docbase Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Finished changing Docbase Name...

Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

No error was encountered here, but it doesn’t mean that everything is OK… Please check the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Start: 2019-02-01 19:32:10.631
Changing Docbase Name
=====================

DocbaseName: RepoTemplate
New DocbaseName: repository1
Retrieving server.ini path for docbase: RepoTemplate
Found path: /app/dctm/product/16.4/dba/config/RepoTemplate/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_docbase_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3c0f449880000103'
select r_object_id,docbase_name from dm_docbaseid_map_s where docbase_name = 'RepoTemplate'
update dm_docbaseid_map_s set docbase_name = 'repository1' where r_object_id = '440f449880000100'
select r_object_id,file_system_path from dm_location_s where file_system_path like '%RepoTemplate%'
update dm_location_s set file_system_path = '/app/dctm/product/16.4/data/repository1/content_storage_01' where r_object_id = '3a0f44988000013f'
...
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800003e0'
...
select i_stamp from dmi_vstamp_s where i_application = 'dmi_dd_attr_info'
...
Successfully updated database values...
...
Backed up '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_startup script.
Renamed '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_start_repository1'
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_shutdown script.
Renamed '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_shutdown_repository1'
WARNING...File /app/dctm/product/16.4/dba/config/RepoTemplate/rkm_config.ini doesn't exist. RKM is not configured
Finished processing File changes...

Processing Directory Changes...
Renamed '/app/dctm/product/16.4/data/RepoTemplate' to '/app/dctm/product/16.4/data/repository1'
Renamed '/app/dctm/product/16.4/dba/config/RepoTemplate' to '/app/dctm/product/16.4/dba/config/repository1'
Renamed '/app/dctm/product/16.4/dba/auth/RepoTemplate' to '/app/dctm/product/16.4/dba/auth/repository1'
Renamed '/app/dctm/product/16.4/share/temp/replicate/RepoTemplate' to '/app/dctm/product/16.4/share/temp/replicate/repository1'
Renamed '/app/dctm/product/16.4/share/temp/ldif/RepoTemplate' to '/app/dctm/product/16.4/share/temp/ldif/repository1'
Renamed '/app/dctm/product/16.4/server_uninstall/delete_db/RepoTemplate' to '/app/dctm/product/16.4/server_uninstall/delete_db/repository1'
Finished processing Directory Changes...
...
Processing Services File Changes...
Backed up '/etc/services' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/services_docbase_RepoTemplate.backup'
ERROR...Couldn't update file: /etc/services (Permission denied)
ERROR...Please update services file '/etc/services' manually with root account
Finished changing docbase name 'RepoTemplate'

Finished changing docbase name....
End: 2019-02-01 19:32:23.791

Here it is a justified error… Let’s update the services file manually.

3.d Change the service
As root, change the service name:

[root@vmtestdctm01 ~]$ vi /etc/services
...
repository1				49402/tcp               # DCTM repository native connection
repository1_s       	49403/tcp               # DCTM repository secure connection
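
A quick check that both entries are in place, as a minimal sketch:

[root@vmtestdctm01 ~]$ grep "^repository1" /etc/services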

3.e Change back the Database name in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

3.f Start the Docbroker and the Docbase

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

3.g Check the docbase log

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-01T19:43:15.677455	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 16594, session 010f449880000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:15.677967	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16595, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:16.680391	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16606, session 010f44988000000b) is started sucessfully." 

You may be thinking that the log name is still RepoTemplate.log ;) Yes! Because in my case the docbase name and the server config name were the same before I changed the docbase name:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/repository1/dbpasswd.txt
service = repository1
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin

Be patient, in the next episode we will see how we can change the server name :)

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

It’s not very nice to keep the old description of the docbase… Use the DQL query below to change it:

Update dm_docbase_config object set title='Renamed Repository' where object_name='repository1';
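
This query can be run through idql, for example. Below is a minimal sketch, assuming the install owner account (the -P value is a placeholder for its password):

[dmadmin@vmtestdctm01 ~]$ idql repository1 -Udmadmin -P<password>
1> update dm_docbase_config object set title='Renamed Repository' where object_name='repository1'
2> go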

Check after change:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
...
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Renamed Repository
...

Check the document created before the migration:
r_object_id: 090f449880001125

API> dump,c,090f449880001125
...
USER ATTRIBUTES

  object_name                     : DCTMChangeDocbaseExample.docx
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...

5. Conclusion

Well, the tool works, but as you saw we needed a workaround to make the change, which is not great. I hope it will be fixed in future versions.
In the next episode I will change the server config name, see you there ;)

Cet article Documentum – MigrationUtil – 2 – Change Docbase Name est apparu en premier sur Blog dbi services.

Documentum – MigrationUtil – 3 – Change Server Config Name

In the previous blog I changed the Docbase Name from RepoTemplate to repository1 using MigrationUtil; in this blog it is the Server Config Name’s turn to be changed.

In general, the repository name and the server config name are the same, except in a High Availability setup.
You can find the Server Config Name in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

1. Migration preparation

To change the server config name to repository1, you first need to update the configuration file of MigrationUtil, as below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repository1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeServerName">yes</entry>
<entry key="NewServerName.1">repository1</entry>

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.

2. Execute the migration

Below is the script used to execute the migration:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to override the CLASSPATH only during the migration.
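
For example, instead of editing the script, you could export an extended CLASSPATH just for that run. Below is a minimal sketch, where /path/to/extra.jar is a hypothetical additional library:

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil
[dmadmin@vmtestdctm01 MigrationUtil]$ CLASSPATH=/path/to/extra.jar:${CLASSPATH} ./MigrationUtil.sh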

2.a Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = dctmdb.local
...

2.c Execute the migration script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Change...
Skipping Host Name Change...
Skipping Install Owner Change...

Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Changing Server Name...
Database owner password is read from config.xml
Finished changing Server Name...

Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Start: 2019-02-02 19:55:52.531
Changing Server Name
=====================

DocbaseName: repository1
Retrieving server.ini path for docbase: repository1
Found path: /app/dctm/product/16.4/dba/config/repository1/server.ini
ServerName: RepoTemplate
New ServerName: repository1

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Validating Server name with existing servers...
select object_name from dm_sysobject_s where r_object_type = 'dm_server_config'

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_server_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3d0f449880000102'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.repository1' where r_object_id = '080f4498800010a9'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_cont_transfer_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.repository1' where r_object_id = '080f4498800004ba'
select r_object_id,target_server from dm_job_s where target_server like '%repository1.RepoTemplate%'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800010d3'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035e'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035f'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000360'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000361'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000362'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000363'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000364'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000365'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000366'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000367'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000372'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000373'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000374'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000375'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000376'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000377'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000378'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000379'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037a'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037b'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000386'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000387'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000388'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000389'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000e42'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000cb1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d02'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d04'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d05'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003db'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dc'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dd'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003de'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003df'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e0'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e2'
Successfully updated database values...

Processing File changes...
Backed up '/app/dctm/product/16.4/dba/config/repository1/server.ini' to '/app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repository1/server.ini
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Finished processing File changes...
Finished changing server name 'repository1'

Processing startup and shutdown scripts...
Backed up '/app/dctm/product/16.4/dba/dm_start_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup'
Updated dm_startup script.
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup'
Updated dm_shutdown script.

Finished changing server name....
End: 2019-02-02 19:55:54.687

2.d Reset the value of database_conn in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = repository1
database_conn = DCTMDB
...

3. Check after update

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the log to be sure that the repository has been started correctly. Notice that the log name has been changed from RepoTemplate.log to repository1.log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/repository1.log
...
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:09.807613	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29345, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:10.809686	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29362, session 010f44988000000c) is started sucessfully."

4. Is a manual rollback possible?

In fact, in the MigrationUtilLogs folder, you can find logs, backups of the start/stop scripts, and also the SQL file for a manual rollback:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
total 980
-rw-rw-r-- 1 dmadmin dmadmin   4323 Feb  2 19:55 ServerNameChange_DatabaseRestore.sql
-rwxrw-r-- 1 dmadmin dmadmin   2687 Feb  2 19:55 dm_start_repository1_server_RepoTemplate.backup
-rwxrw-r-- 1 dmadmin dmadmin   3623 Feb  2 19:55 dm_shutdown_repository1_server_RepoTemplate.backup
-rw-rw-r-- 1 dmadmin dmadmin   6901 Feb  2 19:55 ServerNameChange.log

Let’s see the content of the SQL file:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql
update dm_sysobject_s set object_name = 'RepoTemplate' where r_object_id = '3d0f449880000102';
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.RepoTemplate' where r_object_id = '080f4498800010a9';
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.RepoTemplate' where r_object_id = '080f4498800004ba';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800010d3';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035e';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035f';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000360';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000361';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000362';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000363';
...

I already noticed that a manual rollback is possible after the Docbase ID and Docbase Name changes, but I didn’t test it… I would like to try this one.
So, to roll back:
Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

Execute the SQL:

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
[dmadmin@vmtestdctm01 MigrationUtilLogs]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 17 19:53:12 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn RepoTemplate@DCTMDB
Enter password: 
Connected.
SQL> @ServerNameChange_DatabaseRestore.sql
1 row updated.
1 row updated.
1 row updated.
...

The DB user is still RepoTemplate; it wasn’t changed when I changed the docbase name.

Copy back the saved files; you can find the list of files updated and backed up in the log:

cp /app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup /app/dctm/product/16.4/dba/config/repository1/server.ini
cp /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_start_repository1
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_shutdown_repository1

Remember to change back the database connection in /app/dctm/product/16.4/dba/config/repository1/server.ini (see step 2.d).
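
This step can be scripted as well. Below is a minimal sketch, assuming the same values as in step 2.b:

[dmadmin@vmtestdctm01 ~]$ sed -i 's/^database_conn = dctmdb.local$/database_conn = DCTMDB/' $DOCUMENTUM/dba/config/repository1/server.ini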

Then start the DocBroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the repository log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-02T20:15:59.677595	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19232, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:00.679566	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19243, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:01.680888	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19255, session 010f44988000000c) is started sucessfully."

Yes, the rollback works correctly! :D That said, I hope you will never have to do it on a production environment. ;)

Cet article Documentum – MigrationUtil – 3 – Change Server Config Name est apparu en premier sur Blog dbi services.

Documentum – Documents not transferred to WebConsumer

Receiving an incident is not always a pleasure, but sharing the solution always is!
A few days ago, I received an incident regarding WebConsumer on a production environment, saying that documents are not transferred as expected to WebConsumer.

The issue didn’t happen for all documents, which is why I directly suspected the High Availability configuration on this environment. Moreover, I know that the IDS is installed only on CS1 (as designed). So I checked the JMS logs on:
CS1: no errors found there.

CS2: errors found:

2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-11 04:05:39,141 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.044s
2019-02-11 04:05:39,773 UTC INFO  [stdout] (default task-89) 2019-02-11 04:05:39,773 UTC ERROR [com.domain.repository1.dctm.methods.WCPublishDoc] (default task-89) DfException:: THREAD: default task-89; 
MSG: [DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND]error:  "The app_server_name/servlet_name 'WebCache' is not specified in dm_server_config/dm_jms_config."; ERRORCODE: 100; NEXT: null

To cross check:

On CS1:

[dmadmin@CONTENT_SERVER1 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER1 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
0

On CS2:

[dmadmin@CONTENT_SERVER2 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER2 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
60

So I listed all dm_server_config objects:

API> ?,c,select r_object_id,object_name from dm_server_config;
r_object_id       object_name                                                                                                                                                                                                                                     
----------------  -----------------------------
3d01e24080000102  repository1                                                                                                                                                                                                                                         
3d01e240800062be  CONTENT_SERVER2_repository1

Then, I checked the app servers list configured:

On CS1:

API> dump,c,3d01e24080000102
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: WebCache
                               [4]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://CONTENT_SERVER1:6679/services/scs/publish
                               [4]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Good, WebCache is configured here.

On CS2:

API> dump,c,3d01e240800062be
...
USER ATTRIBUTES

  object_name                     : CONTENT_SERVER2_repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER2:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER2:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Ok! The root cause of this error is clear now.

The method concerned is WCPublishDocumentMethod, but applied when? And by whom?

I noticed that in the log above:

D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'

So, WCPublishDocumentMethod is applied by the D2LifecycleConfig, but that in turn is applied when? And by whom?
I searched the server.log file and found:

2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : User  : repository1
2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : New session manager creation.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager set identity.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager get session.
2019-02-11 04:05:06,006 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem ID: 4a01e2408002bd3d
2019-02-11 04:05:06,023 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow tracker...
2019-02-11 04:05:06,031 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow config...
2019-02-11 04:05:06,032 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Get packaged documents...
2019-02-11 04:05:06,067 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Apply on masters...
2019-02-11 04:05:06,068 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem acquire...
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Applying lifecycle (Target state : On Approved / Transition :promote
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : No workflow properties
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching target state name and/or transition type.
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target state name :On Approved
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target transition type :promote
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Performing D2 lifecycle on :FRM-8003970 (0901e240800311cd)
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching associated D2 lifecycle...
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::getInstancesForObject start time 0.000s
...
2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
...

Hummmm, the D2WFLifeCycleMethod is applied by the job D2JobLifecycleBatch. I checked the target server of this job:

API> ?,c,SELECT target_server FROM dm_job WHERE object_name='D2JobLifecycleBatch';
target_server                                                                                                                                                                               
-------------
 
(1 row affected)

As I suspected, no target server defined! That means the job can be executed on “Any Running Server”, which is why this method was executed on CS2… while CS2 is not configured to do so.
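
If you want to see whether other jobs are in the same situation, you can list every job without an explicit target server. Below is a minimal sketch (a single space is the value used for "Any Running Server"):

API> ?,c,select object_name, target_server from dm_job where target_server is nullstring or target_server = ' ';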

Now, two solutions are possible:
1. Change the target_server to use only CS1:

API> ?,c,UPDATE dm_job OBJECTS SET target_server='repository1.repository1@CONTENT_SERVER1' WHERE object_name='D2JobLifecycleBatch';

2. Add the app server WebCache to CS2, pointing to CS1:

API> fetch,c,3d01e240800062be
API> append,c,l,app_server_name
SET> WebCache
API> append,c,l,app_server_uri
SET> https://CONTENT_SERVER1:6679/services/scs/publish
API> save,c,l

Check after update:

API> dump,c,3d01e240800062be
...
USER ATTRIBUTES

  object_name                     : CONTENT_SERVER2_repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
                               [4]: WebCache
  app_server_uri               [0]: https://CONTENT_SERVER2:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER2:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
                               [4]: https://CONTENT_SERVER1:6679/services/scs/publish
...

We chose the second option, because:
– The job is handled by the application team,
– Modifying the job to run only on CS1 would resolve this case, but if the method is applied by another job or manually on CS2, we would get the same error again.
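
Once the second option is applied, a quick DQL check can confirm that every server config object now exposes a WebCache entry. Below is a minimal sketch:

API> ?,c,select object_name from dm_server_config where any app_server_name = 'WebCache';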

After this update no error has been recorded in the log file:

...
2019-02-12 04:06:10,948 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.007s
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : No ARG_RETURN_ID in mapArguments
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : newObject created, user session used: 0801e2408023f714
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.D2SysObject                    : getFolderIdFromCache: got folder: /System/D2/Data/c6_method_return, object id: 0b01e2408000256b, docbase: repository1
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : mapArguments: {-method_return_id=0801e2408023f714}
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : origArguments: {-id=0901e24080122a59}
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodName: WCPublishDocumentMethod
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodParams: -id 0901e24080122a59 -user_name dmadmin -docbase_name repository1
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : Start WCPublishDocumentMethod method with JMS (Java Method Services) runLocally hint set is false
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : key: -method_return_id, and value: 0801e2408023f714
...

I hope this blog will help you to quickly resolve this kind of incident.

Cet article Documentum – Documents not transferred to WebConsumer est apparu en premier sur Blog dbi services.

OpenText Enterprise World Europe 2019 – Partner Day

First day of the #OTEW here at the Austria International Center in Vienna: Guillaume Fuchs and I were invited to attend the Partner Global sessions.

Welcome to OTEW Vienna 2019

Mark J. Barrenechea, OpenText’s CEO & CTO, started the day with some general topics concerning global trends and achievements like:

  • More and More partners and sponsors
  • Cloud integration direction
  • Strong security brought to customers
  • AI & machine learning new trend
  • New customer wave made of Gen Z and millennials to consider
  • OpenText #1 in Content Services in 2018
  • Turned to the future with Exabytes goals (high level transfers and storage)
  • Pushing to upgrade to version 16 with most complete Content Platform ever for security and integration
  • Real trend of SaaS with the new OT2 solutions

OpenText Cloud and OT2 is the future

Today the big concern is the sprawl of data; OpenText is addressing this point by centralizing data and flows to create an information advantage. Using the Cloud and OT2 SaaS/PaaS will open the business to everything.

OT2 is EIM as a service: a hybrid cloud platform that brings security and scalability to customer solutions, which you can integrate with leading applications like O365, Microsoft Teams, Documentum and more; it provides SaaS as well. One place for your data and many connectors to it. More info on it to come, stay tuned.

Smart View is the default

Smart View is the new default OpenText UI for every component, such as D2 for Documentum, SAP integration, Extended ECM, SuccessFactors and so on.

Documentum and D2

New features:

  • Add documents to subfolders without opening the folder first
  • Multi-item download -> Zip and download
  • Download phases displayed in progress bar
  • Pages editable inline with smart view
  • Possibility to add widgets in smart view
  • Workspace look improved in smart view
  • Image/media display improved: Gallery View with sorting, filters by name
  • Threaded discussion in smart view look and feel
  • New permission management visual representation
  • Mobile capabilities
  • Integrated in other lead applications (Teams, SAP, Sharepoint and so on…)

OpenText Roadmap

OpenText trends are the following:

  • New UI for products: Smart View: All devices, well integrated to OT2
  • Content In Context
    • Embrace Office 365, with Documentum integration
    • Integration of documentum in SAP
  • Push to Cloud
    • More cloud based product: Docker, Kubernetes
    • Run applications anywhere with OpenText Cloud, Azure, AWS, Google
    • SaaS Applications & Services on OT2
  • Line Of Business
    • SAP applications
    • LoB solutions like SuccessFactors
    • Platform for industry solutions like Life Science, Engineering and Government
  • Intelligent Automation
    • Information extraction with machine learning (Capture)
    • Cloud capture apps for SAP, Salesforce, etc
    • Drive automation with Document Generation
    • Automatic sharing with OT Core
    • Leverage Magellan and AI
    • Personal Assistant / Bots
  • Governance:
    • Smart Compliance
    • GDPR and DPA ready
    • Archiving and Application decommissioning

Conclusion

After this first day at OTEW we can see that OpenText is really pushing on the new UI with Smart View, as well as centralized services and storage with OT2 and OpenText Cloud solutions. Content Services will become the cornerstone for all content storage, with pluggable interfaces and components provided by the OT2 platform.

Cet article OpenText Enterprise World Europe 2019 – Partner Day est apparu en premier sur Blog dbi services.

OpenText Enterprise World Europe 2019 – Day 2

Day 2 of OTEW: we followed the global stream this morning, which took up most of the points from yesterday. But we had the pleasure of having a session from Dr. Michio Kaku, Theoretical Physicist, Futurist and popularizer of science. He wrote several books about physics and how he sees the future.

He sees us in the next 20 years ultra-connected with internet lenses; Moore’s law will collapse around 2025, when it will probably be replaced by graphene technology (instead of basic transistors), which will, at some unknown point, be replaced by quantum computing machines (qubits instead of bits). The main issue with quantum computing is that qubits are really disrupted by noise and electromagnetic waves (decoherence). According to him, the internet will be replaced by a brain net thanks to new biological technologies focusing on sensations instead of visualization.

What’s new and what’s next for OpenText Documentum

We were really looking forward to this session as we, Documentum experts, were excited to see the future of this widespread technology. Micah Byrd, Director of Product Management at OpenText, started by talking about the generic integration roadmap with “Content in Context”, “Cloud”, “LoB and industry” and “Intelligent automation”, and how Documentum interprets these guidelines.

Documentum will be more and more integrated with Office 365 thanks to the new Smart View UI: a coherent solution across all platforms which allows easy and seamless fusion into leading applications like Word and SAP. This is content in context.

OpenText has been aggressively pushing Documentum to the cloud for several years with custom solutions like private, managed or public cloud. With Private you keep your data in your data center (2014-2016). With Managed your data goes to the OpenText cloud (2017-2018). With Public your data goes wherever you want on different cloud providers like AWS, Azure, Google and so on (2019). OpenText invests in containerization as well, with Docker and Kubernetes for “Documentum from Everywhere”.

Documentum future innovations

As part of the main new features we have the continuous integration of Documentum with Office 365, which already supports Word and SAP, and soon (EP7 in October) Excel, PowerPoint and Outlook. It means that you’ll be able to access Documentum data from Office applications. In addition, OpenText wants to enable bi-directional synchronization between Documentum and Core, implying possibilities of interacting with content outside of the corporate network. Hence, the content will be synced no matter where, no matter when, in a secure and controlled way.

Next to come is also an improved content creation experience in D2, thanks to deeper integration of Brava! for annotation sharing, as well as more collaborative capabilities with SharePoint (improvement of DC4SP).

A new vision of security:

D2 on mobile will come soon on iOS and Android, developed in AppWorks:

We are particularly excited about a prototype which was presented today: the Documentum Security Dashboard. It gives a quick and easy view of user activities, tracks content usage like views and downloads, and can show trends about content evolution. We hope it will be released one day.

Many more topics around Documentum components were presented, but we will not provide details about them here; we were only focusing on the main features.

Documentum D2 Demo

We had a chance to get our hands on the new D2 Smart View, which brings reactivity and modernity. Our feeling about it is: SMOOTH.

Conclusion

Another amazing day at the OTEW, where we met a lot of experts and attended interesting sessions about the huge OpenText world.

Cet article OpenText Enterprise World Europe 2019 – Day 2 est apparu en premier sur Blog dbi services.

OpenText Enterprise World Europe 2019 – Day 3

Last but not least, today was mainly dedicated to demos and customer cases. It started with the global stream presenting some OpenText applications like Core for Quality: an application developed with AppWorks and integrated with Magellan. It was meant to manage quality issues and was connected to Documentum in order to link issues with SOP documents.

In the different demos we saw the integration of these SaaS applications in OT2 and their responsiveness (drag and drop from desktop, loading time, easy accessibility to other OT2 applications and so on).

OT2

We went to an OT2-specific session to get more info on this new way of bringing services and business to customers (and developers).

OT2 is a platform of services; it can be really interesting for companies to avoid on-site IT management and to offload infrastructure management to OT2, at OpenText’s charge: security, maintenance, updates, patches and so on.

The main purpose is “A2A”, meaning Any to Any, or anywhere, anytime. OT2-hosted applications can be accessed from anywhere because it’s a public cloud. As it’s hosted by OpenText, you should expect almost no downtime, up-to-date applications and, most important: security.

Core is another main feature of OpenText. It’s a secure way to share content with people outside of the company’s organization, like external partners or customers (documentation sharing). The content can be edited by external people as it will be synced with your application (or backend) at all times. We saw how easy it was to share content based on rules or just a selection inside the application, and everything is taken care of by the product.

Federated Compliance will also come as a service, allowing you to track data and usage of your applications. An easy way to keep an eye on the status of your infra.

Some other products were mentioned, such as the SAP Archive Server being brought to the cloud with the help of OpenText, but we won’t focus on that point. Developers are really guided and escorted through Smart View application development, directly integrated with OT2. With this, OpenText is counting on devs to enhance the range of available solutions in OT2.

Documentum Stories

During the day we had the opportunity to discover some success stories from some of the Documentum customers.

Wiesbaden

Wiesbaden is a city in Germany that came across an organizational issue in its administration sector.

In this sector it’s difficult to make changes due to a strong reluctance to change habits. Dr. Thomas Ortseifen, who was presenting, told us that the administration was not well organized and each part of it was “living alone” in its own ecosystem.

Hence, the city decided to put their trust in OpenText to bring coherence to this organization. The solution proposed by OpenText was to set up a centralized DMS (Documentum) in the middle of an SOA architecture, allowing flexibility and the possibility to use APIs to increase the scalability of new applications.

Here are the benefits of this solution:

  • Enhanced information flow
  • Faster, continuous availability
  • Less transporting times
  • Enhanced usage of existing database
  • Enhanced processes
  • Cross-functional complex search and analysis options
  • Reduced costs for information creation
  • Reduced costs for information management
  • Reduced costs for space required

Alstom ACOMIS

Alstom is a French company managing transport solutions like trams, metros, digital mobility, maintenance, modernisation, rail infrastructure and so on.

ACOMIS stands for Alstom COntent Management Information System. At first it was set up on premise with several Webtops and docbases.

Alstom decided to create ACOMIS V1 in order to merge all docbases and centralize the business. To achieve this, with the help of OpenText, they migrated millions of documents and merged everything to D2 and one docbase, all of this in a private cloud, leaving the on-premise setup behind.

Added business value:

  • Replacing webtop with D2 for better user experience
  • ACOMIS operated by OpenText specialists
  • One single repo for cross project searches

There were some new requirements then, and some performance issues: the need for GDPR compliance and a new 3D standard format. In order to gain these features, Alstom decided to move to a V2. So they moved to the public cloud, still managed by OpenText, in order to solve the performance issue (network lag). They used Brava! in order to view 3D objects in an HTML5 interface.

Added business value:

  • Public cloud for performance and external access
  • GDPR compliance
  • Security managed by OpenText
  • Version 16.4 with Brava! integration for 3D viewer

Conclusion

The OpenText World in Vienna is now closed. We met a lot of people and experts. We clearly see the trend of services and centralization from OpenText. We are excited to see where it is going.

Cet article OpenText Enterprise World Europe 2019 – Day 3 est apparu en premier sur Blog dbi services.


Documentum : Dctm job locked after docbase installation

A correct configuration of the Documentum jobs is paramount; that’s why it is the first thing we do after a docbase installation.
A few days ago, I configured the jobs on a new docbase using DQL, and I got an error because a job was locked by the user dmadmin.

The error message was:

DQL> UPDATE dm_job OBJECTS SET target_server=' ' WHERE target_server!=' ' ;
...
[DM_QUERY_F_UP_SAVE]fatal:  "UPDATE:  An error has occurred during a save operation."

[DM_SYSOBJECT_E_LOCKED]error:  "The operation on dm_FTQBS_WEEKLY sysobject was unsuccessful because it is locked by user dmadmin."

I checked the status of this job:

API> ?,c,select r_object_id from dm_job where object_name ='dm_FTQBS_WEEKLY';
r_object_id
----------------
0812D68780000ca6
(1 row affected)

API> dump,c,0812D68780000ca6
...
USER ATTRIBUTES

  object_name                     : dm_FTQBS_WEEKLY
  title                           :
  subject                         : qbs weekly job
...
  start_date                      : 2/28/2019 05:21:15
  expiration_date                 : 2/28/2027 23:00:00
...
  is_inactive                     : T
  inactivate_after_failure        : F
...
  run_now                         : T
...

SYSTEM ATTRIBUTES

  r_object_type                   : dm_job
  r_creation_date                 : 2/28/2019 05:21:15
  r_modify_date                   : 2/28/2019 05:24:48
  r_modifier                      : dmadmin
...
  r_lock_owner                    : dmadmin
  r_lock_date                     : 2/28/2019 05:24:48
...

APPLICATION ATTRIBUTES

...
  a_status                        :
  a_is_hidden                     : F
...
  a_next_invocation               : 3/7/2019 05:21:15

INTERNAL ATTRIBUTES

  i_is_deleted                    : F
...

The job was locked 3 minutes after the creation date… and has been locked ever since (4 days).

Let’s check job logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*0812D68780000ca6*
-rw-r--r--. 1 dmadmin dmadmin   0 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6.lck
-rw-rw-rw-. 1 dmadmin dmadmin 695 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
Thu Feb 28 05:24:50 2019 [ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, 
status: 0, with error message [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error:  "The DocBroker running on host (CONTENT_SERVER1:1489) does not know of a server for the specified docbase (repository1)"
...NO HEADER (RECURSION) No session id for current job.
Thu Feb 28 05:24:50 2019 [FATAL ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, status: 0, with error message .
..NO HEADER (RECURSION) No session id for current job.

I noted three important pieces of information here:
1. The DocBroker considered that the docbase was stopped when the AgentExec sent the request.
2. The timestamp corresponds to the installation date of the docbase.
3. LAUNCHER 20749.

I checked the install logs to confirm the first point:

[dmadmin@CONTENT_SERVER1 ~]$ egrep " The installer will s.*. repository1" $DOCUMENTUM/product/7.3/install/logs/install.log*
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:03:24,757  INFO [main]  - The installer will start component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:24:39,588  INFO [main]  - The installer will stop component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:26:49,110  INFO [main]  - The installer will start component process for repository1.

The AgentExec logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*agentexec.log*
-rw-rw-rw-. 1 dmadmin dmadmin    640 Feb 28 05:24 agentexec.log.save.02.28.19.05.27.54
-rw-rw-rw-. 1 dmadmin dmadmin    384 Feb 28 05:36 agentexec.log.save.02.28.19.05.42.26
-rw-r-----. 1 dmadmin dmadmin      0 Feb 28 05:42 agentexec.log.save.02.28.19.09.51.24
...
-rw-r-----. 1 dmadmin dmadmin 569463 Mar  8 09:11 agentexec.log
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1/agentexec/agentexec.log.save.02.28.19.05.27.54
Thu Feb 28 05:17:48 2019 [INFORMATION] [LAUNCHER 19584] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:19 2019 [INFORMATION] [LAUNCHER 20191] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:49 2019 [INFORMATION] [LAUNCHER 20253] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:19 2019 [INFORMATION] [LAUNCHER 20555] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:49 2019 [INFORMATION] [LAUNCHER 20749] Detected during program initialization: Version: 7.3.0050.0039  Linux64

Here I found the LAUNCHER 20749 noted above ;) So this job corresponds to the last job executed by the AgentExec before it was stopped.
The AgentExec was up, so the docbase should have been up as well, yet the DocBroker reported the docbase as down :(

Now, the question is: when exactly was the DocBroker informed that the docbase was shut down?

[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1.log.save.02.28.2019.05.26.49
...
2019-02-28T05:24:48.644873      20744[20744]    0112D68780000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (CONTENT_SERVER1) with port (1489).  
Information: (Config(repository1), Proximity(1), Status(Server shut down by user (dmadmin)), Dormancy Status(Active))."

To recapitulate:
– 05:24:48.644873 : Docbase shut down and DocBroker informed
– 05:24:49 : AgentExec sent request to DocBroker

So, we can say that the AgentExec was still alive after the docbase stopped!

Now, resolving the issue is easy :D

API> unlock,c,0812D68780000ca6
...
OK

I didn’t find in the logs when exactly the docbase stopped the AgentExec; I guess the docbase requests the stop (kill) but doesn’t check whether it has really been stopped.
I confess that I have encountered this error many times after docbase installations, which is why it is useful to know why it happens and how to resolve it quickly. I advise you to review the Dctm jobs after each installation: at least check whether the r_lock_date is set and whether it is justified.
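
As a quick sketch of such a check (note: depending on your database, an unlocked job may store the lock owner as an empty string rather than NULL, so the filter may need adjusting), a query like the one below should list the jobs that still hold a lock. Any job returned here can then be released with the unlock API shown above.

API> ?,c,select r_object_id, object_name, r_lock_owner, r_lock_date from dm_job where r_lock_owner is not null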

The article Documentum : Dctm job locked after docbase installation first appeared on the dbi services Blog.

Documentum – MigrationUtil – 4 – Change Host Name


In this blog I will change the Host Name. It comes after three blogs where I changed the Docbase ID, the Docbase Name and the Server Config Name; I hope you have already read them, and if not, don’t delay 😉

So, let’s change the Host Name!

1. Migration preparation

Update the configuration file of the Migration Utility:

[dmadmin@vmtestdctm01 ~]$ vi $DM_HOME/install/external_apps/MigrationUtil/config.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">docbase1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...
<entry key="ChangeHostName">yes</entry>
<entry key="HostName">vmtestdctm01</entry>
<entry key="NewHostName">vmtestdctm02</entry>
...
</properties>

Be careful, the hostname may or may not be an FQDN: before any change, check it using “hostname --fqdn” and compare it with what you have in place.
You can also use the select queries from the log of my migration below to be sure 😉
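
For example (a minimal sketch reusing two of the queries the utility itself runs, as you will see in the migration log further down), you can check what is currently stored before deciding which hostname to put in the config.xml:

API> ?,c,select r_object_id, r_host_name, web_server_loc from dm_server_config
API> ?,c,select r_object_id, target_server from dm_job where target_server like '%@vmtestdctm01'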

Stop the Docbase and the Docbroker:

$DOCUMENTUM/dba/dm_shutdown_docbase1
$DOCUMENTUM/dba/dm_stop_DocBroker

Update the database name in the server.ini file; it is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and puts “/” in the URL instead of “:”. The best workaround I found is to update the database_conn value in the server.ini file and put the service name instead of the database name.
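
In other words (the host and port below are just the ones from this example), the utility always builds the service-name form of the Oracle JDBC thin URL from the database_conn value:

jdbc:oracle:thin:@vmtestdctm01:1521/<database_conn>

So whatever is stored in database_conn has to be a valid service name, which is why the TNS alias DCTMDB from the original server.ini does not work here.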
Check the tnsnames.ora and note the service name; in my case it is dctmdb.local:

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/docbase1/server.ini
...
[SERVER_STARTUP]
docbase_id = 123456
docbase_name = docbase1
server_config_name = docbase1
database_conn = dctmdb.local
database_owner = docbase1
...

Don’t worry, we will roll back this change before starting the docbase.

Add vmtestdctm02 to /etc/hosts:

[root@vmtestdctm01 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.122.1 vmtestdctm01 vmtestdctm02

2. Execute the Migration

Execute the migration script.

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...

Changing Host Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/HostNameChange.log
Finished changing host name...Please check log file for more details/errors
Finished changing Host Name...

Skipping Install Owner Change...
Skipping Server Name Change...
Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

Check the log content to understand what has been changed, and check for errors if any.

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/HostNameChange.log
Start: 2019-04-09 18:55:48.613
Changing Host Name
=====================
HostName: vmtestdctm01
New HostName: vmtestdctm02
Changing HostName for docbase: docbase1
Retrieving server.ini path for docbase: docbase1
Found path: /app/dctm/product/16.4/dba/config/docbase1/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/HostNameChange_docbase1_DatabaseRestore.sql'
Processing _s table...
select r_object_id,r_host_name from dm_server_config_s where lower(r_host_name) = lower('vmtestdctm01')
update dm_server_config_s set r_host_name = 'vmtestdctm02' where r_object_id = '3d01e24080000102'
select r_object_id,r_install_domain from dm_server_config_s where lower(r_install_domain) = lower('vmtestdctm01')
select r_object_id,web_server_loc from dm_server_config_s where lower(web_server_loc) = lower('vmtestdctm01')
update dm_server_config_s set web_server_loc = 'vmtestdctm02' where r_object_id = '3d01e24080000102'
select r_object_id,host_name from dm_mount_point_s where lower(host_name) = lower('vmtestdctm01')
update dm_mount_point_s set host_name = 'vmtestdctm02' where r_object_id = '3e01e24080000149'
select r_object_id,user_os_domain from dm_user_s where lower(user_os_domain) = lower('vmtestdctm01')
select r_object_id,user_global_unique_id from dm_user_s where lower(user_global_unique_id) like lower('vmtestdctm01:%')
select r_object_id,user_login_domain from dm_user_s where lower(user_login_domain) = lower('vmtestdctm01')
select r_object_id,target_server from dm_job_s where lower(target_server) like lower('%@vmtestdctm01')
update dm_job_s set target_server = 'docbase1.docbase1@vmtestdctm02' where r_object_id = '0801e240800003d6'
...
update dm_job_s set target_server = 'docbase1.docbase1@vmtestdctm02' where r_object_id = '0801e24080000384'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and lower(object_name) like lower('%vmtestdctm01%')
update dm_sysobject_s set object_name = 'JMS vmtestdctm02:9080 for docbase1.docbase1' where r_object_id = '0801e240800010a4'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_outputdevice' and lower(object_name) like lower('%vmtestdctm01%')
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_client_registration' and lower(object_name) like lower('%vmtestdctm01%')
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_WM6Aoa' where r_object_id = '0801e24080000581'
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_CqJKIa' where r_object_id = '0801e2408000058b'
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_uEp7oa' where r_object_id = '0801e24080001107'
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_j44a0a' where r_object_id = '0801e24080001111'
select r_object_id,host_name from dm_client_registration_s where lower(host_name) = lower('vmtestdctm01')
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e2408000058b'
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080001107'
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080000581'
update dm_client_registration_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080001111'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_client_rights' and lower(object_name) like lower('%vmtestdctm01%')
update dm_sysobject_s set object_name = 'dfc_vmtestdctm02_WM6Aoa' where r_object_id = '0801e24080000582'
select r_object_id,host_name from dm_client_rights_s where lower(host_name) = lower('vmtestdctm01')
update dm_client_rights_s set host_name = 'vmtestdctm02' where r_object_id = '0801e24080000582'
Successfully updated database values...
Processing _r table...
select r_object_id,base_uri,i_position from dm_sysprocess_config_r where lower(base_uri) like lower('%//vmtestdctm01:%') or lower(base_uri) like lower('%//vmtestdctm01.%:%')
update dm_sysprocess_config_r set base_uri = 'http://vmtestdctm02:9080/DmMail/servlet/DoMail' where r_object_id = '0801e240800010a4' and i_position = -3
update dm_sysprocess_config_r set base_uri = 'http://vmtestdctm02:9080/SAMLAuthentication/servlet/ValidateSAMLResponse' where r_object_id = '0801e240800010a4' and i_position = -2
update dm_sysprocess_config_r set base_uri = 'http://vmtestdctm02:9080/DmMethods/servlet/DoMethod' where r_object_id = '0801e240800010a4' and i_position = -1
select r_object_id,projection_targets,i_position from dm_sysprocess_config_r where lower(projection_targets) = lower('vmtestdctm01')
update dm_sysprocess_config_r set projection_targets = 'vmtestdctm02' where r_object_id = '0801e240800010a4' and i_position = -1
select r_object_id,acs_base_url,i_position from dm_acs_config_r where lower(acs_base_url) like lower('%//vmtestdctm01:%') or lower(acs_base_url) like lower('%//vmtestdctm01.%:%')
update dm_acs_config_r set acs_base_url = 'http://vmtestdctm02:9080/ACS/servlet/ACS' where r_object_id = '0801e24080000490' and i_position = -1
select r_object_id,method_arguments,i_position from dm_job_r where lower(method_arguments) like lower('%vmtestdctm01%')
select r_object_id,projection_targets,i_position from dm_server_config_r where lower(projection_targets) = lower('vmtestdctm01')
select r_object_id,a_storage_param_value,i_position from dm_extern_store_r where lower(a_storage_param_value) like lower('%//vmtestdctm01:%') or lower(a_storage_param_value) like lower('%//vmtestdctm01.%:%')
Successfully updated database values...
Committing all database operations...

Processing server.ini changes for docbase: docbase1
Backed up '/app/dctm/product/16.4/dba/config/docbase1/server.ini' to '/app/dctm/product/16.4/dba/config/docbase1/server.ini_host_vmtestdctm01.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/docbase1/server.ini

Finished changing host name for docbase:docbase1

Processing DFC properties changes...
Backed up '/app/dctm/product/16.4/config/dfc.properties' to '/app/dctm/product/16.4/config/dfc.properties_host_vmtestdctm01.backup'
Updated dfc.properties file: /app/dctm/product/16.4/config/dfc.properties
No need to update dfc.properties file: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties
No need to update dfc.properties file: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties
File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear/APP-INF/classes/dfc.properties doesn't exist
Backed up '/app/dctm/product/16.4/product/16.4/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/documentum.config/dfc.properties' to '/app/dctm/product/16.4/product/16.4/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/documentum.config/dfc.properties_host_vmtestdctm01.backup'
Updated dfc.properties file: /app/dctm/product/16.4/product/16.4/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/documentum.config/dfc.properties
Finished processing DFC properties changes...

Processing File changes...
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_host_vmtestdctm01.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_DMS/deployments/DMS.ear/lib/configs.jar/dms.properties doesn't exist
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/XhiveConnector.ear/XhiveConnector.war/WEB-INF/web.xml doesn't exist
Backed up '/app/dctm/product/16.4/dba/dm_launch_DocBroker' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_launch_DocBroker_host_vmtestdctm01.backup'
Updated /app/dctm/product/16.4/dba/dm_launch_DocBroker
Backed up '/app/dctm/product/16.4/dba/dm_stop_DocBroker' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_stop_DocBroker_host_vmtestdctm01.backup'
Updated /app/dctm/product/16.4/dba/dm_stop_DocBroker
Finished processing File changes...

Finished changing host name...
End: 2019-04-09 18:55:50.948

3. Post Migration

Remove vmtestdctm01 from /etc/hosts.

[root@vmtestdctm02 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.122.1 vmtestdctm02

It is important to think about other applications/databases installed on the same server before this step.

Revert the change done in the server.ini file:

[dmadmin@vmtestdctm02 ~]$ vi $DOCUMENTUM/dba/config/docbase1/server.ini
...
[SERVER_STARTUP]
docbase_id = 123456
docbase_name = docbase1
server_config_name = docbase1
database_conn = DCTMDB
database_owner = docbase1
...

Start the DocBroker:

[dmadmin@vmtestdctm02 ~]$ $DOCUMENTUM/dba/dm_launch_DocBroker
starting connection broker on current host: [vmtestdctm02]
with connection broker log: [/app/dctm/product/16.4/dba/log/docbroker.vmtestdctm02.1489.log]
connection broker pid: 11863

Start the Docbase:

[dmadmin@vmtestdctm02 ~]$ $DOCUMENTUM/dba/dm_start_docbase1
starting Documentum server for repository: [docbase1]
with server log: [/app/dctm/product/16.4/dba/log/docbase1.log]
server pid: 12810

Check the docbase log:

[dmadmin@vmtestdctm02 ~]$ cat $DOCUMENTUM/dba/log/docbase1.log
...
2019-04-09T19:11:30.915327	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 13776, session 0101e24080000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-04-09T19:11:30.916008	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 13777, session 0101e2408000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-04-09T19:11:31.917818	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 13786, session 0101e2408000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-04-09T19:11:32.918943	13732[13732]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 13798, session 0101e2408000000c) is started sucessfully."
2019-04-09T19:11:33.919701	13732[13732]	0000000000000000	[DM_SERVER_I_START]info:  "Sending Initial Docbroker check-point "
2019-04-09T19:11:33.927309	13732[13732]	0000000000000000	[DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 13810, session 0101e24080000456) is started sucessfully."
2019-04-09T19:11:34.639677	13809[13809]	0101e24080000003	[DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (vmtestdctm02) with port (1490).  Information: (Config(docbase1), Proximity(1), Status(Open), Dormancy Status(Active))."

Get the docbase map from the docbroker:

[dmadmin@vmtestdctm02 ~]$ dmqdocbroker -t vmtestdctm02 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm02
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm02 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : docbase1
Docbase id          : 123456
Docbase description : First docbase
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     : 
--------------------------------------------

Run an idql query for a quick check:

[dmadmin@vmtestdctm02 ~]$ idql docbase1
...
Connected to OpenText Documentum Server running Release 16.4.0000.0248  Linux64.Oracle
1> select user_login_name from dm_user where user_name='dmadmin';
2> go
user_login_name                                                                                                                                                                                                                                                
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dmadmin                                                                                                                                                                                                                                                        
(1 row affected)
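
Still in the same idql session, you can also verify that the server config object now carries the new host name; it should return vmtestdctm02 for r_host_name and web_server_loc, matching the updates listed in the migration log:

1> select r_host_name, web_server_loc from dm_server_config;
2> go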

4. Conclusion

This is a helpful way to change the Host Name; I have tried it many times and I can say that it works very well.
For the moment, all the changes were done only on a simple environment; maybe the next blog will talk about a change on a High Availability one 😉
Have you already tried this tool? Don’t hesitate to share your experience!

The article Documentum – MigrationUtil – 4 – Change Host Name first appeared on the dbi services Blog.

Documentum – RCS/CFS installation failure


A few weeks ago, I had the task of adding a new CS into already-HA environments (DEV/TEST/PROD) to better support the load on these environments, as well as adding a new repository on all Content Servers. These environments had been installed nearly two years ago already, so it was really just adding something new into the picture. When doing so, the installation of the new repository on the existing Content Servers (CS1 / CS2) was successful and without much trouble (installation in silent mode obviously, so it’s fast & reliable for the CS and RCS), but then the new Remote Content Server (RCS/CFS – CS3) installation, using the same silent scripts, failed for the two existing/old repositories while it succeeded for the new one.

Well actually, the CFS installation didn’t completely fail. The silent installer returned the prompt properly, the repository start/stop scripts were present, the config folder was present, the dm_server_config object was there, aso… So it looked like the installation was successful but, as a best practice, it is really important to always check the log file of a silent installation because it doesn’t show anything on the prompt, even if there are errors. So while checking the log file after the silent installer returned the prompt, I saw the following:

[dmadmin@content_server_03 ~]$ cd $DM_HOME/install/logs/
[dmadmin@content_server_03 logs]$ cat install.log
15:12:31,830  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
15:12:31,870  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsInitializeImportantServerVariables - The installer is gathering system configuration information.
15:12:31,883  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Start to verify the password
15:12:33,259  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:12:33,635  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair - generated RSA (2,048-bit strength) mutiformat key pair in 352 ms
15:12:33,667  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential - certificate created for DFC <CN=dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa,O=EMC,OU=Documentum> valid from Fri Feb 01 15:07:33 UTC 2019 to Mon Jan 29 15:12:33 UTC 2029:

15:12:33,668  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:12:33,681  INFO [main] com.documentum.fc.client.security.impl.InitializeKeystoreForDfc - [DFC_SECURITY_IDENTITY_INITIALIZED] Initialized new identity in keystore, DFC alias=dfc, identity=dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:12:33,682  INFO [main] com.documentum.fc.client.security.impl.AuthenticationMgrForDfc - identity for authentication is dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:12:33,687  INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 7.3.0040.0025
15:12:33,939  INFO [Timer-2] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
15:12:34,717  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:12:34,758  INFO [main] com.documentum.fc.client.security.internal.AuthenticationMgr - new identity bundle <dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa   1549033954      content_server_03.dbi-services.com         hicAAvU7QX3VNvDft2PwmnW4SIFX+5Snx7PlA5hryuOpo2eWLcEANYAEwYBbU6F3hEBAMenRR/lXFrHFqlrxTZl54whGL+9VnH6CCEu4x8dxdQ+QLRE3EtLlO31SPNhqkzjyVwhktNuivhiZkxweDNynvk+pDleTPvzUvF0YSoggcoiEq+kGr6/c9vUPOMuuv1k7PR1AO05JHmu7vea9/UBaV+TFA6/cGRwVh5i5D2s1Ws7qiDlBl4R+Wp3+TbNLPjbn/SeOz5ZSjAmXThK0H0RXwbcwHo9bVm0Hzu/1n7silII4ZzjAW7dd5Jvbxb66mxC8NWaNabPksus2mTIBhg==>
15:12:35,002  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:12:35,119  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: false
15:12:36,317  INFO [main] com.documentum.fc.client.privilege.impl.PublicKeyCertificate - stored certificate for CN
15:12:36,353  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling in GR_DocBase a new record with this persistent certificate:
-----BEGIN CERTIFICATE-----
MIIDHzCCAgcCELGIh8FYcycggMmImLESjEYwDQYJKoZIhvcNAQELBQAwTjETMBEG
YXZxbFJuN1lRZFlUTXRQNnBWNnpRY3JBYTAeFw0xOTAyMDExNTA3MzNaFw0yOTAx
MjkxNTEyMzNaME4xEzARBgNVBAsMCkRvY3VtZW50dW0xDDAKBgNVBAoMA0VNQzEp
hKnQmaMo/wCv+QXZTCsitrBNvoomcT82mYzwIxV5/7cPCIHHMcJijsJCtunjiucV
MCcGA1UEAwwgZGZjX1VuSWF2cWxSbjdZUWRZVE10UDZwVjZ6UWNyQWEwggEiMA0G
HcL0KUImSV7owDqKzV3lEYCGdomX4gYTI5bMKAiTEuGyWRKw2YTQGhfp5y0mU0hV
ORTYyRoGjpRUuXWpdrsrbX8g8gD9l6ijWTSIWfTGO/7//mTHp2zwp/TiIEuAS+RA
eFw1pBLSCKneYgquMuiyFfuCfBVNY5Q0MzyPHYxrDAp4CtjasIrNT5h3AgMBAAEw
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4Hli+niUAD0ksVVWocPnvzV10ZOj2
DQYJKoZIhvcNAQELBQADggEBAEAre45NEpqzGMMYX1zpjgib9wldSmiPVDZbhj17
KnUCgDy7FhFQ5U5w6wf2iO9UxGV42AYQe2TjED0EbYwpYB8DC970J2ZrjZRFMy/Y
A1UECwwKRG9jdW1lbnR1bTEMMAoGA1UECgwDRU1DMSkwJwYDVQQDDCBkZmNfVW5J
gwKynVf9O10GQP0a8Z6Fr3jrtCEzfLjOXN0VxEcgwOEKRWHM4auxjevqGCPegD+y
FVWwylyIsMRsC9hOxoNHZPrbhk3N9Syhqsbl+Z9WXG0Sp4uh1z5R1NwVhR7YjZkF
19cfN8uEHqedJo26lq7oFF2KLJ+/8sWrh2a6lrb4fNXYZIAaYKjAjsUzcejij8en
Rd8yvghCc4iwWvpiRg9CW0VF+dXg6KkQmaFjiGrVosskUjuACHncatiYC5lDNJy+
TDdnNWYlctfWcT8WL/hX6FRGedT9S30GShWJNobM9vECoNg=
-----END CERTIFICATE-----
15:12:36,355  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: false
15:12:36,535  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling a new registration record for dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:12:36,563  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - [DFC_SECURITY_GR_REGISTRATION_PUBLISH] this dfc instance is now published in the global registry GR_DocBase
15:12:37,513  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:12:38,773  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:12:39,314  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is adding it as primary connection broker and moves existing primary as backup.
15:12:41,643  INFO [main]  - The installer updates dfc.properties file.
15:12:41,644  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is adding it as primary connection broker and moves existing primary as backup.
15:12:41,649  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - The installer will validate AEK/Lockbox fileds.
15:12:41,656  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is changing primary as backup and backup as primary.
15:12:43,874  INFO [main]  - The installer updates dfc.properties file.
15:12:43,874  INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is changing primary as backup and backup as primary.
15:12:43,876  INFO [main]  - The installer is creating folders for the selected repository.
15:12:43,876  INFO [main]  - Checking if cfs is being installed on the primary server...
15:12:43,877  INFO [main]  - CFS is not being installed on the primary server
15:12:43,877  INFO [main]  - Installer creates necessary directory structure.
15:12:43,879  INFO [main]  - Installer copies aek.key, server.ini, dbpasswd.txt and webcache.ini files from primary server.
15:12:43,881  INFO [main]  - Installer executes dm_rcs_copyfiles.ebs to get files from primary server
15:12:56,295  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/dbpasswd.txt has been created successfully
15:12:56,302  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/webcache.ini has been created successfully
15:12:56,305  INFO [main]  - Installer found exising file $DOCUMENTUM/dba/secure/lockbox.lb
15:12:56,305  INFO [main]  - Installer renamed exising file $DOCUMENTUM/dba/secure/lockbox.lb to $DOCUMENTUM/dba/secure/lockbox.lb.bak.3
15:12:56,306  INFO [main]  - $DOCUMENTUM/dba/secure/lockbox.lb has been created successfully
15:12:56,927  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/server_content_server_03_DocBase1.ini has been created successfully
15:12:56,928  INFO [main]  - Installer found exising file $DOCUMENTUM/dba/castore_license
15:12:56,928  INFO [main]  - Installer renamed exising file $DOCUMENTUM/dba/castore_license to $DOCUMENTUM/dba/castore_license.bak.3
15:12:56,928  INFO [main]  - $DOCUMENTUM/dba/castore_license has been created successfully
15:12:56,931  INFO [main]  - $DOCUMENTUM/dba/config/DocBase1/ldap_080f123450006deb.cnt has been created successfully
15:12:56,934  INFO [main]  - Installer updates server.ini
15:12:56,940  INFO [main]  - The installer tests database connection.
15:12:57,675  INFO [main]  - Database successfully opened.
Test table successfully created.
Test view successfully created.
Test index successfully created.
Insert into table successfully done.
Index successfully dropped.
View successfully dropped.
Database case sensitivity test successfully past.
Table successfully dropped.
15:13:00,675  INFO [main]  - The installer creates server config object.
15:13:00,853  INFO [main]  - The installer is starting a process for the repository.
15:13:01,993  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:03,079  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:04,149  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:05,187  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:13:06,256  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCreateContentFileServerPostSeq - logPath is $DOCUMENTUM/dba/log/content_server_03_DocBase1.log
15:14:06,352  INFO [main]  - Waiting for repository DocBase1.content_server_03_DocBase1 to start up.
15:14:25,003  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
15:14:25,495  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:14:25,498  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/655905.tmp/dfc.keystore
15:14:25,513  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: true
15:14:25,672  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - assigning rights to all roles for this client on DocBase1
15:14:25,682  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - found client rights: false
15:14:25,736  INFO [main] com.documentum.fc.client.privilege.impl.PublicKeyCertificate - stored certificate for CN
15:14:25,785  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling in DocBase1 a new record with this persistent certificate:
-----BEGIN CERTIFICATE-----
MIIDHzCCAgcCELGIh8FYcycggMmImLESjEYwDQYJKoZIhvcNAQELBQAwTjETMBEG
YXZxbFJuN1lRZFlUTXRQNnBWNnpRY3JBYTAeFw0xOTAyMDExNTA3MzNaFw0yOTAx
MjkxNTEyMzNaME4xEzARBgNVBAsMCkRvY3VtZW50dW0xDDAKBgNVBAoMA0VNQzEp
hKnQmaMo/wCv+QXZTCsitrBNvoomcT82mYzwIxV5/7cPCIHHMcJijsJCtunjiucV
MCcGA1UEAwwgZGZjX1VuSWF2cWxSbjdZUWRZVE10UDZwVjZ6UWNyQWEwggEiMA0G
HcL0KUImSV7owDqKzV3lEYCGdomX4gYTI5bMKAiTEuGyWRKw2YTQGhfp5y0mU0hV
ORTYyRoGjpRUuXWpdrsrbX8g8gD9l6ijWTSIWfTGO/7//mTHp2zwp/TiIEuAS+RA
eFw1pBLSCKneYgquMuiyFfuCfBVNY5Q0MzyPHYxrDAp4CtjasIrNT5h3AgMBAAEw
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4Hli+niUAD0ksVVWocPnvzV10ZOj2
DQYJKoZIhvcNAQELBQADggEBAEAre45NEpqzGMMYX1zpjgib9wldSmiPVDZbhj17
KnUCgDy7FhFQ5U5w6wf2iO9UxGV42AYQe2TjED0EbYwpYB8DC970J2ZrjZRFMy/Y
A1UECwwKRG9jdW1lbnR1bTEMMAoGA1UECgwDRU1DMSkwJwYDVQQDDCBkZmNfVW5J
gwKynVf9O10GQP0a8Z6Fr3jrtCEzfLjOXN0VxEcgwOEKRWHM4auxjevqGCPegD+y
FVWwylyIsMRsC9hOxoNHZPrbhk3N9Syhqsbl+Z9WXG0Sp4uh1z5R1NwVhR7YjZkF
19cfN8uEHqedJo26lq7oFF2KLJ+/8sWrh2a6lrb4fNXYZIAaYKjAjsUzcejij8en
Rd8yvghCc4iwWvpiRg9CW0VF+dXg6KkQmaFjiGrVosskUjuACHncatiYC5lDNJy+
TDdnNWYlctfWcT8WL/hX6FRGedT9S30GShWJNobM9vECoNg=
-----END CERTIFICATE-----
15:14:25,789  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: true
15:14:25,802  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - found client rights: false
15:14:25,981  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling a new rights record for dfc_UnYQdYTP6pV6zRn7tQMIavqlcrAa
15:14:26,032  INFO [main] com.documentum.fc.client.security.impl.DfcRightsCreator - [DFC_SECURITY_DOCBASE_RIGHTS_REGISTER] this dfc instance has now escalation rights registered with docbase DocBase1
15:14:26,052  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - setApplicationServer sharedDfcLibDir is:$DOCUMENTUM/shared/dfc
15:14:26,052  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - getFileFromResource for templates/appserver.properties
15:14:26,059  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerAddDocbaseEntryToWebXML - BPM webapp does not exist.
15:14:26,191  INFO [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - Executing the Docbase HeadStart script.
15:14:36,202  INFO [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - Executing the Creates ACS config object script.
15:14:46,688  INFO [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - Executing the This script does miscellaneous setup tasks for remote content servers script.
15:14:56,840 ERROR [main] com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2 - The installer failed to execute the This script does miscellaneous setup tasks for remote content servers script. For more information, please read output file: $DOCUMENTUM/dba/config/DocBase1/dm_rcs_setup.out.
com.documentum.install.shared.common.error.DiException: The installer failed to execute the This script does miscellaneous setup tasks for remote content servers script. For more information, please read output file: $DOCUMENTUM/dba/config/DocBase1/dm_rcs_setup.out.
        at com.documentum.install.server.installanywhere.actions.cfs.DiWAServerProcessingScripts2.setup(DiWAServerProcessingScripts2.java:98)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:75)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        ...
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
15:14:56,843  INFO [main]  - The INSTALLER_UI value is SILENT
15:14:56,843  INFO [main]  - The KEEP_TEMP_FILE value is true
15:14:56,843  INFO [main]  - The common.installOwner.password value is ******
15:14:56,843  INFO [main]  - The SERVER.SECURE.ROOT_PASSWORD value is ******
15:14:56,843  INFO [main]  - The common.upgrade.aek.lockbox value is null
15:14:56,843  INFO [main]  - The common.old.aek.passphrase.password value is null
15:14:56,843  INFO [main]  - The common.aek.algorithm value is AES_256_CBC
15:14:56,843  INFO [main]  - The common.aek.passphrase.password value is ******
15:14:56,843  INFO [main]  - The common.aek.key.name value is CSaek
15:14:56,843  INFO [main]  - The common.use.existing.aek.lockbox value is null
15:14:56,843  INFO [main]  - The SERVER.ENABLE_LOCKBOX value is true
15:14:56,844  INFO [main]  - The SERVER.LOCKBOX_FILE_NAME value is lockbox.lb
15:14:56,844  INFO [main]  - The SERVER.LOCKBOX_PASSPHRASE.PASSWORD value is ******
15:14:56,844  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
15:14:56,844  INFO [main]  - The SERVER.DOCBROKER_ACTION value is null
15:14:56,844  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_HOST value is content_server_01.dbi-services.com
15:14:56,844  INFO [main]  - The SERVER.PRIMARY_CONNECTION_BROKER_PORT value is 1489
15:14:56,844  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content_server_03.dbi-services.com
15:14:56,844  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
15:14:56,844  INFO [main]  - The SERVER.FQDN value is content_server_03.dbi-services.com
15:14:56,845  INFO [main]  - The SERVER.DOCBASE_NAME value is DocBase1
15:14:56,845  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is DocBase1
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
15:14:56,845  INFO [main]  - The SERVER.SECURE.REPOSITORY_PASSWORD value is ******
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
15:14:56,845  INFO [main]  - The SERVER.REPOSITORY_HOSTNAME value is content_server_01.dbi-services.com
15:14:56,845  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
15:14:56,845  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_NAME value is
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_PORT value is
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_CONNECT_MODE value is null
15:14:56,846  INFO [main]  - The SERVER.USE_CERTIFICATES value is false
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_FILE_NAME value is null
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_KEYSTORE_PASSWORD_FILE_NAME value is null
15:14:56,846  INFO [main]  - The SERVER.DOCBROKER_CIPHER_LIST value is null
15:14:56,853  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE value is null
15:14:56,853  INFO [main]  - The SERVER.DFC_SSL_TRUSTSTORE_PASSWORD value is ******
15:14:56,853  INFO [main]  - The SERVER.DFC_SSL_USE_EXISTING_TRUSTSTORE value is null
15:14:56,853  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
15:14:56,854  INFO [main]  - The SERVER.DOCUMENTUM_DATA value is $DATA
15:14:56,854  INFO [main]  - The SERVER.DOCUMENTUM_SHARE value is $DOCUMENTUM/share
15:14:56,854  INFO [main]  - The CFS_SERVER_CONFIG_NAME value is content_server_03_DocBase1
15:14:56,854  INFO [main]  - The SERVER.DOCBASE_SERVICE_NAME value is DocBase1
15:14:56,854  INFO [main]  - The CLIENT_CERTIFICATE value is null
15:14:56,854  INFO [main]  - The RKM_PASSWORD value is ******
15:14:56,854  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED value is null
15:14:56,854  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is null
15:14:56,854  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is null
15:14:56,854  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is null
15:14:56,854  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is null
15:14:56,855  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
15:14:56,855  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
15:14:56,855  INFO [main]  - The SERVER.COMPONENT_NAME value is null
15:14:56,855  INFO [main]  - The SERVER.DOCBASE_NAME value is DocBase1
15:14:56,855  INFO [main]  - The SERVER.CONNECTION_BROKER_NAME value is null
15:14:56,855  INFO [main]  - The SERVER.CONNECTION_BROKER_PORT value is null
15:14:56,855  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_HOST value is content_server_03.dbi-services.com
15:14:56,855  INFO [main]  - The SERVER.PROJECTED_CONNECTION_BROKER_PORT value is 1489
15:14:56,855  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is DocBase1
15:14:56,855  INFO [main]  - The SERVER.DOCBROKER_NAME value is
15:14:56,856  INFO [main]  - The SERVER.DOCBROKER_PORT value is
15:14:56,856  INFO [main]  - The SERVER.CONNECTION_BROKER_SERVICE_STARTUP_TYPE value is null
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
15:14:56,856  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
15:14:56,856  INFO [main]  - The SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED_KEY value is null
15:14:56,856  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_PORT_OTHER value is null
15:14:56,856  INFO [main]  - The SERVER.PROJECTED_DOCBROKER_HOST_OTHER value is null
15:14:56,856  INFO [main]  - The SERVER.GLOBAL_REGISTRY_REPOSITORY value is null
15:14:56,856  INFO [main]  - The SERVER.BOF_REGISTRY_USER_LOGIN_NAME value is null
15:14:56,856  INFO [main]  - The SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD value is ******
15:14:56,856  INFO [main]  - The SERVER.COMPONENT_ACTION value is CREATE
15:14:56,857  INFO [main]  - The SERVER.COMPONENT_NAME value is null
15:14:56,857  INFO [main]  - The SERVER.PRIMARY_SERVER_CONFIG_NAME value is DocBase1
15:14:56,857  INFO [main]  - The SERVER.DOCBASE_NAME value is DocBase1
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_USERNAME value is dmadmin
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_PASSWORD value is ******
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_USER_DOMAIN value is
15:14:56,857  INFO [main]  - The SERVER.REPOSITORY_USERNAME_WITH_DOMAIN value is dmadmin
15:14:56,857  INFO [main]  - The env PATH value is: /usr/xpg4/bin:$DOCUMENTUM/shared/java64/JAVA_LINK/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DOCUMENTUM/shared/java64/JAVA_LINK/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
[dmadmin@content_server_03 logs]$

 

As you can see above, everything was going well until the script “This script does miscellaneous setup tasks for remote content servers” was executed. Yes, that is a hell of a description, isn’t it? What this script actually does is run the “dm_rcs_setup.ebs” script (you can find it under $DM_HOME/install/admin/) on the repository to set up the remote jobs, project the RCS/CFS repository to the local docbroker, create the log folder and a few other things. Here was the content of the output file for the execution of this EBS:

[dmadmin@content_server_03 logs]$ cat $DOCUMENTUM/dba/config/DocBase1/dm_rcs_setup.out
Running dm_rcs_setup.ebs script on docbase DocBase1.content_server_03_DocBase1 to set up jobs for a remote content server.
docbaseNameOnly = DocBase1
Connected To DocBase1.content_server_03_DocBase1
$DOCUMENTUM/dba/log/000f1234/sysadmin was created.
Duplicating distributed jobs.
Creating job object for dm_ContentWarningcontent_server_03_DocBase1
Successfully created job object for dm_ContentWarningcontent_server_03_DocBase1
Creating job object for dm_LogPurgecontent_server_03_DocBase1
Successfully created job object for dm_LogPurgecontent_server_03_DocBase1
Creating job object for dm_ContentReplicationcontent_server_03_DocBase1
Successfully created job object for dm_ContentReplicationcontent_server_03_DocBase1
Creating job object for dm_DMCleancontent_server_03_DocBase1
The dm_DMClean job does not exist at the primary server so we will not create it at the remote site, either.
Failed to create job object for dm_DMCleancontent_server_03_DocBase1
[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_BADID]error:  "Bad ID given: 0000000000000000"

[DM_API_E_NO_MATCH]error:  "There was no match in the docbase for the qualification: dm_job where object_name = 'dm_DMClean' and lower(target_server) like lower('DocBase1.DocBase1@%')"


Exiting with return code (-1)
[dmadmin@content_server_03 logs]$
[dmadmin@content_server_03 logs]$

 

The RCS/CFS installation is failing because the creation of a remote job cannot complete successfully. It works properly for 3 out of the 5 remote jobs but not for the 2 remaining ones. Only one failure is shown in the log file because the installer didn’t even try to process the second one: it had already failed and therefore stopped the installation here. That’s why the start/stop scripts were there, the log folder was there and the dm_server_config was OK as well, but there were actually some missing pieces.

The issue here is that the RCS/CFS installation isn’t able to find the r_object_id of the “dm_DMClean” job (it mentions “Bad ID given: 0000000000000000”) and therefore it’s not able to create the remote job. The last message is actually more interesting: “There was no match in the docbase for the qualification: dm_job where object_name = ‘dm_DMClean’ and lower(target_server) like lower(‘DocBase1.DocBase1@%’)”.

The RCS/CFS installation is actually looking for the job named ‘dm_DMClean’, which is OK, but it is also filtering on a target_server equal to ‘docbase_name.server_config_name@…’ and here, it’s not finding any result.
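
Therefore, a useful pre-check before launching an RCS/CFS installation is to run the installer’s own qualification (taken from the error above, just extended to all five distributed jobs, so adapt the docbase/server config names to your case) and make sure each job is found:

API> ?,c,select object_name, target_server from dm_job where object_name in ('dm_ContentWarning','dm_LogPurge','dm_ContentReplication','dm_DMClean','dm_DMFilescan') and lower(target_server) like lower('DocBase1.DocBase1@%')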

 

So what happened? Like I was saying in the introduction, this environment had already been installed in HA a while ago. As a result, the jobs were already configured by us as we would expect them. Usually, we configure the jobs as follows (I’m only talking about the distributed jobs here):

Job Name on CS1           Job Status on CS1    Job Name on RCS%           Job Status on RCS%
dm_ContentWarning         Active               dm_ContentWarning%         Inactive
dm_LogPurge               Active               dm_LogPurge%               Active
dm_DMClean                Active               dm_DMClean%                Inactive
dm_DMFilescan             Active               dm_DMFilescan%             Inactive
dm_ContentReplication     Inactive             dm_ContentReplication%     Inactive

Based on this, we usually disable dm_ContentReplication completely (if it’s not needed) and we obviously leave dm_LogPurge enabled (all of them) with the target_server set to the local CS it is supposed to run on (so 1 job per CS). Then, for the 3 remaining jobs, it depends on the load of the environment. These jobs can be set to run on CS1 by setting the target_server equal to ‘DocBase1.DocBase1@content_server_01.dbi-services.com’, or you can set them to run on ANY Content Server by setting an empty target_server (a single space: ‘ ‘). It doesn’t matter where they are running, but it is important for these jobs to run, and hence setting them to ANY available Content Server is better so they are not bound to a single point of failure.

So the reason the RCS/CFS installation failed is that we configured our jobs properly… Funny, right? As you could see in the logs, dm_ContentWarning was created properly, but that was because someone was doing some testing with this job and it was temporarily set to run on CS1 only; therefore, when the installer checked it, it was a coincidence/luck that it could find it.

After the failure, there is normally not much left to do except creating the JMS config object, checking the ACS URLs and finally restarting the JMS. But still, it is cleaner to just remove the RCS/CFS, clean the remaining repository objects (the distributed jobs that were created) and then reinstall the RCS/CFS after setting the jobs as the installer expects them to be…
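
As a sketch of that last preparation step (double-check the value against your own primary docbase/server config name and host, and put your preferred configuration back once the RCS/CFS is installed), the affected jobs can temporarily be pointed back to the primary Content Server so the installer finds them:

API> ?,c,update dm_job objects set target_server = 'DocBase1.DocBase1@content_server_01.dbi-services.com' where object_name in ('dm_DMClean','dm_DMFilescan')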

 

The article Documentum – RCS/CFS installation failure first appeared on the dbi services Blog.

WebLogic – Update on the WLST monitoring


A few years ago, I wrote this blog about a WLST script to monitor a WebLogic Server. At that time, we were managing a Documentum Platform with 115 servers and now, it’s more than 700 servers so I wanted to come back in this blog with an update on the WLST script.

1. Update of the WLST script needed

Over the past two years, we installed a lot of new servers with a lot of new components. Some of these components required us to slightly adapt our monitoring solution to be able to handle the monitoring in the same, efficient way for all servers of our Platform: we want a single solution which fits all cases. The new cases we came across were WebLogic Clustering as well as EAR Applications.

In the past, we only had WAR files related to Documentum: D2.war, da.war, D2-REST.war, aso… All these WAR files are quite simple to monitor because one “ApplicationRuntimes” equals one “ComponentRuntimes” (I’m talking here about the WLST script from the previous blog). So basically, if you want to check the number of open sessions [get(‘OpenSessionsCurrentCount’)] or the total number of sessions [get(‘SessionsOpenedTotalCount’)], then it’s just one value. EAR files often contain WAR file(s) as well as other components, so in this case you potentially have a lot of “ComponentRuntimes” for each “ApplicationRuntimes”. Therefore, the best way I found to keep a single monitoring solution for all WebLogic Servers, no matter what application is deployed on them, was to loop over each component, cumulate the number of open (respectively total) sessions for each component and then return that for the application.

In addition to that, we also started to deploy some WebLogic Servers in Cluster, so the monitoring script needed to take that into account as well. The previous version of the WLST script assumed that the deployment target was a single local Managed Server (local to the AdminServer). With a WLS Cluster, the deployment target can be a cluster, in which case the WLST script wouldn’t find the correct monitoring value, so I had to introduce a check on whether the Application is deployed on a cluster; if it is, I select the deployment on the local Managed Server that is part of this cluster. We are using the NodeManager Listen Address to know whether a Managed Server is a local one, so it expects both the NodeManager and the Managed Server to use the same Listen Address.

As a side note, in case you have a WebLogic Cluster that is deploying an Application only on certain machines of the WebLogic Domain (so for example you have 3 machines but a cluster only targets 2 of them), then on the machine(s) where the Application isn’t deployed by the WebLogic Cluster, the monitoring will still try to find the Application on a local Managed Server and it will not succeed. This will still create a log file for this Application with the following content: “CRITICAL – The Managed Server ‘ + appTargetName + ‘ or the Application ‘ + app.getName() + ‘ is not started”. This is expected since the Application isn’t deployed there but it’s then your job to either set the monitoring tool to expect a CRITICAL or just not check this specific log file for this machine.

Finally, the last modification I did was using a properties file instead of embedded properties because we are now deploying more and more WebLogic Servers with our silent scripts (it takes a few minutes to have a WLS fully installed, configured, with clustering, with SSL, aso…) and it is easier to have a properties file for a WebLogic Domain that is used by our WebLogic Servers as well as by the Monitoring System to know what’s installed, whether it’s a cluster, where the AdminServer is, whether it’s using t3 or t3s, aso…

2. WebLogic Domain properties file

As mentioned above, we started to use a properties file with our silent scripts to describe what is installed on the local server, aso… This is an extract of a domain.properties file that we are using:

[weblogic@weblogic_server_01 ~]$ cat /app/weblogic/wlst/domain.properties
...
NM_HOST=weblogic_server_01.dbi-services.com
ADMIN_URL=t3s://weblogic_server_01.dbi-services.com:8443
DOMAIN_NAME=MyDomain
...
CLUSTERS=clusterWS-01:msWS-011,machine-01,weblogic_server_01.dbi-services.com,8080,8081:msWS-012,machine-02,weblogic_server_02.dbi-services.com,8080,8081|clusterWS-02:msWS-021,machine-01,weblogic_server_01.dbi-services.com,8082,8083:msWS-022,machine-02,weblogic_server_02.dbi-services.com,8082,8083
...
[weblogic@weblogic_server_01 ~]$

The parameter “CLUSTERS” in this properties file is composed in the following way:

  • If it’s a WebLogic Domain with Clustering: CLUSTERS=cluster1:ms11,machine11,listen11,http11,https11:ms12,machine12,…|cluster2:ms21,machine21,…:ms22,machine22,…:ms23,machine23,…
    • ms11 and ms12 being 2 Managed Servers part of the cluster cluster1
    • ms21, ms22 and ms23 being 3 Managed Servers part of the cluster cluster2
  • If it’s not a WebLogic Domain with Clustering: CLUSTERS= (equal nothing, it’s empty, not needed)

There are other properties in this domain.properties of ours like the config and key secure files that WebLogic is using (different from the Nagios ones), the NodeManager configuration (port, type, config & key secure files as well) and a few other things about the AdminServer, the list of Managed Servers, aso… But all these properties aren’t needed for the monitoring topic so I’m only showing the ones that make sense.

3. New version of the WLST script

Enough talk, I assume you came here for the WLST script so here it is. I highlighted below what changed compared to the previous version so you can spot easily how the customization was done:

[nagios@weblogic_server_01 ~]$ cat /app/nagios/etc/objects/scripts/MyDomain_check_weblogic.wls
# WLST
# Identification: check_weblogic.wls  v1.2  15/08/2018
#
# File: check_weblogic.wls
# Purpose: check if a WebLogic Server is running properly
# Author: dbi services (Morgan Patou)
# Version: 1.0 23/03/2016
# Version: 1.1 14/06/2018 - re-formatting
# Version: 1.2 15/08/2018 - including cluster & EAR support
#
###################################################

from java.io import File
from java.io import FileOutputStream

import re

properties='/app/weblogic/wlst/domain.properties'

try:
  loadProperties(properties)
except:
  exit()

directory='/app/nagios/etc/objects/scripts'
userConfig=directory + '/' + DOMAIN_NAME + '_configfile.secure'
userKey=directory + '/' + DOMAIN_NAME + '_keyfile.secure'

try:
  connect(userConfigFile=userConfig, userKeyFile=userKey, url=ADMIN_URL)
except:
  exit()

def setOutputToFile(fileName):
  outputFile=File(fileName)
  fos=FileOutputStream(outputFile)
  theInterpreter.setOut(fos)

def setOutputToNull():
  outputFile=File('/dev/null')
  fos=FileOutputStream(outputFile)
  theInterpreter.setOut(fos)

def getLocalServerName(clustername):
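  # CLUSTERS format: cluster1:ms11,machine11,listen11,http11,https11:ms12,...|cluster2:...
  # For the given cluster name, return the Managed Server whose listen address (3rd field)
  # matches the local NodeManager address (NM_HOST); return an empty string if none matches.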
  localServerName=""
  for clusterList in CLUSTERS.split('|'):
    found=0
    for clusterMember in clusterList.split(':'):
      if found == 1:
        clusterMemberDetails=clusterMember.split(',')
        if clusterMemberDetails[2] == NM_HOST:
          localServerName=clusterMemberDetails[0]
      if clusterMember == clustername:
        found=1
  return localServerName

while 1:
  domainRuntime()
  for server in domainRuntimeService.getServerRuntimes():
    setOutputToFile(directory + '/wl_threadpool_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/ThreadPoolRuntime/ThreadPoolRuntime')
    print 'threadpool_' + domainName + '_' + server.getName() + '_OUT',get('ExecuteThreadTotalCount'),get('HoggingThreadCount'),get('PendingUserRequestCount'),get('CompletedRequestCount'),get('Throughput'),get('HealthState')
    setOutputToNull()
    setOutputToFile(directory + '/wl_heapfree_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/JVMRuntime/' + server.getName())
    print 'heapfree_' + domainName + '_' + server.getName() + '_OUT',get('HeapFreeCurrent'),get('HeapSizeCurrent'),get('HeapFreePercent')
    setOutputToNull()

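  # Sessions opened on the WebLogic Administration Console (only available when the AdminServer and the console application are running)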
  try:
    setOutputToFile(directory + '/wl_sessions_' + domainName + '_console.out')
    cd('/ServerRuntimes/AdminServer/ApplicationRuntimes/consoleapp/ComponentRuntimes/AdminServer_/console')
    print 'sessions_' + domainName + '_console_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
    setOutputToNull()
  except WLSTException,e:
    setOutputToFile(directory + '/wl_sessions_' + domainName + '_console.out')
    print 'CRITICAL - The Server AdminServer or the Administrator Console is not started'
    setOutputToNull()

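  # For each deployed application: resolve its target (the local Managed Server if the target is a cluster) and sum the open/total sessions of all matching component runtimes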
  domainConfig()
  for app in cmo.getAppDeployments():
    domainConfig()
    cd('/AppDeployments/' + app.getName())
    for appTarget in cmo.getTargets():
      if appTarget.getType() == "Cluster":
        appTargetName=getLocalServerName(appTarget.getName())
      else:
        appTargetName=appTarget.getName()
      print appTargetName
      domainRuntime()
      try:
        setOutputToFile(directory + '/wl_sessions_' + domainName + '_' + app.getName() + '.out')
        cd('/ServerRuntimes/' + appTargetName + '/ApplicationRuntimes/' + app.getName())
        openSessions=0
        totalSessions=0
        for appComponent in cmo.getComponentRuntimes():
          result=re.search(appTargetName,appComponent.getName())
          if result != None:
            cd('ComponentRuntimes/' + appComponent.getName())
            try:
              openSessions+=get('OpenSessionsCurrentCount')
              totalSessions+=get('SessionsOpenedTotalCount')
            except WLSTException,e:
              cd('/ServerRuntimes/' + appTargetName + '/ApplicationRuntimes/' + app.getName())
            cd('/ServerRuntimes/' + appTargetName + '/ApplicationRuntimes/' + app.getName())
        print 'sessions_' + domainName + '_' + app.getName() + '_OUT',openSessions,totalSessions
        setOutputToNull()
      except WLSTException,e:
        setOutputToFile(directory + '/wl_sessions_' + domainName + '_' + app.getName() + '.out')
        print 'CRITICAL - The Managed Server ' + appTargetName + ' or the Application ' + app.getName() + ' is not started'
        setOutputToNull()

  java.lang.Thread.sleep(120000)

[nagios@weblogic_server_01 ~]$

 

For all our WAR files, even though the WLST script changed, the outcome is the same since there is only one component. For the EAR files, it simply adds all of the open sessions into a global count. Obviously, this doesn’t necessarily represent the real number of “user” sessions, but it is an estimation of the load. We don’t really care about a specific number; we want to see how the load evolves during the day, and we can adjust our thresholds to take into account that it’s not just a single component’s sessions but a global count.

You can obviously tweak the script to match your needs, but this is working pretty well for us in all our environments. If you have ideas about what could be updated to make it even better, don’t hesitate to share!

 

The article WebLogic – Update on the WLST monitoring first appeared on the dbi services blog.

Documentum – FT – Document not found using search from D2


At a customer, I received an incident saying that, in D2, a document could be found by browsing but not with a normal search. The root cause seems obvious: the document isn’t indexed?! Not really, and you will see that it wasn’t easy to find 😉

1. Analysis

When we have an issue finding a document, the problem is usually that the document is not indexed or that the user doesn’t have enough permissions.

I checked whether the document is indexed by doing a search from:
– the Dsearch Admin: document found
– the Content Server using idql: document found as well:

1> select r_object_id,object_name,r_modifier,r_modify_date from dm_document search document contains 'A_694';
2> go
r_object_id       object_name        r_modifier      r_modify_date
----------------  -----------------  --------------  --------------------
0901e240802ca812  A_694              dmadmin         3/30/2019 03:02:30

So, the document is indexed and found correctly, as shown by both searches above.
Let’s check the permissions anyway, even though I already know that the user has them since he can browse to the document and see it.
Get the ACL name of the document:

API> dump,c,0901e240802ca812
...
USER ATTRIBUTES

  object_name                     : A_694
  title                           : This Document is related to my blog
...
  acl_domain                      : Doc
  acl_name                        : d2_2350e171_213b12de
...

Get the ACL object ID:

1> select r_object_id,description from dm_acl where object_name='d2_2350e171_213b12de';
2> go
r_object_id       description         
----------------  ------------------- 
4501e24080028cce  1 - BLOG - Documents
(1 row affected)

Check permissions:

API> dump,c,4501e24080028cce
...
USER ATTRIBUTES

  object_name                     : d2_2350e171_213b12de
  description                     : 1 - BLOG - Documents
...

SYSTEM ATTRIBUTES

  r_is_internal                   : F
  r_accessor_name              [0]: dm_world
                               [1]: dm_owner
                               [2]: GROUP_BLOG_TEST1
                               [3]: GROUP_BLOG_TEST2
  r_accessor_permit            [0]: 1
                               [1]: 7
                               [2]: 3
                               [3]: 6
  r_accessor_xpermit           [0]: 3
                               [1]: 3
                               [2]: 3
                               [3]: 3
  r_is_group                   [0]: F
                               [1]: F
                               [2]: T
                               [3]: T
...

The impacted user is a member of GROUP_BLOG_TEST1. You can check this using DA, for example: browse to
Administration -> User Management -> Users, then find the impacted user, right-click and choose “View Current User Memberships”.

So, the document is indexed and the user has the correct permissions…

2. Solution

In fact, when a user searches using keywords, Documentum runs the search against the indexed documents to find the matching ones, but that’s not all: the ACLs are also retrieved from the full-text search and applied to the documents found, in order to grant the user the appropriate permissions. That means the ACLs need to be indexed as well!

Check the ACL index status in the Dsearch Admin by searching for its r_object_id (4501e24080028cce) or its name (d2_2350e171_213b12de):

The ACL is not found.

Submit the ACL to the indexing queue, using the API:

queue,c,4501e24080028cce,dm_FT_i_user

Once it was indexed, I asked the user to search for the document again in D2, and he could find it. So yes, the ACL also needs to be indexed; otherwise the document will not be found, even if the document itself is indexed.
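If several ACLs were impacted, the same queue call could be scripted. Below is a minimal, hedged sketch in Python (not part of the original fix): it simply pipes one queue command per ACL into iapi, reusing the dm_FT_i_user queue user shown above; the repository name and credentials are placeholders to adapt to your environment.

# Hypothetical helper: re-submit a list of ACLs to the index queue by piping the
# same "queue" API call as above into iapi. Repository name, credentials and the
# queue user are assumptions taken from this environment; adapt them to yours.
import subprocess

acl_ids = ['4501e24080028cce']   # r_object_id of each ACL to (re-)index
api_script = ''.join('queue,c,%s,dm_FT_i_user\n' % acl_id for acl_id in acl_ids) + 'quit\n'

subprocess.run(['iapi', 'REPO', '-Udmadmin', '-Pxxx'],
               input=api_script, text=True, check=True)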

The article Documentum – FT – Document not found using search from D2 first appeared on the dbi services blog.
