Channel: Documentum Archives - dbi Blog

Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’


At one of our customers, we were in the middle of migrating some docbases from 6.7 to 7.2. A few days after the migration, we started seeing failures/errors during simple downloads of documents from D2 4.5. The migration had been done with the EMC EMA migration tool by some EMC colleagues. The strange thing here is that these download failures affected only a few documents, far from the majority, and only when opening a document using “View Native Content”. In addition, the issue only affected migrated documents; it never happened for new ones.

 

This is an example of the error message we were able to see in the D2 4.5 log files:

2016-07-04 12:00:20 [ERROR] [[ACTIVE] ExecuteThread: '326' for queue: 'weblogic.kernel.Default (self-tuning)'] - c.e.d.d.s.D2HttpServlet[                    ] : Download failed
java.net.ProtocolException: Exceeded stated content-length of: '63000' bytes
        at weblogic.servlet.internal.ServletOutputStreamImpl.checkCL(ServletOutputStreamImpl.java:217) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:162) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:375) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:280) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.download.Download.processRequest(Download.java:132) [D2FS4DCTM-WEB-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.execute(D2HttpServlet.java:242) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGetAndPost(D2HttpServlet.java:498) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGet(D2HttpServlet.java:115) [D2FS4DCTM-API-4.5.0.jar:na]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) [weblogic.server.merged.jar:12.1.3.0.0]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:844) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:280) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:254) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:136) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:346) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.HttpHeaderFilter.doFilter(HttpHeaderFilter.java:77) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:66) [guice-servlet-3.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) [shiro-web-1.1.0.jar:na]
        at com.company.d2.auth.NonSSOAuthenticationFilter.executeChain(NonSSOAuthenticationFilter.java:21) [_wl_cls_gen.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:359) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:275) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:344) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:272) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:168) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) [guice-servlet-3.0.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:34) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3436) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3402) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) [com.oracle.css.weblogic.security.wls_7.1.0.0.jar:CSS 7.1 0.0]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:57) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2285) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2201) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1572) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:255) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:263) [weblogic.server.merged.jar:12.1.3.0.0]

 

So the error in this case is “Exceeded stated content-length of: ‘63000’ bytes”. Hmm, what does that mean? Well, it is not really clear (who said not clear at all?)… So we checked several documents for which the download failed (using iAPI dumps) and the only common points we were able to find for these documents were the following:

  • They all had an r_full_content_size of 0
  • They all had an r_content_size bigger than 63000

 

The issue only appeared for objects that were assigned an r_full_content_size of 0 during the migration. To try to reproduce the issue, we set this attribute to 0 on a document for which the download was working, but that didn’t change anything: the download kept working properly.

 

So here is some background on this attribute: the expected behavior is that it holds a real value (obviously). If the file is smaller than 2 GB, then r_full_content_size has the same value as r_content_size, which is of course the size of the file in bytes. If the file is bigger than 2 GB, then the r_content_size field is too small to hold the real size in bytes, so the real size is only available in the r_full_content_size field… r_full_content_size is the attribute read by D2 for the “View Native Content” action, while other actions like “View” or “Edit” behave like older WDK clients and therefore read r_content_size… So yes, there is a difference in behavior between the few actions that perform a download, and that’s the reason why we only had this issue with the “View Native Content” action!

 

Unfortunately, as you probably already understood from the previous paragraph, r_content_size and r_full_content_size aren’t of the same type (integer vs. double), so you can’t simply execute a single DQL statement setting r_full_content_size equal to r_content_size: you would get a DM_QUERY_E_UP_BAD_ATTR_TYPES error. So you will have to do things a little more slowly.

 

The first thing to do is obviously to gather the list of all documents that need to be updated, with their r_object_id and r_content_size (the r_full_content_size doesn’t really matter since you only gather affected documents, so its value is always 0):

> SELECT r_object_id, r_content_size FROM dm_document WHERE r_full_content_size='0' AND r_content_size>'63000';
r_object_id         r_content_size
------------------- -------------------
090f446780034513    289326
090f446780034534    225602
090f446780034536    212700
090f446780034540    336916
090f446780034559    269019
090f446780034572    196252
090f446780034574    261094
090f44678003459a    232887
...                 ...

 

A first solution would then be to go through the list manually and execute one DQL query per document, setting r_full_content_size to whatever the r_content_size is. For the first document listed above, that would be the following DQL:

> UPDATE dm_document objects SET r_full_content_size='289326' WHERE r_object_id='090f446780034513';

 

Note 1: If at some point you want to restore a previous version of a document, for example, you will most probably need to use “FROM dm_document(ALL)” instead of “FROM dm_document”. But be aware that non-current versions are immutable and therefore can’t be updated with a simple DQL. You will need to remove the immutable flag, update the object and restore the flag, so that’s a little trickier ;)

Note 2: If you have a few documents bigger than 2 GB, the r_content_size will not reflect the real size, so setting r_full_content_size to that value isn’t correct… I wasn’t able to test this since our customer didn’t have any document bigger than 2 GB, but you should most probably be able to use instead the full_content_size stored on the dmr_content object of the document… Just like dm_document, the dmr_content object has two fields that you should be able to use to find the correct size: content_size (which should reflect r_content_size) and full_content_size (which should reflect r_full_content_size). If that doesn’t help, a last resort would be to export and re-import all documents bigger than 2 GB…
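A hedged DQL sketch for that dmr_content lookup could look like the query below; the attribute and type names are taken from the paragraph above, and the object id is the blog’s own example, so verify against your repository before relying on it:

> SELECT full_content_size FROM dmr_content WHERE ANY parent_id='090f446780034513';

The value returned should then be usable in the UPDATE statement in place of r_content_size for those documents.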

 

Ok, so updating all objects one by one is possible but really tedious, so a second solution – and probably a better one – is to use a script to prepare the list of DQL queries to be executed. Once you have the r_object_id and r_content_size of all affected documents, you can put them in a CSV file (copy/paste into Excel and save as CSV, for example) and write a small script (bash, for example) that generates one DQL query per document. That’s really simple: even with thousands of affected documents, writing the script takes a few minutes and generating the queries takes a second or two. Then you can put all these statements in a single file and execute it against the docbase on the Content Server. That’s a better solution, but the simplest solution you can ever find will always be to use dqMan (or any similar tool). Indeed, dqMan has a dedicated feature that executes a “template DQL” on any list of objects returned by a query. Therefore you don’t need any bash scripting at all if you are using dqMan, and the job is done in a few seconds.
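The generation step described above can be sketched in bash. The CSV name, the /tmp paths and the two sample rows below are hypothetical stand-ins for your real export:

```shell
#!/bin/bash
# Hypothetical sketch: /tmp/affected_docs.csv holds "r_object_id,r_content_size"
# pairs exported from the SELECT above (the two rows below are sample data).
cat > /tmp/affected_docs.csv <<'EOF'
090f446780034513,289326
090f446780034534,225602
EOF

# Generate one UPDATE DQL statement per affected document.
while IFS=',' read -r object_id content_size; do
  printf "UPDATE dm_document objects SET r_full_content_size='%s' WHERE r_object_id='%s';\n" \
    "$content_size" "$object_id"
done < /tmp/affected_docs.csv > /tmp/fix_full_content_size.dql

cat /tmp/fix_full_content_size.dql
```

The resulting .dql file can then be run in one go with iapi/idql on the Content Server.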

 

A last solution would be to go directly to the database and execute SQL queries to set r_full_content_size equal to r_content_size, BUT I would NOT recommend doing that unless you have a very good knowledge of the Documentum content model and absolutely know what you are doing and what you are messing with ;).

 

See you soon!

 

This article Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’ appeared first on the dbi services Blog.


Documentum story – Authentication failed for Installation Owner with the correct password


When installing a new Remote Content Server (High Availability), everything was going according to plan until we tried to log in to DA using this new CS: the login using the Installation Owner (dmadmin) failed… Same result from dqMan or any other third-party tool; only the iapi/idql sessions on the Content Server itself were still working, because of the local trust. When something strange is happening with the authentication of the Installation Owner, I think the first thing to do is always to verify whether dm_check_password recognizes your username/password:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (245) = (DM_CHKPASS_BAD_LOGIN)

 

As you can see above, this was not working, so apparently the CS thinks that the password isn’t correct… This can happen for several reasons:

 

1. Wrong permissions on the dm_check_password script

This script is one of the few that need specific permissions to work… These are set either by the installer, if you provide the root password during the installation, or by running the $DOCUMENTUM/dba/dm_root_task script manually as root (using sudo/dzdo, or asking your UNIX admin team, for example). These are the permissions needed for this script:

[dmadmin@content_server_01 ~]$ ls -l $DOCUMENTUM/dba/dm_check_password
-rwsr-s---. 1 root dmadmin 14328 Oct 10 12:57 $DOCUMENTUM/dba/dm_check_password

 

If the permissions aren’t right, or if you think this file has somehow been corrupted, you can re-execute dm_root_task as root. It will ask whether you want to overwrite the current files and will, in the end, set the permissions properly.
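As a side note, the mode bits from the listing above (-rwsr-s--- = 6750) can be illustrated on a scratch file. This is only a demonstration; on a real Content Server the file must additionally be owned by root with the install owner’s group, which is exactly why dm_root_task has to run as root:

```shell
# Scratch-file illustration of the expected mode bits (6750 = -rwsr-s---):
# setuid + setgid, rwx for the owner, r-x for the group, nothing for others.
# Ownership (root:dmadmin) cannot be reproduced without root; dm_root_task does that.
f=/tmp/dm_check_password_demo
touch "$f"
chmod 6750 "$f"
stat -c '%a' "$f"
```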

 

2. Expired password/account

If the OS password/account you are testing (the Installation Owner in my case) is expired, there are several possible behaviors. If the account is expired, dm_check_password returns a DM_CHKPASS_ACCOUNT_EXPIRED error. If it is the password that is expired, then on some OSes dm_check_password fails with the bad-login error shown at the beginning of this blog. This can be checked pretty easily on most OSes. On RedHat, for example, it would be something like:

[dmadmin@content_server_01 ~]$ chage -l dmadmin
Last password change                                    : Oct 10, 2016
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 4294967295
Number of days of warning before password expires       : 7

 

In our case, we set the password/account to never expire to avoid such issues, so that’s not the problem here. By the way, an expired password will also prevent you from using the crontab, for example…

 

To change these parameters, you need root permissions. This is how it is done (press Enter to accept the proposed value if it is fine for you). The value between brackets is the current/proposed value, and you can put your desired value after the colon:

[root@content_server_01 ~]$ chage dmadmin
Changing the aging information for dmadmin
Enter the new value, or press ENTER for the default

        Minimum Password Age [0]: 0
        Maximum Password Age [4294967295]: 4294967295
        Last Password Change (YYYY-MM-DD) [2016-10-10]:
        Password Expiration Warning [7]:
        Password Inactive [-1]: -1
        Account Expiration Date (YYYY-MM-DD) [-1]: -1

 

3. Wrong OS password

Of course, if the OS password isn’t correct, dm_check_password will return a BAD_LOGIN… Makes sense, doesn’t it? What I wanted to explain in this section is that, in our case, we always use sudo/dzdo to switch to the Installation Owner, and therefore we never really use the Installation Owner’s password at the OS level (only in DA, dqMan, and so on). To check whether the password is correct at the OS level, you can of course start an SSH session as dmadmin directly, or run any other command that requires the password to be entered, like a su or a sudo to the same user in our case:

[dmadmin@content_server_01 ~]$ su - dmadmin
Password:
[dmadmin@content_server_01 ~]$

 

As you can see, the OS isn’t complaining, and therefore this is working properly: the OS password is correct.

 

4. Wrong mount options

Finally, the last thing that can prevent you from logging in to your docbases remotely is wrong options on your mount points… For this paragraph, I will assume that $DOCUMENTUM has been installed on a mount point /app. So let’s check the current mount options, as root of course:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev,nosuid        1 2

 

That seems alright, but actually it isn’t… For Documentum to work properly, the nosuid option must not be present on the mount point where it is installed! Therefore we need to change this. First, update /etc/fstab so the change survives a reboot of the Linux host. Just remove “,nosuid” to get something like this:

[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev               1 2

 

Now, the configuration in /etc/fstab isn’t applied or reloaded if /app is already mounted, so you need to remount it with the right options. This is how you can do it:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount -o remount,nodev /app
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev)

 

Now that the mount options are correct, we can run dm_check_password again:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)

 

As you can see, this is now working… It’s a miracle ;).

 

This article Documentum story – Authentication failed for Installation Owner with the correct password appeared first on the dbi services Blog.

Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG


Ref: EMC article number 000335987, last published Sat Feb 20 21:39:14 GMT 2016. Here is the link: https://support.emc.com/kb/335987

After upgrading from 6.7.x to 7.2, the following warning message is logged in the JMS log files:

com.documentum.fc.common.DfNewInterprocessLockImpl - [DFC_FILE_LOCK_ACQUIRE_WARN] Failed to acquire lock proceeding ahead with no lock
java.nio.channels.OverlappingFileLockException
        at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

To avoid this warning, EMC has provided a solution (SR #69856498), described below.

By default, the ACS and ServerApps dfc.properties point to $DOCUMENTUM_SHARED/config/dfc.properties.

The solution is to add a separate ‘dfc.data.dir’ cache folder location in the ACS and ServerApps dfc.properties.
After a Java Method Server restart, two separate cache folders are created inside $DOCUMENTUM_SHARED/jboss7.1.1/server, and the WARNING messages are gone from acs.log.

In fact, this is just a warning that someone else has acquired a lock on the physical file (in this case dfc.keystore). Since ServerApps (Method Server) and ACS invoke DFC simultaneously, both try to acquire a lock on the dfc.keystore file and Java throws an OverlappingFileLockException. DFC then warns that it could not lock the file and proceeds without the lock. Ideally this should just be an info message in this case, where the file lock is acquired for read-only access. But the same logic is used by other functionality, like registry updates and BOF cache updates, where such a failure should be treated as a genuine warning or error. Going forward, engineering will have to correct this code by taking appropriate actions for each functionality. There is no functional impact in using a different data directory per application.

Please proceed as below to solve it:

  • Login to the Content Server
  • Switch to the dmadmin user (the administrator account)
  • Create some folders using:
 mkdir $DOCUMENTUM_SHARED/acs
 mkdir $DOCUMENTUM_SHARED/ServerApps
 mkdir $DOCUMENTUM_SHARED/bpm

 

  • Update all necessary dfc.properties files (with vi editor):

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/acs

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/bpm

===============================================================================================================================
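The three edits above can be sketched as a single loop. The paths below are throwaway stand-ins for the real dfc.properties locations (which live inside the deployed .ear files), not the actual files:

```shell
# Throwaway sketch of the three edits: one dedicated dfc.data.dir per DFC client.
# /tmp/dctm_demo and the dfc.properties.* names are stand-ins, NOT the real paths.
DOCUMENTUM_SHARED=/tmp/dctm_demo
for app in acs ServerApps bpm; do
  mkdir -p "$DOCUMENTUM_SHARED/$app"
  props="$DOCUMENTUM_SHARED/dfc.properties.$app"   # stand-in for the real file
  # In the real files this line is appended at the end, as described above.
  echo "dfc.data.dir=$DOCUMENTUM_SHARED/$app" > "$props"
done
cat "$DOCUMENTUM_SHARED"/dfc.properties.*
```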

  • Verify that the recently created folders are empty using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

 

  • Restart the JMS using:
sh -c "cd $DOCUMENTUM_SHARED/jboss7.1.1/server;./stopMethodServer.sh"
sh -c "$DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh"

 

Verification

  • Verify that the recently created folders are now populated with default files and folders using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

The folders should no longer be empty.

  • Disconnect from the Content Server.

 

Using this procedure, you won’t see this WARNING message anymore.
Regards,

Source : EMC article number : 000335987

 

This article Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG appeared first on the dbi services Blog.

Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions


WebLogic Server takes a long time (e.g. over 15 minutes) to start up. Can the performance be improved?
Using the /dev/urandom option in the WebLogic Java Virtual Machine parameters of the startup script can be the solution.
In fact, /dev/random is a blocking device: during times of low entropy (when there are not enough random bits left in the pool), /dev/random blocks any process reading from it (which therefore hangs) until more random bits have been generated.
So, if you are running custom WLST scripts to start your WebLogic infrastructure, you should know that there are a few things you can try to improve performance.

This change improves the performance of all your WLST sessions. There is no functional impact in modifying the wlst.sh script.
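Before changing anything, you can check how much entropy the kernel currently has available (on Linux); values that stay very low are the condition under which /dev/random readers block:

```shell
# Check the kernel's currently available entropy (Linux only); chronically low
# values mean anything reading /dev/random may block, slowing WLS/WLST startup.
cat /proc/sys/kernel/random/entropy_avail
```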

1- Log in to the WebLogic Server.
2- Switch to the Unix installation owner account.
3- Go to the following directory:

cd $ORACLE_HOME/oracle_common/common/bin

4- Edit the wlst.sh script with the vi editor.
5- Add the following option to JVM_ARGS:

 JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${UTILS_MEM_ARGS} ${COMMON_JVM_ARGS} ${CONFIG_JVM_ARGS} -Djava.security.egd=file:///dev/./urandom" 

 
 
6- Please note that if you specifically want to reduce the script start-up time, you can do it by providing:

java weblogic.WLST -skipWLSModuleScanning <script>.py 

It can be an advantage in some cases.

Other ways to increase the performance of your WLS:

1- This blog deals especially with the urandom parameter.
Be aware that there are other ways to tune your WebLogic Server(s).
Just to give you an overview of the possible changes:
Set Java parameters in the WebLogic Server start script or directly in JAVA_OPTS:
• urandom (see above)
• Java heap size:
For higher throughput, set (if possible) the minimum Java heap size equal to the maximum heap size. For example:

"$JAVA_HOME/bin/java" -server -Xms512m -Xmx512m -classpath $CLASSPATH ... 

 
2- Think about the number of Work Managers you really need:
A default Work Manager might not be adapted to every application’s needs.
In that case, you’ll have to create a custom Work Manager.

3- Remember to tune the stuck-thread detection properly.
Indeed, WLS already detects when a thread becomes “stuck”, and you have to know that a stuck thread can neither finish its current work nor accept new work.
So it directly and quickly impacts the performance of the machine.
 
4- Don’t forget the basics: operating system tuning and/or network tuning.

 

This article Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions appeared first on the dbi services Blog.

Documentum D2 4.5 and IE compatibility and F5


We had a problem at a customer where D2 was not loading properly in IE when going through F5 (a load balancer). When trying to access D2 through the F5, say https://d2prod/D2, only a few menus and some parts of the workspace were loading, and it ended with “Unexpected error occurred”.

Investigation

It would have been too easy if this error had appeared in the logs, but it didn’t. So it was not a D2 internal error, but maybe something in the interface or in the way it loads in IE. Because, fun fact, it was loading properly in Chrome. Additional fun fact: when using a superuser account, it was also loading properly in IE!

As it was an interface error, I used the IE debugging tool (F12). At first I didn’t see the error in the console, but when digging through all the verbose logs I found this:

SEVERE: An unexpected error occurred. Please refresh your browser
com.google.gwt.core.client.JavaScriptException: (TypeError) 
 description: Object doesn't support property or method 'querySelectorAll'
number: -2146827850: Object doesn't support property or method 'querySelectorAll'

After some research I figured out that others had had issues with “querySelectorAll” and IE. In fact it depends on the version of IE used, because this function was not available prior to IE 9.

Hence I came to the idea that my IE was not in the right compatibility mode; I had IE 11, so it couldn’t be a version mismatch.

Fortunately thanks to the F12 console you can change the compatibility mode:

[Screenshot Capture_Compat_8: F12 console showing the compatibility mode set to 8]

As I thought, the compatibility mode was set (and locked) to 8, which does not support “querySelectorAll”. But I couldn’t change it to a higher value. Hence, I figured this out:

[Screenshot Capture_Compat_Enterprise: F12 console showing Enterprise Mode enabled]

I was in Enterprise Mode. This mode forces the compatibility version, among other things. Fortunately, you can disable it in the browser from the “Tools” menu of IE. Then, like magic, I was able to switch to compatibility version 10:

[Screenshot Capture_Compat_10_2: F12 console showing the compatibility mode switched to 10]

And, miracle: the D2 interface reloaded properly, with all menus and workspaces. Remember it was working with superuser accounts? In fact, when using a superuser account, Enterprise Mode was not activated and the compatibility version was set to 10.

The question is, why was it forced to 8?

Solution

In fact, it was customer-related. They had a policy rule applying to the old D2 (3.1) which needed Enterprise Mode and the compatibility mode set to 8. So when using the old DNS alias to point to the new D2, these modes were still applied.

So we asked them to disable Enterprise Mode, and the compatibility mode returned to 10 by default. So be careful with IE in your company ;)

 

This article Documentum D2 4.5 and IE compatibility and F5 appeared first on the dbi services Blog.

Documentum – Thumbnail not working with TCS


A few months ago, right after a migration of around 0.5 TB of documents, we enabled TCS for one of our applications. We were using Content Server 7.2 P05 with the associated D2 4.5 P03. As already mentioned in a previous blog, D2 4.5 doesn’t handle document previews itself, and therefore a Thumbnail Server was also used by this application. The setup of TCS for the document filestores went well, but when we tried to do the same for the Thumbnail filestore, the previews stopped working.

Basically, when you configure the Thumbnail Server to use a TCS filestore, you need to request new renditions for the existing documents; otherwise they will continue to use the non-TCS filestore.

If you access D2 while inspecting the network traffic or using the browser dev/debug feature, you will see that D2 builds the Thumbnail preview URL using the value of “dm_document.thumbnail_url”. This thumbnail_url is, by default, a concatenation of several parts: thumbnail_url = base_url + path + store.

Therefore if you define:

  • the Thumbnail base_url to be “https://thumbnail_alias/thumbsrv/getThumbnail?”
  • the Thumbnail TCS filestore to be “trusted_thumbnail_store_01”
  • a test document 09012345801a8f56 with a file system path (get_path) to be: “00012345/80/00/10/4d.jpg”

Then D2 will retrieve a thumbnail_url that is:

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4d.jpg&store=trusted_thumbnail_store_01
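The concatenation can be sketched in shell; the values below simply reuse the example above:

```shell
# Rebuild thumbnail_url the same way D2 concatenates it (values from the example above)
base_url="https://thumbnail_alias/thumbsrv/getThumbnail?"
path="00012345/80/00/10/4d.jpg"
store="trusted_thumbnail_store_01"
thumbnail_url="${base_url}path=${path}&store=${store}"
echo "${thumbnail_url}"
```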

 

When accessing this URL, if the filestore mentioned above is indeed a TCS filestore, this is the kind of logs that will be generated:

Jul 05, 2016 9:42:15 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_RETRIEVE_DOCBASE_THUMB] Retrieving docbase thumbnail...
Jul 05, 2016 9:42:15 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : the ETag received is the path - will return 304 response
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_NEW_REQUEST] New request: path=00012345/80/00/10/4d.jpg&store=trusted_thumbnail_store_01
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_RETRIEVE_DOCBASE_THUMB] Retrieving docbase thumbnail...
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_RETRIEVE_STORE] Retrieving storage area...
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : getting storage area for docbase: 00012345 store: trusted_thumbnail_store_01
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_CHECK_SECURITY] Checking security...
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : About to start reading files...
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : Trusted content store found, will use DFC for decryption
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : Content paramters are: format= null; page=null; page modifier=null
Jul 05, 2016 9:45:12 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: Object ID missing from request...

 

When using a non-TCS filestore, this URL retrieves the document preview properly (of course the name of the store was then “thumbnail_store_01” and not “trusted_thumbnail_store_01”) but as you can see above, with a TCS filestore, the Thumbnail Server tries to use the DFC for decryption and can’t, because the Object ID is missing from the request… With a non-TCS filestore, the Thumbnail Server retrieves the content directly from the file system, but that’s not possible with a TCS filestore because of the encryption. Therefore a TCS-enabled Thumbnail Server has to use the “getContent” method of the DFC, and this method requires the Object ID. That’s the issue here.

When we faced this issue last summer, we contacted EMC because this was clearly a bug in how D2 constructs the URLs to request the preview, not in how the Thumbnail Server processes the requested URLs. After several days, the D2 engineering team provided a hotfix enabling D2 to provide the correct parameters. With the new hotfix deployed, D2 was now able to generate the following URL:

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4d.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres

 

Note: In this case, we also had to specify the format (jpeg_lres). While waiting for the EMC hotfix, we performed some tests on our side, trying to add the missing parameters and to understand what the Thumbnail Server needs in the requests. At some point, we found out that the Thumbnail Server uses “jpeg_th” as a default value for the format if you don’t specify any. As far as I know, we have a pretty default Thumbnail configuration but we don’t have any “jpeg_th” format; only jpeg_lres and jpeg_story are used, so I don’t know where this jpeg_th comes from. I believe that’s an issue in the Thumbnail Server, because EMC also included the format in the D2 hotfix after we mentioned it to them.

 

So using this new URL, the generated log files were:

Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_NEW_REQUEST] New request: path=00012345/80/00/10/4d.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_RETRIEVE_DOCBASE_THUMB] Retrieving docbase thumbnail...
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_RETRIEVE_STORE] Retrieving storage area...
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : getting storage area for docbase: 00012345 store: trusted_thumbnail_store_01
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : [DM_TS_T_CHECK_SECURITY] Checking security...
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : About to start reading files...
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : Trusted content store found, will use DFC for decryption
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : Content paramters are: format= jpeg_lres; page=null; page modifier=null
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : will be retrieving object 09012345801a8f56
Jul 12, 2016 6:29:53 AM org.apache.catalina.core.ApplicationContext log
INFO: getThumbnail: [DEBUG] : Session ok for 00012345

 

And it worked: the preview of the first page of this document was displayed. The hotfix seemed to work, so we started to deploy it in our DEV environments while waiting for confirmation from the Application Team that it was also working from D2 (because of SoD, our admin accounts don’t have access to documents in D2). But then, after a few tests, the Application Team found out that the hotfix was only partially working: only for the first page! Indeed, EMC created this hotfix using a one-page document and never tried to retrieve the previews of a multi-page document. The thing with the Thumbnail Server is that if you have a 25-page document, then this document will have 25 previews. I already talked about that in another previous blog, so you can take a look at it for more information on how to manage that.

I will suppose that 09012345801a8f56 is a 3-page document. I gathered some information from this document and also got the paths of the thumbnail previews related to its 3 pages:

API> ?,c,select r_object_id, full_format, parent_id, content_size, full_content_size, set_time, set_file from dmr_content where any parent_id='09012345801a8f56'
r_object_id       full_format  parent_id         content_size  full_content_size  set_time           set_file
----------------  -----------  ----------------  ------------  -----------------  -----------------  --------------------------------------------------------------------
06012345801d95cc  jpeg_lres    09012345801a8f56  60467         60467              7/4/2016 13:22:39  C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile353776780929575961.tar
	get_path => /data/DOCBASE/trusted_thumbnail_storage_01/00012345/80/00/10/4d.jpg

06012345801d95cd  jpeg_lres    09012345801a8f56  138862        138862             7/4/2016 13:22:39  C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile353776780929575961.tar
	get_path => /data/DOCBASE/trusted_thumbnail_storage_01/00012345/80/00/10/4e.jpg

06012345801d95ce  jpeg_lres    09012345801a8f56  29596         29596              7/4/2016 13:22:39  C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile353776780929575961.tar
	get_path => /data/DOCBASE/trusted_thumbnail_storage_01/00012345/80/00/10/4f.jpg

 

So here we have three different jpg files for the same “parent_id” (the document), each one being the preview of a specific page. These previews have different sizes (content_size/full_content_size), therefore their contents must be different!

With the information provided above, the preview URLs generated by D2 (using the new hotfix) are:

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4d.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4e.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres


https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4f.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres

 

Three different URLs for three different pages. That looks fine, but if you actually access these three URLs, you will always see the preview of the first page of the document… It goes even further: you can put any path in the URL and it will always show the first page of the document (09012345801a8f56). To confirm that, access the following URLs and see for yourself that they also display the same preview:

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/zz.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/zz/zz/zz/zz.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres


https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres

 

Based on all the above information, you can understand that the way the Thumbnail Server works is really different with a non-TCS versus a TCS filestore… For non-TCS filestores, the URL points directly to the actual file, and that’s why the full path is so important: it leads directly to the right file, and each page has its own path. For TCS filestores, since all the files are encrypted, the Thumbnail Server can’t use the file path directly. Therefore it relies on the DFC to first decrypt the file and then return the result. That’s why only the docbase ID is needed inside the “path” when using a TCS, and everything else in the path is completely useless. On the other hand, as seen previously in this blog, you of course also need some additional information to specify which preview you want.

In addition to the parameters we saw above, and in order to uniquely identify different renditions, page and/or page_modifier are also required with a TCS filestore. Both attributes are part of the dmr_content table: page is the position of the content when the object has multiple contents (generally zero by default), while page_modifier uniquely identifies a rendition within the same page number and format for a document. If no page_modifier is passed in the URL, the rendition with an empty page_modifier value is returned. If there is no rendition in the docbase without a page_modifier value, then the one with the smallest page_modifier (in alphabetical order) is returned instead: that’s the reason why all previous URLs always returned the preview of the first page only… In short, for non-TCS filestores the path pretty much does the job, but for TCS filestores additional parameters are needed to uniquely identify the rendition.
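To see which page_modifier values your renditions actually have, you can run a quick DQL query (the object ID is the example document used in this post; column values will of course depend on your docbase):

```
API> ?,c,select r_object_id, page, page_modifier, full_format from dmr_content where any parent_id='09012345801a8f56'
```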

 

So to summarize the investigation we did with EMC (because yes, we found a definitive solution), the minimum elements that must be present in the Thumbnail URLs are:

  • non-TCS filestore:
    • Full path to the right file
    • Store used
  • TCS filestore:
    • Path containing at least the Docbase ID
    • Store used
    • Document ID (parent_id)
    • Format of the preview
    • page and/or page_modifier (from the dmr_content table)
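As a sketch, a complete TCS URL can be assembled from these elements; the hostname and values below simply reuse the examples from this post:

```shell
# Assemble the minimum TCS Thumbnail URL; for a TCS filestore only the docbase ID
# part of the path matters, everything after it is ignored by the Thumbnail Server
build_tcs_thumb_url() {
  local docbase_id="$1" store="$2" doc_id="$3" format="$4" page_modifier="$5"
  echo "https://thumbnail_alias/thumbsrv/getThumbnail?path=${docbase_id}/&store=${store}&did=${doc_id}&format=${format}&page_modifier=${page_modifier}"
}
build_tcs_thumb_url 00012345 trusted_thumbnail_store_01 09012345801a8f56 jpeg_lres 000000001
```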

 

In the end, EMC provided another hotfix which fixes the first one. The complete URLs are now generated by D2 and the previews work properly even with a TCS Thumbnail filestore. Here are the examples for the 3-page document:

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4d.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres&page_modifier=000000001

https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4e.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres&page_modifier=000000002


https://thumbnail_alias/thumbsrv/getThumbnail?path=00012345/80/00/10/4f.jpg&store=trusted_thumbnail_store_01&did=09012345801a8f56&format=jpeg_lres&page_modifier=000000003

 

The definitive hotfix should normally be included in all the 2017 releases according to EMC.

 

 

This article Documentum – Thumbnail not working with TCS first appeared on the dbi services blog.

Documentum – Wrong dfc versions after installation, upgrade or patch


If you are familiar with Documentum, or if you already read some of my blogs, then you probably know that EMC sometimes has issues with libraries. In a previous blog (this one), I talked about the Lockbox versions which caused us an issue, and in this blog I will talk about DFC versions.

Whenever you install a CS patch or another patch, it will probably ship its own DFC libraries, simply because EMC fixed something in them or because it was needed. Whenever you install D2, it will also have its own DFC libraries in the JMS and the WAR files. The problem is that the DFC libraries are everywhere… Each and every DFC client has its own DFC libraries which come when you install it, patch it, and so on. Basically that’s not a wrong approach: it ensures that the components will work wherever they are installed and can always talk to the Content Server.

The problem here is that the DFC libraries change with almost every patch, and therefore it is kind of complicated to keep a clean environment. It already happened to us that two different patches (CS and D2 for example), released on the exact same day, were using different DFC versions, and you will see below another example coming from the same package… You can live with a server having five different DFC versions, but this also means that whenever a bug impacts one of your DFC libraries, it will be hard to fix, because you then need to deploy the next official patch, which is always a pain. It also multiplies the number of issues that can impact you, since you are running several versions at the same time.

I’m not saying that you absolutely need to always run the latest DFC version, but if you can properly and quickly perform the appropriate testing, I believe it can bring you something. A few weeks ago, for example, one of the Application Teams we support had an issue with some search functionality in D2. This was actually caused by the DFC version bundled with D2 (DFC 7.2 P03 I think) and we solved it by simply using the DFC version coming from our Content Server (DFC 7.2 P05), which was only two patches above.

To quickly and efficiently see which versions of the DFC libraries you are using and where, you can use:

find <WHERE_TO_FIND> -type f -name dfc.jar -print0 | while read -d $'' file; do echo "DFC: `$JAVA_HOME/bin/java -cp "$file" DfShowVersion`  ===  Size: `ls -ks "$file" | awk '{print $1}'` kb  ===  Location: $file"; done

or

find <WHERE_TO_FIND> -type f -name dfc.jar -print0 | while IFS= read -r -d '' file; do echo "DFC: `$JAVA_HOME/bin/java -cp "$file" DfShowVersion`  ===  Size: `ls -ks "$file" | awk '{print $1}'` kb  ===  Location: $file"; done

 

You can execute these commands on a Content Server, Application Server (note: dfc.jar files might be inside the D2/DA war files if you aren’t using exploded deployments), Full Text Server or any other Linux server for that matter. These commands handle spaces in the paths, even if normally you shouldn’t have any for the dfc files. To use them, just replace <WHERE_TO_FIND> with the base folder of your installation: $DOCUMENTUM for a Content Server, $XPLORE_HOME for a Full Text Server, and so on. Of course, you still need the proper permissions to see the files, otherwise executing the command will be quite useless.

A small example on a Content Server 7.3 (no patches are available yet) including xCP 2.3 P05 (End of February 2017 patch which is supposed to be for CS 7.3):

[dmadmin@content_server_01 ~]$ find $DOCUMENTUM -type f -name dfc.jar -print0 | while read -d $'' file; do echo "DFC: `$JAVA_HOME/bin/java -cp "$file" DfShowVersion`  ===  Size: `ls -ks "$file" | awk '{print $1}'` kb  ===  Location: $file"; done
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/product/7.3/install/composer/ComposerHeadless/plugins/com.emc.ide.external.dfc_1.0.0/lib/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/temp/installer/wildfly/dctmutils/templates/dfc/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/dfc/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/dfc.jar
DFC: 7.2.0210.0184  ===  Size: 15212 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/lib/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/dfc.jar
DFC: 7.2.0210.0184  ===  Size: 15212 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/tmp/vfs/deployment/deploymente7c710bab402b3f7/dfc.jar-7ac143a725d0471/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/tmp/vfs/deployment/deploymente7c710bab402b3f7/dfc.jar-bc760ece35b05a08/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/tmp/vfs/deployment/deploymente7c710bab402b3f7/dfc.jar-35c79cfe4b79d974/dfc.jar

 

As you can see above, there are two different versions of the DFC library on this Content Server which has just been installed: one coming from the CS 7.3, which is therefore 7.3 P00 build number 205, and another which is still 7.2 P21 build number 184. This second version has been put on the Content Server by the xCP 2.3 P05 installer. Using a 7.2 library on a 7.3 Content Server is a little bit ugly, but the good news is that both are pretty recent versions, since these two libraries were released almost at the same time (end of 2016/beginning of 2017). Therefore I don’t think it is a big problem here, even if, as soon as the CS 7.3 P01 is out (normally at the end of this month), we will replace all dfc.jar files with the 7.3 P01 version.

Another example on a Full Text Server using xPlore 1.6 (same as before, no patches are available yet for xPlore 1.6) including one Primary Dsearch and two IndexAgents for DOCBASE1 and DOCBASE2:

[xplore@fulltext_server_01 ~]$ find $XPLORE_HOME -type f -name dfc.jar -print0 | while read -d $'' file; do echo "DFC: `$JAVA_HOME/bin/java -cp "$file" DfShowVersion`  ===  Size: `ls -ks "$file" | awk '{print $1}'` kb  ===  Location: $file"; done
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/temp/installer/wildfly/dctmutils/templates/dfc/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/dfc/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/server/DctmServer_Indexagent_DOCBASE1/tmp/vfs/deployment/deployment5417db9ca7307cfc/dfc.jar-aa1927b943be418f/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/server/DctmServer_Indexagent_DOCBASE1/deployments/IndexAgent.war/WEB-INF/lib/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/server/DctmServer_Indexagent_DOCBASE2/tmp/vfs/deployment/deploymentbb9811e18d147b6a/dfc.jar-7347e3d3bbd8ffd/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/server/DctmServer_Indexagent_DOCBASE2/deployments/IndexAgent.war/WEB-INF/lib/dfc.jar
DFC: 7.3.0000.0196  ===  Size: 15220 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch/tmp/vfs/deployment/deployment5fd2cff2d805ceb2/dfc.jar-29edda1355c549b8/dfc.jar
DFC: 7.3.0000.0196  ===  Size: 15220 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch/deployments/dsearchadmin.war/WEB-INF/lib/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/watchdog/lib/dfc.jar
DFC: 7.3.0000.0205  ===  Size: 15236 kb  ===  Location: $XPLORE_HOME/setup/qbs/tool/lib/dfc.jar

 

Do you see something strange here? Because I do! This is a completely new xPlore 1.6 server which has just been installed, and yet we have two different versions of the DFC libraries… It’s not a difference in the minor version but a difference in the build number! As you can see above, the PrimaryDsearch uses a DFC 7.3 P00 build number 196, while all the other DFC libraries are in 7.3 P00 build number 205 (so just like the Content Server). The problem here is that the xPlore modules (IA, Dsearch, and so on) are built by different xPlore teams: the team that packages the Dsearch libraries isn’t the same as the one that packages the IndexAgent libraries.

Since there is a difference here, it probably means that the Dsearch team built their package some days/weeks before the other teams (IA, CS, and so on) and therefore the DFC libraries included in the Dsearch are older… Is it an issue or not? According to EMC it’s not, BUT I wouldn’t be so categorical. If EMC built this library 9 additional times, it’s not for nothing… There must be a reason behind those builds, and therefore not having the latest build seems a little bit risky to me. Since this is just a sandbox environment, I will most probably just wait for the P01 of xPlore 1.6, which will be released in a few days, and implement it to have an aligned DFC version for all components.

 

Have fun finding issues in the EMC releases :).

 

 

This article Documentum – Wrong dfc versions after installation, upgrade or patch first appeared on the dbi services blog.

Documentum – Increase the number of concurrent_sessions


In Documentum, there is a parameter named concurrent_sessions which basically defines how many sessions a Content Server can open simultaneously. This parameter is defined in the server.ini file (or server_<dm_server_config.object_name>.ini on a Remote Content Server) of each docbase and it has a default value of 100.

An empty Content Server with an IndexAgent and D2 installed (without users) will usually take around 10 sessions for the jobs, the searches, and so on. As soon as there are users in the environment, the number of concurrent sessions will quickly grow, and therefore, depending on how many users you have, you may (will) need to increase this limit. To be more precise, concurrent_sessions controls the number of connections the server can handle concurrently. This number must take into account not only the number of users who use the repository concurrently, but also the operations those users execute. Some operations require a separate connection to complete. For example:

  • Issuing an IDfQuery.execute method with the readquery flag set to FALSE causes an internal connection request
  • Executing an apply method or an EXECUTE DQL statement starts another connection
  • When the agent exec executes a job, it generally requires two additional connections
  • Issuing a full-text query requires an additional connection
  • aso…

 

Do you already know how to increase the number of allowed concurrent sessions? I’m sure you do, it’s pretty easy:

  1. Calculate the appropriate number of concurrent sessions needed based on the information provided above
  2. Open the file $DOCUMENTUM/dba/config/DOCBASE/server.ini and replace “concurrent_sessions = 100” with the desired value (“concurrent_sessions = 500” for example)
  3. Restart the docbase DOCBASE using your custom script or the default Documentum scripts under $DOCUMENTUM/dba
  4. Ensure that the Database used can handle the new number of sessions properly and see if you need to increase the sessions/processes for that
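Step 2 can be scripted; the sketch below works on a throwaway sample file (the real file lives under $DOCUMENTUM/dba/config/DOCBASE/):

```shell
# Create a sample server.ini to illustrate the change (step 2 above)
server_ini="server.ini.sample"
printf '[SERVER_STARTUP]\nconcurrent_sessions = 100\n' > "${server_ini}"
# Bump the limit to the calculated value
sed -i 's/^concurrent_sessions = .*/concurrent_sessions = 500/' "${server_ini}"
grep '^concurrent_sessions' "${server_ini}"
```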

 

To know how many sessions are currently used, it’s pretty simple: you can just execute the DQL “execute show_sessions”, but be aware that all sessions will be listed and that’s not exactly what we want. Therefore you need to keep only the ones with a dormancy_status of Active in the final count, otherwise the value will be wrong. The number of active sessions is not only linked to a docbase but also to a Content Server (to be more precise, it is linked to a dm_server_config object). This means that if you have a High Availability environment, each Content Server (each dm_server_config object) will have its own number of active sessions. This also means that you need to take this into account when calculating how many concurrent sessions you need.
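The filtering can be sketched as follows, using a hypothetical excerpt of the “execute show_sessions” output (the real output format may differ slightly in your version):

```shell
# Only sessions with an Active dormancy_status should be counted
sessions_output='ROOT_START : session 01012345800011ab
  dormancy_status : Active
ROOT_START : session 01012345800011ac
  dormancy_status : Inactive
ROOT_START : session 01012345800011ad
  dormancy_status : Active'
active=$(printf '%s\n' "${sessions_output}" | grep -c 'dormancy_status : Active')
echo "Active sessions: ${active}"
```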

For example if you calculated that you will need a total of 500 concurrent sessions (again it’s not the number of concurrent users!) for a docbase, then:

  • If you have only one Content Server, you will need to set “concurrent_sessions = 500” in the server.ini file of this docbase.
  • If you have two Content Servers (two dm_server_config objects) for the docbase, then you can just set “concurrent_sessions = 275” in each server.ini file. Yes, I know 2*275 is 550 and not 500, but that’s because each Content Server needs its internal sessions for the jobs, searches, and so on. In addition to that, the Content Servers might need to talk to each other, so these 25 additional sessions per server won’t hurt.
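The split can be expressed as a quick computation, with 25 sessions of headroom per server as in the example above:

```shell
total_needed=500     # concurrent sessions required for the docbase
servers=2            # number of Content Servers (dm_server_config objects)
headroom=25          # internal sessions per server (jobs, searches, CS-to-CS talks)
per_server=$(( total_needed / servers + headroom ))
echo "concurrent_sessions = ${per_server}"
```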

 

Now, does the above procedure work for any value of concurrent_sessions? Well, the answer to this question is actually the purpose of this blog: yes and no. From a logical point of view, there is no restriction on this value, but from a technical point of view, there is… A few months ago at one of our customers, I was configuring a new application which had a requirement of 2150 concurrent_sessions across a High Availability environment composed of two Content Servers. Based on the information provided above, I started the configuration with 1100 concurrent sessions on each Content Server to match the requirement. But then, when I tried to start the docbase again, I got the following error in the docbase log file ($DOCUMENTUM/dba/log/DOCBASE.log):

***********************************************************

Program error: Illegal parameter value for concurrent_sessions: 1100

***********************************************************

Usage: ./documentum -docbase_name <docbase name> -init_file <filename> [-o<option>]

    -docbase_name : name of docbase
    -init_file    : name of server.ini file (including path) which
                    contains the server startup parameters

Usage: ./documentum -h

 

As you can see, the docbase refuses to start with a number of concurrent sessions set to 1100. What’s the reason behind that? There is an artificial limit set to 1020. This is actually mentioned in the documentation:
The maximum number of concurrent sessions is dependent on the operating system of the server host machine. The limit is:

  • 3275 for Windows systems
  • 1020 for Linux, Solaris, and AIX systems

 

So why is there a limit? Why 1020? This limit is linked to the FD_SETSIZE value. The documentation on FD_SETSIZE says the following:

An fd_set is a fixed size buffer. Executing FD_CLR() or FD_SET() with a value of fd that is negative or is equal to or larger than FD_SETSIZE will result in undefined behavior. Moreover, POSIX requires fd to be a valid file descriptor.

 

Thus FD_SETSIZE doesn’t explicitly limit the number of file descriptors that can be worked on with the system select() call. Inside every UNIX process, for its PID, Documentum maintains a corresponding list (of pointers) of file descriptors. In UNIX-based systems, every Documentum session is created as a separate process. Since the number of sessions created directly depends on the number of file descriptors of the OS, each of these processes keeps a list of the file descriptors of its process, which takes a good chunk of physical memory. With this technical reasoning, the value 1020 was set as the default maximum for concurrent sessions in Documentum.

So basically, this limit of 1020 was set arbitrarily by EMC to stay within the default OS (kernel) value of 1024 (which can be checked with “ulimit -Sa” for example). An EMC internal task (CS-40186) was opened to discuss this point and the possibility of increasing this maximum. Since the current default limit only reflects the default OS value of 1024, if that value is increased to 4096 for example (which was our case from the beginning), then there is no real reason to stay stuck at 1020 on the Documentum side. The Engineering Team implemented a change in the binaries that allows raising the limit. This is done by adding the environment variable DM_FD_SETSIZE.

Therefore, to raise the concurrent sessions above 1020 (1100 in this example) and in addition to the steps already mentioned before, you also need to do the following (depending on your OS, you might need to update the .bashrc or .profile files instead):

echo "export DM_FD_SETSIZE=1200" >> ~/.bash_profile
source ~/.bash_profile
$DOCUMENTUM/dba/dm_start_DOCBASE

 

With the environment variable DM_FD_SETSIZE now set to 1200, we can use 1100 concurrent sessions without issue. The value that will be used for concurrent_sessions is the one from the server.ini file; we just need to define a DM_FD_SETSIZE equal to or bigger than what we want. Also, I didn’t mention the ulimit, but of course you also need to raise your OS limits accordingly (this is done in /etc/security/limits.conf or in any file under /etc/security/limits.d/).
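For reference, the OS side could look like this in /etc/security/limits.conf (dmadmin is an assumed installation owner and the values are examples; adapt them to your needs):

```
# Raise the open-files limit for the Documentum installation owner
dmadmin  soft  nofile  4096
dmadmin  hard  nofile  4096
```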

 

 

This article Documentum – Increase the number of concurrent_sessions first appeared on the dbi services blog.


Documentum – Deactivation of a docbase without uninstallation


At some of our customers, we often install new docbases for development purposes which are used only for a short time, to avoid cross-team interactions/interferences and this kind of thing. Creating new docbases is quite easy with Documentum, but it still takes some time (unless you use silent installations or Docker components). Therefore installing/removing docbases over and over can be a pain. For this reason, we often install new docbases but then don’t uninstall them; we simply “deactivate” them. By deactivate, I mean updating configuration files and scripts to act as if this docbase had never been created in the first place. As said above, some docbases are only there temporarily, but we might need them again in the near future and therefore we don’t want to remove them completely.

In this blog, I will show you which files should be updated and how to simulate a “deactivation” so that the Documentum components act as if the docbase wasn’t there. I will describe the steps for the different applications of the Content Server, including the JMS and Thumbnail Server, the Web Application Server (D2/DA for example), the Full Text Server and the ADTS.

In this blog, I will use a Documentum 7.2 environment on Linux of course (except for the ADTS…), which therefore uses JBoss 7.1.1 (for the JMS and xPlore 1.5). In all our environments we also have a custom script that can be used to stop or start all components installed on the host. Therefore in this blog, I will assume that you have a similar script (let’s say it is named “startstop”) which includes a variable named “DOCBASES=” that contains the list of docbases/repositories installed on the local Content Server (DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"). For the Full Text Server, this variable will be “INDEXAGENTS=” and it will contain the names of the Index Agents installed on the local FT (INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"). If you don’t have such a script or if it is set up differently, then just adapt the steps below. I will put this custom startstop script at the following locations: $DOCUMENTUM/scripts/startstop on the Content Server and $XPLORE_HOME/scripts/startstop on the Full Text Server.
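
For reference, here is a minimal sketch of what such a startstop script could look like. The layout and function names are my assumptions; only the DOCBASES variable follows the convention described above, and the real start/stop commands are left commented out:

```shell
#!/bin/sh
# Minimal sketch of a custom "startstop" script. Only the DOCBASES
# variable matters for the deactivation procedure described in this blog.
DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"

start_docbases() {
    for db in $DOCBASES; do
        echo "starting $db"
        # $DOCUMENTUM/dba/dm_start_$db
    done
}

stop_docbases() {
    for db in $DOCBASES; do
        echo "stopping $db"
        # $DOCUMENTUM/dba/dm_shutdown_$db
    done
}
```

Deactivating a docbase is then just a matter of removing its name from the DOCBASES list, as shown in the steps of this blog.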

In the steps below, I will also assume that the docbase that needs to be deactivated is "DOCBASE1" and that we have two additional docbases installed in this environment ("DOCBASE2" and "DOCBASE3") that need to stay up & running. If you have High Availability environments, the steps below apply to the Primary Content Server; for Remote Content Servers, you will need to adapt the names of the docbase start and shutdown scripts placed under $DOCUMENTUM/dba: the correct name for Remote CSs should be $DOCUMENTUM/dba/dm_shutdown_DOCBASE1_<ServiceName@RemoteCSs>.

 

1. Content Server

Ok so let’s start with the deactivation of the docbase on the Content Server. Obviously the first thing to do is to stop the docbase if it is running:

ps -ef | grep "docbase_name DOCBASE1 " | grep -v grep
$DOCUMENTUM/dba/dm_shutdown_DOCBASE1

 

Once done, and since we don’t want the docbase to be inadvertently restarted, we need to update the custom script mentioned above. In addition, we should also rename the docbase start script so that an installer won’t start the docbase either.

mv $DOCUMENTUM/dba/dm_start_DOCBASE1 $DOCUMENTUM/dba/dm_start_DOCBASE1_deactivated
vi $DOCUMENTUM/scripts/startstop
    ==> Duplicate the line starting with "DOCBASES=..."
    ==> Comment one of the two lines and remove the docbase DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        DOCBASES="DOCBASE2 DOCBASE3"
        #DOCBASES="DOCBASE1 DOCBASE2 DOCBASE3"

 

Ok so now the docbase has been stopped and can’t be started anymore, so let’s look at all the clients that were able to connect to it. If you have monitoring running on the Content Server (using the crontab for example), don’t forget to disable it too since the docbase isn’t running anymore; in the crontab, you can just comment the relevant lines (using “crontab -e”). On the Java Method Server (JMS) side, there are at least two applications you should take a look at (ServerApps and the ACS). To deactivate the docbase DOCBASE1 for these two applications, apply the following steps:

cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
vi ServerApps.ear/DmMethods.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

vi acs.ear/lib/configs.jar/config/acs.properties
    ==> Reorder the “repository.name.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Reorder the “repository.acsconfig.X=” properties for DOCBASE1 to have the biggest number (X is a number which goes from 1 to 3 in this case since I have 3 docbases)
    ==> Comment the “repository.name.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.acsconfig.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.login.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)
    ==> Comment the “repository.password.Y=” property with the biggest number (Y is the number for DOCBASE1 so should be 3 now)

 

So what has been done above? In the web.xml file, there is a reference to all docbases that are configured for the application, so commenting these lines simply prevents the JMS from trying to contact the docbase DOCBASE1 since it’s not running anymore. For the ACS, the update of the acs.properties file is a little bit more complex. What I usually do in this file is reorder the properties so that the docbases that aren’t running have the highest index. Since we have DOCBASE1, DOCBASE2 and DOCBASE3, with DOCBASE1 being the first docbase installed, it has by default the index N°1 inside acs.properties (e.g.: repository.name.1=DOCBASE1.DOCBASE1 // repository.name.2=DOCBASE2.DOCBASE2 // …). Reordering the properties allows you to just comment the highest number (3 in this case) for all properties while keeping numbers 1 and 2 enabled.
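
Once DOCBASE1 holds the highest index, the commenting itself can be scripted. The sed call below is a sketch under that assumption (GNU sed on Linux); the function name is mine, and you should keep a backup of acs.properties before editing it:

```shell
#!/bin/sh
# Sketch: comment out every "repository.<attr>.N=" line for a given
# index N, to be run after DOCBASE1 has been reordered to the highest
# index in acs.ear/lib/configs.jar/config/acs.properties.

disable_acs_index() {
    file="$1"; idx="$2"
    sed -i -E "s,^(repository\.(name|acsconfig|login|password)\.${idx}=),#\1," "$file"
}
```

For the example in this blog the call would be `disable_acs_index acs.properties 3`, leaving the index 1 and 2 properties untouched.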

In addition to the above, you might also have a BPM (xCP) installed, in which case you also need to apply the following step:

vi bpm.ear/bpm.war/WEB-INF/web.xml
    ==> Comment the 4 lines related to DOCBASE1 as follow:
        <!--init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param-->

 

Once the steps have been applied, you can restart the JMS using your preferred method. This is an example:

$DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
ps -ef | grep "MethodServer" | grep -v grep
nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &

 

After the restart, the JMS won’t log any more errors related to connection problems with DOCBASE1. For example, if you don’t update the ACS file (acs.properties), the ACS will still try to project itself to all docbases and will therefore fail for DOCBASE1.

The next component I want to describe isn’t installed by default on all Content Servers, but you might have it if you need document previews: the Thumbnail Server. Deactivating the docbase DOCBASE1 inside the Thumbnail Server is pretty easy too:

vi $DM_HOME/thumbsrv/conf/user.dat
    ==> Comment the 5 lines related to DOCBASE1:
        #[DOCBASE1]
        #user=dmadmin
        #local_folder=thumbnails
        #repository_folder=/System/ThumbnailServer
        #pfile.txt=/app/dctm/server/product/7.2/thumbsrv/conf/DOCBASE1/pfile.txt

sh -c "$DM_HOME/thumbsrv/container/bin/shutdown.sh"
ps -ef | grep "thumbsrv" | grep -v grep
sh -c "$DM_HOME/thumbsrv/container/bin/startup.sh"

 

If you don’t do that, the Thumbnail Server will try to contact all docbases configured in the “user.dat” file and, because of a bug in certain versions of the Thumbnail Server (see this blog for more information), it might even fail to start. Therefore commenting the lines related to DOCBASE1 inside this file is quite important.
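
Commenting a whole section of user.dat by hand is error-prone, so here is a small sketch of how it could be automated. It assumes each section starts with a "[NAME]" line and runs until the next "[" line, as in the excerpt above; verify this against your actual file before using it:

```shell
#!/bin/sh
# Sketch: print a user.dat-style file with one "[DOCBASE]" section
# commented out. Writes to stdout so the original file is untouched.

comment_userdat_section() {
    db="$1"; file="$2"
    awk -v db="$db" '
        /^\[/ { insec = ($0 == "[" db "]") }  # entering a new section
        insec { print "#" $0; next }          # comment lines of the target section
        { print }                             # pass everything else through
    ' "$file"
}
```

A possible usage would be `comment_userdat_section DOCBASE1 user.dat > user.dat.new`, followed by a manual review and a rename.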

 

2. Web Application Server

For the Web Application Server hosting your Documentum Administrator and D2/D2-Config clients, the steps are pretty simple: usually nothing or almost nothing has to be done. If you really want to be clean, there might be a few things to do, depending on what you configured… In this part, I will assume that you are using non-exploded applications (that is: war files). I will put these WAR files under $WS_HOME/applications/. In case your applications are exploded (meaning your D2 is a folder and not a war file), then you don’t have to extract the files (no need to execute the jar commands). If you are using a Tomcat Application Server, the applications will usually be exploded (folders) and placed under $TOMCAT_HOME/webapps/.

 – D2:

If you defined the LoadOnStartup property for DOCBASE1, then you might need to execute the following commands to extract the file, comment the lines for DOCBASE1 inside it and put the file back into the war file:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties
sed -i 's,^LoadOnStartup.DOCBASE1.\(username\|domain\)=.*,#&,' WEB-INF/classes/D2FS.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/D2FS.properties

 

Also, if you defined a default docbase in D2 and that docbase is DOCBASE1, then you need to change the default to DOCBASE2 or DOCBASE3. In my case, I will use DOCBASE2 as the new default docbase:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties
sed -i 's,^defaultRepository=.*,defaultRepository=DOCBASE2,' WEB-INF/classes/config.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/config.properties

 

Finally, if you are using Single Sign-On, you will have an SSO user. This is defined in the d2fs-trust.properties file with recent versions of D2, while it was defined in the shiro.ini file before. Since I’m using D2 4.5, the commands would be:

jar -xvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties
sed -i 's,^DOCBASE1.user=.*,#&,' WEB-INF/classes/d2fs-trust.properties
jar -uvf $WS_HOME/applications/D2.war WEB-INF/classes/d2fs-trust.properties

 

 – D2-Config:

Usually nothing is needed. Only running docbases will be available through D2-Config.

 

 – DA:

Usually nothing is needed, unless you have specific customization for DA, in which case you probably need to take a look at the files under the “custom” folder.

 

3. Full Text Server

For the Full Text Server, the steps are also relatively easy. The only thing that needs to be done is to stop the Index Agent related to the docbase DOCBASE1 and prevent it from starting again. In our environments, since we sometimes have several docbases installed on the same Content Server and several Index Agents installed on the same Full Text Server, we need to differentiate the names of the Index Agents; we usually just add the name of the docbase at the end: Indexagent_DOCBASE1. So let’s start by stopping the Index Agent:

ps -ef | grep "Indexagent_DOCBASE1" | grep -v grep
$XPLORE_HOME/jboss7.1.1/server/stopIndexagent_DOCBASE1.sh

 

Once done, and if I use the global startstop script mentioned earlier in this blog, the only remaining step is preventing the Index Agent from starting again, which can be done in the following way:

mv $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh_deactivated
vi $XPLORE_HOME/scripts/startstop
    ==> Duplicate the line starting with "INDEXAGENTS=..."
    ==> Comment one of the two lines and remove the Index Agent related to DOCBASE1 from the list that isn't commented
    ==> In the end, you should have something like:
        INDEXAGENTS="Indexagent_DOCBASE2 Indexagent_DOCBASE3"
        #INDEXAGENTS="Indexagent_DOCBASE1 Indexagent_DOCBASE2 Indexagent_DOCBASE3"

 

If you have a monitoring running on the Full Text Server for this Index Agent, don’t forget to disable it.

 

4. ADTS

The last section of this blog covers the ADTS (Advanced Document Transformation Services), also called the Rendition Server. The ADTS is fairly similar to all other Documentum components: first you install the binaries and then you configure a docbase to use/be supported by the ADTS. By doing that, the ADTS updates some configuration files, which therefore need to be updated again if you want to deactivate a docbase. As you know, the ADTS runs on a Windows Server, so I won’t show commands in this section; I will just point you to the configuration files that need to be edited and what to update inside them. In this section, I will use %ADTS_HOME% as the folder under which the ADTS has been installed. It’s usually a good idea to install the ADTS under a specific/separate drive (not the OS drive) like D:\CTS\.

So the first thing to do is to prevent the different profiles for a docbase to be loaded:

Open the file "%ADTS_HOME%\CTS\config\CTSProfileService.xml"
    ==> Comment the whole "ProfileManagerContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--ProfileManagerContext DocbaseName="DOCBASE1" ProcessExternally="false">
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightProfiles" CTSProfileName="lightWeightProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\lightWeightSystemProfiles" CTSProfileName="lightWeightSystemProfile"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\\docbases\\DOCBASE1\\config\\profiles\\heavyWeightProfiles" CTSProfileName="heavyWeightProfile"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Profiles" CTSProfileName="lightWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/System Profiles" CTSProfileName="lightWeightSystemProfileFolder"/>
            <CTSServerProfile CTSProfileValue="/System/Media Server/Command Line Files" CTSProfileName="heavyWeightProfileFolder"/>
            <CTSServerProfile CTSProfileValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\temp_profiles" CTSProfileName="tempFileDir"/>
            <CTSServerProfile CTSProfileValue="ProfileSchema.dtd" CTSProfileName="lwProfileDTD"/>
            <CTSServerProfile CTSProfileValue="MP_PROPERTIES.dtd" CTSProfileName="hwProfileDTD"/>
            <ForClients>XCP</ForClients>
        </ProfileManagerContext-->

 

Once that is done, the queue processors need to be disabled too:

Open the file "%ADTS_HOME%\CTS\config\CTSServerService.xml"
    ==> Comment the two "QueueProcessorContext" XML tags related to DOCBASE1
    ==> In the end, you should have something like (I'm not displaying the whole XML tags since they are quite long...):
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_mediaserver"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->
        <!--QueueProcessorContext DocbaseName="DOCBASE1">
            <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
            <CTSServer AttributeName="queueInterval" AttributeValue="10"/>
            <CTSServer AttributeName="maxThreads" AttributeValue="10"/>
            ...
            <CTSServer AttributeName="processOnlyParked" AttributeValue=""/>
            <CTSServer AttributeName="parkingServerName" AttributeValue=""/>
            <CTSServer AttributeName="notifyFailureMessageAdmin" AttributeValue="No"/>
        </QueueProcessorContext-->

 

After that, there is only one last configuration file to be updated: the session manager configuration. This is the one responsible for the errors printed during startup of the ADTS, because it defines which docbases the ADTS should try to contact, with which user/password, and how many retries should be performed:

Open the file "%ADTS_HOME%\CTS\config\SessionService.xml"
    ==> Comment the whole "LoginContext" XML tag related to DOCBASE1
    ==> In the end, you should have something like:
        <!--LoginContext DocbaseName="DOCBASE1" Domain="" isPerformanceLogRepository="false">
            <CTSServer AttributeName="userName" AttributeValue="adtsuser"/>
            <CTSServer AttributeName="passwordFile" AttributeValue="%ADTS_HOME%\CTS\docbases\DOCBASE1\config\pfile\mspassword.txt"/>
            <CTSServer AttributeName="maxConnectionRetries" AttributeValue="10"/>
        </LoginContext-->

 

Once the configuration files have been updated, simply restart the ADTS services for the changes to be applied.

 

And here we go: you now have a clean environment with one less docbase configured, without having to remove it on all servers. As a final note, if you ever want to reactivate the docbase, simply uncomment everything that was commented above, restore the default lines in the custom “startstop” scripts and rename the start scripts back to their original names (without the “_deactivated” suffix) on the Content Server and Full Text Server.
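
The renaming part of the reactivation can be sketched like this (the function name is mine; the "_deactivated" suffix matches the rename done earlier in this blog):

```shell
#!/bin/sh
# Sketch: restore a start script that was renamed with the
# "_deactivated" suffix during deactivation.

reactivate_script() {
    script="$1"
    if [ -f "${script}_deactivated" ]; then
        mv "${script}_deactivated" "$script"
        echo "restored $script"
    else
        echo "nothing to restore for $script"
    fi
}
```

On the Content Server you would call it as `reactivate_script $DOCUMENTUM/dba/dm_start_DOCBASE1`, and on the Full Text Server as `reactivate_script $XPLORE_HOME/jboss7.1.1/server/startIndexagent_DOCBASE1.sh`.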

 

 

The article Documentum – Deactivation of a docbase without uninstallation appeared first on the dbi services Blog.

Documentum – Not able to install IndexAgent with xPlore 1.6


In the last few months, we did some tests with Content Server 7.3, D2 4.7 and xPlore 1.6. As often, we found some funny/interesting behaviors and faced some issues. In this blog, I will talk about one issue I faced when trying to install a new IndexAgent with xPlore 1.6. The Content Server used was a CS 7.3 (no patch was available when I did this installation). This issue was present in both silent and non-silent installations.

 

So what was the issue? When trying to install a first IndexAgent, it looked like the installer wasn’t able to contact the Content Server. The GUI installer was stuck on the “Connection Broker Information” screen: after filling in the information and clicking on “Next”, the Full Text installer hung and nothing happened anymore.

 

Basically, on this step the installer tries to connect to the docbroker using the information provided (docbroker hostname and port). Of course, the docbroker and the docbase were running and responding properly.

 

To verify what was happening, I just checked the xPlore processes:

[xplore@full_text_server_01 ~]$ ps -ef | grep xplore
xplore    2423     1  0 08:04 ?        00:00:00 /bin/sh ./startPrimaryDsearch.sh
xplore    2425  2423  0 08:04 ?        00:00:00 /bin/sh $XPLORE_HOME/wildfly9.0.1/bin/standalone.sh
xplore    2572  2425  5 08:04 ?        00:03:01 $XPLORE_HOME/java64/1.8.0_77/bin/java -D[Standalone] -server -XX:+UseCompressedOops -server -XX:+UseCompressedOops -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=196m -Xms4096m -Xmx4096m -XX:MaxMetaspaceSize=512m -Xss1024k -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:-ReduceInitialCardMarks -Djava.awt.headless=true -Djboss.server.base.dir=$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch -Djava.net.preferIPv4Stack=true -Dorg.jboss.boot.log.file=$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch/log/server.log -Dlogging.configuration=file:$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch/configuration/logging.properties -jar $XPLORE_HOME/wildfly9.0.1/jboss-modules.jar -mp $XPLORE_HOME/wildfly9.0.1/modules org.jboss.as.standalone -Djboss.home.dir=$XPLORE_HOME/wildfly9.0.1 -Djboss.server.base.dir=$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch
xplore    3977  2572  0 08:06 ?        00:00:00 $XPLORE_HOME/dsearch/cps/cps_daemon/bin/CPSDaemon $XPLORE_HOME/dsearch/cps/cps_daemon/PrimaryDsearch_local_configuration.xml Daemon0 9322 NORMAL
xplore    4069  2572  0 08:06 ?        00:00:00 $XPLORE_HOME/dsearch/cps/cps_daemon/bin/CPSDaemon $XPLORE_HOME/dsearch/cps/cps_daemon/PrimaryDsearch_local_configuration.xml QDaemon0 9323 QUERY
xplore    9591  9568  0 08:45 pts/3    00:00:00 /bin/sh ./configIndexagent.sh
xplore    9592  9591  0 08:45 pts/3    00:00:08 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -Xmx262114000 -Xms262114000 com.zerog.lax.LAX /tmp/install.dir.9592/temp.lax /tmp/env.properties.9592 "-f" "config.properties"
xplore   11099  9592  0 08:51 pts/3    00:00:02 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/342186.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/install.dir.9592/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489
xplore   11928  9515  0 08:59 pts/1    00:00:00 ps -ef
xplore   11929  9515  0 08:59 pts/1    00:00:00 grep xplore

 

As you can see above, the PrimaryDsearch is up & running so that’s good but there are two additional interesting processes:

xplore    9592  9591  0 08:45 pts/3    00:00:08 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -Xmx262114000 -Xms262114000 com.zerog.lax.LAX /tmp/install.dir.9592/temp.lax /tmp/env.properties.9592 "-f" "config.properties"
xplore   11099  9592  0 08:51 pts/3    00:00:02 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/342186.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/install.dir.9592/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489

 

The most interesting one is the second line: this is the command the installer starts to check that the docbroker responds properly, and this process was stuck. To do some testing, I copied the utilities and the needed elements to a test folder so I would be able to execute this command myself whenever I wanted. Then I killed the installer, since it was stuck, and tried to contact the docbroker myself:

[xplore@full_text_server_01 ~]$ time $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/my.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/my.tmp/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489
^C
real    7m43.669s
user    0m1.618s
sys     0m0.153s

 

As you can see above, after 7 minutes and 43 seconds I just killed the process (^C) because it still hadn’t returned anything. Just to be sure, I checked our monitoring tool: it was able to connect to the docbase remotely without issue, so the problem was somehow linked to the Full Text Server.

 

To go deeper, the only thing I could do was to take some thread dumps of the java process to try to see where the bottleneck was. After doing that, it became quite clear that the server had an issue with entropy generation: the threads were blocked waiting for the RSA libraries to get a random seed from the system. If you are running Documentum (or anything else) on a Virtual Machine, you will most probably want to change the default random generation because the entropy usually isn’t sufficient on VMs. We always do that for all our customers’ environments but, since this was just a sandbox server to play with the newly released xPlore 1.6, it wasn’t set up at that time. With xPlore 1.5, this wouldn’t have caused any issue, but it looks like xPlore 1.6 is much more demanding on random seed generation. This most probably has something to do with security improvements made in xPlore 1.6.
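
A quick way to spot this situation is to look at the kernel's entropy counter before starting the installer. The sketch below reads /proc/sys/kernel/random/entropy_avail; the 1000-bit threshold is only a rule of thumb for "comfortable", not an official limit:

```shell
#!/bin/sh
# Sketch: report the kernel's currently available entropy in bits.
# Low values on an idle VM usually mean reads from /dev/random will block.

entropy_status() {
    avail=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
    if [ "$avail" -lt 1000 ]; then
        echo "LOW: ${avail} bits available"
    else
        echo "OK: ${avail} bits available"
    fi
}
entropy_status
```

On the sandbox described here, this kind of check would have flagged the entropy shortage before the installer ever got stuck.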

 

So to solve this issue, you just need to increase the entropy of your server. This can be done as follows (as root):

  1. Install the rng-tools package
  2. Update the options in the /etc/sysconfig/rngd file
  3. Set up the service to start automatically at OS boot

 

You can basically use the following commands for that (I’m using a RHEL so it’s yum based):

[root@full_text_server_01 ~]# yum -y install rng-tools.x86_64
Loaded plugins: product-id, search-disabled-repos, security, subscription-manager
Setting up Install Process
Resolving Dependencies
--> Running transaction check
...
Transaction Test Succeeded
Running Transaction
  Installing : rng-tools-5-2.el6_7.x86_64                                                                                                                                                                                     1/1
  Verifying  : rng-tools-5-2.el6_7.x86_64                                                                                                                                                                                     1/1

Installed:
  rng-tools.x86_64 0:5-2.el6_7

Complete!
[root@full_text_server_01 ~]# rpm -qf /etc/sysconfig/rngd
rng-tools-5-2.el6_7.x86_64
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# sed -i 's,EXTRAOPTIONS=.*,EXTRAOPTIONS=\"-r /dev/urandom -o /dev/random -t 0.1\",' /etc/sysconfig/rngd
[root@full_text_server_01 ~]# cat /etc/sysconfig/rngd
# Add extra options here
EXTRAOPTIONS="-r /dev/urandom -o /dev/random -t 0.1"
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# chkconfig --level 345 rngd on
[root@full_text_server_01 ~]# chkconfig --list | grep rngd
rngd            0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# service rngd start
Starting rngd:                                             [  OK  ]
[root@full_text_server_01 ~]#

 

When this is done, you need to force xPlore to use this new random generation mechanism. For that, just add one environment variable to the profile of the xplore user (as xplore):

[xplore@full_text_server_01 ~]$ echo 'export DEVRANDOM=/dev/urandom' >> ~/.bash_profile
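
Another commonly used workaround operates at the JVM level: the java.security.egd system property. The sketch below shows how it could be appended to the JVM options; whether JAVA_OPTS is the right variable depends on the start script of your instance, so treat this as an assumption to adapt. The "/dev/./urandom" spelling is deliberate, since some JDK versions special-case the literal path "/dev/urandom":

```shell
#!/bin/sh
# Sketch: make the JVM read its seed from /dev/urandom directly,
# instead of relying on the DEVRANDOM environment variable.
JAVA_OPTS="${JAVA_OPTS} -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTS
echo "$JAVA_OPTS"
```

Either approach (DEVRANDOM or java.security.egd) avoids blocking on /dev/random; the DEVRANDOM variable shown above is the one I used on this server.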

 

When this has been done, you can re-execute the java command and, in about 1 second, it should return a result like this:

[xplore@full_text_server_01 ~]$ $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/my.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/my.tmp/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489
0 [main] ERROR com.documentum.fc.common.impl.logging.LoggingConfigurator  - Problem locating log4j configuration
1 [main] WARN com.documentum.fc.common.impl.logging.LoggingConfigurator  - Using default log4j configuration
[xplore@full_text_server_01 ~]$

 

These two messages aren’t important; they are only printed because there is no log4j properties file on the command line, so the default one is used. The only important thing here is that the command returns something, meaning it is working! To confirm that, you can run the installer once more and this time the screen shouldn’t be stuck on the Connection Broker Information.

 

 

The article Documentum – Not able to install IndexAgent with xPlore 1.6 appeared first on the dbi services Blog.

Documentum – WildFly not able to start in a CS 7.3 / xPlore 1.6


As mentioned in previous blogs (like this one), we were doing some testing on the newly released versions of Documentum (CS 7.3, xPlore 1.6, and so on). In this blog, I will talk about something that can prevent the WildFly instances from starting properly (RedHat renamed the JBoss Application Server to WildFly in 2013, starting with version 8.0). As this issue is linked to WildFly, it might happen on both Content Server 7.3 and Full Text Server 1.6; in this blog, I will use a Content Server for the example.

 

So when I first installed a Content Server 7.3 (in full silent mode of course!), the Java Method Server wasn’t running properly. Actually, worse than that: it wasn’t even starting completely. Here are the entries found in the JMS log file (server.log):

JAVA_OPTS already set in environment; overriding default settings with values: -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: $DOCUMENTUM_SHARED/wildfly9.0.1

  JAVA: $DOCUMENTUM_SHARED/java64/JAVA_LINK/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer

=========================================================================

15:48:41,985 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
15:48:44,301 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
15:48:44,502 INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0049: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) starting
15:48:50,818 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 28) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
15:48:50,819 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 27) WFLYCTL0028: Attribute 'job-repository-type' in the resource at address '/subsystem=batch' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
15:48:50,867 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0071: There is no resource matching the expiry-address jms.queue.ExpiryQueue for the address-settings #, expired messages from destinations matching this address-setting will be lost!
15:48:50,868 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0072: There is no resource matching the dead-letter-address jms.queue.DLQ for the address-settings #, undelivered messages from destinations matching this address-setting will be lost!
15:48:51,024 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
15:48:51,026 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
15:48:51,109 INFO  [org.xnio] (MSC service thread 1-5) XNIO version 3.3.1.Final
15:48:51,113 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
15:48:51,128 INFO  [org.xnio.nio] (MSC service thread 1-5) XNIO NIO Implementation Version 3.3.1.Final
15:48:51,164 INFO  [org.jboss.remoting] (MSC service thread 1-5) JBoss Remoting version 4.0.9.Final
15:48:51,325 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
15:48:51,335 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 38) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
15:48:51,374 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 47) WFLYNAM0001: Activating Naming Subsystem
15:48:51,353 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 54) WFLYSEC0002: Activating Security Subsystem
15:48:51,359 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 55) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
15:48:51,387 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 57) WFLYWS0002: Activating WebServices Extension
15:48:51,410 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 45) WFLYJSF0007: Activated the following JSF Implementations: [main]
15:48:51,429 INFO  [org.jboss.as.security] (MSC service thread 1-3) WFLYSEC0001: Current PicketBox version=4.9.2.Final
15:48:51,499 INFO  [org.jboss.as.connector] (MSC service thread 1-7) WFLYJCA0009: Starting JCA Subsystem (IronJacamar 1.2.4.Final)
15:48:51,541 INFO  [org.jboss.as.naming] (MSC service thread 1-4) WFLYNAM0003: Starting Naming Service
15:48:51,554 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-7) WFLYMAIL0001: Bound mail session 
15:48:51,563 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0003: Undertow 1.2.9.Final starting
15:48:51,564 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0003: Undertow 1.2.9.Final starting
15:48:51,583 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 34) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
15:48:51,593 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) WFLYJCA0018: Started Driver service with driver-name = h2
15:48:51,931 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0014: Creating file handler for path $DOCUMENTUM_SHARED/wildfly9.0.1/welcome-content
15:48:51,984 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0012: Started server default-server.
15:48:52,049 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0018: Host default-host starting
15:48:52,112 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-8) MSC000001: Failed to start service jboss.undertow.listener.default: org.jboss.msc.service.StartException in service jboss.undertow.listener.default: Could not start http listener
        at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:150)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Protocol family unavailable
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at org.xnio.nio.NioXnioWorker.createTcpConnectionServer(NioXnioWorker.java:182)
        at org.xnio.XnioWorker.createStreamConnectionServer(XnioWorker.java:243)
        at org.wildfly.extension.undertow.HttpListenerService.startListening(HttpListenerService.java:115)
        at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:147)
        ... 5 more

...

15:49:37,585 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
    ("core-service" => "management"),
    ("management-interface" => "native-interface")
]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.remoting.server.management" => "org.jboss.msc.service.StartException in service jboss.remoting.server.management: WFLYRMT0005: Failed to start service
    Caused by: java.net.SocketException: Protocol family unavailable"}}
15:49:37,589 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
    ("core-service" => "management"),
    ("management-interface" => "http-interface")
]) - failure description: {
    "WFLYCTL0080: Failed services" => {"jboss.serverManagement.controller.management.http" => "org.jboss.msc.service.StartException in service jboss.serverManagement.controller.management.http: WFLYSRV0083: Failed to start the http-interface service
    Caused by: java.lang.RuntimeException: java.net.SocketException: Protocol family unavailable
    Caused by: java.net.SocketException: Protocol family unavailable"},
    "WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
        "Services that were unable to start:" => ["jboss.serverManagement.controller.management.http.shutdown"],
        "Services that may be the cause:" => ["jboss.remoting.remotingConnectorInfoService.http-remoting-connector"]
    }
}
...
15:49:37,653 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "acs.ear" (runtime-name : "acs.ear")
15:49:37,654 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "ServerApps.ear" (runtime-name : "ServerApps.ear")
...
15:49:37,901 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0063: Http management interface is not enabled
15:49:37,902 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0054: Admin console is not enabled
15:49:37,903 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started (with errors) in 57426ms - Started 225 of 424 services (16 services failed or missing dependencies, 225 services are lazy, passive or on-demand)
...
15:49:38,609 INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) WFLYSRV0009: Undeployed "acs.ear" (runtime-name: "acs.ear")
15:49:38,610 INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) WFLYSRV0009: Undeployed "ServerApps.ear" (runtime-name: "ServerApps.ear")
...
15:53:02,276 INFO  [org.jboss.as] (MSC service thread 1-4) WFLYSRV0050: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) stopped in 124ms

 

After all these errors, the logs show that the HTTP Management interface isn't enabled and then WildFly simply shuts itself down. So what is this “Protocol family unavailable”? It is actually linked to one of the changes introduced in WildFly compared to the old version (JBoss 7.1.1): WildFly supports and uses both IPv4 and IPv6 by default. Unfortunately, if your OS isn't configured to use IPv6, this error will appear and prevent your WildFly from starting.
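Before applying the workaround, you can quickly confirm whether the OS really lacks an IPv6 configuration. This is a minimal sketch for Linux, where /proc/net/if_inet6 lists the IPv6 addresses of all interfaces:

```shell
# On Linux, /proc/net/if_inet6 lists the IPv6 addresses of all interfaces.
# If it is missing or empty, WildFly's default dual-stack bind will fail
# with "Protocol family unavailable".
if [ -s /proc/net/if_inet6 ]; then
    echo "IPv6 available"
else
    echo "IPv6 not configured"
fi
```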

 

Fortunately, this is very easy to correct: you just have to force WildFly to only use IPv4. For that purpose, we will add a single JVM parameter in the start script of the JMS:

sed -i 's,^JAVA_OPTS="[^"]*,& -Djava.net.preferIPv4Stack=true,' $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh

 

This command simply adds ” -Djava.net.preferIPv4Stack=true” just before the ending double quote of the line starting with JAVA_OPTS= in the file $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh. As a side note, to do the same thing on a Full Text Server, just change the path and the name of the startup script (E.g.: $XPLORE_HOME/wildfly9.0.1/server/startPrimaryDsearch.sh).
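If you want to check the substitution logic before touching the real start script, you can rehearse it on a scratch file (the JAVA_OPTS value below is just a sample, not the real script content):

```shell
# Scratch copy with a sample JAVA_OPTS line (not the real start script)
printf 'JAVA_OPTS="-Xms4096m -Xmx4096m"\n' > /tmp/startMethodServer_sample.sh

# Same sed as above, pointed at the scratch copy
sed -i 's,^JAVA_OPTS="[^"]*,& -Djava.net.preferIPv4Stack=true,' /tmp/startMethodServer_sample.sh

# The flag ends up inside the quotes, right before the closing double quote
grep '^JAVA_OPTS=' /tmp/startMethodServer_sample.sh
# JAVA_OPTS="-Xms4096m -Xmx4096m -Djava.net.preferIPv4Stack=true"
```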

 

After that, just restart the JMS and it will be able to start without issue:

JAVA_OPTS already set in environment; overriding default settings with values: -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer -Djava.net.preferIPv4Stack=true
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: $DOCUMENTUM_SHARED/wildfly9.0.1

  JAVA: $DOCUMENTUM_SHARED/java64/JAVA_LINK/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer -Djava.net.preferIPv4Stack=true

=========================================================================

16:10:22,285 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
16:10:22,601 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
16:10:22,711 INFO  [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) starting
16:10:24,627 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 28) WFLYCTL0028: Attribute 'job-repository-type' in the resource at address '/subsystem=batch' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:10:24,645 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 27) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:10:24,664 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
16:10:24,665 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
16:10:24,701 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0071: There is no resource matching the expiry-address jms.queue.ExpiryQueue for the address-settings #, expired messages from destinations matching this address-setting will be lost!
16:10:24,702 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0072: There is no resource matching the dead-letter-address jms.queue.DLQ for the address-settings #, undelivered messages from destinations matching this address-setting will be lost!
16:10:24,728 INFO  [org.xnio] (MSC service thread 1-2) XNIO version 3.3.1.Final
16:10:24,737 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
16:10:24,745 INFO  [org.xnio.nio] (MSC service thread 1-2) XNIO NIO Implementation Version 3.3.1.Final
16:10:24,808 INFO  [org.jboss.remoting] (MSC service thread 1-2) JBoss Remoting version 4.0.9.Final
16:10:24,831 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 38) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
16:10:24,852 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
16:10:24,867 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 45) WFLYJSF0007: Activated the following JSF Implementations: [main]
16:10:24,890 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 55) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
16:10:24,899 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 54) WFLYSEC0002: Activating Security Subsystem
16:10:24,901 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 47) WFLYNAM0001: Activating Naming Subsystem
16:10:24,916 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 34) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
16:10:24,939 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0003: Undertow 1.2.9.Final starting
16:10:24,939 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0003: Undertow 1.2.9.Final starting
16:10:24,959 INFO  [org.jboss.as.security] (MSC service thread 1-2) WFLYSEC0001: Current PicketBox version=4.9.2.Final
16:10:24,967 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 57) WFLYWS0002: Activating WebServices Extension
16:10:24,977 INFO  [org.jboss.as.connector] (MSC service thread 1-4) WFLYJCA0009: Starting JCA Subsystem (IronJacamar 1.2.4.Final)
16:10:25,034 INFO  [org.jboss.as.naming] (MSC service thread 1-7) WFLYNAM0003: Starting Naming Service
16:10:25,035 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-7) WFLYMAIL0001: Bound mail session 
16:10:25,044 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-6) WFLYJCA0018: Started Driver service with driver-name = h2
16:10:25,394 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0014: Creating file handler for path $DOCUMENTUM_SHARED/wildfly9.0.1/welcome-content
16:10:25,414 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0012: Started server default-server.
16:10:25,508 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0018: Host default-host starting
16:10:25,581 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0006: Undertow HTTP listener default listening on /0.0.0.0:9080
16:10:25,933 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) WFLYJCA0001: Bound data source 
16:10:26,177 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-6) WFLYDS0013: Started FileSystemDeploymentService for directory $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments
16:10:26,198 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0027: Starting deployment of "ServerApps.ear" (runtime-name: "ServerApps.ear")
16:10:26,201 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-4) WFLYSRV0027: Starting deployment of "acs.ear" (runtime-name: "acs.ear")
16:10:26,223 INFO  [org.jboss.as.remoting] (MSC service thread 1-5) WFLYRMT0001: Listening on 0.0.0.0:9084
...
16:10:42,398 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 63) WFLYUT0021: Registered web context: /DmMethods
16:10:44,519 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 64) WFLYUT0021: Registered web context: /ACS
16:10:44,563 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "acs.ear" (runtime-name : "acs.ear")
16:10:44,564 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "ServerApps.ear" (runtime-name : "ServerApps.ear")
16:10:44,911 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://0.0.0.0:9085/management
16:10:44,912 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://0.0.0.0:9085
16:10:44,912 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started in 23083ms - Started 686 of 905 services (298 services are lazy, passive or on-demand)

 

As shown above, WildFly has now started successfully. In this example, I really took a default JMS setup with just a few custom JVM parameters (Xms, Xmx, and so on) but as you can see in the logs above, there are a few issues with the default setup, like deprecated or missing parameters. So I would strongly suggest taking a look at that and configuring your JMS properly once it is up & running!

 

Earlier in this blog, I mentioned that a few things changed between JBoss 7.1.1 (CS 7.2 / xPlore 1.5) and WildFly 9.0.1 (CS 7.3 / xPlore 1.6). In a future blog, I will talk about another change: how to set up a WildFly instance in SSL.

 

 

Cet article Documentum – WildFly not able to start in a CS 7.3 / xPlore 1.6 est apparu en premier sur Blog dbi services.

Documentum – Setup WildFly in HTTPS


In a previous blog (See this one), I mentioned some differences between JBoss 7.1.1 (coming with CS 7.2 / xPlore 1.5) and WildFly 9.0.1 (coming with CS 7.3 / xPlore 1.6). In this blog, I will talk about what is needed to set up a WildFly instance in HTTPS only or HTTPS+HTTP. This blog applies to both Content Servers and Full Text Servers. On the Full Text Server side, you might be aware of the ConfigSSL.groovy script which can be used to set up the Dsearch and IndexAgents in HTTPS (see this blog for more information). But if you are using a Multi-node Full Text setup (which most likely involves a lot of CPS (Content Processing Service) instances) or if you are just installing additional (or remote) CPS, then what can you do?

 

The ConfigSSL.groovy script can only be used to configure a PrimaryDsearch or an IndexAgent in SSL; it's not able to configure a CPS. Also, the setup of a CPS in SSL isn't described anywhere in the official documentation of EMC (/OpenText), so that's the main purpose of this blog: what is needed to set up a WildFly instance in HTTPS so you can use that for your CPS as well as for your JMS (Java Method Server).

 

As a prerequisite, you need to have a Java Keystore. There are plenty of documentation around that so I won’t describe this part. In this blog, I will use a Full Text Server and I will configure a CPS in HTTPS. If you want to do that on the Content Server, just adapt the few paths accordingly. As a first thing to do, I will setup two environment variables which will contain the passwords for the Java cacerts and my Java Keystore (JKS):

[xplore@full_text_server_01 ~]$ cd /tmp/certs/
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the Java cacerts password: " jca_pw
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the JKS password: " jks_pw

 

When you enter the first read command, the prompt isn't returned. Just write/paste the jca password and press enter. Then the prompt will be returned and you can execute the second read command in the same way to write/paste the jks password. Now you can update the Java cacerts and import your certificates. The second and third commands below (the ones with “-delete”) will remove any entries with the aliases mentioned (dbi_root_ca and dbi_int_ca). If you aren't sure about what you are doing, don't execute these two commands. If the alias already exists in the Java cacerts, you will see an error while executing the import commands. In such cases, just use another alias (or remove the existing one using the “-delete” commands…):

[xplore@full_text_server_01 certs]$ cp $JAVA_HOME/jre/lib/security/cacerts $JAVA_HOME/jre/lib/security/cacerts_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -delete -noprompt -alias dbi_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} > /dev/null 2>&1
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -delete -noprompt -alias dbi_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} > /dev/null 2>&1
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias dbi_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/dbi_root_certificate.cer" -storepass ${jca_pw}
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias dbi_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/dbi_int_certificate.cer" -storepass ${jca_pw}

 

In the commands above, you will need to customize at least the “-file” parameter. I used one Root CA and one Intermediate CA. Depending on your needs, you might need to do the same thing (if you are using a trust chain with 2 CAs) or just import the SSL certificate directly, and so on…

 

When this is done, you need to stop your application. In this case, I will stop my CPS (named “Node2_CPS1” here). To stay generic, I will define a new variable “server_name”. You can use this variable to keep the same commands on the different servers; just use the appropriate value: “MethodServer” (for a JMS), “PrimaryDsearch” (or the name of your Dsearch), “CPS” (or the name of your CPS) or anything else:

[xplore@full_text_server_01 certs]$ export server_name="Node2_CPS1"
[xplore@full_text_server_01 certs]$ $JBOSS_HOME/server/stop${server_name}.sh

 

In the command above, $JBOSS_HOME is the location of the JBoss/WildFly instance. For an xPlore 1.6, for example, it will be “$XPLORE_HOME/wildfly9.0.1”. For a Content Server 7.3, it will be “$DOCUMENTUM_SHARED/wildfly9.0.1”. I kept the name “JBOSS_HOME” because even if it has been renamed, I think it will take time before everybody uses “WildFly”, just like there are still a lot of people saying “docbase” even though this was replaced with “repository” in CS 7.0…
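For reference, $JBOSS_HOME can be exported like this depending on the product (the install roots below are the ones used throughout this blog; adjust them to your environment):

```shell
# Full Text Server (xPlore 1.6)
export JBOSS_HOME=$XPLORE_HOME/wildfly9.0.1

# Content Server 7.3 (use this one on a CS host instead)
# export JBOSS_HOME=$DOCUMENTUM_SHARED/wildfly9.0.1
```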

 

Then let’s move the JKS file to the right location:

[xplore@full_text_server_01 certs]$ mv /tmp/certs/full_text_server_01.jks $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore
[xplore@full_text_server_01 certs]$ chmod 600 $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore

 

Now you can configure the standalone.xml file of this application to handle HTTPS communications, because by default only HTTP is enabled:

[xplore@full_text_server_01 certs]$ cd $JBOSS_HOME/server/DctmServer_${server_name}/configuration/
[xplore@full_text_server_01 configuration]$ cp standalone.xml standalone.xml_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 configuration]$ 
[xplore@full_text_server_01 configuration]$ sed -i 's/inet-address value="${jboss.bind.address.management:[^}]*}/inet-address value="${jboss.bind.address.management:127.0.0.1}/' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<security-realm name="sslSecurityRealm"/,/<\/security-realm>/d' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<https-listener.*name="default-https"/d' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<security-realms>/a \            <security-realm name="sslSecurityRealm">\n                <server-identities>\n                    <ssl>\n                        <keystore path="'$JBOSS_HOME'/server/DctmServer_'${server_name}'/configuration/my.keystore" keystore-password="'${jks_pw}'"/>\n                    </ssl>\n                </server-identities>\n            </security-realm>' standalone.xml

 

So what are the commands above doing?

  • Line 2: Backup of the standalone.xml file
  • Line 4: Changing the default IP on which WildFly is listening for the Management Interface (by default 0.0.0.0 which means no restriction) to 127.0.0.1 so it’s only accessible locally
  • Line 5: Removing any existing Security Realm named “sslSecurityRealm”, if any (there isn’t any unless you already created one with this specific name…)
  • Line 6: Removing any HTTPS listener, if any (there isn’t any unless you already setup this application in HTTPS…)
  • Line 7: Creating a new Security Realm named “sslSecurityRealm” containing all the needed properties: keystore path and password. You can also define additional parameters like the alias, and so on. $JBOSS_HOME, ${server_name} and ${jks_pw} are surrounded with single quotes so they are evaluated before being put in the XML file.
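The security-realm sed (line 7 above) can also be rehearsed on a minimal scratch copy of standalone.xml to see the block it produces. The paths and password below are placeholders standing in for the real environment:

```shell
# Minimal scratch file containing only the element the sed anchors on
cat > /tmp/standalone_sample.xml <<'EOF'
        <security-realms>
        </security-realms>
EOF

# Placeholder values standing in for the real environment
JBOSS_HOME=/app/xplore/wildfly9.0.1
server_name="Node2_CPS1"
jks_pw="changeit"

# Same sed as line 7 above, pointed at the scratch copy
sed -i '/<security-realms>/a \            <security-realm name="sslSecurityRealm">\n                <server-identities>\n                    <ssl>\n                        <keystore path="'$JBOSS_HOME'/server/DctmServer_'${server_name}'/configuration/my.keystore" keystore-password="'${jks_pw}'"/>\n                    </ssl>\n                </server-identities>\n            </security-realm>' /tmp/standalone_sample.xml

# The new realm is nested right below <security-realms>, with the
# variables already expanded into the keystore path and password
cat /tmp/standalone_sample.xml
```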

 

Once this has been done, your security realm is defined but not yet used… So here comes the choice: either completely deactivate HTTP and keep only HTTPS, or use both at the same time. From my point of view, it's always better to keep only HTTPS, but in DEV environments, for example, you might get requests from developers to enable HTTP because it's simpler for them to develop something (web services, for example) using HTTP and then secure it. So it's up to you!

 

1. HTTP and HTTPS

To keep both enabled, you just have to create a new listener in the standalone.xml file which will be dedicated to HTTPS. This is the command that can do it:

[xplore@full_text_server_01 configuration]$ sed -i '/<http-listener .*name="default".*/a \                <https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="PUT_HERE_SSL_CIPHERS"/>' standalone.xml

 

In the above command, just replace “PUT_HERE_SSL_CIPHERS” with the SSL ciphers that you want your application to use. Depending on how you configure SSL in your environment, the list of needed SSL ciphers will change, so I will let you fill in that blank. Of course, if the name of your http-listener isn't “default”, then you will need to adapt the command above, but this is the default name for all WildFly instances so unless you already customized it, this will do.

 

2. HTTPS Only

If you don’t want to allow HTTP communications, then you just need to remove the HTTP listener and replace it with the HTTPS listener. This is the command that can do it:

[xplore@full_text_server_01 configuration]$ sed -i 's,<http-listener .*name="default".*,<https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="PUT_HERE_SSL_CIPHERS"/>,' standalone.xml

 

 

No matter if you choose HTTP+HTTPS or HTTPS only, you then have to start your application again in your preferred way. This is an example of a command that will do it:

[xplore@full_text_server_01 configuration]$ sh -c "cd $JBOSS_HOME/server/; nohup ./start${server_name}.sh & sleep 5; mv nohup.out nohup-${server_name}.out"

 

When the application has been started, you can try to access the application URL. These are some examples using the most common ports:

  • JMS => https://content_server_01:9082/DmMethods/servlet/DoMethod
  • PrimaryDsearch => https://full_text_server_01:9302/dsearch
  • IndexAgent => https://full_text_server_01:9202/IndexAgent
  • CPS => https://full_text_server_01:9402/cps/ContentProcessingService?wsdl

 

 

Cet article Documentum – Setup WildFly in HTTPS est apparu en premier sur Blog dbi services.

Documentum – Setup a CPS in HTTPS – Unable to register it


In a previous blog (see this one), I described how to set up a WildFly instance in HTTPS and I quickly mentioned something related to the CPS. When I wrote that previous blog, I was already aware of one specific issue that prevented the CPS from being usable, but I didn't want to mention it in order to stay focused on the WildFly-specific steps for the HTTPS setup, which are common between the JMS and the CPS.

 

In this blog, I will NOT talk about the Java cacerts because the commands are exactly the same as in the previous blog and they aren't dependent on the JBoss/WildFly version. As described in the previous blog, the setup of the CPS in HTTPS isn't described in the official EMC/OTX documentation (yet). Because of that, I worked with EMC on defining the minimal steps required to get one up & running.

 

So in this blog, and just to compare with the previous one, I will start with setting up a JBoss 7.1.1 in SSL (without the Java cacerts steps) without using ConfigSSL.groovy, since it's not able to handle the CPS setup. As before, I start by defining some environment variables (I'm using the same CPS name) and then stopping the CPS:

[xplore@full_text_server_01 ~]$ cd /tmp/certs/
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the JKS password: " jks_pw
[xplore@full_text_server_01 certs]$ 
[xplore@full_text_server_01 certs]$ export server_name="Node2_CPS1"
[xplore@full_text_server_01 certs]$ $JBOSS_HOME/server/stop${server_name}.sh
[xplore@full_text_server_01 certs]$ 
[xplore@full_text_server_01 certs]$ mv /tmp/certs/full_text_server_01.jks $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore
[xplore@full_text_server_01 certs]$ chmod 600 $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore

 

In this case with JBoss 7.1.1, the $JBOSS_HOME used above is “$XPLORE_HOME/jboss7.1.1”. So now the commands to configure a JBoss 7.1.1 in HTTPS are the following ones:

[xplore@full_text_server_01 certs]$ cd $JBOSS_HOME/server/DctmServer_${server_name}/configuration/
[xplore@full_text_server_01 configuration]$ cp standalone.xml standalone.xml_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 configuration]$ 
[xplore@full_text_server_01 configuration]$ sed -i 's/inet-address value="${jboss.bind.address.management:[^}]*}/inet-address value="${jboss.bind.address.management:127.0.0.1}/' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<subsystem .*xmlns="urn:jboss:domain:web:1.1"/a \            <connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" secure="true">\n                <ssl name="https" password="'${jks_pw}'" certificate-key-file="'$JBOSS_HOME'/server/DctmServer_'${server_name}'/configuration/my.keystore" cipher-suite="PUT_HERE_SSL_CIPHERS"/>\n            </connector>' standalone.xml

 

And that’s all. As you can see, it’s even easier to set up a JBoss 7.1.1 in SSL than it was with a WildFly 9.0.1. This is a small description of these commands:

  • Line 2: Backup of the standalone.xml file
  • Line 4: Changing the default IP on which WildFly is listening for the Management Interface (by default 0.0.0.0 which means no restriction) to 127.0.0.1 so it’s only accessible locally
  • Line 5: Adding a new connector specific to HTTPS, which defines the keystore path and password as well as the SSL ciphers to be used. Just replace “PUT_HERE_SSL_CIPHERS” with a comma-separated list of the SSL ciphers that you want to enable

 

With the above commands, both HTTP and HTTPS will be enabled. If you want to disable all HTTP communications (which I recommend), then you can just remove the http connector:

[xplore@full_text_server_01 configuration]$ sed -i '/<connector .*name="http" .*socket-binding="http"/d' standalone.xml

 

Alright, so now JBoss 7.1.1 is set up and enabled in HTTPS only, as we did for the WildFly setup. At this point, if you start the CPS again and then access the URL in HTTPS, it will work. Then what is the purpose of this blog? Well, the URL will work, but the issue I faced is that this CPS is actually unusable.

 

When you install a remote CPS, you then need to register it to the PrimaryDsearch so it can be used. If you do not register it, then it just won’t do much so it’s kind of useless. Usually when you install a remote CPS, you will want to register it in HTTP directly in the PrimaryDsearch. This is done in the following way (I’m considering a PrimaryDsearch on the same host with port 9302 in HTTPS):

  1. Open the dsearchadmin (E.g.: https://full_text_server_01:9302/dsearchadmin) and login with the JBoss admin account
  2. Navigate to “Home > Services > Content Processing Service”
  3. Click on the “Add” button
  4. Select the “remote” checkbox
  5. Set the URL to: http://full_text_server_01:9400/cps/ContentProcessingService?wsdl
  6. Set the Instance & Usage according to your needs
  7. Click on the “OK” button

 

If you are using a CPS in HTTP as in the URL mentioned (step 5 – the port might change depending on what you configured), then you will see a pop-up showing that the PrimaryDsearch was able to connect to the CPS and the CPS is now registered. As a side note, for the CPS to be fully usable, you will need to restart the PrimaryDsearch so the status can be changed to green (INITIALIZED).

 

So what’s the difference when using a CPS in HTTPS-only? Well when you are at the step 5, you will need to use the HTTPS URL of the CPS which is therefore: https://full_text_server_01:9402/cps/ContentProcessingService?wsdl. When you click on the “OK” button at the last step (7), you will not get a pop-up saying that the PrimaryDsearch was able to connect to the CPS but instead you will see a failure message. The exact error message from the pop-up is:

Fail to connect the remote CPS at https://full_text_server_01:9402/cps/ContentProcessingService?wsdl

 

On this failure pop-up, there is only an OK button and when you click on it, you can see that the CPS hasn't been registered at all. In the CPS logs, there is absolutely no information related to this issue, and the Dsearch logs aren't much better: the only message printed is an INFO message with the following content:

2017-05-01 10:59:02,951 UTC INFO [RMI TCP Connection(5)-147.167.175.209] c.e.d.core.fulltext.indexserver.cps.CPSSubmitter - Begin to connect to CPS at (https://full_text_server_01:9402/cps/ContentProcessingService?wsdl)  with connection 16

 

That's all: no errors, nothing. Of course, you can enable some debugging. For example, you can add "-Djavax.net.debug=ssl" to the JAVA_OPTIONS of the PrimaryDsearch and the remote CPS, restart them both and then try again. You should then be able to see (if you followed this blog properly) that the Dsearch and the CPS exchange the SSL Hello requests properly, but the CPS is still not registered.

 

To solve this issue, there is actually just one small thing that needs to be done: updating the web.xml file of the CPS. You just need to add a security-constraint tag inside this file to specify that all pages now require HTTPS:

[xplore@full_text_server_01 configuration]$ sed -i '/<\/web-app>/i \\n    <security-constraint>\n        <web-resource-collection>\n            <web-resource-name>secured pages</web-resource-name>\n            <url-pattern>/*</url-pattern>\n        </web-resource-collection>\n        <user-data-constraint>\n            <transport-guarantee>CONFIDENTIAL</transport-guarantee>\n        </user-data-constraint>\n    </security-constraint>' ../deployments/cps.war/WEB-INF/web.xml
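Since this sed edits the file in place, it is worth replaying the exact same expression on a throwaway stand-in first. The sketch below uses a minimal fake web.xml under /tmp (only the closing tag matters for the insertion); it is illustrative, not the real file:

```shell
# Minimal stand-in for the real web.xml: only </web-app> matters here.
cat > /tmp/web.xml.test <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
</web-app>
EOF

# Same sed expression as above, pointed at the throwaway copy.
sed -i '/<\/web-app>/i \\n    <security-constraint>\n        <web-resource-collection>\n            <web-resource-name>secured pages</web-resource-name>\n            <url-pattern>/*</url-pattern>\n        </web-resource-collection>\n        <user-data-constraint>\n            <transport-guarantee>CONFIDENTIAL</transport-guarantee>\n        </user-data-constraint>\n    </security-constraint>' /tmp/web.xml.test

# The security-constraint block should now sit just before </web-app>.
grep -c 'CONFIDENTIAL' /tmp/web.xml.test
```

If the grep prints 1, the expression is well-formed and you can safely run it against the real cps.war/WEB-INF/web.xml.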

 

When this has been added, the web.xml file for the CPS should look like that:

[xplore@full_text_server_01 configuration]$ cat ../deployments/cps.war/WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
        <listener>
        <listener-class>com.emc.cma.cps.management.CPSContextListener</listener-class>
    </listener>
        <listener>
        <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>
    </listener>
    <servlet>
        <servlet-name>AgentService</servlet-name>
        <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet>
        <servlet-name>ContextRegistryService</servlet-name>
        <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet>
        <servlet-name>ContentProcessingService</servlet-name>
        <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>ContentProcessingService</servlet-name>
        <url-pattern>/ContentProcessingService</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>ContextRegistryService</servlet-name>
        <url-pattern>/runtime/ContextRegistryService</url-pattern>
    </servlet-mapping>

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>secured pages</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
</web-app>
[xplore@full_text_server_01 configuration]$

 

Then simply start/restart the remote CPS and this time, when you register it inside the PrimaryDsearch, you will get the success message!

 

 

The article Documentum – Setup a CPS in HTTPS – Unable to register it first appeared on the dbi services Blog.

Documentum – Dsearch & IA not working in HTTPS using xPlore 1.6?


I'm warning you, this blog will be short, like extra short! At the beginning of this year, during our testing of the new xPlore 1.6 components (GA release, no patches were available yet), I was following my previous blog to set up a PrimaryDsearch and an IndexAgent in HTTPS using ConfigSSL.groovy. As the groovy script is provided with the xPlore components directly, the steps from my previous blog also apply to xPlore 1.6 (you just need to replace JBoss 7.1.1 with WildFly 9.0.1 in the paths). The script has been updated for WildFly, so that's good, but at the end of it, when I tried to access the URLs of the PrimaryDsearch and IndexAgent using HTTPS, they weren't responding at all…

 

When you are testing newly released components, you often face a lot of small issues, so I wasn't really worried. I then checked the log files of the xPlore components to see what the issue was:

[xplore@xplore_server_01 ~]$ grep -i ERROR $JBOSS_HOME/server/DctmServer_PrimaryDsearch/log/server.log
2017-02-18 11:17:36,414 UTC ERROR [org.jboss.msc.service.fail] (MSC service thread 1-6) MSC000001: Failed to start service jboss.server.controller.management.security_realm.sslRealm.key-manager: org.jboss.msc.service.StartException in service jboss.server.controller.management.security_realm.sslRealm.key-manager: WFLYDM0085: The alias specified 'xplore' does not exist in the KeyStore, valid aliases are {ft_alias}
2017-02-18 11:17:55,130 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "dsearchadmin.war")]) - failure description: {"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
2017-02-18 11:17:55,132 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "dsearch.war")]) - failure description: {"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
2017-02-18 11:17:55,133 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "dsearchinfocenter.war")]) - failure description: {"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
2017-02-18 11:17:55,135 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
2017-02-18 11:17:55,136 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("subsystem" => "webservices")]) - failure description: {"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
2017-02-18 11:17:55,137 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("subsystem" => "ejb3")]) - failure description: {"WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
2017-02-18 11:17:55,137 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
2017-02-18 11:17:55,138 UTC ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
2017-02-18 11:17:55,292 UTC ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started (with errors) in 26241ms - Started 208 of 403 services (12 services failed or missing dependencies, 218 services are lazy, passive or on-demand)
[xplore@xplore_server_01 ~]$

 

The only interesting message here is the first ERROR; all the others are a consequence of it. When I saw this error, I was a little bit surprised… I was pretty sure I had specified "-alias ft_alias" properly as part of the parameters of the ConfigSSL.groovy script, and yet the PrimaryDsearch was trying to use the alias "xplore"… So I checked the standalone.xml file and indeed, the alias used inside it was "xplore", which isn't what I asked the script to configure. So what happened exactly? The only way to understand is to check the content of the groovy script itself. So I opened this file and found the following:

  • On line 48, the variable “$alias” is initialized with a default value of “xplore” => This is OK
  • On line 138, the variable “$alias” is properly set to the value given as parameter (if specified) => This is OK
  • On lines 597, 626, 916, 1114 and 1322, the value used for the alias is hardcoded as "xplore" instead of using the variable "$alias" defined on lines 48 and 138… => This is the issue

 

So basically, between xPlore 1.5 and xPlore 1.6, the EMC xPlore Team updated the ConfigSSL.groovy script to handle the WildFly setup but somehow forgot to use the variable and left a hardcoded value instead… Too bad. So I just opened a bug with the EMC Support and provided them the explanation above. They confirmed the bug and the solution I proposed, and at that time they were planning to fix this issue in the xPlore 1.6 P02 release (end of April). I haven't checked P02 yet, but hopefully this small issue is solved there :).

 

So if you are using xPlore 1.6 P00 or P01, I would suggest modifying the standalone.xml file manually if the alias you are using inside the keystore isn't "xplore". Via the command line, this can be done as follows (just replace 'ft_alias' with your alias):

[xplore@xplore_server_01 ~]$ sed -i 's/\(.*keystore.*\) alias="[^"]*"/\1 alias="ft_alias"/' $JBOSS_HOME/server/DctmServer_PrimaryDsearch/configuration/standalone.xml
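To double-check that this substitution targets only the alias attribute, you can replay it on a one-line stand-in for the keystore definition. All attribute values below are made up for the test; the real line lives in the sslRealm section of standalone.xml:

```shell
# Throwaway stand-in for the keystore line of standalone.xml (values are fake).
cat > /tmp/standalone.xml.test <<'EOF'
<keystore path="xplore.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="xplore" key-password="secret"/>
EOF

# Same substitution as above, against the test copy: only alias="..." changes.
sed -i 's/\(.*keystore.*\) alias="[^"]*"/\1 alias="ft_alias"/' /tmp/standalone.xml.test

grep 'alias=' /tmp/standalone.xml.test
```

The other attributes (paths, passwords) must come out untouched; only alias="xplore" becomes alias="ft_alias".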

 

 

The article Documentum – Dsearch & IA not working in HTTPS using xPlore 1.6? first appeared on the dbi services Blog.

DA 7.2 UCF Transfer failing with SSL


This is a follow-up to a blog I wrote earlier: https://blog.dbi-services.com/documentum-administrator-ucf-troubleshooting/
The first one covered "how to find the error", not how to resolve it. In this blog, I will talk about an error I got at a customer about UCF.

Using DA 7.2, I couldn't download documents; in fact, every transfer was failing due to UCF. By following my previous blog, I found the specific error in the logs saying:
SSL Handshake failed.

You probably hit this issue as well if you use SSL with DA. By default, when you configure SSL with DA, it tries to validate the certificate against the Java CA certificate store. You can add your certificate to that keystore to prevent this issue.
But in my case, I had a keystore generated from certificates signed by the customer's own certificate authority, so I had to find another way.

I found the solution on page 58 of the documentation: https://support.emc.com/docu56531_Documentum-Web-Development-Kit-6.8-Development-Guide.pdf?language=en_US

You can deactivate the Java validation as follows:
cd $CATALINA_HOME_DA
vi ./webapps/da/wdk/contentXfer/ucf.installer.config.xml

Add the following option:

<option name="https.host.validation" persistent="false">
<value>false</value>
</option>

Now restart Tomcat (or your application server) and you will be able to transfer content again.

 

The article DA 7.2 UCF Transfer failing with SSL first appeared on the dbi services Blog.


Documentum – Change password – 1 – CS – AEK and Lockbox


This blog is the first of a series that I will publish in the next few days/weeks about how you can change some passwords in Documentum. In these blogs, I will talk about a lot of accounts like the Installation Owner of course, the Preset and Preferences accounts, the JKS passwords, the JBoss Admin passwords, the xPlore xDB passwords, and so on…

 

So, let's dig in with the first ones: the AEK and Lockbox passphrases. In this blog, I will only talk about the Content Server Lockbox, not the D2 Lockbox (which also lives under the JMS). I'm assuming here that the AEK key is stored in the Content Server Lockbox, as recommended starting with CS 7.2 for security reasons.

 

In this blog, I will use “dmadmin” as the Installation Owner. First, you need to connect to all Content Servers of this environment using the Installation Owner account. In case you have a High Availability environment, then you will need to do this on all Content Servers, obviously.

 

Then, I define some environment variables so that I'm sure I'm using the right passphrases and there is no typo in the commands. The first two commands below will be used to store the CURRENT and NEW passphrases for the AEK. The last two commands are for the Lockbox. When you execute the "read" command, the prompt isn't returned. Just paste the passphrase (it's hidden) and press enter. Then the prompt is returned and the passphrase is stored in the environment variable. I'm describing this in this blog only; in the next blogs, I will just use the commands without explanation:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT AEK passphrase: " c_aek_pp; echo
Please enter the CURRENT AEK passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
Please enter the NEW AEK passphrase:
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT lockbox passphrase: " c_lb_pp; echo
Please enter the CURRENT lockbox passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
Please enter the NEW lockbox passphrase:
[dmadmin@content_server_01 ~]$

 

Maybe a small backup of the Lockbox, just in case…:

[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/dba/secure $DOCUMENTUM/dba/secure_bck_$(date "+%Y%m%d-%H%M")
[dmadmin@content_server_01 ~]$

 

Ok, then to ensure that the commands will go smoothly, let's just verify that the environment variables are defined properly (I'm adding "__" at the end of the echo commands to be sure there is no space at the end of the passwords). Obviously, the "read -s" commands above have been executed to hide the passphrases, so if you don't want the passphrases to end up in the shell history, don't execute the two commands below.

[dmadmin@content_server_01 ~]$ echo "CURRENT_AEK_PP=${c_aek_pp}__"; echo "NEW_AEK_PP=${n_aek_pp}__"
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ echo "CURRENT_LOCKBOX_PP=${c_lb_pp}__"; echo "NEW_LOCKBOX_PP=${n_lb_pp}__"
[dmadmin@content_server_01 ~]$

 

To verify that the CURRENT AEK and Lockbox passphrases are correct, you can execute the following commands. Just a note: when you first create the Lockbox, the Documentum Installer will ask you which algorithm you want to use… I always choose the strongest one for security reasons, so I'm using "AES_256_CBC" below. If you are using something else, just adapt it:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -passphrase ${c_aek_pp} -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -resetfingerprint
Lockbox lockbox.lb
Lockbox Path /app/dctm/server/dba/secure/lockbox.lb
Reset host done
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -algorithm AES_256_CBC -check


Key - CSaek uses algorithm AES_256_CBC.

** An AEK store with the given passphrase exists in lockbox lockbox.lb and got status code returned as '0'.

 

For the three commands above, the result should always be "Operation succeeded", "Reset host done" and "got status code returned as '0'". If the second command fails, then it's obviously the Lockbox passphrase that isn't set properly; otherwise, it's the AEK passphrase.

 

Ok, now that all the passphrases are set and the current ones are working, we can start updating them. Let's first start with the AEK:

[dmadmin@content_server_01 ~]$ dm_crypto_change_passphrase -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -newpassphrase ${n_aek_pp}
[dmadmin@content_server_01 ~]$

 

Then the Lockbox:

[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -changepassphrase -newpassphrase ${n_lb_pp}
[dmadmin@content_server_01 ~]$

 

To verify that the NEW passphrases are now in use, you can run the same three commands again. The only difference is that you need to use the environment variables holding the NEW passphrases instead of the CURRENT (old) ones:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -resetfingerprint
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -keyname CSaek -passphrase ${n_aek_pp} -algorithm AES_256_CBC -check

 

Now we are almost done. If the three previous commands gave the correct output, then it's pretty certain that everything is OK. Nevertheless, to be 100% sure that the Content Server Lockbox isn't corrupted in some way, it is always good to reboot the Linux host too. Once the Linux host is up & running again, you will have to execute the first command above (dm_crypto_boot) to store the Lockbox information in Shared Memory so that the docbase(s) can start. If you are able to start the docbase(s) using the NEW passphrases, then the AEK and Lockbox have been updated successfully!

 

As a side note, if the Server Fingerprint has changed (e.g. after some recent OS patching or similar), then you might need to execute the second command (dm_crypto_manage_lockbox) too, as well as regenerate the D2 Lockbox (which isn't described in this blog but will be in a later one).

 

 

 

The article Documentum – Change password – 1 – CS – AEK and Lockbox first appeared on the dbi services Blog.

Documentum – Change password – 2 – CS – dm_bof_registry


When installing a Global Registry on a Content Server, you will be asked to set up the BOF username and password. The name of this user is "dm_bof_registry" by default, and even though you can change it, I will use this value in this blog. This is one of the important accounts created inside the Global Registry. So, what are the steps needed to change the password of this account?

 

Let's start with the simple part: changing the password of the account in the Global Registry. For this, I will use iapi below, but you can do the same thing using Documentum Administrator, idql, dqMan or anything else that works. First, let's log in on the Content Server, switch to the Installation Owner's account and define an environment variable that will contain the NEW password to be used:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the dm_bof_registry password: " bof_pwd; echo
Please enter the dm_bof_registry password:
[dmadmin@content_server_01 ~]$

 

Once that is done, we can execute the iapi commands below to update the password of the dm_bof_registry account. As there is a local trust on the Content Server for the Installation Owner, I don't need to enter the password, so I use "xxx" instead to log in to the Global Registry (GR_DOCBASE). Execute the commands below one after the other and don't include the "> " characters: just paste the iapi commands, and after pasting the final EOF, an iapi session will be opened and all commands will be executed, like that:

[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dm_bof_registry'
> set,c,l,user_password
> $bof_pwd
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000905 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Then, to verify that the password has been set properly in the Global Registry, we can try to log in with the dm_bof_registry account:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udm_bof_registry -P$bof_pwd


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000906 started for user dm_bof_registry."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

If the password has been changed properly, the output will be similar to the one above: a session is opened and the only command executed is "quit", which closes the iapi session automatically. That was pretty easy, right? Well, that's clearly not all there is to do to change the BOF password, unfortunately…

 

The "problem" with the dm_bof_registry account is that it is used on all DFC clients to register them, establish trust, and so on… Therefore, if you change the password of this account, you will need to propagate this change to all clients that connect to your Content Servers. In the steps below, I will provide some commands that can be used on the typical DFC clients (JMS, xPlore, DA, D2, …). If I'm not covering one of your DFC clients, the steps are basically always the same; it's just the commands that differ:

  • Listing all dfc.keystore
  • Updating the dfc.properties
  • Removing/renaming the dfc.keystore files
  • Restarting the DFC clients
  • Checking that the dfc.keystore files have been recreated

 

Before going through the different DFC clients, you first need to encrypt the BOF user's password, because it is always used in its encrypted form, so let's encrypt it on a Content Server:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${bof_pwd}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example ("AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0") but this will be the encrypted password of our user. I will use this value in the commands below, so whenever you see it, just replace it with what your "java -cp …" command returned.
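Since the same sequence listed above (update the dfc.properties, rename the dfc.keystore files, restart, verify) repeats on every DFC client, it can be wrapped in a small helper. This is a minimal sketch, not a production script: the directory layout under /tmp and the NEWVALUE/OLDVALUE strings are illustrative placeholders, not real values:

```shell
# Hypothetical helper: push a NEW encrypted BOF password into every
# dfc.properties below a directory and rename the dfc.keystore files
# so they get recreated on the next restart of the client.
update_bof_password() {
  base_dir="$1"; enc_pwd="$2"
  # Rewrite the dfc.globalregistry.password line in every dfc.properties.
  find "${base_dir}" -type f -name "dfc.properties" \
    -exec sed -i "s,dfc.globalregistry.password=.*,dfc.globalregistry.password=${enc_pwd}," {} \;
  # Back up (rename) every dfc.keystore with a date suffix.
  for k in $(find "${base_dir}" -type f -name "dfc.keystore"); do
    mv "${k}" "${k}_bck_$(date '+%Y%m%d')"
  done
}

# Demo on a scratch tree (NEWVALUE stands in for the real encrypted string):
mkdir -p /tmp/bof_demo/config
printf 'dfc.globalregistry.password=OLDVALUE\n' > /tmp/bof_demo/config/dfc.properties
touch /tmp/bof_demo/config/dfc.keystore
update_bof_password /tmp/bof_demo NEWVALUE
grep password /tmp/bof_demo/config/dfc.properties
```

On a real system you would point it at $DOCUMENTUM_SHARED, $XPLORE_HOME, etc., one at a time, and restart the corresponding client afterwards.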

 

I. Content Server

On the Content Server, the main DFC client is the JMS. You will have one dfc.properties for each JMS application, one global one for the CS, and so on… So, let's update all of that with just a few commands. Normally, you should only have the definition of dfc.globalregistry.password in the file $DOCUMENTUM_SHARED/config/dfc.properties. If you have this definition elsewhere, you should maybe consider using the "#include" statement to avoid duplicating the definitions…

[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

If you did it properly, all the dfc.keystore files will have been recreated after the restart, which you can verify by comparing the output of the first and last commands.

 

II. WebLogic Server

In this part, I will assume a WebLogic Server is used for the D2, D2-Config and DA applications. If you are using Tomcat instead, then just adapt the path. Below I will use:

  • $WLS_APPLICATIONS as the directory where all the Application WAR files are present. If you are using exploded applications (it’s just a folder, not a WAR file) OR if you are using an external dfc.properties file (it’s possible even with a WAR file to extract the dfc.properties for it), then the “jar -xvf” and “jar -uvf” commands aren’t needed.
  • $WLS_APPS_DATA as the directory where the Application Data are present (Application log files, dfc.keystore, cache, …)

 

These two folders might be the same depending on how you configured your Application Server. All I’m doing below is just updating the dfc.properties files for D2, D2-Config and DA in order to use the new encrypted password.

[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; done
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done

 

Once done, the next steps depend, again, on how you configured your Application Server. If you are using WAR files, you will need to redeploy them. If not, you might have to restart your Application Server for the change to be taken into account and for the keystore file to be re-created.

 

III. Full Text Server

On the Full Text Server, it's the same procedure again, this time for all Index Agents.

[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `ls $XPLORE_HOME/jboss7.1.1/server/DctmServer_*/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties`; do sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ service xplore stop
[xplore@xplore_server_01 ~]$ service xplore start
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

Again, if you did it properly, all the dfc.keystore files will have been recreated after the restart.

 

When everything has been done, just keep the environment up & running for some time and check the logs for authentication failures regarding the dm_bof_registry user. As you saw above, changing the dm_bof_registry password isn't really complicated, but it's quite repetitive and time-consuming, so better to script all this! :)

 

 

 

The article Documentum – Change password – 2 – CS – dm_bof_registry first appeared on the dbi services Blog.

Documentum – Change password – 3 – CS – Installation Owner


In this blog, I will describe the few steps needed to change the Documentum Installation Owner's password. As you all know, the Installation Owner's is (one of) the most important passwords in Documentum, and it is probably the first one you define, even before starting the installation.

 

As always, I will use a Linux environment and, in this case, I'm assuming the "dmadmin" account is local to each Content Server, and therefore the password must be changed on all of them. In case you have an AD integration or something similar, you can just change the password at the AD level, so that's no fun, right?!

 

So, let's start by logging in to all Content Servers using the Installation Owner's account. In case you don't remember the old password, you will have to use the root account instead. Changing dmadmin's password is pretty simple: you just have to change it at the OS level (again, this is the default… if you changed the dmadmin account type, then adapt accordingly):

[dmadmin@content_server_01 ~]$ passwd
    Changing password for user dmadmin.
    Changing password for dmadmin.
    (current) UNIX password:
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
[dmadmin@content_server_01 ~]$

 

To verify that dmadmin's password has been changed successfully, you can use the dm_check_password utility as follows (leave the extra #1 and #2 fields empty):

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
    Enter user name: dmadmin
    Enter user password:
    Enter user extra #1 (not used):
    Enter user extra #2 (not used):
    $DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)
[dmadmin@content_server_01 ~]$

 

Once you are sure that the password is set properly, one could think that it's over but actually, it's not… There is one additional place where this password must be set, and I'm not talking about new installations, which will obviously require you to enter the new password. For that, let's first encrypt the password:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW dmadmin's password: " dmadmin_pw; echo
    Please enter the NEW dmadmin's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${dmadmin_pw}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example ("AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0") but this will be the encrypted password of dmadmin. I will use this value in the commands below, so whenever you see it, just replace it with what your "java -cp …" command returned.

 

Then where should this be used? On the Full Text Server! So log in to your FT: inside the watchdog configuration, the dmadmin password is used for the IndexAgent connection. The commands below take a backup of the configuration file and then update it to use the new encrypted password:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/watchdog/config/
[xplore@full_text_server_01 config]$ cp dsearch-watchdog-config.xml dsearch-watchdog-config.xml_bck_$(date "+%Y%m%d")
[xplore@full_text_server_01 config]$
[xplore@full_text_server_01 config]$ sed -i 's,<property name="docbase_password" value="[^"]*",<property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0",' dsearch-watchdog-config.xml
[xplore@full_text_server_01 config]$

 

Small (but important) note on the above commands: if you are using the same FT for different environments, or if one of the IndexAgents is linked to a different dmadmin account (and therefore a different password), then you will need to open the file manually and replace the passwords for the corresponding XML tags (or use a different sed command, which will be more complicated). Each IndexAgent will have the following lines for its configuration:

<application-config instance-name="<hostname>_9200_IndexAgent" name="IndexAgent">
        <properties>
                <property name="application_url" value="https://<hostname>:9202/IndexAgent"/>
                <property name="docbase_user" value="dmadmin"/>
                <property name="docbase_name" value="DocBase1"/>
                <property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0"/>
                <property name="servlet_wait_time" value="3000"/>
                <property name="servlet_max_retry" value="5"/>
                <property name="action_on_servlet_if_stopped" value="notify"/>
        </properties>
        <tasks>
                ...
        </tasks>
</application-config>
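For such cases, the substitution can be scoped to a single application-config block by its instance-name. Here is a hedged sketch, dry-run on a throw-away sample file (the instance names and the simplified structure below are placeholders, not a real watchdog configuration):

```shell
# Demo only: a simplified two-IndexAgent sample, so the scoped sed can be
# verified safely before touching the real dsearch-watchdog-config.xml.
cat > /tmp/watchdog_demo.xml << 'EOF'
<application-config instance-name="host_9200_IndexAgent" name="IndexAgent">
        <property name="docbase_password" value="OLD_ENC_PW_1"/>
</application-config>
<application-config instance-name="host_9220_IndexAgent" name="IndexAgent">
        <property name="docbase_password" value="OLD_ENC_PW_2"/>
</application-config>
EOF

# Same sed as before, but restricted to the block of one instance-name:
ia_name="host_9200_IndexAgent"
new_pw="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0"
sed -i "/instance-name=\"${ia_name}\"/,/<\/application-config>/ s,<property name=\"docbase_password\" value=\"[^\"]*\",<property name=\"docbase_password\" value=\"${new_pw}\"," /tmp/watchdog_demo.xml

# Only the first block is updated; the second keeps its old password:
grep "docbase_password" /tmp/watchdog_demo.xml
```

Once the dry run behaves as expected, the same range-scoped sed can be applied to the real file, one instance-name at a time.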

 

Once the above modification has been done, simply restart the xPlore components.

 

Another thing that must be done is linked to D2 and D2-Config… If you are using these components, then you will need to update the D2 Lockbox on the Content Server side, and if you defined the LoadOnStartup property, you will also have to put the dmadmin password in the D2 Lockbox on the Web Application side. In this blog, I won’t discuss the full recreation of the D2 Lockbox with a new password/passphrase since this is pretty simple and most likely known by everybody; I’m just going to update the dmadmin password inside the D2 Lockbox for the different properties. If you would like a more complete blog about the lockbox, just let me know! This only applies to “not so old nor so recent” D2 versions: the D2 Lockbox was introduced only a few years ago and is not present anymore in D2 4.7, so…

 

On the Content Server – I’m just setting up the environment to contain the libraries needed to update the D2 Lockbox and then updating the D2-JMS properties inside the lockbox. I’m using $DOCUMENTUM/d2-lib as the root folder under which the D2 Installer put the libraries and initial lockbox:

[dmadmin@content_server_01 ~]$ export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
[dmadmin@content_server_01 ~]$ export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
[dmadmin@content_server_01 ~]$ export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH
[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/d2-lib/lockbox $DOCUMENTUM/d2-lib/lockbox-bck_$(date "+%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.password ${dmadmin_pw}; done
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.${docbase}.password ${dmadmin_pw}; done

 

The last command above mentions “${docbase}.${docbase}”… The first one is indeed the name of the docbase but the second one is the name of the local dm_server_config. Therefore, for a single Content Server the above commands are probably enough (since by default the dm_server_config name = docbase name) but if you have a HA setup, then you will also need to include the remote dm_server_config names for each docbase (alternatively, you can use wildcards…). Once that is done, just replace the old lockbox with the new one in the JMS.
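To make the HA case concrete, here is a small sketch that builds the property names to set, one per dm_server_config. The server config names below (“DocBase1” and “DocBase1_CS2”) are placeholders for illustration; in practice each printed key would be passed to SetLockboxProperty as above:

```shell
# Placeholder names for illustration: one docbase served by two Content
# Servers, hence two dm_server_config objects (the default one, named like
# the docbase, plus the remote one).
docbase="DocBase1"
for sconfig in DocBase1 DocBase1_CS2; do
    # Real usage (sketch): java com.emc.common.java.crypto.SetLockboxProperty \
    #     $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.${sconfig}.password ${dmadmin_pw}
    echo "D2-JMS.${docbase}.${sconfig}.password"
done
```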

 

On the Web Application Server – the same environment variables are needed, but of course the paths change, and you might need to include the C6-Common jar file too (which is known to cause issues with WebLogic if it is still in the CLASSPATH when you start it). So on the Web Application Server, I’m also setting up the environment variables with the dmadmin password and the D2 Lockbox passphrase, as well as another variable containing the list of docbases to loop on:
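As a reference, here is a hedged sketch of that environment setup on the Web Application Server side. Every path below ($WLS_DOMAIN/D2 as the lockbox/library location) and the docbase list are assumptions to adapt to your own setup:

```shell
# Assumed layout: lockbox and D2 jars under $WLS_DOMAIN/D2 (adapt as needed).
export LD_LIBRARY_PATH=$WLS_DOMAIN/D2/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
export PATH=$WLS_DOMAIN/D2/lockbox/lib/native/linux_gcc34_x64:$PATH
export CLASSPATH=$WLS_DOMAIN/D2/D2.jar:$WLS_DOMAIN/D2/LB.jar:$WLS_DOMAIN/D2/LBJNI.jar:$CLASSPATH

# Placeholder values: in practice these come from 'read -s -p ...' prompts
# so the passwords never appear on the command line.
dmadmin_pw="NEW_DMADMIN_PASSWORD"      # the NEW dmadmin password
d2method_pp="D2_LOCKBOX_PASSPHRASE"    # the D2 Lockbox passphrase
DOCBASES="DocBase1 DocBase2"           # docbases to loop on
```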

[weblogic@weblogic_server_01 ~]$ for docbase in $DOCBASES; do java com.emc.common.java.crypto.SetLockboxProperty $WLS_DOMAIN/D2/lockbox LoadOnStartup.${docbase}.password ${dmadmin_pw} ${d2method_pp}; done
[weblogic@weblogic_server_01 ~]$

 

With the D2 Lockbox, you need to restart the components using it when you recreate it from scratch. However, when you just update a property inside it, like above, a restart is usually not needed: the next time the password is needed, it will be picked up from the Lockbox.

 

Last comment on this: if you are using an ADTS and you used the dmadmin account to manage it (I wouldn’t recommend this! Please use a dedicated user instead), then the password is also encrypted in a password file for each docbase under “%ADTS_HOME%/CTS/docbases/”.

 

 

Cet article Documentum – Change password – 3 – CS – Installation Owner est apparu en premier sur Blog dbi services.

Documentum – Change password – 4 – CS – Presets & Preferences


In a previous blog (see this one), I already provided the steps to change the BOF password and I mentioned that this was more or less the only important account in the Global Registry. Well in this blog, I will show you how to change the passwords for the two other important accounts: the Presets and Preferences accounts.

 

These two accounts can actually be created in a dedicated repository for performance reasons but by default they will be taken from the Global Registry and they are used – as you can easily understand – to create Presets and Preferences…

 

As said above, these accounts are docbase accounts, so let’s start by setting up some environment variables containing the passwords and then updating these passwords on a Content Server:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dmc_wdk_presets_owner'
> set,c,l,user_password
> $prespw
> save,c,l
> retrieve,c,dm_user where user_login_name='dmc_wdk_preferences_owner'
> set,c,l,user_password
> $prefpw
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000907 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> ...
110f123456000145
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Again, to verify that the passwords have been set properly, you can try to login to the respective accounts:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_presets_owner -P$prespw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000908 started for user dmc_wdk_presets_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_preferences_owner -P$prefpw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000909 started for user dmc_wdk_preferences_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

When the docbase accounts have been updated, the first part is done. That’s good but, just like for the BOF account, you still need to update the references everywhere… Fortunately, for the Presets and Preferences accounts there are fewer references so it’s less of a pain in the… ;)

 

There are references to these two accounts in the WDK-based Applications. Below I will use Documentum Administrator as an example, deployed as a WAR file on a WebLogic Server; the steps would be the same for other Application Servers, except that you might use exploded folders instead of war files. Below I will use:

  • $WLS_APPLICATIONS as the directory where the DA WAR file is present.
  • $WLS_APPS_DATA as the directory where the Data are present (log files, dfc.keystore, cache, …).

 

These two folders might be the same depending on how you configured your Application Server. So, first of all, let’s encrypt the two passwords on the Application Server using the DA libraries:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war wdk/app.xml WEB-INF/classes WEB-INF/lib/dfc.jar WEB-INF/lib
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ java -Djava.security.egd=file:///dev/./urandom -classpath WEB-INF/classes:WEB-INF/lib/dfc.jar:WEB-INF/lib/commons-io-1.2.jar com.documentum.web.formext.session.TrustedAuthenticatorTool $prespw $prefpw
Encrypted: [jpQm5FfqdD3HWqP4mgoIIw==], Decrypted: [Pr3seTp4sSwoRd]
Encrypted: [YaGqNkj2FqfQDn3gfna8Nw==], Decrypted: [Pr3feRp4sSwoRd]
[weblogic@weblogic_server_01 ~]$

 

Once this has been done, let’s check the old passwords, update them in the app.xml file for DA, and then check that the update has been done. The sed commands below are pretty simple: the first part searches for the parent XML tag (so either <presets>…</presets> or <preferencesrepository>…</preferencesrepository>) and the second part replaces the first occurrence of the <password>…</password> line INSIDE the XML tag mentioned in the command (presets or preferencesrepository) with the new password we encrypted before. So, again, just replace my encrypted passwords with what you got:
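Before running them on the real file, the scoped substitution can be dry-run on a throw-away sample (a demonstration only, with fake values; the real app.xml obviously contains much more):

```shell
# Demo only: minimal fake app.xml fragment to verify the range-scoped sed.
cat > /tmp/app_demo.xml << 'EOF'
<presets>
   <password>OLD_PRESETS_ENC</password>
</presets>
<preferencesrepository>
   <password>OLD_PREFS_ENC</password>
</preferencesrepository>
EOF

# Only the <password> inside <presets>...</presets> is replaced; the one
# inside <preferencesrepository> is untouched:
sed -i "/<presets>/,/<\/presets>/ s,<password>.*</password>,<password>jpQm5FfqdD3HWqP4mgoIIw==</password>," /tmp/app_demo.xml
grep "<password>" /tmp/app_demo.xml
```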

[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>tqQd5gfWGF3tVacfmgwL2w==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>LdFinAwf2F2fuB29cqfs2w==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<presets>/,/<\/presets>/ s,<password>.*</password>,<password>jpQm5FfqdD3HWqP4mgoIIw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<preferencesrepository>/,/<\/preferencesrepository>/ s,<password>.*</password>,<password>YaGqNkj2FqfQDn3gfna8Nw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>jpQm5FfqdD3HWqP4mgoIIw==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>YaGqNkj2FqfQDn3gfna8Nw==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war wdk/app.xml
[weblogic@weblogic_server_01 ~]$ rm -rf WEB-INF/ wdk/
[weblogic@weblogic_server_01 ~]$

 

Normally the passwords returned by the second grep command should be different from the original ones and should match the values returned by the Java command previously executed to encrypt the Presets and Preferences passwords. Once that is done, simply repack the war file (the jar -uvf command above) and redeploy it (if needed).

 

To verify that the passwords are properly set you can simply stop DA, remove the cache containing the Presets’ jars and restart DA. If the jars are automatically re-created, then the passwords should be OK:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPS_DATA/documentum.da/dfc.data/cache
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l
total 4
drwxr-x---. 4 weblogic weblogic 4096 Jul 15 20:58 7.3.0000.0205
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

This last ‘ls’ command will display a list of 10 or 15 jars (12 for me in the DA 7.3 GA release) as well as a few files (content.lck, content.xml and GR_DOCBASE.lck usually). If you don’t see any jar files before the restart, it means the old password was probably not correct… Now, to verify that the new passwords have been put properly in the app.xml file, simply stop the Managed Server hosting DA in your preferred way (I will use “msDA-01” for the example below), then remove the cache folder and restart DA. Once DA is up & running again, it will re-create this cache folder in a few seconds and all the jars should be back:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stop msDA-01
  ** Managed Server msDA-01 stopped
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ rm -rf ./7.3*/
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop start msDA-01
  ** Managed Server msDA-01 started
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sleep 30
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

If you did it properly, the jars will be back. If you want a list of the jars that should be present, take a look at the file “./7.3.*/bof/*/content.xml”. Obviously, above I was using the DA 7.3 GA release so my cache folder starts with 7.3.xxx. If you are using another version of DA, the name of this folder will change, so just keep that in mind.

 

 

Cet article Documentum – Change password – 4 – CS – Presets & Preferences est apparu en premier sur Blog dbi services.

Documentum – Change password – 5 – CS/FT – JBoss Admin


The next password I wanted to blog about is the JBoss Admin password. As you know, there are several JBoss Application Servers in Documentum, the most used being the ones for the Java Method Server (JMS) and for the Full Text Servers (Dsearch/IndexAgent). In this blog, I will only talk about the JBoss Admin password of the JMS and IndexAgents, simply because I will cover the Dsearch JBoss instance in another blog about the xDB.

 

The steps are exactly the same for all JBoss instances; it’s just a matter of checking/updating the right file. In this blog, I will still separate the steps for the JMS and the IndexAgents because I usually have more than one IndexAgent on the same FT, and I’m therefore also providing a way to update all JBoss instances at the same time using the right commands.

 

As always, I will define an environment variable to store the password, to avoid using clear text passwords in the shell. The generic steps to change a JBoss Admin password in Documentum are pretty simple:

  1. Store the password in a variable
  2. Encrypt the password
  3. Back up the old configuration file
  4. Replace the password file with the new encrypted password
  5. Restart the component
  6. Check the connection with the new password
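Put together, steps 1 to 4 can be sketched as a tiny script. The paths below are throw-away demo values, not a real JMS/IndexAgent layout, and the encrypted value would normally come from RegistryPasswordUtils:

```shell
# Demo paths only: a real instance would use the 'configuration' folder
# under the JBoss server directory (as shown in the sections below).
CONF_DIR="/tmp/demo_jboss/configuration"
mkdir -p "$CONF_DIR"
echo "admin=OLD_ENCRYPTED_PW" > "$CONF_DIR/dctm-users.properties"

# Steps 1 and 2 (sketch): the encrypted value from RegistryPasswordUtils.
new_encrypted="AAAADEMOENCRYPTEDVALUE"

# Step 3: back up the old configuration file.
cp "$CONF_DIR/dctm-users.properties" "$CONF_DIR/dctm-users.properties_bck_$(date +%Y%m%d)"

# Step 4: replace the password file with the new encrypted password.
{
  echo "# users.properties file to use with UsersRolesLoginModule"
  echo "admin=${new_encrypted}"
} > "$CONF_DIR/dctm-users.properties"

cat "$CONF_DIR/dctm-users.properties"
# Steps 5 and 6 (restart + curl check) are covered in the sections below.
```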

 

As you can see above, there is actually nothing in these steps that really changes a password… We are just replacing a string inside a file with another string and that’s it, the password is changed! That’s really simple but it’s also a security issue since you do NOT need to know the old password… That’s how Documentum works with JBoss…

 

I. JMS JBoss Admin

For the JMS JBoss Admin, you obviously need to connect to all Content Servers and then perform the steps. Below are the commands I use to set the variable, encrypt the password and then update the password file with the new encrypted password (I’m just overwriting it):

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp "$DOCUMENTUM_SHARED/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
[dmadmin@content_server_01 ~]$ mv dctm-users.properties dctm-users.properties_bck_$(date "+%Y%m%d")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo "# users.properties file to use with UsersRolesLoginModule" > dctm-users.properties
[dmadmin@content_server_01 ~]$ echo "admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x" >> dctm-users.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat dctm-users.properties
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$

 

At this point, the new password has been put in the file dctm-users.properties in its encrypted form, so you can now restart the component and check the status of the JBoss Application Server. To check that, I will use below a small curl command which is really useful… If, just like me, you always restrict the JBoss Administration Console to 127.0.0.1 (localhost only) for security reasons, then this is really awesome: you don’t need to start an X server, a browser and all this stuff, simply type the password when asked and voilà!

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server
[dmadmin@content_server_01 ~]$ ./stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sleep 30
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ curl -g --user admin -D - http://localhost:9085/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[dmadmin@content_server_01 ~]$

 

If everything has been done properly, you should get a “HTTP/1.1 200 OK” status meaning that the JBoss Application Server is up & running, and the “result” should be “running”. This proves that the password provided in the command matches the encrypted one from the file dctm-users.properties, because the JMS is able to answer your request.

 

II. IndexAgent JBoss Admin

For the IndexAgent JBoss Admin, you obviously need to connect to all Full Text Servers and perform the steps again. Below are the commands to do that, adapted in case you have several IndexAgents installed. Please note that the commands below will set the same admin password for all JBoss instances (all IndexAgent JBoss Admins). Therefore, if that’s not what you want, you will have to take the commands from the JMS section and adapt the paths.

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ $JAVA_HOME/bin/java -cp "$XPLORE_HOME/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do mv ./$i/configuration/dctm-users.properties ./$i/configuration/dctm-users.properties_bck_$(date "+%Y%m%d"); done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "# users.properties file to use with UsersRolesLoginModule" > ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x" >> ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "--$i:"; cat ./$i/configuration/dctm-users.properties; echo; done
--DctmServer_Indexagent_DocBase1:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase2:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase3:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

[xplore@full_text_server_01 ~]$

 

At this point, the new password has been put in its encrypted form in the file dctm-users.properties for each IndexAgent. So, the next step is to restart all the components and check the status of the JBoss instances. Just like for the JMS, I will use below the curl command to check the status of a specific IndexAgent:

[xplore@full_text_server_01 ~]$ for i in `ls stopIndexag*.sh`; do ./$i; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -g --user admin -D - http://localhost:9205/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If you want to check all IndexAgents at once, you can use this command instead (it’s a long one I know…):

[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml|sed 's,.*http.port:\([0-9]*\).*,\1,'`; echo; echo "  ** Please enter below the password for '$i' ($port)"; curl -g --user admin -D - http://localhost:$port/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'; done

  ** Please enter below the password for 'DctmServer_Indexagent_DocBase1' (9205)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:35 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase2' (9225)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:42 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase3' (9245)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:45 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If everything has been done properly, you should get a “HTTP/1.1 200 OK” status for all IndexAgents.
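For readability, here is the port-discovery part of that one-liner laid out step by step, demonstrated on a throw-away fragment (a real standalone.xml obviously contains much more than this single line):

```shell
# Demo setup: fake IndexAgent directory containing just the socket-binding
# line that the one-liner greps for.
mkdir -p /tmp/ft_demo/DctmServer_Indexagent_Demo/configuration
cat > /tmp/ft_demo/DctmServer_Indexagent_Demo/configuration/standalone.xml << 'EOF'
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9205}"/>
EOF

cd /tmp/ft_demo
for i in `ls -d DctmServer_Indexag*`; do
  # Extract the management HTTP port from the instance's standalone.xml:
  port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml | sed 's,.*http.port:\([0-9]*\).*,\1,'`
  echo "$i -> management port $port"
  # The real one-liner then curls http://localhost:$port/management
done
```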

 

 

Cet article Documentum – Change password – 5 – CS/FT – JBoss Admin est apparu en premier sur Blog dbi services.
