
Documentum Multiple ADTS – Ratio of rendition creations between instances (Part 2)


This is the second part of my previous blog about the rendition creation ratio between two ADTS servers. In this part I will present another way of getting this ratio. This method doesn't require auditing to be enabled, which makes it preferable in terms of database space footprint.

DMR_CONTENT

This method uses objects that already exist in the docbase and are populated each time a rendition is created: in fact, the object is the content of the rendition itself! Fortunately, the server where the content was created is recorded in the object.

You just have to adapt the following query and execute it:

select set_client, count(*) as val from dmr_content where full_format = 'pdf' and set_client like '%chbsmv_dmrsp%' group by set_client;

[Screenshot: query result showing the rendition count per ADTS instance]

And now you have all your ADTS servers listed with the total number of renditions each of them created.
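For illustration, the output looks something like this (the instance names and counts below are hypothetical):

set_client            val
-------------------   -----
chbsmv_dmrsp01        1250
chbsmv_dmrsp02        1198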

 



EMC World 2015 – Day 1 at Momentum


This was the first day of my first EMC World conference, and especially of Momentum, which covers the Enterprise Content Division (ECD) products, solutions, strategies, and so on. The start was great: being in Las Vegas, you have the feeling you are on another planet, and I had the same feeling during the General Session and the ECD Keynote; each time, good explanations coupled with good shows.

The information I got was interesting, and some questions came to my mind, questions that I hope will be answered over the next few days.

InfoArchive

Before attending the General Session I went to another one about EMC InfoArchive. Today I work mainly with the Documentum Content Server and the products around it, like xPlore, ADTS, D2, and so on.

To be prepared for future customer requests and challenges, I wanted to see what is behind InfoArchive. Let me give you some points:

  • One main goal of using InfoArchive is to reduce the cost of storage while keeping the assets.
  • Once legacy applications are shut down, you can archive their data into InfoArchive. You can also use it to archive data from active applications, where you can build rules defining which data will be moved to InfoArchive. This can be done for flat and complex data as well as, of course, for document records.
  • Once the data is saved into InfoArchive, you can use XQuery and XForms to retrieve it and display it the way the user wants to see it.

That's it for the general overview. From a technical point of view, here is some information:

  • The Archive Service is built using a Data Service (xDB data server) and/or a Content Server. If you only have to archive metadata, the xDB service is sufficient.
  • The storage to be used is obviously EMC storage, but other types can also be used, meaning this solution can be implemented in more kinds of infrastructure.
  • To the question of what is archived, the answer is SIPs (Submission Information Packages). A SIP consists of a SIP descriptor and SIP data (metadata and/or content).
  • LWSO (lightweight sysobject) objects are stored to use less storage.
  • A search is done first against the AIPs (Archive Info Packages) and, once the object is found, against the AIUs (Archive Info Units). There is no fulltext search available on the InfoArchive layer; the reason is that an archive system generally does not use it.
  • RPS can be used to manage retention.

Open questions

So much for the "facts"; now there are some other open points which could be raised if InfoArchive is used. You can save your data in plain XML formats, but you can also define how the data is saved and how you want to search it. In that case, who will manage this, the Records & Archive team, or do you need a business analyst first? Can the defined model easily be changed for already archived information? These are technical questions, but I think the organization has to be defined first to have a successful implementation of InfoArchive.

Again, some questions are coming to my mind. And again, let's see if I can get some answers in … the next days.

 


EMC World Las Vegas – Momentum 2015 first day


Before starting to talk about the EMC event, I would just like to share my feelings about the city it takes place in. It is Las Vegas: what an amazing, impressive place! On top of that, we are in one of the most beautiful hotels on the Strip! Congratulations to EMC and thanks for this attention.

For this first day of conferences, I decided to attend a first session about the xCP platform. I usually manage content with D2 and wanted to get some background on its process-oriented sibling, xCP. Then I will go a bit further into the news and forecasts about the underlying content management platform, Documentum itself.

xCP is one of the leading applications EMC wants to promote, together with D2, within its enterprise content management offering. In summary, xCP is described as a "rapid application development platform". It means it helps developers and business analysts build applications while reducing real development work as much as possible, by providing an "extensive platform for building case management and business process solutions".
In fact it aims to build applications graphically by putting together several items, like forms and operational bricks, organized through processes, to provide the functionality business teams are looking for. Such an approach also aims to reduce development costs and improve maintainability over time for applications more complex than simple faceted records management.

Meanwhile, EMC released Documentum 7.2 platform with its several clients, D2 4.5, xCP 2.2 and Webtop 6.8.

In this release we can see several improvement areas, for example security. With Documentum 7.2, we can now store and transmit content using AES 256-bit encryption, currently considered by the US NSA strong enough to protect Top Secret information.

This version also provides enhanced capabilities through REST web services for searches, facet navigation and batched transactions, and indeed further integration extensibility for third-party software.
xPlore 1.5 also provides its own growing set of functionalities, like cross-repository subscriptions, privacy options, enhanced word splitting for more languages, and improved warm-up. It is also good to know that it keeps backwards compatibility down to Documentum 6.7 SP2.

For upcoming upgrades, EMC also provides higher-level migration tools, like one for automatic encryption upgrades in Documentum.
This day was also very rich in terms of EMC corporate communication about overall product policies and strategies for the coming years and releases.
I hope you enjoyed reading this short summary of the first day at EMC World – Momentum 2015, and I would like to thank you for your attention.

 


EMC World 2015 – Day 2 at Momentum


Second day at this amazing event. There are not only general and presentation sessions; you can also participate in a so-called "Hands-on Lab". The subject was "EMC Documentum Platform Performance Tuning". Learning by doing is another good opportunity you can use at Momentum to enhance your skills.

The session covered performance tuning using Fiddler for the HTTP requests, the DFC trace, SQL traces and how to use the related execution plans, and at the end how to tune xPlore, but only for the ingestion phase, meaning the indexing of the documents. The tuning of fulltext search was not addressed.

Most of the tips I learned were on the xPlore side: which parameters to set to increase performance or to avoid errors due to timeouts, for instance. I more or less skipped the database tuning. Why? That's not acceptable, you might say. Because we have the experts at dbi services to do this kind of tuning!

So let me give you some information.

DFC trace
In dfc.properties add:
dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=0
dfc.tracing.include_rpcs=true
dfc.tracing.mode=compact
dfc.tracing.include_session_id=true
dfc.tracing.dir=c:\dctm\trace
dfc.tracing.enable can be switched from false to true at runtime, and after a couple of seconds the trace file will be created. Of course, once the value is set back to false, the tracing stops.
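As a minimal sketch (assuming the client application reads its dfc.properties from $DOCUMENTUM/config; adjust the path for your setup), toggling the tracing could be scripted like this:

# turn DFC tracing on; the trace file appears in dfc.tracing.dir after a few seconds
sed -i 's/^dfc.tracing.enable=false/dfc.tracing.enable=true/' $DOCUMENTUM/config/dfc.properties
# reproduce the issue, then turn tracing off again
sed -i 's/^dfc.tracing.enable=true/dfc.tracing.enable=false/' $DOCUMENTUM/config/dfc.properties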

xPlore tuning
I would recommend relying on the documentation, but here are some tips:

  • disable fulltext indexing for a specific format by setting can_index=false
  • filter which documents are not indexed by excluding cabinets, folders or even document types
  • reduce the online rebuild index max queue size
  • reduce the number of indexing threads and CPS request threads
  • reduce the number of documents processed simultaneously by the CPS internal processing queue
  • increase the number of CPS daemon processes

WARNING: each parameter can have drawbacks, so take care when you change these values.

So this was in a lab; let's apply it to real cases at customer sites!

 


EMC World Las Vegas – Momentum 2015 second day


For the second day of conferences, I attended a first session about migration from Webtop to D2 or xCP. Then we attended two sessions about Documentum platform performance tuning.

We could see EMC actually putting effort into taking advantage of open source software. They have started to package the whole platform, component by component, into Docker containers, hoping to simplify upgrades from one version to another.

They also invited the audience to migrate to Documentum 7.1/7.2 because a lot of performance enhancements were made, especially for multi-core CPU support and better session pooling management; that last topic cost us some maintenance time in recent months.

The key advantage of a migration from Webtop to D2 or xCP is their capability to cover a lot of business case scenarios out of the box. For instance, when a customer wants to move customizations to D2, they said that by experience up to 50% of the features can be covered by configuration instead of coding. A great saving of time and money, as well as of maintenance costs over time and across future upgrades.

Finally, they also stated that EMC provides a few tools to ease the migration process from the former Life Sciences appliance to the current one, and from Webtop to D2 and xCP.

For Documentum platform performance tuning, I invite you to read Gérard's blog following this link.
I hope you enjoyed reading this summary of today at EMC World – Momentum 2015. Thanks for your attention.

 


EMC World Las Vegas – Momentum 2015 third day D2 news


This was more a day of networking with EMC partner contacts and third-party software vendors. Besides that, I attended a session about D2 news and what is coming next.

EMC divided the D2 enhancements into three main themes.
The first was about productivity and a modern look and feel, with:

    • graphical workflow widget
    • drag and drop from D2 to Desktop
    • better browsing with enhanced facet navigation
    • multi-document support in workflows
    • faster content transfer with new D2-BOCS for distributed environments

The second was about information integrity, with:

    • more SSO implementation support, like Tivoli
    • folder import with inner documents as virtual document

And finally about software agility, with:

    • ease of new user on-boarding with default configuration settings
    • PDF export of D2 configuration for multi environment comparison

I hope you enjoyed reading this summary of today at EMC World – Momentum 2015. Thanks for your attention.

 


EMC World 2015 – last day at Momentum


So Momentum is over. Philippe Schweitzer and I finished with a four-hour hackathon session. For Philippe the subject was "Developing an EMC Documentum application with REST, AngularJS, Bootstrap, Node.js and Socket.io" and I chose "From the Ground Up – Developing an EMC InfoArchive Solution".

But the main purpose of this post is to thank all the people we met during these four enriching days: the EMC people who held the sessions and gave demos at the booths, and Catherine Weiss (Partner Sales Manager), who introduced us to great people. Thank you also to the people from fme, Reveille Software and Flatiron, with whom we had good discussions.

The atmosphere was amazing, not only during the working days but also at the evening events organized by EMC.
In short, Momentum 2015 was a great and successful journey.

Philippe and Gerard

 


D2 performance issues due to KB3038314 IE patch


I ran into a strange issue at a customer site: when trying to open a huge virtual document in D2's right panel, the browser freezes.

It seems to be due to an Internet Explorer security patch that introduces huge performance issues. So if you run into strange issues with your web browser, check the patch level of IE. The security patch causing the issue is KB3038314.

 



Kerberos configuration for a CS 6.7 SP1


In a previous post, I shared some tips to configure Kerberos SSO with Documentum D2 3.1 SP1. Since then, I have worked on different projects to also set up Kerberos SSO on other Documentum components. In this post I will explain in detail what needs to be done to configure Kerberos SSO for the Content Server. Actually it's not that hard to do, but you may face some issues if you try to follow the official EMC documentation.
So what are the prerequisites to set up Kerberos SSO for the Content Server? In fact you just need a Content Server, of course, and an Active Directory to generate the keytab(s). Just to let you know, I used a Content Server 6.7 SP1 and an Active Directory on a Windows Server 2008 R2. Let's define the following properties:

  • Active Directory – user = cskrb
  • Active Directory – password = ##cskrb_pwd##
  • Active Directory – domain = DOMAIN.COM
  • Active Directory – hostname1 = adsrv1.domain.com
  • Active Directory – hostname2 = adsrv2.domain.com
  • Documentum – repository (docbase) = REPO

I. Active Directory prerequisites

As always when working with Kerberos on an Active Directory, the first thing to do is to create a user. So let’s create this user with the following properties:

  • User name: cskrb
  • Support AES 128 bits encryption
  • WARNING: This account MUST NOT support AES 256 bits encryption
  • Trust for Delegation to any service (Kerberos Only)
  • Password never expires
  • Account never expires
  • Account not locked

Once the user has been created, you can proceed with the keytab creation using the command prompt on the Active Directory host:

[Screenshot: ktpass command used to generate REPO.keytab]
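A hedged sketch of what this ktpass command typically looks like, using the properties defined above (verify the exact options against your environment, and remember the account must not use AES 256 bits encryption):

ktpass /princ CS/REPO@DOMAIN.COM /mapuser cskrb@DOMAIN.COM /pass ##cskrb_pwd## /crypto ALL /ptype KRB5_NT_PRINCIPAL /out REPO.keytab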
According to the EMC documentation, you can create one keytab containing several keys for several Documentum repositories. Actually, that's wrong! It's not possible in the Microsoft world to generate a keytab with more than one Service Principal Name (SPN) in it; only the Linux implementations of Kerberos allow that. If you try to do so, your Active Directory may loop forever trying to add a second SPN to the keytab. That will considerably slow down your Active Directory and it may even crash…
If you want to set up Kerberos SSO for more than one repository, you will have to create one user per repository and generate one keytab per user. So just repeat the two steps above for each repository, replacing the user name, user password and repository name… What is possible with an Active Directory is to map more than one SPN to a user. That can be useful for a Load Balancer setup, for example, but the keytab will always contain one SPN, and therefore this solution doesn't seem suitable for the Content Server.
The second remark here is that the EMC documentation often uses DES encryption only for the keytab but, as shown above, you can of course specify the encryption to use or simply specify "ALL" to add all possible encryption types to the keytab. By default Kerberos will always use the strongest encryption available. In our case, as the Content Server doesn't support AES 256 bits encryption, AES 128 bits encryption will be used instead.

II. Configuration of the Content Server side

So let's start the configuration of Kerberos SSO for the Content Server. The first thing to do is of course to transfer the keytab created previously (REPO.keytab) from the Active Directory to the Content Server's host. This host can be a Windows Server or a Linux Server; it doesn't matter as long as the server is part of your enterprise network (and properly configured). In this post, I will use a Linux server because we usually install Documentum on Linux.
During the installation of the Content Server, the installer creates some default authentication folders, some security elements, and so on. Therefore, you have to put the newly created keytab in a specific location for the Content Server to recognize it automatically. Please make sure that the keytab belongs to the Documentum installation owner (user and group) on the file system with the appropriate permissions (640). The correct location is:

$DOCUMENTUM/dba/auth/kerberos/
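A minimal sketch of these steps, assuming the keytab was copied to the home directory of the installation owner (here dmadmin):

mv ~/REPO.keytab $DOCUMENTUM/dba/auth/kerberos/
chown dmadmin:dmadmin $DOCUMENTUM/dba/auth/kerberos/REPO.keytab
chmod 640 $DOCUMENTUM/dba/auth/kerberos/REPO.keytab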

Then create the file “/etc/krb5.conf” with the following content:

[libdefaults]
noaddresses = true
udp_preference_limit = 1
default_realm = DOMAIN.COM
default_tgs_enctypes = aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc rc4-hmac
default_tkt_enctypes = aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc rc4-hmac
permitted_enctypes = aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc rc4-hmac
dns_lookup_realm = true
dns_lookup_kdc = true
passwd_check_s_address = false
ccache_type = 3
kdc_timesync = 0
forwardable = true
ticket_lifetime = 24h
clockskew = 72000

[domain_realm]
.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM
adsrv1.domain.com = DOMAIN.COM
adsrv2.domain.com = DOMAIN.COM

[realms]
DOMAIN.COM = {
  master_kdc = adsrv1.domain.com:88
  kdc = adsrv1.domain.com:88
  kpasswd = adsrv1.domain.com:464
  kpasswd_server = adsrv1.domain.com:464
  kdc = adsrv2.domain.com:88
  kpasswd = adsrv2.domain.com:464
  kpasswd_server = adsrv2.domain.com:464
}

[logging]
default = /var/log/kerberos/kdc.log
kdc = /var/log/kerberos/kdc.log

[appdefaults]
autologin = true
forward = true
forwardable = true
encrypt = true

You can of course customize this content with whatever is suitable for your environment. Moreover, if the file "/etc/krb5.conf" already exists and you don't want to, or can't, modify it, then you can still create this file wherever you want. For example, create the folder "$DOCUMENTUM/kerberos/" and put the file inside. Then edit the file "~/.bash_profile" and add the following line to it to reference this new location:

export KRB5_CONFIG=$DOCUMENTUM/kerberos/krb5.conf

Once done, simply restart your ssh session or source the file "~/.bash_profile" for the new environment variable to become available.
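For example:

source ~/.bash_profile
echo $KRB5_CONFIG    # should print /app/dctm/server/kerberos/krb5.conf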
The last thing to do is to refresh the Kerberos configuration so that the Content Server knows a keytab is available on the file system (and therefore enables the dm_krb authentication plugin). This process is known as the re-initialization of the Content Server. There are two main ways to do it: with or without Documentum Administrator (DA). When using DA, you can simply click a button while the Content Server is running, and that's the main advantage of this method. If you don't have DA installed, then I guess you will have to reboot the Content Server for the changes to take effect.
To re-initialize the Content Server using DA, here is what needs to be done:

  • Open DA in Internet Explorer
  • Log in to the repository “REPO” with the account of the Documentum installation owner or any account with sufficient permissions
  • Expand the "Basic Configuration" item
  • Click on “Content Servers”
  • Right-click on the repository you are connected to
  • Click on “Properties”
  • Check “Re-Initialize Server”
  • Click on “Ok”

Once done, you should be able to confirm that the reload of the dm_krb plugin was successful by checking the log file of the repository:

$DOCUMENTUM/dba/log/REPO.log
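A quick way to scan it for Kerberos-related lines (a sketch; the exact messages vary by version, so treat the pattern as a starting point):

grep -i "krb" $DOCUMENTUM/dba/log/REPO.log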

If everything went well, you will see lines showing that the Content Server was able to parse the keytab successfully. In a future blog, I will explain how to configure Kerberos SSO for the DFS part. Stay tuned!

 


Kerberos configuration for DFS 6.7 SP1


In my last post, I explained how to configure Kerberos for a CS 6.7 SP1. Unfortunately, if you only configure the Content Server, it will be almost useless… For this configuration to be useful, you also have to configure Kerberos SSO for the Documentum Foundation Services (DFS). That's why in this blog post I will describe step by step what needs to be done for that purpose.
So what are the prerequisites to set up Kerberos SSO for the Documentum Foundation Services? Of course you will need an application server for your DFS, a Content Server that is already installed, and an Active Directory to generate the keytab(s). Just to let you know, I used (for the DFS) a Tomcat application server on the Content Server's machine and an Active Directory on a Windows Server 2008 R2. Let's define the following properties:

  • Active Directory – user = dfskrb
  • Active Directory – password = ##dfskrb_pwd##
  • Active Directory – domain = DOMAIN.COM
  • Active Directory – hostname1 = adsrv1.domain.com
  • Active Directory – hostname2 = adsrv2.domain.com
  • Alias of the DFS’ host = csdfs.domain.com (can be a Load Balancer alias)
  • $CATALINA_HOME = /opt/tomcat

I. Active Directory prerequisites

As always when working with Kerberos on an Active Directory, the first thing to do is to create a user. So let’s create this user with the following properties:

  • User name: dfskrb
  • Support AES 128 bits encryption
  • This account MUST NOT support AES 256 bits encryption. I set it that way because the Content Server doesn't support AES 256 bits encryption, so I disabled it for the DFS part too.
  • Trust for Delegation to any service (Kerberos Only)
  • Password never expires
  • Account never expires
  • Account not locked

Once the user has been created, you can proceed with the keytab creation using the command prompt on the Active Directory host:

[Screenshot: ktpass command used to generate dfs.keytab]
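A hedged sketch of what this ktpass command typically looks like, using the properties defined above (verify the exact options against your environment):

ktpass /princ DFS/csdfs.domain.com@DOMAIN.COM /mapuser dfskrb@DOMAIN.COM /pass ##dfskrb_pwd## /crypto ALL /ptype KRB5_NT_PRINCIPAL /out dfs.keytab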
For the Content Server part, the name of the "princ" (SPN) has to be "CS/##repository_name##". For the DFS part, the EMC documentation asks you to generate a keytab with an SPN of the form "DFS/##dfs_url##:##dfs_port##". In fact, if you are going to use only one DFS url/port, then you don't need to include the port in the SPN of the DFS.
Regarding the name of the keytab: for the Content Server part, it has to be "##repository_name##.keytab" for the Content Server to recognize it automatically during the server re-initialization. For the DFS part, the name of the keytab doesn't matter because you will configure it manually.

II. Configuration of the Documentum Foundation Services side

So let's start the configuration of Kerberos SSO for the DFS. The first thing to do is of course to transfer the keytab created previously (dfs.keytab) from the Active Directory to the host of the DFS (a Linux host in my case). There is no specific location for this keytab, so you just have to put it somewhere and remember the location. For this example, I will create a folder that will contain all required elements. Please make sure that the keytab belongs to the Documentum installation owner (user and group) on the file system with the appropriate permissions (640).

[dmadmin ~]$ echo $CATALINA_HOME
/opt/tomcat
[dmadmin ~]$ mkdir /opt/kerberos
[dmadmin ~]$ mv ~/dfs.keytab /opt/kerberos
[dmadmin ~]$ chmod 640 /opt/kerberos/dfs.keytab

Create the file “/opt/kerberos/jaasDfs.conf” with the following content:

[dmadmin ~]$ cat /opt/kerberos/jaasDfs.conf
DFS-csdfs-domain-com {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
principal="DFS/csdfs.domain.com@DOMAIN.COM"
realm="DOMAIN.COM"
refreshKrb5Config=true
noTGT=true
useKeyTab=true
storeKey=true
doNotPrompt=true
useTicketCache=false
isInitiator=false
keyTab="/opt/kerberos/dfs.keytab";
};

The first line of this jaasDfs.conf file is the name of the "module". This name is derived from the SPN (or principal) of the DFS: take the SPN, keep the upper/lowercase characters, remove the realm (everything from the at-sign onwards) and replace all special characters (slash, backslash, period, colon, and so on) with a simple dash "-".
The next thing to do is to modify the DFS war file. So let’s create a backup of this file and prepare its modification:

[dmadmin ~]$ cd $CATALINA_HOME/webapps/
[dmadmin ~]$ cp emc-dfs.war emc-dfs.war.bck_$(date +%Y%m%d)
[dmadmin ~]$ cp emc-dfs.war /tmp/
[dmadmin ~]$ cd /tmp
[dmadmin ~]$ unzip emc-dfs.war -d /tmp/emc-dfs

The setup of Kerberos SSO requires some jar files that aren't necessarily present in a default installation. You can copy these jar files from the Content Server to the new DFS:

[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/jcifs-krb5-1.3.1.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/krbutil.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/vsj-license.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/vsj-standard-3.3.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/questFixForJDK7.jar /tmp/emc-dfs/WEB-INF/lib/

Once done, a Kerberos handler must be added to the DFS. For that purpose, open the file authorized-service-handler-chain.xml, locate the XML comment that starts with "Any handler using ContextFactory" and add the Kerberos handler lines just before this comment:

[Screenshot: Kerberos handler section added to authorized-service-handler-chain.xml]
Then, some Kerberos-specific configuration must be added to the web.xml file. For that purpose, open this file and add the required lines at the end, just before the closing web-app tag (before the last line):

[Screenshot: Kerberos env-entry sections added to web.xml]
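As a rough illustration only (the env-entry names below are hypothetical placeholders; take the real ones from the EMC documentation), each section follows the standard web.xml env-entry structure:

<!-- hypothetical entry names; only the env-entry-value elements are environment-specific -->
<env-entry>
    <env-entry-name>kerberos-krb5-config-file</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>/opt/kerberos/krb5.conf</env-entry-value>
</env-entry>
<env-entry>
    <env-entry-name>kerberos-jaas-config-file</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>/opt/kerberos/jaasDfs.conf</env-entry-value>
</env-entry>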
In the above configuration, only the "env-entry-value" of each "env-entry" section should be changed to match your environment. As you can see, the krb5.conf file referenced here is in /opt/kerberos. You can use the same krb5.conf file as the one used for the Content Server or you can specify a separate file. As this file can be the same for the Content Server and the DFS, I won't describe it here; just check my last post for more information about it.
So the configuration is now over and you can just repackage and re-deploy the new DFS:

[dmadmin ~]$ cd /tmp/emc-dfs/
[dmadmin ~]$ jar -cvf emc-dfs.war *
[dmadmin ~]$ $CATALINA_HOME/bin/shutdown.sh
[dmadmin ~]$ mv emc-dfs.war $CATALINA_HOME/webapps/
[dmadmin ~]$ cd $CATALINA_HOME/webapps
[dmadmin ~]$ rm -Rf emc-dfs/
[dmadmin ~]$ $CATALINA_HOME/bin/startup.sh

Once done, the Tomcat application server should be started again and the new version of the DFS WAR file should be deployed. If the Content Server and the DFS are properly set up to use Kerberos SSO, you should be able to run a .NET or Java client that gets a Kerberos ticket for the DFS and work with the DFS features.

 


ADTS missing profiles


He he, long time no see, ADTS…

It has been a long time since I last ran into trouble with ADTS. And as the new 7.2 ADTS is pretty well put together, this time the issue came from me, I admit.

Once upon a time I had to migrate from ADTS 6.7 to 7.2 for a customer. Everything went fine in DEV, same in TEST and CLONE. Then the day came to push it to PROD. And *CRACK!*, nothing worked anymore. I was devastated; I had followed the IQ (installation qualification) I wrote as accurately as I could. But absolutely no rendition was produced, even for text files…

Here are the issues I had:
– Missing profiles at loading time
– Cannot find profile when trying a rendition, whatever the format
– Cannot activate the DOC10 plugin

Doesn't sound good, huh?

The thing is that I performed the same steps as on the other environments. But here in PROD, it didn't work.

So I started looking into these missing profiles and tried to find them in the docbase. And I figured out that every missing profile was indeed absent from the docbase, but they were missing in my other installations as well!

So why were these profiles reported as missing in the CTS log here, while nothing was mentioned in DEV, TEST and CLONE?

At this point I had no idea, but I checked whether these profiles were present in 6.7, because I had already had to modify some of them in 6.7. And yes, they were present in the old version.

Maybe you have already figured out why I had these errors?

I had done one thing differently than at another customer: I kept the old cabinet Media Server, just renaming it to BCK6.7 Media Server. But I didn't know at the time that it kept the dm_media_profile objects as well.

When you install ADTS, it creates profiles in the docbase as .xml files, but their object type is actually dm_media_profile. Hence if you keep the old profiles (I kept them for comparison with the new ones, and just in case I had to roll back), you also keep these objects, and when installing the new ADTS this corrupts both the old profiles and the new ones.

So even if you want to keep a backup of your old profiles, you'd better get them out of your docbase and delete all ADTS-related files.
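To check whether old profile objects are still lingering in the docbase, a simple DQL sketch like this lists them:

select r_object_id, object_name from dm_media_profile order by object_name;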

Thus I deleted all profiles, re-installed ADTS into the docbase, and everything went fine!

With ADTS, do not keep, move forward!

 


xPlore list all ids from xhive


I ran into a case where I found a kind of 'ghost' index, which I may present in another blog. To summarize, I had to get the full list of indexes from the xPlore server.
The problem is that such a list isn't easily accessible. Hence I wondered: all indexes are stored in a database, so what if I could query this database directly?
I chose this way, and finally found some tools in dsearch/xhive/admin:
– XHAdmin
– XHCommand

The first one, XHAdmin, is a graphical tool to connect to and manage the database. It looks well done and it's easy to find our indexes. It requires credentials: the username is Administrator and the password is the same as the one you chose for the dsearch admin interface.

[Screenshots: XHAdmin graphical interface]

I didn't find an easy way to export my ids, and I definitely prefer the command line for this kind of job. So I searched the web for XHCommand and found how to launch it:
./XHCommand -d xhivedb -u Administrator -p pwd

As before, the username is Administrator and the password is the same as for the dsearch admin. Once started, this command opens a dedicated shell which gently proposes to type help for more info; that's what I did!
We can now see that we can navigate through the database like in Linux, with cd, ls, ll and so on. Hence I searched for my ids and found them as expected:
/DOCBASE_NAME/dsearch/ApplicationInfo/group
/DOCBASE_NAME/dsearch/ApplicationInfo/acl
/DOCBASE_NAME/dsearch/Data/default

The interesting thing with these commands is that we can call them directly from our Linux shell, like:
./XHCommand -d xhivedb -u Administrator -p pwd ls /DOCBASE_NAME/dsearch/ApplicationInfo/group > total_ids.txt
./XHCommand -d xhivedb -u Administrator -p pwd ls /DOCBASE_NAME/dsearch/ApplicationInfo/acl >> total_ids.txt
./XHCommand -d xhivedb -u Administrator -p pwd ls /DOCBASE_NAME/dsearch/Data/default >> total_ids.txt

This will list all ids from these collections and put them in the same file. But beware, it could take ages, so don't forget to run it with nohup.
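A minimal sketch for the largest collection, detached with nohup so that a closed session doesn't interrupt the export:

nohup ./XHCommand -d xhivedb -u Administrator -p pwd ls /DOCBASE_NAME/dsearch/Data/default >> total_ids.txt 2>&1 &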

So I gave you a first hint at xhive management in xPlore; I'll try to go deeper into this topic in another blog.

 


Upgrading to 7.2 created a new ACS. How to remove it and test it?


I ran into this strange behavior where, once upgraded from 6.7 to 7.2, a new ACS was created. I think it's because the current ACS name didn't fit the new ACS name pattern. Well, it's not a big issue to have two ACS configured, but in my case they both pointed to the same port and servlet, so… I had to remove one.

Hence, how can we know which one is used?

That’s easy, just find the acs.properties file located in:

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties

 

In this file you should find the line:

repository.acsconfig=YOUR_ACS_NAME.comACS1

 

In fact my previous ACS was named YOUR_ACS_NAME.cACS1, which is why I think a new one was created. So now you know which ACS is used, and you just have to remove the other one:

delete dm_acs_config objects where object_name = 'YOUR_OLD_ACS_NAME';

Fine, now how can we check that the ACS is working properly?

First you can paste the ACS url in your browser to check if it's running; it should look like this:

http://your-content-server-host:9080/ACS/servlet/ACS

 

If you installed your method server on a port other than 9080, use that one.

You should see the following result (maybe with a different version):

ACS Server Is Running - Version : 7.2.012.0.0114
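If you prefer the command line over a browser, the same check can be done with curl (a sketch using the URL above):

curl -s http://your-content-server-host:9080/ACS/servlet/ACS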

 

If you can't find the ACS url, log in to Documentum Administrator and navigate to:
Administration -> Distributed Content Configuration -> ACS Server
If you right-click on it, you will see the url at the bottom of the page.

At this point the ACS is running, but is Documentum using it properly?

In order to verify this point, a bit of configuration is needed. Log in to the server on which DA is installed, search for a log4j.properties file in the DA application, and add the following lines:

log4j.logger.com.documentum.acs=DEBUG, ACS_LOG
log4j.logger.com.documentum.fc.client.impl.acs=DEBUG, ACS_LOG
log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.ACS_LOG.File=${catalina.home}/logs/AcsServer.log
log4j.appender.ACS_LOG.MaxFileSize=10MB
log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.ACS_LOG.layout.ConversionPattern=%d{ABSOLUTE} %5p [%t] %c - %m%n

You may have to update the line log4j.appender.ACS_LOG.File.

Restart Tomcat or whichever web application server you have. In order to generate logs, you'll have to open a document from DA. Let's say we have a document called TESTDOC.doc.
Once you open it, you'll get around 3 to 4 lines in AcsServer.log. To verify that everything went fine, you should NOT see the following line:
INFO [Timer-161] com.documentum.acs.dfc - [DFC_ACS_LOG_UNAVAILABLE] "userName="test", objectId="0903d0908010000", objectName="TESTDOC.doc"", skip unavailable "ACS" serverName="YOUR_ACS_NAME_HERE" protocol "http"

Instead you should see a kind of ticket/key formed of many letters and numbers. This validates that you have been served by the ACS.

 


Documentum Multiple ADTS: Switching rendition queue


As a follow-up to my previous posts about having two rendition servers for one docbase (see below), I'll show you how to simply switch a rendition queue item to the other server:
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-1/
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-2/

I had an issue at a customer site where one of the two rendition servers had been stuck for two days. As I explained in my previous posts, each server reserves a group of items from the queue for itself to process. Let's say the threshold is set to 5 items: each server will reserve 5 items in dmi_queue_item and set the attribute sign_off_user to itself, e.g. RNDSERVER_DOCBASE1.

Then it will process each item one by one; once one is done, it will reserve a new one from the queue, and so on.

The problem is: if the rendition server is stuck for whatever reason, all reserved items will NOT go back to the available pool. They stay reserved by THIS rendition server until you fix it and it starts processing them again.

You can imagine the situation at the customer: some documents had not been rendered for two days!
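Before touching anything, you can check how many items each instance has currently reserved; a sketch using the server naming from this example:

select sign_off_user, count(*) as reserved from dmi_queue_item where sign_off_user like 'RNDSERVER%' group by sign_off_user;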

So here is the simplest solution to put the items back in the pool:

update dmi_queue_item objects set sign_off_user ='' where sign_off_user ='RNDSERVER_DOCBASE1';

Hence all items will be available again. The other rendition server should now reserve them, as the stuck server can't reserve more items.

If a big file is being processed by the first server and you want the remaining documents to be processed by the other one, you can also reserve items yourself manually with:

update dmi_queue_item objects set sign_off_user='RNDSERVER2_DOCBASE1' where item_id in ('09xxxx','09xxxx');

If you have any questions please use the comment section.

 


Documentum Administrator UCF Troubleshooting


Maybe you have had issues with UCF in DA like me. I had been hitting them for no apparent reason for a few days at a customer site. The problem was that we use SSL with DA, and the Unified Content Facilities (UCF) wasn't happy about it.
Thus, in this short blog I'll talk about troubleshooting UCF.

The error happened when trying to edit, view or create documents; I got a popup saying an error occurred with UCF.

First, we must know our enemy in order to fight it!

UCF stands for Unified Content Facilities. It's a Java applet made by EMC and used by WDK applications to manage and optimize content transfer between the application and your workstation. Thanks to UCF you can transfer large files with compression and reconnect if the network drops some packets. The applet is downloaded to your workstation at runtime when you connect to a WDK application.
You can find the UCF configuration in your user folder like follow:
C:\Users\<USER>\Documentum\ucf

Refresh UCF Cache

Before going deeper into the debugging, you may want to clear the UCF cache first and re-download the latest version from the server. In order to do so, perform the following steps:
Clear your browser cache. If you have IE, go to Tools -> Delete Browsing History (or press CTRL+SHIFT+DEL).
Then check every checkbox and click Delete.

[Screenshot: IE Delete Browsing History dialog]

Close the browser afterwards.

Now make sure that no browser is pointing to a WDK application, go to C:\Users\<USER>\Documentum and try deleting the ucf folder.
If you get an error telling you it is still in use, open the task manager, search for javaw.exe processes and close them down.
You should be able to delete the ucf folder now.

Also clear the cached UCF jar files by opening the Java control panel: Control Panel -> search for Java -> General tab -> Temporary Internet Files -> Settings -> Delete Files.

Now test again by opening Documentum Administrator and creating/editing a document. You shouldn't get the popup error about UCF anymore.

If you have reached this point in the blog, that means you didn't get rid of the problem; neither did I. At this point we made some corrections, but we still don't know what the real UCF error is; we only get a stack trace saying UCF failed. We can now enable UCF tracing to see if something more interesting is written in the logs.
You can enable the tracing on both sides, the server and your workstation. The easiest is to begin with your workstation, so go back to the ucf folder C:\Users\<USER>\Documentum\ucf
Then navigate to <PCNAME>\shared\config and edit ucf.client.config.xml
Add the following options inside the <configuration> element:

<option name="tracing.enabled">
    <value>true</value>
</option>
<option name="debug.mode">
    <value>true</value>
</option>

Also edit the file ucf.client.logging.properties, changing .level=WARNING to .level=ALL

Now reproduce the error and check what has been written to C:\Users\<USER>\Documentum\Logs

If you can't see what the problem is, you can also activate the tracing on the web server by editing ../WEB-INF/classes/ucf.server.config.xml the same way, but note that you need to restart the web server for it to take effect.

The errors in the generated log should allow you to find the real cause of the UCF error. In my case it was the SSL handshake that was failing.

 



Documentum story – Management of DARs and unexpected errors


During a recent project at one of our customers, we often saw the message "Unexpected errors occurred while installing DARs". In our case, this message appeared when installing, migrating or upgrading a docbase on an already existing Content Server. We never saw it during the initial installation of our repositories, but we started to see it some months later, with the first migration/upgrade. In this blog I will show you where this issue can come from and how DARs are managed by Documentum for new/migrated docbases. In a future blog I will present a home-made script that can be used to manually install DARs on docbases, with tips, and so on.

 

For this blog, let’s use the following:

  • Documentum CS 7.2
  • RedHat Linux 6.6
  • $DOCUMENTUM=/app/dctm/server
  • $DM_HOME=/app/dctm/server/product/7.2

 

Most of the time, these errors are thrown because Documentum isn't able to install the needed DARs, but what's the reason behind that? First of all, there is one important thing to know: when installing a new docbase, Documentum checks which DARs should be installed by default. This list is dynamically generated based on the following file:

[dmadmin@content_server_01 ~]$ cat $DM_HOME/install/darsAdditional.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<actions>
    <dar name="TCMReferenceProject">
        <description>TCMReferenceProject</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/TCMReferenceProject.dar</darFile>
    </dar>
    <dar name="Forms">
        <description>Forms</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/Forms.dar</darFile>
    </dar>
    <dar name="Collaboration Services">
        <description>Collaboration Services</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/Collaboration Services.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="xcp">
        <description>xCP</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/xcp.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="bpm">
        <description>BPM</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/BPM.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="C2-DAR">
        <description>C2-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/C2-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="D2-DAR">
        <description>D2-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/D2-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="D2Widget-DAR">
        <description>D2Widget-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/D2Widget-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="D2-Bin-DAR">
        <description>D2-Bin-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/D2-Bin-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="O2-DAR">
        <description>O2-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/O2-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
</actions>

 

Some of the DARs inside this file are configured by Documentum directly (the BPM & xcp DARs) and some others have been added by us manually (the D2 DARs). If you want to install new DARs in future installations of a docbase, you can just update this file with a new section using this template:

    <dar name="DAR_NAME">
        <description>DAR_NAME</description>
        <darFile>/ABSOLUTE_LOCATION_OF_FILE/DAR_NAME.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>

 

By default Documentum always puts the DARs inside the folder $DM_HOME/install/DARsInternal/, so I would recommend you do the same: update the xml file and that's it. Now this *can* also bring some trouble, where some DARs aren't installed anymore with the error shown at the beginning of this blog, and the reason for that is, most of the time, simply a space in the name of the DAR… Yes, from time to time, depending on the DARs, Documentum might not be able to properly manage spaces in DAR names. It doesn't always happen, and that's the annoying part, because I couldn't find any logical pattern to the behavior.

 

There is a way to verify which DARs might cause this issue and which ones will not: when installing a CS patch, the folder "$DOCUMENTUM/patch/bin" is usually created, and inside it there is a file named "repositoryPatch.sh". This script is used by the patch to do some work and to install some DARs if needed. The interesting thing is that this script includes a small bug which you can use to find the troublesome DARs, and you can also easily fix the script. After doing that, you will be able to use this script for all DARs, whether their names include spaces or not. So let's take a look at the default file on one of our Content Servers:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM/patch/bin/repositoryPatch.sh | grep "^dars"
dars="/app/dctm/server/product/7.2/install/DARsInternal/LDAP.dar,/app/dctm/server/product/7.2/install/DARsInternal/MessagingApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/MailApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/Extended Search - SearchTemplates.dar,/app/dctm/server/product/7.2/install/DARsInternal/ATMOS Plugin.dar,/app/dctm/server/product/7.2/install/DARsInternal/VIPR Plugin.dar"

 

As you can see above, you just need to define the full path of each DAR file, separated by commas. To fix this script for all DARs, a first solution would be to rename the DARs, but there is actually a simpler one: use single quotes instead of double quotes in the dars definition:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM/patch/bin/repositoryPatch.sh | grep "^dars"
dars='/app/dctm/server/product/7.2/install/DARsInternal/LDAP.dar,/app/dctm/server/product/7.2/install/DARsInternal/MessagingApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/MailApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/Extended Search - SearchTemplates.dar,/app/dctm/server/product/7.2/install/DARsInternal/ATMOS Plugin.dar,/app/dctm/server/product/7.2/install/DARsInternal/VIPR Plugin.dar'

 

By doing that, you have corrected the bug in the script and you should now be able to execute it to deploy all DARs to a single repository using:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/patch/bin/repositoryPatch.sh DOCBASE USERNAME PASSWORD

 

Note: As always, if you are using the Installation Owner as the USERNAME, then the PASSWORD can be a dummy password like “xxx” since there is the local trust on the Content Server.
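To verify afterwards which DARs ended up installed in a docbase, you can query the dmc_dar objects, for example through idql (a simple sketch):

select object_name, r_creation_date from dmc_dar order by object_name;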

 

This concludes this blog about the main issue we can face when installing a DAR, about how to manage the automatic deployment of some DARs in new docbases, and finally about how to use the script provided by a patch to do it manually. See you!

 


Documentum story – Manual deployment of X DARs on Y docbases


In a previous blog (click here), I presented a common issue that might occur during the installation of some DARs and how to handle it with what Documentum provides, but there are some limitations. The script repositoryPatch.sh is pretty good (except for the small bug explained in the other blog), but its execution is limited to one docbase, and it is pretty tedious to always provide the full path of the DAR files, knowing that usually all DARs will be in the same place (or at least that's what I would recommend). In addition, repositoryPatch.sh might not be available on your Content Server, because it normally only appears after applying a Content Server patch. Therefore we usually use our own shell script to deploy X DARs on Y docbases with a single command.

 

For this blog, let’s use the following:

  • Documentum CS 7.2
  • RedHat Linux 6.6
  • $DOCUMENTUM=/app/dctm/server
  • $DM_HOME=/app/dctm/server/product/7.2

 

In this blog I will propose three different solutions that avoid the issue with spaces in DAR names and let you deploy all the DARs you want on all the docbases you define.

  1. Variable with space-separated list
#!/bin/bash
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
dar_list=("DAR 1.dar" "DAR 2.dar" "DAR 3.dar")
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        for dar in "${dar_list[@]}"
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

This is probably not the best solution, because you have to manually add double quotes around each DAR name, which is a little tedious unless you already have such a list. Please note that with this script, all DARs must be in the folder $DM_HOME/install/DARsInternal/, which is the folder Documentum uses by default for DARs.

 

  2. No variable but still a space-separated list
#!/bin/sh
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        for dar in "DAR 1.dar" "DAR 2.dar" "DAR 3.dar"
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

Same as before, except you don't need the array trick since the list of DARs is in the for loop directly, but you still need to manually put double quotes around the file names.

 

  3. Variable with comma-separated list
#!/bin/sh
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
dar_list="DAR 1.dar,DAR 2.dar,DAR 3.dar"
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        IFS=',' ; for dar in $dar_list
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

This version is my preferred one, because all you need is a list of all DARs to be installed, separated by simple commas; that's pretty easy to obtain and simpler to manage than double quotes everywhere. All these versions will produce the following output, showing that the script works properly even for DARs containing spaces in their names:

Deploying DAR 1.dar into DOCBASE1
Deploying DAR 2.dar into DOCBASE1
Deploying DAR 3.dar into DOCBASE1
Deploying DAR 1.dar into DOCBASE2
Deploying DAR 2.dar into DOCBASE2
Deploying DAR 3.dar into DOCBASE2
Deploying DAR 1.dar into DOCBASE3
Deploying DAR 2.dar into DOCBASE3
Deploying DAR 3.dar into DOCBASE3

 

So that was the deployment of several DARs in several docbases. By default Documentum considers the username to be "dmadmin". If this isn't the case, then this script will not work in its current state. Yes, I know, we specified the user in the script, but Documentum doesn't care and it will fail if you aren't using dmadmin. If you need to specify another name for the Installation Owner, you need to do three additional things. The first one is to add a new parameter to the script, which will then look like the following:

#!/bin/sh
docbases="DOCBASE1 DOCBASE2 DOCBASE3"
dar_list="DAR 1.dar,DAR 2.dar,DAR 3.dar"
username="INSTALL_OWNER"
password="xxx"
dar_location="/app/dctm/server/product/7.2/install/DARsInternal"
 
for docbase in $docbases
do
        IFS=',' ; for dar in $dar_list
        do
                darname=${dar##*/}
 
                echo "Deploying $darname into $docbase"
                ts=$(date "+%Y%m%d-%H%M%S")
 
                $JAVA_HOME/bin/java -Ddar="$dar_location/$dar" \
                        -Dlogpath="$dar_location/dar-deploy-$darname-$docbase-$ts.log" \
                        -Ddocbase=$docbase -Duser=$username -Ddomain= -Dpassword="$password" \
                        -Dinstallparam="$dar_location/installparam.xml" \
                        -cp $DM_HOME/install/composer/ComposerHeadless/startup.jar \
                        org.eclipse.core.launcher.Main \
                        -data $DM_HOME/install/composer/workspace \
                        -application org.eclipse.ant.core.antRunner \
                        -buildfile $DM_HOME/install/composer/deploy.xml
        done
done

 

After doing that, the second thing to do is to create the file installparam.xml that we referenced above. In this case, I put this file in $DM_HOME/install/DARsInternal but you can put it wherever you want.

[dmadmin@content_server_01 ~]$ cat $DM_HOME/install/DARsInternal/installparam.xml
<?xml version="1.0" encoding="UTF-8"?>
<installparam:InputFile xmlns:installparam="installparam" xmlns:xmi="http://www.omg.org/XMI" xmi:version="2.0">
    <parameter value="dmadmin" key="YOUR_INSTALL_OWNER"/>
</installparam:InputFile>

 

Just replace YOUR_INSTALL_OWNER in this file with the name of your Installation Owner. Finally, the last thing to do is to update the buildfile. In our script, we are using the default one provided by EMC. In this buildfile, you need to specifically tell Documentum to take a custom parameter file into account, which is done by adding a single line in the emc.install XML tag:

[dmadmin@content_server_01 ~]$ grep -A5 emc.install $DM_HOME/install/composer/deploy.xml
        <emc.install dar="${dar}"
                     docbase="${docbase}"
                     username="${user}"
                     password="${password}"
                     domain="${domain}"
                     inputfile="${installparam}" />

 

Once this is done, you can just restart the deployment of DARs and it should be successful this time. Another way to specify a different Installation Owner, or to add more install parameters, is to not use the default buildfile provided by EMC but your own custom buildfile. This is an ANT file (XML with project, target, and so on) that defines exactly what to do, so it is highly customizable. So yeah, there are a lot of possibilities!
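For illustration, here is a minimal sketch of what such a custom buildfile could look like. It simply reproduces what the default deploy.xml does (the emc.install task is provided by the Composer Headless environment and the project/target names are arbitrary); you would then point the -buildfile option of the script to this file instead of the default deploy.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project name="dardeployer" default="deploy">
    <target name="deploy">
        <!-- Same properties as before, passed through the -D options of the java command -->
        <emc.install dar="${dar}"
                     docbase="${docbase}"
                     username="${user}"
                     password="${password}"
                     domain="${domain}"
                     inputfile="${installparam}" />
    </target>
</project>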

Note: Once done, don't forget to remove the added inputfile line from the file deploy.xml ;)

 

Hope you enjoyed this blog and that this will give you some ideas about how to improve your processes or how to do more with less. See you soon!

 

Cet article Documentum story – Manual deployment of X DARs on Y docbases est apparu en premier sur Blog dbi services.

Documentum story – dm_LogPurge and dfc.date_format


What is the relation between dfc.date_format and dm_LogPurge? This is the question we had to answer when we hit an issue with the dm_LogPurge job.
As usual, once a repository has been created, we configure several Documentum jobs for the housekeeping.
One of them is dm_LogPurge. It is configured to run once a day with a cutoff_days of 90 days.
Everything ran fine until we made another change.
At the request of an application team, we had to change the dfc.date_format to dfc.date_format=dd/MMM/yyyy HH:mm:ss to allow the D2 clients to display months as letters rather than digits.
This change fulfilled the application requirement but, from that day on, the dm_LogPurge job started to remove far too many log files (not to say ALL of them). :(

So let's explain how we proceeded to find out the reason for the issue and, more importantly, the solution to avoid it.
We were informed not by noticing that too many files had been removed but by checking the repository log file. BTW, this file is checked automatically using Nagios with our own dbi scripts. So in the repository log file we had errors like:

2016-04-11T20:30:41.453453      16395[16395]    01xxxxxx80028223        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213d2 failed "
2016-04-11T20:30:41.453504      16395[16395]    01xxxxxx80028223        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content of StateOfDocbase sysobject. "
2016-04-11T20:26:10.157989      14679[14679]    01xxxxxx80028220        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213c7 failed "
2016-04-11T20:26:10.158059      14679[14679]    01xxxxxx80028220        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content

 

Based on the time stamp, I saw that the issue could be related to dm_LogPurge. So I checked the job log file as well as the folders which are cleaned out. In the folder, all old log files had been removed:

[dmadmin@content_server_01 log]$ date
Wed Apr 13 06:28:35 UTC 2016
[dmadmin@content_server_01 log]$ pwd
$DOCUMENTUM/dba/log
[dmadmin@content_server_01 log]$ ls -ltr REPO1*
lrwxrwxrwx. 1 dmadmin dmadmin      34 Oct 22 09:14 REPO1 -> $DOCUMENTUM/dba/log/<hex docbaseID>/
-rw-rw-rw-. 1 dmadmin dmadmin 8540926 Apr 13 06:28 REPO1.log

 

To have more information, I set the trace level of the dm_LogPurge job to 10 and analyzed the trace file.
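For reference, raising the trace level can be done directly through IAPI (a sketch; the same change can also be made in Documentum Administrator, and the level should be reset to 0 afterwards):

API> retrieve,c,dm_job where object_name = 'dm_LogPurge'
...
08xxxxxx80000362
API> set,c,l,method_trace_level
SET> 10
...
OK
API> save,c,l
...
OK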
In the trace file we had:

[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,sessionconfig,r_date_format ") ==> "31/1212/1995 24:00:00 "
[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,08xxxxxx80000362,method_arguments[ 1] ") ==> "-cutoff_days 90 "

 

So why did we have 31/1212/1995?

Using API, I confirmed an issue related to the date format:

API> get,c,sessionconfig,r_date_format
...
31/1212/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
14/Apr/2016 08:36:52

(1 row affected)

 

Date format? As all our changes are documented, I easily found that we had changed the dfc.date_format for the D2 application.
By cross-checking with another installation, used by another application, where we did not change the dfc.date_format, I could confirm that the issue was related to this dfc parameter change.

Without dfc.date_format in dfc.properties:

API> get,c,sessionconfig,r_date_format
...
12/31/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
4/14/2016 08:56:13

(1 row affected)

 

Just to be sure that I did not miss something, I also checked that not all log files were removed after starting the job manually. They were still there.
Now the obvious solution would be to roll back the dfc.date_format change, but this would only help the platform, not the application team. As the initial dfc.date_format change had been validated by EMC, we had to find a solution for both teams.

After investigating, we found the final solution:
Add dfc.date_format=dd/MMM/yyyy HH:mm:ss to the dfc.properties file of the ServerApps (i.e. directly on the JMS!)

With this solution, the dm_LogPurge job no longer removes too many files and the Application Team can still use months written as letters in its D2 applications.
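For illustration, the entry simply goes into the JMS copy of dfc.properties (the path below is an example and depends on your JMS/JBoss version and layout):

# e.g. in .../DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties
dfc.date_format=dd/MMM/yyyy HH:mm:ss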

 

 

Cet article Documentum story – dm_LogPurge and dfc.date_format est apparu en premier sur Blog dbi services.

Documentum story – ADTS not working anymore?


A few weeks ago, on one of our Documentum environments, we found out thanks to our monitoring that renditions weren't being generated anymore by our CTS/ADTS Server… This happened in a sandbox environment where a lot of dev/testing was done in parallel between EMC, the different Application Teams and the Platform/Architecture Team (us). A lot of changes at the same time means that it might not be easy to find out what caused this issue…

 

After a few checks on our monitoring scripts, just to ensure that the issue wasn't the monitoring itself, it appeared that this part was working properly and indeed the renditions weren't being generated anymore. We then checked the configuration of the docbase/rendition server but didn't find anything suspicious on the configuration side, and therefore we checked the logs of the Rendition Server. The CTS/ADTS Server often prints a lot of different errors that are all linked but appear to have different root causes. Therefore, to know which error is really relevant, I cleaned up the log file (stop the CTS/ADTS, back up the log file and remove it) and then launched our monitoring script, which basically removes all existing renditions for a test document, if any, and then requests a new set of renditions to be generated by the CTS/ADTS Server.
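Under the hood, the rendition request issued by our monitoring script essentially boils down to queuing an item for the ADTS queue user (a sketch for a PDF rendition, reusing the test document's id from the logs below; the call returns the id of the created dmi_queue_item):

API> queue,c,093f245a800a303f,dm_autorender_win31,rendition,0,F,,rendition_req_ps_pdf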

 

After doing that, it was clear that the following error was the real one I needed to take a look at:

11:14:58,562  INFO [ Thread-61] CTSThreadPoolManagerImpl -       Added ICTSTask to the ICTSThreadPoolManager: dm_transcode_content
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Start. About to get Next ICTSTask from pool manager...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       ICTSThreadPoolManager: removing first item from the list for processing...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Removing a task to execute it. Number in waiting list: 1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Next CTSTask received...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       CTSThreadPoolManager has threadlimit -1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Processing next CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get notifier from CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get session from CTSTask...
11:14:59,203  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get RUN CTSTask...
11:14:59,859  WARN [Thread-153] CTSOperationsUtils -       [BOCS/ACS] exportContentFiles error - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: java.security.cert.CertPathBuilderException: Could not build a validated path.

https://content_server_01:9082/ACS/servlet/ACS?command=read&version=2.3&docbaseid=0f3f245a&basepath=%2Fdata%2Fdctm%2Frepo01%2Fcontent_storage_01%2F003f245a&filepath=80%2F06%2F3f%2F46.docx&objectid=093f245a800a303f&cacheid=dAAEAgA%3D%3DRj8GgA%3D%3D&format=msw12&pagenum=0&signature=wnJx0Z9%2Brzhk3ZMWNQj5hkRq1ZtAwqZGigeLdG%2FLUsc8WDs8WUHBIPf5FHbrYsmbU%2Bby7pTbxtcxtcwMsGIhwyLzREkrec%2BZzMYY3bLY88sad%2BLlzJfqzYveIEu4iebZnOwm4g%2FxyZzfR3C4Yu3W5FgBaxulngiopjVMe587B6k%3D&servername=content_server_01ACS1&mode=1&timestamp=1465809299&length=12586&mime_type=application%2Fvnd.openxmlformats-officedocument.wordprocessingml.document&parallel_streaming=true&expire_delta=360

11:14:59,859 ERROR [Thread-153] CTSThreadPoolManagerImpl -       Exception in CTSThreadPoolManagerImpl, notification :
com.documentum.cts.exceptions.internal.CTSServerTaskException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
com.documentum.cts.exceptions.CTSException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
DfException:: THREAD: Thread-153; MSG: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (090f446e800a303f, msw12, null); ERRORCODE: ff; NEXT: null
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx5(CTSOperationsUtils.java:626)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx4(CTSOperationsUtils.java:332)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx3(CTSOperationsUtils.java:276)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx2(CTSOperationsUtils.java:256)
                at com.documentum.cts.impl.services.task.CTSTask.exportInputContent(CTSTask.java:4716)
                at com.documentum.cts.impl.services.task.CTSTask.retrieveInputURL(CTSTask.java:4594)
                at com.documentum.cts.impl.services.task.CTSTask.initializeFromCommand(CTSTask.java:2523)
                at com.documentum.cts.impl.services.task.CTSTask.execute(CTSTask.java:922)
                at com.documentum.cts.impl.services.task.CTSTaskBase.doExecute(CTSTaskBase.java:514)
                at com.documentum.cts.impl.services.task.CTSTaskBase.run(CTSTaskBase.java:460)
                at com.documentum.cts.impl.services.thread.CTSTaskRunnable.run(CTSTaskRunnable.java:207)
                at java.lang.Thread.run(Thread.java:745)
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotificationAdmin : false
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotification : true

 

Ok, so now it is clear that the error is actually the following one: “java.security.cert.CertPathBuilderException: Could not build a validated path”. This always means that a specific SSL Certificate Chain isn't trusted. As you can see above, “BOCS/ACS” is mentioned on the same line and the line just below contains the URL of the ACS… That rang a bell: one of the changes planned for that day was the installation and activation of the D2-BOCS on this environment. So what is the link between the ACS URL and the D2-BOCS installation? Well, when installing the D2-BOCS, if you want to keep your environment secured, you need the ACS to be switched to HTTPS, because the D2-BOCS forces D2 to use the ACS URLs to download documents to the client's workstation, whereas D2 doesn't use the ACS at all when there is no D2-BOCS installed. The D2-BOCS installation therefore isn't linked to the CTS/ADTS at all, but one of our prerequisites for it was to set up the ACS in HTTPS, and that is linked to the CTS/ADTS Server because the CTS/ADTS actually uses the ACS to download documents, as you can see in the error above.

 

Now that we knew what the error was, and just to confirm it, I switched the ACS URL back to HTTP (using DA: Administration > Distributed Content Configuration > ACS Servers > Right-click on ACS objects > Properties > ACS Server Connections) and re-initialized the Content Server (using DA: Administration > Basic Configuration > Content Servers > Right-click on CS objects > Properties > Check “Re-Initialize Server” and click OK).

 

Right after doing that, the monitoring switched back to green, meaning that renditions were being created again, so this was indeed the one and only issue. So what do we do if we want to use the ACS in HTTPS in combination with renditions? We just have to tell the CTS/ADTS Server that it can trust the ACS SSL Certificate, and this is done by updating the cacerts file of the Java used by the Rendition Server. This is done pretty easily using the following commands, for which I will suppose that the Rendition Server has been installed on a D: drive under “D:\CTS”.

 

So the first thing to do is to upload your Certificate Chain to the Rendition Server and put it under “D:\certs” (I will suppose there are two SSL Certificates in the chain: a Root and a Gold one). Then simply start a command prompt as Administrator and execute the following commands to update the cacerts file of Java:

D:\> copy D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts.bck_YYYYMMDD
        1 file(s) copied.

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias root_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Root_CA.cer
Enter keystore password:
…
Trust this certificate? [no]:  yes
Certificate was added to keystore
 
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias gold_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Gold_CA1.cer
Enter keystore password:
Certificate was added to keystore
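
If you want to double-check what was imported, keytool can list the entries (a sketch; adjust the alias as needed):

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -alias root_ca
Enter keystore password:
root_ca, ..., trustedCertEntry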

 

Now just switch the ACS to HTTPS again, restart the Rendition services using the Windows Services console or the command line, and the next time you request a rendition it will work without errors, even in HTTPS. It's actually a very common mistake: setting up SSL on the Content Server side is great, but you must not forget that other components might use what you just switched to HTTPS, and these additional components need to trust your SSL Certificates too!

 

Note 1: in our case, the WebLogic Server hosting D2 was already in HTTPS and therefore already trusted the Internal Root & Gold SSL Certificates, which is why we could use the ACS in HTTPS from D2 without issue.

Note 2: in case you didn't know it yet, I think it is now clear that the CTS/ADTS Server uses the ACS to download files… Therefore, if you want a secured environment, even without D2-BOCS, you absolutely need to switch your ACS to HTTPS!

 

Cet article Documentum story – ADTS not working anymore? est apparu en premier sur Blog dbi services.

Documentum story – IAPI login with a DM_TICKET for a specific user


During our last project, one of the Application Teams requested our help because they needed to execute some commands in IAPI with a specific user for which they didn't know the password. They tried to use a DM_TICKET, as I had suggested, but weren't able to make it work. Therefore I gave them detailed explanations of how to do that, and I thought I should do the same in this blog because maybe a lot of people don't know how to do it.

 

So let's begin! The first thing to do is obviously to obtain a DM_TICKET… For that purpose, you can log in to the Content Server and use the local trust to log in to the docbase with the Installation Owner (I will use “dmadmin” below). As just said, there is a local trust on the Content Server, so you can put any password: the login will always work for the Installation Owner (if the docbroker and docbase are up, of course…):

[dmadmin@content_server_01 ~]$ iapi DOCBASE -Udmadmin -Pxxx
 
 
        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084
 
 
Connecting to Server using docbase DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a8000b7ff started for user dmadmin."
 
 
Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0

 

Ok, so now we have a session with the Installation Owner and we can therefore get a DM_TICKET for the specific user I was talking about before. In this blog, I will use “adtsuser” as the “specific user” (the ADTS user used for renditions). Getting a DM_TICKET is really simple in IAPI:

API> getlogin,c,adtsuser
...
DM_TICKET=T0JKIE5VTEwgMAoxMwp2ZXJzaW9uIElOVCBTIDAKMwpmbGFncyBJTlQgUyAwCjEKc2VxdWVuY2VfbnVtIElOVCBTIDAKMTgwCmNyZWF0ZV90aW1lIElOVCBTIDAKMTQ1MDE2NzIzNwpleHBpcmVfdGltZSBJTlQgUyAwCjE0NTAxNjc1MzcKZG9tYWluIElOVCBTIDAKMAp1c2VyX25hbWUgU1RSSU5HIFMgMApBIDggYWR0c3VzZXIKcGFzc3dvcmQgSU5UIFMgMAowCmRvY2Jhc2VfbmFtZSBTVFJJTkcgUyAwCkEgMTEgU3ViV2F5X2RlbW8KaG9zdF9uYW1lIFNUUklORyBTIDAKQSAzMSBQSENIQlMtU0QyMjAwNDYuZXUubm92YXJ0aXMubmV0CnNlcnZlcl9uYW1lIFNUUklORyBTIDAKQSAxMSBTdWJXYXlfZGVtbwpzaWduYXR1cmVfbGVuIElOVCBTIDAKMTEyCnNpZ25hdHVyZSBTVFJJTkcgUyAwCkEgMTEyIEFBQUFFSWlRMHFST1lIZEFrZ2hab3hTUUEySityd2xPdnZVcVdKbFdVdTUrR2lDV3ZtY1dkRzVwZnRwVWRDeVVldE42QjVOMnVxajZwYnI3MEthaVNpdGU5aWdmRk43bDA0cjM0d0JtYlloaUpQWXgK
API> exit
Bye

 

Now we have a DM_TICKET for the user “adtsuser”, so let's try to use it. You can try to log in the “common” way as I did above, but that will just not work because what we got is a DM_TICKET, not a valid password. Therefore you will need something else:

[dmadmin@content_server_01 ~]$ iapi -Sapi
Running with non-standard init level: api
API> connect,DOCBASE,adtsuser,DM_TICKET=T0JKIE5VTEwgMAoxMwp2ZXJzaW9uIElOVCBTIDAKMwpmbGFncyBJTlQgUyAwCjEKc2VxdWVuY2VfbnVtIElOVCBTIDAKMTgwCmNyZWF0ZV90aW1lIElOVCBTIDAKMTQ1MDE2NzIzNwpleHBpcmVfdGltZSBJTlQgUyAwCjE0NTAxNjc1MzcKZG9tYWluIElOVCBTIDAKMAp1c2VyX25hbWUgU1RSSU5HIFMgMApBIDggYWR0c3VzZXIKcGFzc3dvcmQgSU5UIFMgMAowCmRvY2Jhc2VfbmFtZSBTVFJJTkcgUyAwCkEgMTEgU3ViV2F5X2RlbW8KaG9zdF9uYW1lIFNUUklORyBTIDAKQSAzMSBQSENIQlMtU0QyMjAwNDYuZXUubm92YXJ0aXMubmV0CnNlcnZlcl9uYW1lIFNUUklORyBTIDAKQSAxMSBTdWJXYXlfZGVtbwpzaWduYXR1cmVfbGVuIElOVCBTIDAKMTEyCnNpZ25hdHVyZSBTVFJJTkcgUyAwCkEgMTEyIEFBQUFFSWlRMHFST1lIZEFrZ2hab3hTUUEySityd2xPdnZVcVdKbFdVdTUrR2lDV3ZtY1dkRzVwZnRwVWRDeVVldE42QjVOMnVxajZwYnI3MEthaVNpdGU5aWdmRk43bDA0cjM0d0JtYlloaUpQWXgK
...
s0

 

Pretty simple, right? So let’s try to use our session like we always do:

API> retrieve,c,dm_server_config
...
3d3f245a80000102
API> dump,c,l
...
USER ATTRIBUTES

  object_name                                   : DOCBASE
  title                                         : 
  ...

 

And that's it: you have a working session with a specific user without needing to know any password, you just have to obtain a DM_TICKET for this user using the local trust of the Installation Owner!
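
If you need this regularly, the two steps can be chained in a small script (a minimal sketch, assuming the local trust is available; DOCBASE and adtsuser are examples to adapt):

#!/bin/sh
# Get a ticket for adtsuser through the dmadmin local trust (any password works)
ticket=$(echo "getlogin,c,adtsuser" | iapi DOCBASE -Udmadmin -Pxxx | grep "^DM_TICKET=")

# Reuse the ticket immediately (by default it is only valid for a few minutes)
iapi -Sapi <<EOF
connect,DOCBASE,adtsuser,${ticket}
retrieve,c,dm_server_config
dump,c,l
EOF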

 

Cet article Documentum story – IAPI login with a DM_TICKET for a specific user est apparu en premier sur Blog dbi services.
