
Showing posts from August, 2014

High Number of JOBLG Files

Call transaction SP12 and go to the menu path TemSe Database > Memory Allocation. The column "Bytes in files" shows the data that goes to the filesystem. At the very bottom, the report breaks the data down by date, and by hour for the date on which the report is run.

A high number of JOBLG files in the global directory is usually due to orphaned job logs. You can delete all orphaned job log files using the report RSTS0043. For cleaning up the files, you can rely on the following SAP notes:

6604 - Deleting job logs at operating system level
16513 - File system is full - what do I do?
48400 - Reorganization of TemSe and Spool
666290 - Deleting "orphaned" job logs
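
Before cleaning up, it can be useful to see how many JOBLG files exist and how much space they occupy. A minimal shell sketch, assuming the usual global directory path /usr/sap/<SID>/SYS/global and that the job log files carry JOBLG in their names - adjust both for your system:

# Count job log files in the global directory (path and name pattern are assumptions)
find /usr/sap/<SID>/SYS/global -type f -name '*JOBLG*' | wc -l
# Approximate total space used by those files
find /usr/sap/<SID>/SYS/global -type f -name '*JOBLG*' -exec du -ch {} + | tail -1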

Print Error: Internal Error

Some popular errors and the reasons for those errors:

Internal Error (-6723) occurred
This error occurs if fields from the system layout are deleted while creating the custom template. Create the custom template without deleting the system layout fields. If the fields are not required, they may be marked as hidden fields.

Internal error (-4007) occurred
If this error occurs when printing more than one copy from the printer window, press print preview and then print the report.

Internal error (-101) occurred
"Internal error" is misleading here. This error occurs if the printer or print driver is wrong, missing or corrupted. Install a correct printer and/or driver. It could also be due to insufficient authorization if you are printing from a workstation/Citrix.

Internal Error (-50) occurred
If you are trying to print to the local workstation as a PDF file or any other format, ensure that the destination folder has write permission for your user ID/group.

Internal error ...

Clean up of old repositories from SDM

When you deploy patches or support packages, all the previous versions are retained as a backup under the SDM repositories. If you wish to clean them up, for example to gain disk space, you can follow these steps from the SDM program directory:

./StopServer.sh
./sdm.sh jstartup mode=standalone
./sdm.sh gc sdmhome=/usr/sap/SID/inst#/SDM/program
./sdm.sh jstartup mode=integrated
./StartServer.sh
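
To see how much space the cleanup actually frees, you can compare disk usage before and after the garbage collection run. A minimal sketch, assuming the same /usr/sap path as above (replace <SID> and <instance> with your system ID and instance directory):

# Size of the SDM directory and free space on the filesystem, before the cleanup
du -sh /usr/sap/<SID>/<instance>/SDM
df -h /usr/sap/<SID>
# Run the SDM steps above, then repeat both commands and compare the output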

Work Processes held by the program SAPLSENA

The program SAPLSENA is the lock handler that takes care of locking and unlocking operations in an SAP system. If a work process is held by SAPLSENA for a long duration, it means that an application program is making a large number of lock requests. A large number of work processes get occupied because the DEQUEUE function module is called separately for each request. You can analyse the lock situation by following the steps provided in SAP note 653996. You may have to run the application program with narrower selection criteria to limit the number of lock requests. Alternatively, you can adjust the enqueue parameters (enqueue/tablesize, enqueue/snapshot_pck_size, enqueue/snapshot_pck_ids, etc.).
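
The enqueue parameters are maintained in the profile and can be checked in RZ11. A minimal sketch of what the entries could look like - the names are taken verbatim from the text above and the values are purely illustrative, so confirm the exact parameter spelling and sensible values in RZ11 and the relevant SAP documentation before changing anything:

# Profile entries - parameter names as quoted above, values illustrative only
enqueue/tablesize = 32768
enqueue/snapshot_pck_size = 1024
enqueue/snapshot_pck_ids = 200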

Changing redo log files location or size online

You may want to change the redo log file size or its location online (for example if you are facing space issues on the existing disk drive). Before changing the redo logs, take a full backup.

Drop the first redo log using the SQL command:
alter database drop logfile '<path to the redo log file>';

Now recreate it with your preferred location or size using the command:
alter database add logfile '<preferred path and name of the log file>' size <preferred size>M;

Repeat this for the rest of the redo logs. You will encounter an ORA-1515 error when you try to drop a redo log file that is currently in use. You can skip that redo log file for now and come back to it later, or force a log switch so that the current log file can be changed, using the command:
alter system switch logfile;
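
Before dropping anything, it helps to check which redo log group is currently in use. A minimal sketch using sqlplus on the database host (assumes you can connect as SYSDBA; v$log and v$logfile are standard Oracle views):

# List redo log groups with status and size, and their member files
sqlplus -s "/ as sysdba" <<'EOF'
select group#, status, bytes/1024/1024 as size_mb from v$log;
select group#, member from v$logfile order by group#;
EOF

Only groups reported as INACTIVE can be dropped; a group that is CURRENT or ACTIVE needs a log switch (and a checkpoint) first, which is where the switch logfile command above comes in.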

Execution of SLIC_LIKEY_CHECK aborted with rc 2

If a check of maintenance certificates fails with the error "Execution of SLIC_LIKEY_CHECK aborted with rc 2", increase the value of the parameter slic/buffer_entries_dig_sig to 15 (or higher, if it is already set to the benchmark value as per SAP note 1280664).
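
A minimal sketch of the corresponding profile entry - the value 15 is the one stated above; whether it belongs in the default or instance profile and whether a restart is required should be confirmed against SAP note 1280664:

# Profile entry (value as suggested above; verify against SAP note 1280664)
slic/buffer_entries_dig_sig = 15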

Background jobs stuck in Ready status or their poor load balancing

If you have multiple application servers, set the parameter rdisp/btctime to a different value on each one, e.g. 56, 57, 58, 59, etc. The job scheduler SAPMSSY2 runs every rdisp/btctime seconds (60 by default), looks for all the pending jobs in the job queue and assigns them to the free batch work processes available in the system.

If other batch work processes are waiting while jobs remain in Ready status, ensure that you have not over-allocated batch WPs to Class A jobs only. You can change such allocation from transaction RZ04. If you find that WPs are not over-allocated to Class A jobs, it could be that a job is blocking access to the WPs. Run the TemSe check as explained below and kill/restart the WPs that are in waiting status to get them working again.

Run transaction SM65
Select Goto > Additional tests
Select these options:
Perform TemSe check
Consistency check DB tables
List
Check profile parameters
Check host names ...
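
A minimal sketch of the per-instance profile entries - the instance profile names are assumptions following the usual <SID>_<instance>_<host> pattern, and the values are simply the examples from the text above:

# Instance profile of application server 1 (e.g. SID_DVEBMGS00_host1)
rdisp/btctime = 56

# Instance profile of application server 2 (e.g. SID_D01_host2)
rdisp/btctime = 57

# ...and so on; a different value on each instance keeps the job schedulers from
# waking up at the same moment, which improves distribution of jobs in Ready status.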

Integration Processes with status 99 in SXI_CACHE

If you see that the cache update of Integration Processes created in the Integration Builder is not successful (the corresponding entry shows status 99), it is possible that the ABAP classes (SE24) were not created for the XML object (with the proxy definition present in SWF_XMP1). To recreate the missing objects, run the report RSWF_XMP_CHECK_CLASS attached to SAP note 896249. After the ABAP class is created, redeploy the processes via the SXI_CACHE transaction.

Too many cache updates hangs SAP PI systems

If you find that the SAP PI system is unresponsive and a thread dump shows most of the Java threads with the following stack information, it means that a lot of CPA cache update requests are waiting for an active cache update to complete.

at java.lang.Thread.sleep(J)V(Native Method)
at com.sap.aii.af.service.cpa.impl.j2ee.sapengine630.SAPJ2EEClusterController.blockAndSetLock(Ljava.lang.String;)Z(SAPJ2EEClusterController.java:784)
at com.sap.aii.af.service.cpa.impl.cache.CacheManager.performCacheUpdate(ZZ)Lcom.sap.aii.af.service.cpa.impl.cache.directory.DirectoryDataParseErrorInfo;(CacheManager.java:466)

At a given time only one cache update has access to update the database with the change. Other cache requests have to wait for the ongoing cache update to remove its enqueue on completion. While these requests are waiting, they occupy application threads. In the event of a high number of cache updates, the application may stall as the threads are all ...

Disabling data collection job in SAP PI

From PI 7.31 onwards, SAP has introduced Integration Visibility, which discovers message flows so that consuming applications can subscribe to them and receive monitoring events on the discovered flows. The data collector jobs can cause OutOfMemory errors. To fix the OOM errors, check SAP notes 1888326 and 2006192. It is very likely that this feature is not used yet; in that case you can disable data collection from the Config Tool:

Open the Config Tool and enable expert mode via the menu View --> Expert mode
Click the template
Click the "Filter" tab and create a new custom rule with: Action: stop; Component: application; Vendor Mask: sap.com; Component Name Mask: tc~iv~*
Click Add
Click Save and restart the system