Problem Statement: By default, an alert configured in aspenONE Process Explorer will send e-mail notifications that include a URL pointing back to the Alerts page, directed to the Aspen InfoPlus.21 server. | Solution: This solution addresses how to send the URL to the aspenONE Process Explorer Web Server instead, if the web server is hosted on a different machine than the Aspen InfoPlus.21 server.
Open the Aspen InfoPlus.21 Manager on the Aspen InfoPlus.21 server and locate TSK_ALRT. This task is responsible for sending alerts when alarms are triggered. In the command line parameters, add an argument of
-H=WEBSERVERNAME
Make sure to replace “WEBSERVERNAME” with the name of the computer hosting the aspenONE Process Explorer web server. Click “Update” to save the changes to TSK_ALRT, then stop and restart TSK_ALRT from the Aspen InfoPlus.21 Manager so the task picks up the new setting.
Also, make sure to clear the browser cache and the A1PE Admin cache, and run IISRESET.
Keywords: TSK_ALRT
References: None |
Problem Statement: How do I set up multiple particle types (substreams) for solids modeling in Aspen Plus? | Solution: The attached Aspen Plus V8 demo will show you how to define multiple particle types (solid substreams) and demonstrates the usage of multiple solids substreams with an example of a simple screening process. There is an associated PDF to guide you through the steps.
This example will cover
· Defining different particle types described by composition, particle size distribution, and moisture content.
· Using screens to separate particle types
Keywords: Solids Capabilities, Unit Operations, Set Up, Screens, Substreams, Particle Types
References: None |
Problem Statement: Right-clicking on the right side pane of the Aspen Production Record Manager Administrator does not bring up dialogs for New Characteristic Definition, New Phase Definition, etc. The options do appear on the right-click menu, although nothing is displayed when they are selected. | Solution: Working within the right side of the Aspen Production Record Manager Administrator is not recommended, as some of the functions may not be available. When performing administration, AspenTech recommends working in the left-side tree view of the Aspen Production Record Manager Administrator.
Keywords: None
References: None |
Problem Statement: What could cause the events screen in Aspen Petroleum Scheduler to run slowly? Adding or modifying events and saving events take a long time to process. | Solution: As of V8.7, the events are stored in ATOrionEvents and APSEVENTMASTER (for the new pipeline and dock scheduling events)
Over the course of time, these tables can get very large, so it is important to archive these events periodically. The size of the tables can be directly related to the slow speeds, because any operation on the events screen runs a query against these event tables; the larger these tables are, the longer these queries take to process. Please refer to solution article 145577 for more details on event archiving and cleanup to help improve APS performance.
Keywords: ATOrionEvents, APSEVENTMASTER, model cleanup, database archive
References: None |
Problem Statement: This article discusses the ability to use Aspen SQLplus to retrieve/analyze data from the Aspen Production Record Manager (formerly known as Aspen Batch.21). | Solution: Distributed with the Aspen Production Record Manager product is a .chm (help) file called atbatch21applicationinterface.chm. It contains several VB examples, such as the one listed below. This example contains several comments and demonstrates how to get a batch list and then get data for each batch found. The query is called BatchListAttributeValueQuery:
LOCAL DATA_SOURCES, I INT, K INT, L INT;
LOCAL lclBatchQuery, lclBatchList;
LOCAL lclStartTime, lclEndTime;
LOCAL lclCharSpec, lclBLAVQ, lclBatchDataList, lclBatchData, lclCharValuesList, lclCharValues, lclCharValue;
DATA_SOURCES = CREATEOBJECT('ASPENTECH.BATCH21.BATCHDATASOURCES');
-- Get query object
lclBatchQuery = DATA_SOURCES('Batch21 V7.2').AREAS('Batch Demo Area').BATCHQUERY;
lclBatchQuery.CLEAR;
-- Configure query
lclStartTime = CURRENT_TIMESTAMP-32:40;
lclEndTime = CURRENT_TIMESTAMP-08:40;
lclBatchQuery.CHARACTERISTICCONDITIONS.ADD('BATCH NO',1,4,'5');
lclBatchQuery.CHARACTERISTICCONDITIONS.ADD('End Time',1,3,lclStartTime);
lclBatchQuery.CHARACTERISTICCONDITIONS.ADD('End Time',1,2,lclEndTime);
lclBatchQuery.CHARACTERISTICCONDITIONS.Clause = '(1 AND 2 AND 3)';
-- Retrieve list of batches
lclBatchList = lclBatchQuery.GET;
-- Output list of batch IDs (optional)
FOR I=1 TO lclBatchList.COUNT DO
  BEGIN
    WRITE lclBatchList(I).ID; -- internal batch id (handle)
  EXCEPTION
    WRITE 'No Batch ID';
  END
END
-- Configure data requests
lclBLAVQ = lclBatchList.AttributeValueQuery;
lclCharSpec = lclBLAVQ.CharacteristicSpecifiers.Add('%',0);
lclCharSpec.AdvancedQuery.UseLikeComparison = True;
-- Get data
lclBatchDataList = lclBLAVQ.GetData;
-- Show data
FOR I=1 TO lclBatchDataList.Count DO
  lclBatchData = lclBatchDataList.Item(I);
  lclCharValuesList = lclBatchData.Characteristics;
  FOR K=1 TO lclCharValuesList.Count DO
    lclCharValues = lclCharValuesList.Item(K);
    WRITE lclCharValues.RequestedCharacteristic.Description;
    FOR L=1 TO lclCharValues.Count DO
      lclCharValue = lclCharValues.Item(L);
      BEGIN
        WRITE lclCharValue.ReturnedCharacteristic.Description;
        WRITE lclCharValue.FormattedValue;
      EXCEPTION
        WRITE 'Bad data';
      END
    END
  END
END
Keywords: None
References: None |
Problem Statement: This knowledge base article describes the procedure to log on to the Rename Tag Utility for Oracle or SQL databases. | Solution: To log on to the Rename Tag Utility:
1. Start the Rename Tags utility
2. From the Model Type drop down list, select the application model type you are working with.
3. In the Database field, enter the location and name of the model database. You can click Browse to navigate to the location and select the desired file
4. In the Excel Unit field, enter the location and name of the Units file. You can click Browse to navigate to the location and select the desired file
5. After selecting the DSN location and Excel file, enter the first login (User Name and Password). This should be a user ID and password defined on the model (the same credentials used when opening your APS model), which are stored in the USERS table in the database.
6. After clicking OK, if you are accessing an ODBC file and have not provided a password, you will see the DSN information dialog box prompting you to enter a password (to SQL DB). Do so and click OK.
Keywords: ATRenameTags Utility
References: None |
Problem Statement: How to use Aspen Custom Modeler to model submodels in Aspen PIMS | Solution: This solution provides an overview of utilizing an Aspen Custom Modeler (ACM) model in Aspen PIMS. The primary objective is to demonstrate the modeling involved in both ACM and PIMS. This solution also helps to understand the tags that need to be used on the ACM side so that PIMS can recognize and use them during matrix generation.
The example model Volsamp ACM SLPR that comes with the PIMS installation is used to explain this solution.
The PIMS submodel for the low-pressure reformer in Volsamp ACM SLPR is shown below. In this model, the reformate yields (NC1, NC2, NC3, IC4, NC4, RFT) and the reformate (RFT) qualities (SPG, RON, DON, CNX, D11, C11, RVI, BNZ, ARO) are functions of the reformer feed rate (SLPRRFF), the N2A of the reformer feed (QN2ARFF), and the reformer severity.
* TABLE   SLPR        Low Pressure Reformer
*         TEXT        RFF  HYL  NC1  NC2  NC3  IC4  NC4  XFR  LOS  RFT
*
VBALRFF   Reformer Feed           1
*
VBALHYL   Low-Purity H2 (FOE)    -1
VBALNC1   Methane (FOE)          -1
VBALNC2   Ethane (FOE)           -1
VBALNC3   Propane                -1
VBALIC4   Iso-Butane             -1
VBALNC4   N-Butane               -1
VBALRFT   Total Reformate        -1
VBALLOS   Loss                   -1
*
RBALRFT   -1   1
RSPGRFT   999
RRONRFT   999
RDONRFT   999
RCNXRFT   999
RD11RFT   999
RC11RFT   999
RRVIRFT   999
RBNZRFT   999
RARORFT   999
*
The mathematical equations governing the above calculation of the volume rates of the yields and the qualities are developed as an ACM model; a snapshot of the ACM code is given below. As can be observed, the reformer feed SLPRRFF, the N2A of the reformer feed QN2ARFF, and the reformer severity SLPRSEV are the inputs, and the reformate yields and qualities are the outputs. The intermediate variable N2AFRAC simplifies the equations; in this ACM model it is defined as N2AFRAC = (QN2ARFF-70)/5;
ACM code
Model LPR
//ACM model of Low Pressure Reformer
//Based on volsamp XNLP model
// Port Definitions
Port Inlet
SLPRSEV as pos_small;
SLPRRFF as pos_large;
QN2ARFF as pos_small;
DUMMY1 as pos_large;
DUMMY2 as RealVariable;
DUMMY3 as RealVariable;
End
Port Outlet
SLPRHYL as pos_large;
SLPRNC1 as pos_large;
SLPRNC2 as pos_large;
SLPRNC3 as pos_large;
SLPRIC4 as pos_large;
SLPRNC4 as pos_large;
SLPRXFR as pos_large;
SLPRLOS as notype;
SLPRFUL as pos_large;
SLPRKWH as pos_large;
SLPRSTM as pos_large;
SLPRH2O as pos_large;
SLPRCCC as pos_large;
QSPGRFT as pos_large;
QRONRFT as pos_large;
QDONRFT as pos_large;
QD11RFT as pos_large;
QRVIRFT as pos_large;
QARORFT as pos_large;
QBNZRFT as pos_large;
QCNXRFT as pos_large;
QC11RFT as pos_large;
End
Feed as Input Inlet;
Prod as Output Outlet;
// Intermediate variable declaration
N2AFRAC as RealVariable;
// Bound computed variables
Prod.QSPGRFT.Lower: 0.01;
Prod.QSPGRFT.Upper: 1.5;
//Prod.QRONRFT.Lower: 0;
Prod.QRONRFT.Upper: 104;
//Prod.QDONRFT.Lower: 0;
Prod.QDONRFT.Upper: 101;
//Prod.QD11RFT.Lower: 70;
Prod.QD11RFT.Upper: 105;
//Prod.QRVIRFT.Lower:;
Prod.QRVIRFT.Upper: 210;
//Prod.QARORFT.Lower:;
Prod.QARORFT.Upper: 100;
//Prod.QBNZRFT.Lower:;
Prod.QBNZRFT.Upper: 10;
//Prod.QCNXRFT.Lower:;
Prod.QCNXRFT.Upper: 101;
//Prod.QC11RFT.Lower:;
Prod.QC11RFT.Upper: 105;
// Model code
N2AFRAC = (Feed.QN2ARFF-70)/5;
Prod.SLPRHYL = ( -1.775838E-05*Feed.SLPRSEV^2 + 4.495001E-03*Feed.SLPRSEV - 2.214171E-01 + N2AFRAC*0.0015912)*Feed.SLPRRFF;
Prod.SLPRNC1 = (1.926384E-05*Feed.SLPRSEV^2 - 3.098810E-03*Feed.SLPRSEV + 1.298786E-01 + N2AFRAC*-0.0012329)*Feed.SLPRRFF;
Prod.SLPRNC2 = ( 3.376732E-05*Feed.SLPRSEV^2 - 5.443291E-03*Feed.SLPRSEV + 2.286452E-01 + N2AFRAC*-0.0022512)*Feed.SLPRRFF;
Prod.SLPRNC3 = (7.394552E-05*Feed.SLPRSEV^2 - 1.194264E-02*Feed.SLPRSEV + 5.026451E-01 + N2AFRAC*-0.0048717)*Feed.SLPRRFF;
Prod.SLPRIC4 = ( 3.147852E-05*Feed.SLPRSEV^2 - 5.124704E-03*Feed.SLPRSEV + 2.172270E-01 + N2AFRAC*-0.0020146)*Feed.SLPRRFF;
Prod.SLPRNC4 = ( 4.362481E-05*Feed.SLPRSEV^2 - 7.025491E-03*Feed.SLPRSEV + 2.946230E-01 + N2AFRAC*-0.0027920)*Feed.SLPRRFF;
Prod.SLPRXFR = ( -1.466876E-04*Feed.SLPRSEV^2 + 1.985870E-02*Feed.SLPRSEV + 2.907492E-01 + N2AFRAC*0.0098777)*Feed.SLPRRFF;
Prod.SLPRLOS = ( -3.763406E-05*Feed.SLPRSEV^2 + 8.281227E-03*Feed.SLPRSEV - 4.423509E-01 + N2AFRAC*0.0016935)*Feed.SLPRRFF;
Prod.SLPRFUL = ( -1.562500E-05*Feed.SLPRSEV^2 + 3.675000E-03*Feed.SLPRSEV + 6.976250E-02)*Feed.SLPRRFF;
Prod.SLPRKWH = (1.210000E-02*Feed.SLPRSEV + 3.665900E+00)*Feed.SLPRRFF;
Prod.SLPRSTM = (1.562500E-05*Feed.SLPRSEV^2 - 2.825000E-03*Feed.SLPRSEV + 1.846375E-01)*Feed.SLPRRFF;
Prod.SLPRH2O = (2.50E-04*Feed.SLPRSEV + 6.65E-02)*Feed.SLPRRFF;
Prod.SLPRCCC = (1.562500E-05*Feed.SLPRSEV^2 - 2.825000E-03*Feed.SLPRSEV + 2.106375E-01)*Feed.SLPRRFF;
Prod.QSPGRFT = 1.907000E-05*Feed.SLPRSEV^2 - 8.962900E-04*Feed.SLPRSEV + 7.155445E-01;
Prod.QRONRFT = Feed.SLPRSEV;
Prod.QDONRFT = Feed.SLPRSEV-5.5;
Prod.QD11RFT = Feed.SLPRSEV-2.75;
Prod.QRVIRFT = 1.787215E-03*Feed.SLPRSEV^2 - 3.028934E-01*Feed.SLPRSEV + 1.592524E+01;
Prod.QARORFT = 1.093750E-02*Feed.SLPRSEV^2 - 4.425000E-01*Feed.SLPRSEV + 6.236250E+00;
Prod.QBNZRFT = 1.562500E-03*Feed.SLPRSEV^2 - 5.750000E-02*Feed.SLPRSEV - 3.886250E+00;
Prod.QCNXRFT = Feed.SLPRSEV-5.5;
Prod.QC11RFT = Feed.SLPRSEV-2.75;
END
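For a quick numeric check of how such a correlation behaves outside ACM, the hydrogen-yield equation above can be re-implemented directly. The Python sketch below is not part of the ACM model or the PIMS integration; it simply evaluates the SLPRHYL correlation, and the sample inputs (severity, feed rate, N2A) are assumed illustrative values.
# Stand-alone check of the SLPRHYL correlation from the ACM model above.
# The input values (sev, rff, n2a) are illustrative assumptions only.
def slpr_hyl(sev: float, rff: float, n2a: float) -> float:
    # Hydrogen yield (FOE) of the low-pressure reformer per the ACM equation.
    n2afrac = (n2a - 70.0) / 5.0
    yield_frac = (-1.775838e-05 * sev**2
                  + 4.495001e-03 * sev
                  - 2.214171e-01
                  + n2afrac * 0.0015912)
    return yield_frac * rff

if __name__ == "__main__":
    # Assumed operating point: severity 95, feed rate 20, N2A of 72.
    print(slpr_hyl(sev=95.0, rff=20.0, n2a=72.0))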
As observed in the above code, the variables used in ACM are 7-letter PIMS tags (variables); this allows PIMS to recognize the variables in the ACM model when the model is imported into PIMS. Once coding is finished in ACM, the model is saved in .dll format.
The .dll file derived from ACM is imported into PIMS by following the procedure described in KB# 127072. The ACM equations integrated into PIMS can be visualized in the XSLP_Equation.log file generated by PIMS; a snapshot of XSLP_Equation.log is shown below. Further information on adjusting the trace level of XSLP_Equation.log can be found in KB# 136905.
The screenshot below illustrates the equations from the ACM model in the Matrix Analyzer in PIMS.
After importing the ACM external model into PIMS, the PIMS model tree is as shown below.
Keywords: Using ACM in PIMS-AO
Integration in PIMS-AO
Using external models in PIMS-AO
References: None |
Problem Statement: When plotting data retrieved across a Wide Area Network (WAN), SPC Charts (such as XBAR) may be very slow to appear, may not even appear, may be incomplete such as Control Limits not displaying, may result in frozen screens, etc. | Solution: Aspen Process Explorer is a thick-client application that is intended for use within a plant. The underlying RPC-based protocol allows Process Explorer on one computer to access an Aspen InfoPlus.21 server that is hosted on a local area network within the plant. In other words, Process Explorer -- especially the underlying protocol -- is not intended for use across a wide area network.
In contrast, Aspen IP.21 Process Browser (aka Aspen Web.21) is a thin-client application that can be used both within a plant (via LAN) and between sites (via WAN).
Workarounds have been tried in the past to find ways to allow normal Aspen Process Explorer trend plots to perform within limits across the WAN. However, none of these options, nor Aspen IP.21 Process Browser, supports SPC.
Therefore AspenTech does not support viewing SPC charts with data retrieved across a WAN.
Keywords: None
References: None |
Problem Statement: What is the benefit of removing unused components in Aspen Plus Dynamics? | Solution: The calculation of the properties of unused components (with a mole fraction of 0) can lead to convergence problems. Also, removing unused components makes the Dynamics file smaller and improves run time.
Unused components can be removed either from Aspen Plus or Aspen Plus Dynamics.
In Aspen Plus: Please use Dynamic Configuration > Dynamic Options form
In Aspen Plus Dynamics: Please use the Arrow tabs
Keywords: Components, Convergence, Dynamics
References: None |
Problem Statement: This solution outlines:
I. How the reliability of data exchanged between the Aspen Cim-IO server and client can be verified using 'Checksum'.
II. The recommended practice to follow when turning on the 'Checksum' feature. | Solution:
Checksum is used to validate message exchange between the Cim-IO server and Cim-IO client. Turning on Checksum calculates an associated value for the data being transferred and attaches it to the message sent from the Cim-IO server. A similar computation is performed by Checksum on the Cim-IO client, and both values are compared to determine the validity of the data. If the values match, the message is authenticated and confirmed by Checksum as a valid response. If the results do not match, the message is discarded and an error is logged in the cimio_msg.log file.
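The Python snippet below is only a conceptual illustration of this validate-on-receipt idea; it does not reproduce Cim-IO's actual checksum algorithm, message format, or API. The use of CRC32 and the function names are assumptions made purely for the example.
import zlib

def send_message(payload: bytes) -> bytes:
    # Sender side: compute a checksum and attach it to the outgoing message.
    checksum = zlib.crc32(payload)
    return payload + checksum.to_bytes(4, "big")

def receive_message(message: bytes) -> bytes:
    # Receiver side: recompute the checksum and compare before accepting.
    payload, received = message[:-4], int.from_bytes(message[-4:], "big")
    if zlib.crc32(payload) != received:
        # Cim-IO would discard the message and log an error in cimio_msg.log;
        # here we simply raise to signal the mismatch.
        raise ValueError("Checksum mismatch - message discarded")
    return payload

if __name__ == "__main__":
    msg = send_message(b"tag=TI101 value=42.7")
    print(receive_message(msg))  # a matching checksum means the data is accepted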
Checksum is enabled from Cim-IO Checksum tab in CimioProperties.exe located under the directory \AspenTech\Cim-IO\code\.
For more information on how to enable Checksum, please refer to solution 121932.
The following is the recommended procedure for configuring the Cim-IO Checksum feature.
Checksum is turned off by default. If you have made any changes to this feature in the Cim-IO Properties, make sure that Checksum is turned OFF on both the Cim-IO client and the Cim-IO server.
Create a device either with the IO Wizard from the Aspen InfoPlus.21 Administrator or manually. If the device is already configured, proceed to the next step.
After the device is successfully created, configure the transfer records and start the device. Make sure data is transferred as expected from the device before turning on Checksum.
Once the data exchange is OK, turn Checksum ON for both the Cim-IO client and Cim-IO server. Restart the interface services, device tasks, and client tasks on the Aspen InfoPlus.21 machine. After this, check that data is scanning from the Cim-IO server as anticipated.
Following the above procedure confirms that the Checksum feature was enabled and that all messages transferred between the Cim-IO client and server are validated.
Note: If Checksum is not turned on for both the Cim-IO client and Cim-IO server, or if it is not properly configured, a break in communication and socket error messages may occur when creating a new device or running the Cim-IO Test API.
Keywords:
References: None |
Problem Statement: This knowledge base article provides best practice recommendations for tuning the feasibility objective function factor for a PIMS-AO model. | Solution: In some cases the model may have difficulty converging due to high non-linearity introduced by external models. In such cases, the feasibility objective function factor value, found on the Model Settings | Non-linear Solver | Advanced 1 tab, may be tuned appropriately for the model to converge.
Note that the Feasibility Objective Factor is one of many parameters that can be tuned to change the performance of a PIMS-AO model. The decision about which specific parameter to modify should come from someone with deep experience with the model.
The default value for the Feasibility Objective Factor is 10,000 because this has been observed to work well for a typical PIMS model. Typical PIMS models often have objective function values on the order of 1,000.
The Feasibility Objective Factor value should be changed only when the model is marginally infeasible or if the user encounters micro infeasibilities frequently. The characterization of infeasibility as marginally infeasible or micro infeasible depends on the nature of the constraint and the user must have a good understanding of what constitutes a micro infeasibility for the given model.
Increasing the Feasibility Objective Factor may increase the probability of convergence to a local solution instead of to a global solution. Care should be taken to check the results once this parameter is adjusted.
It is not recommended to increase the Feasibility Objective Factor beyond three orders of magnitude because such a large change may eventually cause numerical issues with the model.
Keywords: PIMS-AO settings
Feasibility objective factor
Convergence
Non-Convergence
References: None |
Problem Statement: Installation of software hot fixes and other hardware and software failures can corrupt a working Aspen OnLine installation. A good backup strategy for the Aspen OnLine system is necessary to be able to recover quickly from these type of interruptions. What is the recommended backup strategy? | Solution: In the recommended backup strategy, the goal is to implement a backup/restore strategy that will minimize downtime associated with both the backup procedures and the restore after a hardware or software failure. The backup strategy has several components.
1. Image backups of System and Software Application disks.
Create an image backup of the System and Application disks which will allow the complete restoration of the production system after a catastrophic hardware or software failure. An image backup of the system disk should be captured after the original installation of the Aspen Online Software has been confirmed and again before any Windows Operating System hot fixes are applied. This will ensure quick recovery if a Microsoft hot fix has any unintended side effects. An image backup of the Application disk should be taken after the initial installation has been confirmed and again before any patches are applied to the Aspen Software.
Restoration in this scenario is a 2 step process. Restore the last image backup for the system and/or application disks, and then apply the last successful set of patches. (If you want to make an image backup after you have verified that the software works after the upgrade, then you could have a one step restore).
2. Backup Project Data
The Aspen OnLine system uses temporary files, located in the project directory and its subfolders, during its execution. For that reason the project directory should be excluded from any automated backup procedure that might lock files. Many of the data files in these directories are static unless the engineers are making configuration changes. The exceptions are the log files, the temporary files in the ONLINE directory, and the history directories. In general, the log files and temporary files have very little value if the system is running normally and don't need to be backed up. The configuration files and history files are the primary files that should be backed up.
The backup strategy for the project data is to copy the configuration and history files to a folder that can be backed up as part of the normal backup process and exclude the project directory from that process. To do this, set up an area of a disk that is not under the project directory, for the history and for the project files. Make an initial copy of the OFFLINE and ONLINE directories into this area and as the configuration is modified, use the backup facility within the windows GUI to save the configuration data to this area. Configure Aspen Online to store the history files in this area rather than under the project directory.
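As a minimal sketch of that copy step (the directory paths below are hypothetical placeholders, not standard Aspen OnLine locations), the OFFLINE and ONLINE folders can be copied to a backup area that lies outside the project directory:
import shutil
from pathlib import Path

# Hypothetical locations - adjust to your actual project and backup areas.
PROJECT_DIR = Path(r"D:\AspenOnlineProjects\MyProject")
BACKUP_DIR = Path(r"E:\AspenOnlineBackup\MyProject")

def backup_config(project_dir: Path, backup_dir: Path) -> None:
    # Copy the OFFLINE and ONLINE configuration folders to the backup area.
    for name in ("OFFLINE", "ONLINE"):
        src = project_dir / name
        if src.is_dir():
            # dirs_exist_ok lets repeated runs refresh an existing backup copy
            shutil.copytree(src, backup_dir / name, dirs_exist_ok=True)

if __name__ == "__main__":
    backup_config(PROJECT_DIR, BACKUP_DIR)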
Keywords: Backup
Patches
Hot Fix
References: None |
Problem Statement: While using Aspen Report Wizard, can there be an Orion.exe instance running in the background even after closing APS? | Solution: Yes. Consider the following scenario:
1. User opens APS and turns the memory cache option on in order to run Aspen Report Wizard from APS
2. Then user simulates for the given horizon and opens the Aspen Refinery Report Wizard
This opens Excel, where the user can use the Aspen Refinery Report add-in to generate customized reports using report writer templates.
3. Now, if the user closes the APS application and keeps working on the reports generated from the wizard, Orion.exe will not close.
4. The reason Orion.exe does not close is that the Refinery Report Wizard is working off the application's memory cache. So, even though the application is closed, the reports are still connected to and using that cache.
5. In order to avoid having an instance of Orion.exe running in the background, it is good practice for the user to manually disconnect from the Report Wizard add-in once he/she has finished generating all the reports.
Keywords: Orion.exe
Aspen report wizard
memory cache
References: None |
Problem Statement: How do I generate the PMLINK table for an XNLP model? | Solution: To generate Table PMLINKS for an XNLP model in a database (such as the Results database) after PIMS execution, there are several things to keep in mind. First, the user has to uncheck Use Classic Output Database Format under General Model Settings | Output Database.
However, if the model is already in XNLP mode, this option will be greyed out. The user has to go back to DR mode in order to uncheck the Use Classic Output Database Format option, then switch back to XNLP. Second, the creation and population of the PMLINKS table depends on the existence of the flowsheet file. Therefore, the user needs to make sure the option under Reporting | Selection | FLO is checked.
The following is the detailed procedure for generating the PMLINKS table:
1. From Model Settings | General, select the Database tab and make sure Use Classic Output Database Format is UNCHECKED. Then skip to step 7. If it is greyed out, go to the next step.
2. Switch XNLP to DR.
3. From Model Settings | General | Output Database tab, UNCHECK Use classic Output Database Format.
4. Switch back to XNLP.
5. From Model Settings | General | Miscellaneous tab, make sure that the option of Use XNLP is checked. Otherwise, the PMLINKS table will not be created.
6. From Model Settings | Reporting | Selection tab, make sure the FLO option is CHECKED.
7. Execute PIMS model.
Once the execution finishes, open results.mdb file. Verify that the PMLINKS table exists and is populated.
Keywords: PMLINK
XNLP
Database
Generate
References: None |
Problem Statement: The quality of opening inventory can be entered in Table PINV. If such entries are not present, PIMS will use the property values from Table PGUESS for the opening inventory.
If an initial entry is provided in Table PINV, is it necessary to enter it again in Table PGUESS? What happens if the column or row name is not entered in Table PGUESS? | Solution: When there are property entries in Table PINV, Aspen PIMS will overwrite the non-period specific entries in Table PGUESS from Table PINV. In order to do this correctly, the structure must already exist in Table PGUESS. Therefore, the PGUESS table must be built successfully first before it can be modified from Table PINV later and all inventory properties should be present in Table PGUESS.
NOTE: While all recursed properties, including inventory, should be present in Table PGUESS, it is still best practice to enter the properties of your opening inventory in Table PINV. This is because if the model is pulling the qualities from PGUESS, a user may update the PGUESS table for convergence reasons and not realize that it can have a substantial impact on the qualities used for the opening inventory.
Keywords: inventory
properties
PGUESS
PINV
References: None |
Problem Statement: In a weight-based model, to limit the volume of a straight crude cut whose rate is not impacted by swing cuts, the weight to volume conversion can be entered in the capacity row in Table ASSAYS. However this is trickier if the crude cut rate is potentially increased by the addition of swing cuts from above or below. When there are swing cuts involved, how can the total crude cut volume be controlled in a weight model? | Solution: This cannot be done by simply adding a capacity row in Table ROWS that intersects the columns representing the cut and each of the swing cut contributions. This is because we need the recursed specific volume of the swing cuts to be able to convert from weight to volume. In Table ROWS, we cannot use a 999 to retrieve these recursed SPVs because PIMS does not allow 999's. However we can use this approach in a dummy submodel.
We will create a dummy submodel and use it to drive the rates of the swing cut contributions. We can then use PCALC to transfer the corresponding SPVs to make the conversion to volume for the capacity control.
The attached model is a modified copy of our weight sample model. Structure has been added (highlighted in yellow) to control the volume rate of crude cut KE1. This is potentially impacted by swing cut NK1 from above and swing cut KD1 from below. Note that cut KE1 is a single pool that combines the material from all three crude units (See Table CRDCUTS). Therefore the swing cuts from all three must be taken into account.
The changes are as follows:
TABLE ROWS
* TABLE   ROWS       User Defined Rows
*         TEXT       SCD1NK-  SCD2NK<  SCD2KD>  SCD3NK[  SCD3KD]
EBALnk-   -1
EBALnk<   -1
EBALkd>   -1
EBALnk[   -1
EBALkd]   -1
Table ROWS is used to bring the weight activities of each swing cut impact into the corresponding E-row. In the new submodel SWNG, these activities will be used to drive a new column.
TABLE SWNG
In Table SWNG, several things are being done. First, the EBAL rows from Table ROWS are being completed. This drives the weight activities of each swing cut contribution into the correspondingly named column.
Next, the SPV's of NK1, NK2, NK3, KD1, KD2, and KD3 are transferred to the dummy stream that is representing each of them through Table PCALC. This allows us to use the ESPV row to sum the weight activity * SPV for each stream into a collector column (ke1). To complete the weight to volume conversion, we use the 0.1587 VTW factor (as defined in the General Model Settings) as a coefficient in column ke1.
Back in Table SWNG, there are dummy WBAL rows. This is so PIMS will recognize the dummy columns and allow us to PCALC the SPV as described above.
Finally we use column ke1 to drive the swing cut contributions into our desired capacity row (CCAPKER). Remember to also define CCAPKER in Table CAPS. The original part of KE1 also needs to contribute to the CCAPKER. This can be done in the ASSAY tables using a coefficient for each crude of (wt% KE1 / (SPG of the crude * VTW)).
The attached model demonstrates the structure discussed above.
Keywords: swing
control
References: None |
Problem Statement: What is the best way to add user-defined penalties and have them reported in the Penalty Report section of the full solution report? | Solution:
When manually building a penalty on something in your model, we recommend pushing the penalty into the Epenrow - not directly into the OBJFN row. The Epenrow is automatically made by PIMS and is used to accumulate all the active penalties. This allows you to have consistency in OBJFN reporting and also to have reporting of your penalty.
To report the penalty in the Penalty Report section of the full solution report, care must be taken when setting up the penalty structure. The reporting of user-defined penalties is based on the name of the column that drives the penalty activity into the Epenrow. The penalty will NOT be reported if the column name begins with the lower-case letters u, d, x, or n. These are excluded from reporting due to conflicts with internally generated PIMS structure. If the column name begins with any other character, then the penalty is reported. Note that use of the upper-case characters U, D, X, and N still prompts reporting.
Below is an example that shows columns driving penalty activity into the Epenrow via Table ROWS and the resulting report. Note that for this example, the activity of columns Upendrv and upenact are set to 1 in Table BOUNDS. In normal use, these column activities would be defined by the scenario for which the user is defining a penalty.
* TABLE   ROWS       User Defined Rows
*         TEXT       Upendrv   upenact
Epenrow              -111      -222
Excerpt from the resulting Penalty Report in the full solution HTML file:
Penalty Report
Penalties: Recursion
Pool    Quality    $/DAY
drv     pen        111,000
Total              111,000
Notice that the penalty driven by column Upendrv is reported and the penalty driven by column upenact is not. The 'Model has active penalties' message will also be printed at the top of the full solution file. In versions of PIMS prior to V7.1, PIMS reports the penalty as if it were a recursion penalty on property pen in pool drv. This is based on the column name of Upendrv and will vary based on the column name. As of version V7.1, PIMS is able to discern that pen is not a valid property in the model and therefore reports it as a user-defined penalty with better labeling than what is demonstrated above.
While the penalty for upenact is not reported above, it is still active in the matrix as is shown by the excerpt from the matrix analyzer below:
Keywords: user
penalty
References: None |
Problem Statement: What are the requirements for the DCS interface to DMCplus controllers if PCWS is used as the primary operator interface? | Solution: Solution 136711 discusses the advantages of using PCWS as the primary HMI for a DMCplus controller. Here are the minimum requirements for a DCS interface for that recommendation:
-ON/OFF switch for the Main controller as well as Subcontrollers built on the DCS. The point needs to be READ/WRITE so operators can turn the controllers/subcontrollers ON or OFF from either DCS or PCWS.
-ON/OFF status for the controller/subcontrollers on the DCS. This serves as a handshake status allowing the DCS to know that the DMCplus controller has recognized that the controller/subcontroller is ON and all the validation checks have passed. The DMCplus controller is then prepared to write the set points out to the DCS. An alarm is highly recommended for these status points in the OFF state.
-A watchdog point that receives a constant from the DMCplus engine. A DCS timer counts the constant down, and the DMCplus engine resets the constant every execution cycle. If the timer reaches zero, an alarm should be activated letting operators know that communication between DMCplus and the DCS has been lost (a conceptual sketch of this watchdog logic follows this list).
-Mode switching for all the DCS loops associated with the MVs should be done through a program running on the DCS. We have sample programs that you can use as a template to develop your own DCS-specific program. The reason we recommend that the program reside on the DCS is that once communication is lost, the DCS should turn the ON/OFF switch to OFF. Moreover, it should shed those DCS loops to a safe mode (not accepting remote set points) until communication is re-established and the controller is turned ON again manually.
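The following Python sketch is only a conceptual illustration of the watchdog countdown described above; the real logic resides in the DCS and the DMCplus engine, and the preset value and cycle handling shown here are assumptions for the example.
WATCHDOG_PRESET = 10  # assumed preset value written by the DMCplus engine

class Watchdog:
    # Toy model of the DCS-side watchdog counter.
    def __init__(self, preset: int = WATCHDOG_PRESET):
        self.preset = preset
        self.counter = preset

    def controller_heartbeat(self) -> None:
        # The DMCplus engine resets the constant every execution cycle.
        self.counter = self.preset

    def dcs_cycle(self) -> bool:
        # The DCS timer counts down each cycle; True means communication lost.
        self.counter = max(0, self.counter - 1)
        return self.counter == 0

if __name__ == "__main__":
    wd = Watchdog()
    for cycle in range(15):
        if cycle < 5:
            wd.controller_heartbeat()  # controller alive for the first cycles
        if wd.dcs_cycle():
            # Alarm: operators know communication is lost; the DCS program
            # would turn the controller OFF and shed loops to a safe mode.
            print(f"cycle {cycle}: watchdog expired - raise alarm")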
Keywords: PCWS alarms DCS mode switching
References: None |
Problem Statement: How many weeks/days should I employ for a frozen time period in Aspen Plant Scheduler? | Solution: A frozen time period specifies the period in which new planned orders are not to be created and where planned orders from previous requirement calculations are not to be changed.
The purpose of creating a frozen time period or frozen activities in Aspen Plant Scheduler (PS) is to prevent the Plant Scheduler's decision-making algorithms (e.g. LP model, SEARCH, M NET, or custom configuration) from changing the activities within this specified time fence. Once the frozen time period is established, if changes to activities need to occur, the scheduler must make the changes manually.
There are several scenarios that lead to creating frozen period or frozen activities:
1. Integration with an ERP system could be complicated if certain activities are not frozen. For example, if a known activity is currently running in the plant, then you don't want Aspen SCM to change it, and since 'live' status from the plant floor is not available in Aspen SCM, it is advisable to freeze the activities that might be running (hence a freeze period). Also, partially freezing activities that are close to running is advisable. For example, if the plant staff is preparing to run an activity, i.e., gathering the required components, changing the start time but not the lot size is acceptable. Changing the scheduled downstream of the batch could also be done. Long freeze periods to deal with integration would be a red flag that the integration is not good enough.
2. Allow the user to make scheduling changes that the Aspen SCM algorithms would not be allowed to do. These decisions would only be done at the last minute to deal with problems. For example, the Aspen SCM algorithm may be forced to run a specific lot size but the user could be allowed to go larger or smaller at the last minute. Aspen SCM algorithms are often asked to honor minimum runs but the user may override that at the last minute to fill order changes. By creating the frozen time period the changes made by the user will not be changed again by SCM.
3. Many companies use MRP to manage purchased materials. MRP processes are not agile; MRP only looks at material and ignores capacity. Reaction to schedule changes may be slow even if the company's suppliers' reaction time is not. So, the company may freeze or partially freeze the upstream production, sometimes for a week or more, just to support MRP. Aspen steers the company away from this by implementing Aspen SCM decision-making models that understand the current purchased material situation and constraints. A long freeze period to deal with MRP is a sign that the Aspen SCM decision-making model is not scheduling effectively and needs to be improved.
4. The Aspen SCM decision-making model, as implemented, may be unrealistic and require substantial user corrections. Its decisions may be 'good enough' far out in the future but not in the near term. So, the user does not want to spend hours fixing up the results only to have them changed by Aspen SCM. This is not a good reason for a freeze period; the Aspen SCM model should be improved.
5. Some supply chain philosophies (e.g. MRP Class A) call for long freeze times. The stated reason is that change is bad and expensive. The real reason is that MRP systems don't work if you allow change. That's because the MRP system can't make decisions; changes must be entered by a person into the MRP system, and the result can take days or even weeks. Even though Aspen SCM can make decisions, the 'change is bad' philosophy may still exist and require long frozen periods in Aspen SCM to match those in the MRP system.
Frozen time periods should be kept to a minimum; let Aspen SCM's decision-making tools tackle the scheduling complexity.
Keywords: Time window
Frozen time fence
Firm zone
Frozen term
References: None |
Problem Statement: Article 133651 shows an example of using Aspen SQLplus to retrieve/analyze a batch list and then get data for each batch found from data generated by Aspen Production Record Manager (formerly known as Aspen Batch.21).
This article takes that a little further, showing the ability to get the characteristics from the sublevels in the subbatches of the batch. | Solution: Use the Advanced Query settings on the characteristic specifier to control the data returned.
You can have either a Subbatch request OR Characteristic request.
If subbatch, then ReturnChildSubbatches will be processed and ALL subbatches and their characteristics will be returned.
If subbatch, then ReturnChildCharacteristics will return ALL the characteristics at the subbatch level indicated (UNIT in the below example)
If characteristic, then ReturnChildCharacteristics will return all compound chars under the char specified
This batch
This query:
-- Configure data requests
lclBLAVQ = lclBatchList.AttributeValueQuery;
lclCharSpec = lclBLAVQ.CharacteristicSpecifiers.Add('%',0,'Mix');
lclCharSpec.AdvancedQuery.IsSubbatchRequest = True;
lclCharSpec.AdvancedQuery.ReturnChildSubbatches = True;
lclCharSpec.AdvancedQuery.ReturnChildCharacteristics = True;
-- Get data
lclBatchDataList = lclBLAVQ.GetData;
-- Show data
FOR I=1 TO lclBatchDataList.Count DO
  lclBatchData = lclBatchDataList.Item(I);
  lclCharValuesList = lclBatchData.Characteristics;
  FOR K=1 TO lclCharValuesList.Count DO
    lclCharValues = lclCharValuesList.Item(K);
    WRITE lclCharValues.RequestedCharacteristic.Description;
    FOR L=1 TO lclCharValues.Count DO
      lclCharValue = lclCharValues.Item(L);
      BEGIN
        WRITE lclCharValue.ReturnedCharacteristic.Description;
        WRITE lclCharValue.FormattedValue;
      EXCEPTION
        WRITE 'Bad data';
      END
    END
  END
END
Returned these results
21598
Mix,%[All]
MIX
MIX,FILL
MIX,UNIT
45
MIX,FILL,END TIME
1/20/2012 7:02:42 AM
MIX,FILL,START TIME
1/20/2012 7:02:39 AM
Keywords: None
References: None |
Problem Statement: Per the CIM-IO User's Manual, CIMIO ...uses the device deadband data to filter out minor changes in the input value before sending the value to the unsolicited client task. When using an Absolute IO_DEVICE_DEADBAND it is not clear if this deadband is compared against the last value actually transmitted to Aspen InfoPlus.21 or to the last value scanned by the interface.
To make a simple example, let's say we are scanning a tag with an IP_DC_SIGNIFICANCE of 1 in IP.21 and a configured IO_DEVICE_DEADBAND of 0.5. Consider where the interface scans a 0 and sends that value to IP.21 and then each subsequent scan the value is slowly creeping up by 0.1. If CIM-IO used IO_DEVICE_DEADBAND to compare against the last scanned value, then each time the change from the previous scan would only be 0.1 (that is < 0.5) and the value might not get sent up to IP.21. But if CIM-IO knew it last sent a 0 to IP.21, then after the 5th scan it would see that the value had risen to 0.5 and therefore exceeded the IO_DEVICE_DEADBAND.
It is important to understand how this value is applied. Otherwise, if CIM-IO compares it against the last scanned value and you make it look large then you might not get data sent up to IP.21.
Considering the scenario described above, this Knowledge Base article answers the following question:
How is the IO_DEVICE_DEADBAND applied when using Unsolicited CIM-IO transfer records? | Solution: The IO_DEVICE_DEADBAND feature should only be used when the DCS device does not support dead banding internally for unsolicited data and it is desired that CIM-IO Async transfer records perform deadband checking, which is requested by specifying a negative scan frequency. In this particular case, the async reply will be converted to an unsol reply to be handled by the UNSOL task instead of the ASYNC task.
If the device supports report by exception, e.g. deadband filtering, then the unsol transfer records could specify a positive scan frequency and individual device deadband values in the transfer record. The CIMIO to DEVICE logic will make these deadband values available to the device and CIMIO will not have to compute or filter incoming values as these are supposed to be filtered already by the DCS.
Please keep in mind that when you use any kind of filtering, you are surrendering incoming values for the sake of 'significant' values.
The device deadband logic done by CIMIO, when configured to do so, basically consists of the check:
Filter incoming value if fabs(oldVal - newVal) >= dbChkVal
Where:
oldVal is the last value reported to the database
newVal is the incoming newest value
dbChkVal is the absolute deadband value specified in the selector record Io-Dev-DeadBands. This value will be computed as a percentage of oldVal if the deadband was specified as a relative deadband value.
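A minimal Python sketch of this check is given below. It mirrors the logic described above; the function name and the relative-deadband handling are assumptions for illustration and are not Cim-IO code.
def should_report(old_val: float, new_val: float,
                  deadband: float, relative: bool = False) -> bool:
    # old_val  - last value reported to the database
    # new_val  - incoming newest value
    # deadband - absolute deadband, or percent of old_val when relative=True
    threshold = abs(old_val) * deadband / 100.0 if relative else deadband
    return abs(old_val - new_val) >= threshold

if __name__ == "__main__":
    # With an absolute deadband of 0.5, a creep from 0.0 to 0.4 is filtered,
    # but 0.5 (compared against the last *reported* value) passes through.
    print(should_report(0.0, 0.4, 0.5))   # False - filtered
    print(should_report(0.0, 0.5, 0.5))   # True  - reported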
Furthermore, the CIM-IO Core manual says a positive IO_FREQUENCY causes the interface to scan the points at the configured rate and send a new value if it has changed, and a negative IO_FREQUENCY causes the interface to scan at the configured rate but it ...[checks] for changes which exceed the deadband. It doesn't call out the IO_DEVICE_DEADBAND field as the deadband, but that is, in fact, what is being referenced. The manual indicates part of the unsolicited behavior is dependent on the interface you are talking to (i.e., CIM-IO for OPC). It also indicates that IO_FREQUENCY must be > 0 for this interface. Then it says this about IO_DEVICE_DEADBAND for unsolicited records:
Point-by-point relative deadbanding is only supported in OPC DA 3.0. If you are connecting to an OPC DA 3.0 server, you may use relative (percent) deadbanding, which will be passed on to the OPC server. In all other cases (including attempting to pass an absolute deadband to a DA 3.0 server), this field is ignored.
The manual also says that normal GET records perform synchronous cache reads while Unsolicited records perform subscription-based OPC callbacks.
So, as far as the CIM-IO for OPC scenario described above is concerned, the IO_DEVICE_DEADBAND will basically do nothing unless the OPC server is compliant with v3.0 of the OPC-DA specification. The only things Unsolicited in general will do are (1) cause CIM-IO for OPC to perform OPC callbacks against the OPC server and (2) make the interface send a new value to IP.21 only if it has changed. Effect #1 should reduce inter-process communication between CIM-IO for OPC and the OPC server because data is transferred only when it changes. Effect #2 should provide a very basic on-change type filter that reduces the amount of data sent to IP.21 and increases the duration of a fixed-size store file. The store file duration is increased because filtered values consume less space than, for example, continuous polling with subsequent filtering on the client side.
Finally, the CIMIO for OPC will not support unsolicited processing at the expense of computer power; unsolicited processing is meant to be supported by the OPC server or subsequently by the DCS the OPC server talks to. If the OPC Vendor supports v2.05 of the OPC specification, it can also support unsolicited deadband but for a complete group as opposed to the individual deadband 3.0 specifications. In this case the group (a transfer record) will have its device deadband specified with the first valid entry in the group. In order to avoid not having a device deadband working effectively either because the first n entries in the group were not valid or entry n+1 happened to not have a device deadband defined, it is recommended that all occurrences in the group be defined with the desired group device deadband.
Keywords: None
References: None |
Problem Statement: How can I reduce the file size of an archived model? | Solution: The Aspen PIMS model folder includes model files and Aspen PIMS generated files. If we only archive the model, the size of the archive file will be reduced.
You can archive the model without the Aspen PIMS generated files. That will reduce the size of the archive model.
1. From the menu, select Model | Archive Model…
2. Click OK. A zip file named after the model folder will be created under the model folder.
Keywords: archive, size, small, files, file
References: None |
Problem Statement: If your company or department has a standardized report structure, it would be nice to set that up once instead of having to do it every time you write a report query. What's the easiest way to accomplish that? | Solution: Aspen SQLplus uses SET options to establish a report's format. There are several SET options available from setting page headers and footers to determining what character is going to be at the bottom left hand corner of your report. To get a complete list of these options, go to the SQLplus online help and search on SET/Query Format and SET/Page Headers & Footers. Below is an example of a series of SET options that format a report:
SET UNDERLINE_BETWEEN = '|';
SET HEADER_BETWEEN = '|';
SET PAGE_HEADER_LEFT = 'Company XYZ';
SET PAGE_HEADER_RIGHT = (SUBSTRING(CURRENT_TIMESTAMP from 1 for 9));
SET PAGE_FOOTER_LEFT = 'Page #';
SET SUM_TEXT = 'Total';
This would print the company name in the upper left of each page of the report. It would also print the date in the upper right of each page of the report as well as the page number at the bottom left. It would put the pipe character (|) in between column headers as well as in between the column underlines. And finally, it would label any SUM's from the CALCULATE statement with the word Total.
Instead of having to include these 6 lines at the beginning of every report query, you could save these 6 lines in an SQLplus text file. e.g. repsetup.sql. Then at the beginning of every report query, you would include this formatting by including the line:
START 'repsetup';
You can also save the above 6 lines of SET options in a QueryDef record in the database. The START statement for a record varies slightly. The syntax is:
START RECORD 'repsetup'; -- where repsetup is a QueryDef record in your database
This way, if the company's report structure changes in any way, you only have to make the changes in one place.
Keywords: report
standardize
SET options
References: None |
Problem Statement: | Solution: Document changes in a file and attach it to the model tree as explained in solution 125807.
Keywords:
References: None |
Problem Statement: Local optima is a condition in nonlinear models where the solution satisfies the convergence and optimality criteria but does not arrive at the best possible solution. Sometimes the final answer can be different depending on the starting point of the optimization process. Converging to a locally optimal solution is a normal outcome of solving nonlinear equations. However, since the global optimum is often close to the local optimum, it is often difficult to determine whether an optimum solution is the local optimum or the global optimum. This article describes how the multi-start tool can be used to resolve local optima. | Solution:
Multi-start is a simple tool which solves the same problem many times. Each solution starts from a different set of initial points. A comparison is then done on each of the solutions to determine which solution is the best answer. The starting points are randomly selected by the multi-start tool.
Each set of initial points passes two internal checks:
1) Merit Check: How good is the set of initial points?
2) Distance Check: How different is the set of initial points?
The multi-start method works on all models, regardless of the source of the nonlinearity. The user can simply define the maximum number of starting points to begin using the tool.
The number of starting points to select is model specific. Some models will require more starting points to arrive at the global optimum. The general recommendation is to select the minimum number of starting points that yields a significant change in optimality. Consideration should also be given to the time required to solve the extra cases required by the multi-start tool.
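The Python sketch below illustrates the general multi-start idea on a toy nonlinear function; it is not PIMS-AO's implementation, and the merit and distance checks are simplified stand-ins for the internal checks described above.
import numpy as np
from scipy.optimize import minimize

def objective(x: np.ndarray) -> float:
    # Toy multi-modal function with several local minima.
    return float(np.sum(x**2) + 3.0 * np.sum(np.sin(3.0 * x)))

def multi_start(n_starts: int = 20, merit_cutoff: float = 50.0,
                min_distance: float = 0.5, seed: int = 0):
    rng = np.random.default_rng(seed)
    used_starts, best, attempts = [], None, 0
    while len(used_starts) < n_starts and attempts < 100 * n_starts:
        attempts += 1
        x0 = rng.uniform(-5.0, 5.0, size=2)
        # Merit check: how good is this set of initial points?
        if objective(x0) > merit_cutoff:
            continue
        # Distance check: how different is it from starts already used?
        if any(np.linalg.norm(x0 - s) < min_distance for s in used_starts):
            continue
        used_starts.append(x0)
        result = minimize(objective, x0, method="Nelder-Mead")
        if best is None or result.fun < best.fun:
            best = result  # keep the best of all local solutions
    return best

if __name__ == "__main__":
    best = multi_start()
    print("best objective:", best.fun, "at", best.x)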
The multi-start feature can be accessed from Model Settings | Non-linear model (XNLP) | Global optimization tab | Multi Start
The final reports are created using the best solution. The progress chart for a multi-start run is shown in the figure below.
The best optimal solution can also be observed in the XSLP-MultiStart.log file generated in the model folder after the multi-start run ends.
Keywords: None
References: None |
Problem Statement: How does the XLP matrix get its starting point and what are the best practices for managing it? | Solution: The XLP matrix gets its starting point as described below:
1. Flows and qualities available in an input solution file are used.
2. Qualities that are not in the input solution file are filled in from Table PGUESS.
3. Flows, capacities, severities, etc. that are not in the input solution file are filled in from the 'Default Initial Value' setting.
Best Practices:
· It is recommended to provide an input solution when possible. When making an input solution file it is recommended to run a case where as many purchases, sales, flows, capacities, etc. have activity as possible. Such a case may not reflect a realistic operating scenario; however, it can be helpful to have as a generic input solution.
· The Default Initial Value has a default value of 1. However, for many models a different value may be more appropriate. The value should be close to the expected value for the refinery pool flows, so it can depend on how the model vectors are scaled.
· Even when an input solution file is provided, the PGUESS properties can have an impact; therefore these should be within a reasonable range.
Keywords: XLP
input
References: None |
Problem Statement: This knowledge base article describes the best practice to volumetrically pool volume-based properties in a weight-based model.
This article specifically discusses pooling and recursion in submodels. This article does not discuss blending configured with the PIMS blending tables such as BLENDS and BLNMIX. | Solution: In a Distributive Recursion (DR) model, all properties are pooled on the same basis as the model. So in a volume-based model, all properties are pooled on a volume basis. In a weight-based model all properties are pooled on a weight basis. The only way to volumetrically pool volume-based properties in a weight-based DR model is to manually create submodel structure to perform the conversion.
In a model using PIMS-Advanced Optimization (PIMS-AO) there is a standard feature that facilitates this. This option is in MODEL SETTINGS | Non-Linear Model Settings (XNLP) | Advanced and is called Automatic Volume/Weight Quality Balancing. This feature should be selected to allow PIMS to automatically compensate for the inherent volume to weight conversions. In a volume-based model, weight-based qualities that exist in a submodel are updated using the current SPG (Specific Gravity) value. Similarly, Specific Volume (SPv) is used for updating volume-based qualities in a weight-based submodel. By default, this option is off. For this option to work properly in a weight-based model, all the SPG qualities, fixed or recursed, must be exposed through either the blend property tables or table PGUESS. Note that if a model has manually generated structure to perform weight/volume conversions, then such structure should be removed before turning this option on.
When this setting is ON in a weight-based model, all R/E/L/G rows in submodels that are written for volume-based qualities are converted from weight to volume. In a weight-based model, the material balance equations are in the basis of the model, which is weight, UNLESS they are sold in volume (i.e., have an entry in column VOL in table SELL) and then they are balanced in volume. The submodel equations are also in the same basis as the model, so they are in weight, in other words, all of the S-columns are in weight. This setting aims to convert all of the R/E/L/G rows to be written in volume, if the associated quality is volume-based. Thus, in these equations, the terms with S-column entries need to be converted from weight to volume, or multiplied by (1/VTW) or QSPvxxx. This is what happens to all of the equations in a weight-based model when the option is ON.
Let's consider a specific example of a row converted from weight to volume. Consider row RAFCAR1 from our Weight Sample model, which originally looks like:
-20.326500*SCD1ANS - 11.101500*SCD1NSF - 15.052200*SCD1TJL + 1.000000*SCD1AR1*QAFCAR1 = 0.000000
To convert this equation from weight to volume, we need conversion factors or recursed quality variables for each term. So ANS, NSF, TJL, and AR1 must all recurse SPG or have constant property values in BLNXXX or in ASSAYS. The conversion terms are all found, so the equation becomes:
-20.326500*SCD1ANS*QSPvANSAR1 - 11.101500*SCD1NSF*QSPvNSFAR1 - 15.052200*SCD1TJL*QSPvTJLAR1 + 1.000000*SCD1AR1*QAFCAR1*QSPvAR1 = 0.0000
where QSPvANSAR1 = 1.03694, QSPvNSFAR1 = 1.06923, QSPvTJLAR1 = 1.04698 and those values come from ASSAYS. QSPvAR1 is a recursed quality where QSPGAR1*QSPvAR1 = 1.
Let's consider another example of a row not in the crude unit. Consider row RAPIVRF, which originally looks like:
-1.000000*SDLCVR1*QAPIVR1 - 1.000000*SDLCVR2*QAPIVR2 + 1.000000*SDLCVRF*QAPIVRF = 0.000000
To convert, VR1, VR2, and VRF must all have constant or recursed data for SPG. The equation becomes:
-1.000000*SDLCVR1*QAPIVR1*QSPvVR1 - 1.000000*SDLCVR2*QAPIVR2*QSPvVR2 + 1.000000*SDLCVRF*QAPIVRF*QSPvVRF = 0.000000
where all three are recursed and the appropriate relationship between SPG and SPv for each are introduced.
This process is done for each equation in the submodels and warning W736, which is in the XLP matrix generator only, is given for those equations that are not converted because of missing terms.
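As a quick numeric illustration of the conversion above (using the coefficient and SPv value quoted from the RAFCAR1 example), the effective volume-basis coefficient of a term is simply the weight-basis coefficient multiplied by the stream's specific volume; the short Python check below is only for illustration.
# Values quoted from the RAFCAR1 example above (Weight Sample model).
weight_coeff_ans = 20.326500   # weight-basis coefficient on SCD1ANS
spv_ans_ar1 = 1.03694          # specific volume of ANS (from ASSAYS)

# Effective volume-basis coefficient after the automatic conversion:
# -20.3265*SCD1ANS becomes -20.3265*QSPvANSAR1*SCD1ANS, i.e. about -21.077.
print(round(weight_coeff_ans * spv_ans_ar1, 3))

# For the pooled stream AR1, QSPvAR1 is recursed so that QSPGAR1*QSPvAR1 = 1,
# i.e. the specific volume is the reciprocal of the recursed specific gravity.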
Keywords: None
References: None |
Problem Statement: Can I use the same name for multiple tables? | Solution: It is recommended to use a different name for each PIMS table - for example, don't use the same name for a crude unit and an assay table (as defined in Table ASSAYLIB). Duplicated table names can cause errors in report files.
Keywords: duplicate
name
table
References: None |
Problem Statement: When a user runs Aspen PIMS Platinum against a SQL database across a network, how do I set up the machine where the SQL Server database is located? | Solution: Set up the firewall on the machine where the SQL Server is located. The steps below are based on the Windows 7 operating system.
1. From Start | Control Panel | Windows Firewall, click 'Advanced settings' on the left.
2. In the new window, click 'Inbound Rules' on the left, then 'New Rule...' on the right.
3. In the new window, choose 'Port', then click 'Next >'.
4. In the next window, enter '1433' in the 'Specific local ports' field, then click 'Next >', 'Next >', 'Next >'.
5. In the last window, type in a name to indicate that this rule is for a SQL Server connection, then click 'Finish'.
In SQL Server, set up security:
From 'Security', right-click 'Logins', then select 'New Login...'. Enter the login name 'NT AUTHORITY\NETWORK SERVICE' and choose the PIMS database.
Keywords: Platinum
SQL
SQL server
connection
database
firewall
security
authority
service
network
References: None |
Problem Statement: What are the best practices for keeping track of the Aspen PIMS model history, key structures and parameters? | Solution: When working in an Aspen PIMS model, it is important for users to be able to document the key model structures, parameters used and model changes history. This is especially important when multiple users are involved.
For the model documentation process, three different approaches can be used:
Notes in the model Tables
Comments in specific cells
Detailed documentation attached to the model
Note: The effort in documenting the model should be focused on the Process Submodels because they are the most complex part of the model.
Notes in the model Tables
By starring out rows (the first character in column A for a row is a *), you can enter notes in the input tables. For simplification purposes, only short descriptions and summarized information should be added. Some key aspects that should be documented are:
Normal operational Parameters: feed properties, severity, conversion, feed composition, etc.
Expected Operational Range: include Min and Max values for feed properties and unit parameters. This is especially important when using Delta Base structure, as the linearization of the model becomes less accurate the further you deviate from the base conditions that are assumed when linearizing the model.
Description of the purpose of special structure: for example, rows that extend through multiple submodels (e.g. EBALXYZ) should be flagged, and it should be noted if tables CURVE or NONLIN, or non-linear formulas (PIMS Advanced Optimization), are used to modify a coefficient.
If there are more detailed documentation files attached under table USER for this particular unit or structure, it should be noted here. See below for a description on table USER.
Comments in specific cells
If a coefficient's source is not obvious or needs some clarification, a comment can be attached to that cell. The drawback of this approach is that the comment is not directly readable, and therefore it should be used only in specific cases.
Detailed documentation attached to the model
Long descriptions, graphs, mathematical background of some structures, etc. should be documented in the appropriate file format (MS Word, MS PowerPoint, PDF format, etc.) and attached to table USER, under the Miscellaneous branch.
The model history (major revisions, updates, etc.) should be logged here.
A flowchart of the process and a typical material balance should be attached for reference using this approach.
See Solution 125807 for a description of how to use this table.
Keywords: Table USER
Documentation
Model History
References: None |
Problem Statement: The user needs to change a quality specified in Table BLNNAPH/BLNREST through Table CASE. Once such a change is specified in Table CASE, PIMS should automatically regenerate the matrix. For example, the user writes the following structure in Table CASE to change the specific gravity of C3M [in model VOLSAMP]:
CASE      2             Change quality
*
TABLE     BLNNAPH
          TEXT          SPG
C3M       C3 Mixture    0.52
PIMS processes and solves the above case
a. without regenerating the Matrix
b. without giving any error or warning message, and
c. the SPG of C3M remains unchanged. | Solution: PIMS allows users to store the quality data in different files/worksheets with different names. For example, all the naphtha or similar streams going to the gasoline pool can be stored in Table BLNNAPH and the rest of the streams in Table BLNREST. All these worksheets are attached to BLNPROP on the model tree. Internally, PIMS ONLY recognizes BLNPROP as the table name storing the static quality data. Thus, if the user needs to make any change to the quality data specified in BLNNAPH or BLNREST, the table name to be specified in Table CASE is BLNPROP.
The correct structure for the problem mentioned above is given below:
CASE      2             Change quality
*
TABLE     BLNPROP
          TEXT          SPG
C3M       C3 Mixture    0.52
The PIMS Help file specifies the following rule when constructing Table CASE: The rows after a case identifier can contain references to a number of PIMS input tables. These tables are identified by the keyword TABLE in column A and with the appropriate ...PIMS input table name... in column B.
The PIMS input table name referred to above means the table name [specified in upper case] on the PIMS Menu tree to which the individual Excel sheets are attached. The reason the case did not modify Table BLNNAPH/BLNREST is that BLNNAPH and BLNREST are the names of the Excel sheets, NOT PIMS input table names.
Keywords: Aspen PIMS
Table BLNPROP
Table CASE
References: None |
Problem Statement: How to blend properties on a weight basis using Table WSPECS | Solution: In PIMS, all blending is done on a volume basis unless you specifically designate otherwise. This is true for both volume-based models and weight-based models. (Note this refers to BLENDING, not recursion or pooling. Recursion and pooling are handled on the basis of the model.) Of course there are properties, such as sulfur wt%, that should be blended on a weight basis. To allow for this, Table WSPECS must be set up.
Table WSPECS defines for PIMS which blend properties are to be blended on a weight basis. The basic format of the table is shown below:
The properties listed in the first column will be blended on a weight basis. To do this, PIMS uses the specific gravity (SPG) of each component to convert volumes to weights when calculating the blend property.
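As a sketch of the idea (the generic weight-average form, not necessarily the exact equation PIMS uses internally), a weight-basis blend of a quality Q converts each component volume V_i to weight using its specific gravity SPG_i:
Q_{\mathrm{blend}} = \frac{\sum_i V_i\,\mathrm{SPG}_i\,Q_i}{\sum_i V_i\,\mathrm{SPG}_i}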
In volume-basis models, you can use the FACTOR column in Table WSPECS to identify the relationship between a weight-basis quality and a volume-basis equivalent. This column also allows recursed volume-basis properties to be entered directly in weight-basis specification blending. In the example table above, this is done for property BNT. BNT (basic nitrogen) is a weight-basis property. In the model there exists the volume-basis equivalent of PNB (Pounds Nitrogen per Barrel). The first column identifies that BNT is to be blended on a weight basis. Since the entry in the TEXT column is an existing property (PNB), PIMS understands that this is a volume-basis equivalent for BNT. The entry in column FACTOR will be used to convert PNB values into BNT values before PIMS calculates the BNT value of a blend.
Some items to note:
1. If you do not have a volume-basis equivalent quality tag in the TEXT column then the FACTOR column entry is ignored for that row.
2. The FACTOR is the number that multiplies the volumetric quality of the component before it goes into the spec row.
3. If you have both qualities for a component (PNB and BNT already exist in the model) AND table WSPECS has both tags, the system will go through the FACTOR calculation regardless. However, if the quality is limiting then the system will report the weight quality for the component. The reported weight quality is NOT back calculated from the volumetric equivalent in the report so it could be inconsistent. So you may not want to specify a FACTOR in WSPECS when both properties already exist in the model.
4. If you take a shortcut and leave out the 349.8 (pounds per barrel of water) in your calculation of the volumetric equivalent quality, then FACTOR should contain that multiplier. That is, if you calculate the volume-based equivalent quality of a component as (weight fraction * SPG), then FACTOR should be 100 * (349.8 lb/bbl) / (2204.6 lb/ton), which is just 100 * VTW, or 15.87 (see the worked factor below). If you use (wt% * SPG), then leave the 100 out of FACTOR, which gives VTW, or 0.1587.
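Written out, the 15.87 quoted above is just the water-density unit conversion:
\mathrm{FACTOR} = 100 \times \frac{349.8\ \mathrm{lb/bbl}}{2204.6\ \mathrm{lb/ton}} = 100 \times 0.1587 \approx 15.87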
Keywords: WSPECS
References: None |
Problem Statement: What are the best practices related to the use of penalties to diagnose infeasibilities or help with convergence? | Solution: These best-practice recommendations relate to the following items:
Pre-configuration of tables for quick activation if needed
Recommended penalty values
Pre-configuration of tables for quick activation if needed
Several tables have columns that allow for automated penalty structure. The main tables are listed below. A more detailed list is available in Solution 115365.
TABLE     COLUMN    USE
BUY       IPRICE    Sell feedstock at IPRICE to avoid material out of balance or infeasibility if FIXBAL is turned on
SELL      ICOST     Buy product at ICOST to avoid infeasibility if demand cannot be met
CAPS      PENALTY   Allow capacity violations (capacity > MAX or < MIN), paying a penalty for it
PROCLIM   PENALTY   Allow process limit violations (process limit > MAX or < MIN), paying a penalty for it
SCALE     PENALTY   Allow violation of blending specifications (e.g. SUL > MAX), paying a penalty for it
Note: In addition, there are some other types of penalties that can be activated under Model Settings | Recursion | Penalties.
RPENALTY: Recursion Penalty. For a detailed explanation of this penalty, see Solution 127439.
VPENALTY: Virtual Pool Penalty. For a detailed explanation of this penalty, see Solution 127448.
XPENALTY: PSPAN or Property Span Penalty. This setting applies to multi-period models only and is used in conjunction with table PSPAN, which allows specification blends to be kept constant across periods.
The columns that enable the penalty structure should be already available in the tables and the data should be loaded. For normal operation, disable the Penalty column using a ! in front of the column name.
For example, for table CAPS:
If you have problems running the model, simply enable that column (by deleting the !) and run the model again.
Recommended penalty values
In general, the value of penalties should be high enough to ensure that a penalty is used only to overcome an infeasibility, not simply because it is economically more convenient. However, if the penalty value is too high, it may distort the solution path.
Therefore, a rule of thumb that can be used to assign the value of penalties is to take the highest valued product in table SELL, multiply that PRICE by 1.5 or 2 and use the resulting value as the penalty value. If the model still activates some penalties for merely economic reasons (i.e. it is better to pay the penalty to increase the OBJFN), then increase the value.
In the special case of ICOST (Table SELL) and IPRICE (Table BUY), use an ICOST that is around 10%-20% above the actual price of the product and an IPRICE that is around 10%-20% below the actual cost of the feedstock.
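For example (illustrative numbers only), if the highest-priced product in Table SELL sells for 100 $/ton and a feedstock costs 90 $/ton, these rules of thumb give:
\mathrm{PENALTY} \approx (1.5\ \mathrm{to}\ 2)\times 100 = 150\ \mathrm{to}\ 200,\quad \mathrm{ICOST} \approx (1.1\ \mathrm{to}\ 1.2)\times 100 = 110\ \mathrm{to}\ 120,\quad \mathrm{IPRICE} \approx (0.8\ \mathrm{to}\ 0.9)\times 90 = 72\ \mathrm{to}\ 81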
Keywords: Penalty
Penalties
References: None |
Problem Statement: The old Events table (non normalized) of Aspen Petroleum Scheduler (Orion) would automatically number the X_SEQ field. With the new structure, the table ATORIONEvents no longer does this.
With the system we are putting together, events will be updated through an external interface, and the user will also create events in Orion.
Do you have a mechanism to ensure the uniqueness of the X_SEQ column under these circumstances? | Solution: Aspen Petroleum Scheduler manages the table using the ATOrionKey table. This table has one record for each table which it manages. The last X_SEQ number used by the system is stored here and this number is incremented whenever a new record is added (the number is never decremented). The interface program could increment the appropriate record(s) in this table.
As a best practice, it is not recommended to use interfaces that write directly to the ATORIONEvents table, as this will not work well in a multi-user environment. We have two recommended approaches:
1. Use the ORION_MGR_* tables to stage the event imports so the user can review and accept them from Integration | Import | Events
2. Use Orion Event Automation Interface to add the events into a user's schedule as if the user had entered them manually.
Review the Orion Help file documentation for more details about the ORION_MGR_* tables and Orion event automation: the Event Imports Dialog box and Orion.EventUDT items.
ATOrionKey table description:
Use this table to store the last primary key used in selected database tables. This provides a way for concurrent users to add entries to database tables that need unique keys.
Example Table:
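If an external interface must allocate its own X_SEQ values, the increment can be scripted against ATOrionKey. The sketch below is illustrative only: the column names TABLE_NAME and LAST_KEY are assumptions (check your schema), and the update and read are wrapped in one transaction so two writers cannot reserve the same number.
-- Reserve the next X_SEQ for ATOrionEvents (column names are assumed; verify against your schema)
BEGIN TRANSACTION;
UPDATE ATOrionKey
   SET LAST_KEY = LAST_KEY + 1
 WHERE TABLE_NAME = 'ATORIONEVENTS';
SELECT LAST_KEY AS NewXSeq
  FROM ATOrionKey
 WHERE TABLE_NAME = 'ATORIONEVENTS';
COMMIT TRANSACTION;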
Keywords: X_SEQ key
ATOrionKey table
ATOrionEvent table
References: None |
Problem Statement: How to implement conditional selling in Aspen PIMS. | Solution: Conditional selling is a situation in which the planner would like to sell only a few of the finished products of a given type produced in the refinery. For example, a refinery produces three different types of gasoline (LG1, LG2, and LG3), but the planner would like to sell only the two that generate the maximum profit. This conditional selling mechanism can be implemented using the MIP feature available in Aspen PIMS.
The logical flow for implementing this in Aspen PIMS follows:
1) The given example requires that only two of the 3 possible gasoline types be sold. Therefore, based on the combination formula given below, three different combinations, or three different modes of selling, are possible:
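The count referred to is the standard number of ways to choose 2 products out of 3:
\binom{3}{2} = \frac{3!}{2!\,(3-2)!} = 3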
2) The three different selling options can be input to PIMS through Table ROWS, assuming the selling prices of gasolines LG1, LG2, and LG3 are $4, $5, and $6, respectively.
TABLE     ROWS
*         TEXT      SELLEG1   SELLEG2   SELLEG3   OPTION1   OPTION2   OPTION3
ESELLG1             4         5                   -1
ESELLG2             4                   6                   -1
ESELLG3                       5         6                             -1
In Table ROWS, the SELLxxx columns are PIMS variables which represent the activity of each gasoline in the optimal solution. The three E-rows drive the profit evaluated for the three modes of selling into the variables OPTION1, OPTION2, and OPTION3.
3) Now enforce only one mode of selling to be active at a time using Table MIP:
TABLE     MIP
*         TEXT          SOSSET    SOSTYPE
*         SOS Type 1
OPTION1                 1         1
OPTION2                 1         1
OPTION3                 1         1
The SOSTYPE 1 in Table MIP will ensure that only one member of the set can have non-zero activity.
Keywords: MIP
Conditional selling
MIP sell
References: None |
Problem Statement: My execution log has the warning below. What do I do about this?
*** Warning W063. Pool Collector CFP has Entry in Row CCAPCFP in Table SCFP | Solution: This warning is generated when a recursion pool collector column contains matrix elements other than recursion structure. A common mistake is to use the pool collector column to drive a capacity. For example, in the submodel below CFP is the pooled FCCU feed. The user has used this column (CFP) to be the pool collector and also used it to drive the capacity so the rate of CFP can be controlled with CCAPCFP.
The problem here is that the column CFP is a recursion pool collector column. Therefore it should not have any entries that are not related to recursion structure. This is especially important for PPIMS models where the pool collector column assumes the activity of the production plus the inventory in the period.
To fix this, use an alternative method for the desired calculation. For the example above, the modification shown below to the CCAPCFP row corrects this problem.
Keywords:
References: None |
Problem Statement: How to re-order the event screen list in Orion/MBO? For example, if the user wants to move the event screen 'Crude and Vacuum' to the first position, how do you do it? | Solution: Here is the procedure:
1. From the database table GANTT_LIST, copy the event screen IDs into table GANTT_GROUP.
2. In table GANTT_GROUP, use the ORDER_BY column to order them (see the SQL sketch below).
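The SQL sketch below illustrates the two steps; the column name ID and the screen identifier 'CRUDE_VAC' are assumptions for illustration, so verify them against your GANTT_LIST/GANTT_GROUP schema before running anything.
-- Step 1: copy the event screen IDs from GANTT_LIST into GANTT_GROUP (column names assumed)
INSERT INTO GANTT_GROUP (ID)
SELECT ID FROM GANTT_LIST
 WHERE ID NOT IN (SELECT ID FROM GANTT_GROUP);
-- Step 2: put the 'Crude and Vacuum' screen first by giving it the lowest ORDER_BY value
UPDATE GANTT_GROUP SET ORDER_BY = 1 WHERE ID = 'CRUDE_VAC';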
The event screens are then ordered as follows:
If the option below (on the Settings | Advanced tab) is checked, the 'Event Screens' will be sorted in alphabetical order and the ordering defined above will be ignored.
Keywords: order
GANTT
event screens
re-order
reorder
References: None |
Problem Statement: Changed Prediction History offset, now getting the message - Bad unbiased prediction | Solution: Here is an explanation of the correlation between PREDOFST and HISTOFST.
The key point is that the LBU history is a subset of the LDC history, a subtle point that necessitates setting HISTITVL and HISTOFST in the LDC consistently when setting PREDITVL and PREDOFST in the LBU.
Suppose the LDC is set up to collect data with an offset of 10 minutes and a duration of 40 minutes; this can be seen by looking at the LDC:HISTOFST and LDC:HISTITVL parameters. This means that if a sample comes in at 2:00, then the LDC will pull data out of the DC history file from 1:10-1:50 and put it into its own history file. When the LBU runs it uses this data.
Next, suppose the LBU is set up to analyze data with an offset of 5 minutes and a duration of 20 minutes; this can be seen by looking at LBU:PREDOFST and LBU:PREDITVL. These numbers need to define a subset of the LDC window. The LBU then tries to retrieve data from the LDC history from 1:35-1:55, but of course the data from 1:50-1:55 is not there. The criterion for good predictions is 75%, as specified by CONFIG:DCPCTGOOD. You need to configure the history storage parameters to line up better.
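A compact way to state the requirement (assuming all offsets and intervals are in the same time units) is that the LBU prediction window must lie inside the LDC history window:
\mathrm{PREDOFST} \ge \mathrm{HISTOFST} \quad\text{and}\quad \mathrm{PREDOFST} + \mathrm{PREDITVL} \le \mathrm{HISTOFST} + \mathrm{HISTITVL}
In the example above, PREDOFST = 5 < HISTOFST = 10, which violates the first condition and is exactly why the 1:50-1:55 data is missing.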
Also see the Aspen IQ Online Help for additional information on HISTOFST and PREDOFST.
Keywords:
References: None |
Problem Statement: This article discusses the ability to use Aspen SQLplus to retrieve/analyze data from the Aspen Production Record Manager (formerly known as Aspen Batch.21). | Solution: Distributed with the Aspen Production Record Manager product is a .chm (help) file called atbatch21applicationinterface.chm It contains several VB examples of such as the one listed below. This example contains several comments and demonstrates how to Get a Batch list then get Data for each Batch found. The query is called BatchListAttributeValueQuery:-
LOCAL DATA_SOURCES, I INT, K INT, L INT;
LOCAL lclBatchQuery, lclBatchList;
LOCAL lclStartTime, lclEndTime;
LOCAL lclCharSpec, lclBLAVQ, lclBatchDataList, lclBatchData, lclCharValuesList, lclCharValues, lclCharValue;
DATA_SOURCES = CREATEOBJECT('ASPENTECH.BATCH21.BATCHDATASOURCES');
-- Get query object
lclBatchQuery = DATA_SOURCES('Batch21 V7.2').AREAS('Batch Demo Area').BATCHQUERY;
lclBatchQuery.CLEAR;
-- Configure query
lclStartTime = CURRENT_TIMESTAMP-32:40;
lclEndTime = CURRENT_TIMESTAMP-08:40;
lclBatchQuery.CHARACTERISTICCONDITIONS.ADD('BATCH NO',1,4,'5');
lclBatchQuery.CHARACTERISTICCONDITIONS.ADD('End Time',1,3,lclStartTime);
lclBatchQuery.CHARACTERISTICCONDITIONS.ADD('End Time',1,2,lclEndTime);
lclBatchQuery.CHARACTERISTICCONDITIONS.Clause = '(1 AND 2 AND 3)';
-- Retrieve list of batches
lclBatchList = lclBatchQuery.GET;
-- Output list of batch IDs (optional)
FOR I=1 TO lclBatchList.COUNT DO
BEGIN
WRITE lclBatchList(I).ID; -- internal batch id (handle)
EXCEPTION
WRITE 'No Batch ID';
END
END
-- Configure data requests
lclBLAVQ = lclBatchList.AttributeValueQuery;
lclCharSpec = lclBLAVQ.CharacteristicSpecifiers.Add('%',0);
lclCharSpec.AdvancedQuery.UseLikeComparison = True;
-- Get data
lclBatchDataList = lclBLAVQ.GetData;
-- Show data
FOR I=1 TO lclBatchDataList.Count DO
lclBatchData = lclBatchDataList.Item(I);
lclCharValuesList = lclBatchData.Characteristics;
FOR K=1 TO lclCharValuesList.Count DO
lclCharValues = lclCharValuesList.Item(K);
WRITE lclCharValues.RequestedCharacteristic.Description;
FOR L=1 TO lclCharValues.Count DO
lclCharValue = lclCharValues.Item(L);
BEGIN
WRITE lclCharValue.ReturnedCharacteristic.Description;
WRITE lclCharValue.FormattedValue;
EXCEPTION
WRITE 'Bad data';
END
END
END
END
Keywords: None
References: None |
Problem Statement: This knowledge base article describes the procedure to log on to the Rename Tag Utility for Oracle or SQL databases. | Solution: To log on to the Rename Tag Utility:
1. Start the Rename Tags utility
2. From the Model Type drop down list, select the application model type you are working with.
3. In the Database field, enter the location and name of the model database. You can click Browse to navigate to the location and select the desired file
4. In the Excel Unit field, enter the location and name of the Units file. You can click Browse to navigate to the location and select the desired file
5. After selecting the DSN location and Excel file, the first login (User Name and Password) should be a user ID and password defined in the model, the same as when opening your APS model (this is stored in the USERS table in the database).
6. After clicking OK, if you are accessing an ODBC data source and have not provided a password, you will see the DSN information dialog box prompting you to enter a password (for the SQL database). Do so and click OK.
Keywords: ATRenameTags Utility
References: None |
Problem Statement: Any release between two major releases is called a patch. For example, the upgrade from 2006.5 to V7.1 is a major release, while a new build (within a major release) delivered to a client is called a patch. Typically a new patch file is sent to the customer through a link. | Solution: When a user installs a new patch file, he/she needs to make sure it is installed in a clean environment, which means there is no PIMS application on the machine. Otherwise, PIMS may not work properly after the upgrade.
Here is the procedure to install a new PIMS patch:
1. Use Aspen uninstall tools to uninstall PIMS
2. Make sure the application directory is completely clean; it is located at
C:\Program Files\AspenTech\Aspen PIMS
3. Download PIMS patch from website
4. Install
The key part is step one. When you upgrade to a major release, the installation package takes care of the uninstall before installing. This uninstall step is not included when you install a patch, which is why you need to perform it manually.
Keywords: patch
release
install
uninstall
References: None |
Problem Statement: There are several ways to turn off a submodel in my PIMS model such as:
1) Have a capacity on the feed rate and set the MAX = 0
2) Use Table BOUNDS to set the MAX on the feed column to zero
3) Similar to a capacity, a Process Limit (PROCLIM) could be applied to the feed rate and it could
be set to MAX = 0
4) The Submodel could be disabled using Table DISABLE | Solution: The best practice for turning off a submodel is the use of a capacity and setting its maximum equal to zero. This is best practice because it allows PIMS to make adjustments to the matrix for you that can prevent infeasibilities. No other method provides this functionality. This feature only works if the best practice naming convention for rows in a submodel has been followed. That convention is to make sure that E-rows in the submodel end with 3 character tag that matches the tag used for the unit capacity.
For example, in the submodel below, the naming convention has been followed as highlighted in red.
When CCAPDHT has a MAX value of zero, then PIMS will automatically remove error distribution from the E-rows that end in the same tag (ie, ECHGDHT and ESULDHT). This prevents those E-rows from potential infeasibilities when there is no activity in the submodel columns.
Keywords: shutdown, close, turn off, capacity, E-row
References: None |
Problem Statement: There are many occasions when users would like to be able to purchase a recursed pool (i.e. FCC Feed, Coker Feed, etc.). There is, however, some confusion about how this capability actually works.
The key thing to remember is that the qualities of the purchased stream are defined by the entries in table PGUESS. These qualities are used to build the initial matrix and are NOT updated during the recursion process. The end result is that the model is purchasing a stream of a fixed quality. Some users have mistakenly believed that we were updating the quality of the purchased stream to match the internal production. | Solution: In order to reduce the chance of bad quality estimates getting into a model, we recommend that users not purchase the recursed pool directly. The best way to handle this situation is to create a new stream tag for the purchased feed and define its qualities in a BLNPROP table. You can then simply add the new stream tag to the recursed pool.
Keywords: None
References: None |
Problem Statement: I noticed in my execution log the following message:
Some Recursed Qualities Limited by MIN/MAX Values. See Iteration Log and Help.
What does this mean and how do I eliminate it? | Solution: This is a message that should not be ignored. PIMS is telling you that it is calculating a property value that is outside the allowable range and therefore is resetting that property to the limit of the range. This can affect the quality of your results. To fix this, please follow the steps below:
1) Open the iteration log from the run and go to the very bottom (we are only interested when this happens in the last recursion pass). Going up from the bottom, look for messages like Property ABC in stream XYZ limited by MIN value. Note which properties are cited and whether it is the MIN or the MAX that is limiting.
2) Run a model validation and open the validation.lst report. Go to the Recursed Property Range Report and find the property(ies) cited in the iteration log. You want to see what the defined range is and this section of the report will tell you the defined MIN, MAX and which tables defined these values.
3) Decide how to open the property ranges. For example in the message above, the property ABC would need to have the MIN value decreased. Note the current MIN and decide on a new value.
4) The new range can be set in several Tables such as BLNPROP or SCALE. If one of these is already the source of the limit, then go to that table and adjust the current value. If the current limit is from a submodel or Table ASSAYS, then use BLNPROP or SCALE to define the new limit. Follow the instructions below to make the change in the desired location (use only 1 of these methods, but either is fine).
a. In Table BLNPROP make a row called MIN (or MAX) and in the desired property column, enter the new MIN (or MAX) limit.
b. In Table SCALE make a row with the desired property name and in column MIN (or MAX) enter the new limit.
5) Once the new limit(s) is set, re-run the model. If you continue to get the message, repeat the process above until the message disappears. You may notice that you have to open the ranges to unreasonable values for the message to stop. For example, perhaps property ABC is the % sulfur in a stream and the message only goes away if you reset the MIN to -10. Obviously the real value of a composition cannot be negative and this tells you that you need to investigate why PIMS is calculating such a value. This could be the result of bad input data, typos, etc. Once the true source is found, update any unreasonable limits back to a reasonable range and re-test.
Keywords: Recursed Qualities
Limited
Recursed Property Range Report
References: None |
Problem Statement: Do you have any tips to help ensure a smooth upgrade of Aspen PIMS? | Solution: If you currently have Aspen PIMS on your machine and are upgrading to a newer version, below are some tips to help make the process smooth.
1) If the major release version number has changed (ie, PIMS 2006 (PIMS 17) vs PIMS V7.1 (PIMS 18)), then a new license file is required. Customers should request their license files via support.aspentech.com. This does take some time to process, so please make the request in advance of the planned upgrade.
2) For standalone licenses, make sure there is only one license file in the license directory. You can leave other files in there as long as you change the file extension, etc. For several tips on checking the configuration of your licensing system, please see Solution 130050.
3) If you are upgrading from one major version to another (PIMS 17 to PIMS 18) you do not have to un-install the old version.
4) If you are upgrading within a major version (PIMS 18.1.11 to PIMS 18.x.y) then you should un-install the old version before installing the newer version.
5) If you un-install the existing version of PIMS, DO NOT un-install SLM. Only un-install PIMS. This can be done by going to START | Programs | Aspentech | Uninstall Aspentech Software. This will provide a list of currently installed Aspentech software. Select PIMS. DO NOT select SLM.
Keywords: upgrade
installation
uninstall
un-install
References: None |
Problem Statement: Can you give us some clean up recommendations for Orion database version 10.0 or higher? For instance, we need a list of tables to be purged. | Solution: As part of the model maintenance activities for the new Orion database schema (version 2006 and higher), you need to periodically remove old data as follows:
_EVENTS table according to STOP
EV_* records which have SEQ corresponding to Z_SEQ numbers deleted from _EVENTS
Other (Results ) tables beginning with _ by DATE_
ORION_MGR_EVENT_IMPORT table according to STOP_DATE
Other ORION_MGR tables keyed to the same combination of APPLICATION_ID and MOVEMENT_ID as records deleted from ORION_MGR_EVENT_IMPORT table
ATORIONEvents* table according to STOP
Other ATORIONEvents* tables where they have EVENT_XSEQ same as records deleted from ATORIONEvents
CRDINV, TNKINV, PLINV, TANK_SERVICE, PARAMINV tables by DATE_
AB_BLN_EVENTS table according to STOPDATE
Other AB* tables where SEQ is same as SEQ of deleted AB_BLN_EVENTS records
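The sketch below shows the general deletion pattern in SQL (SQL Server flavored) for two of the items above. The cutoff date is arbitrary, the child table name ATOrionEventsProperties is hypothetical (use your actual EVENT_XSEQ-keyed child tables), the parent key is assumed to be X_SEQ, and you should back up the database before running any cleanup.
-- Delete child records keyed by EVENT_XSEQ first, then the parent event rows older than the cutoff
DELETE FROM ATOrionEventsProperties   -- hypothetical child table; repeat for each EVENT_XSEQ-keyed child
 WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATOrionEvents WHERE STOP < '2011-01-01');
DELETE FROM ATOrionEvents
 WHERE STOP < '2011-01-01';
-- Purge old initial-inventory snapshots by date
DELETE FROM CRDINV
 WHERE DATE_ < '2011-01-01';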
Keywords: Orion tables
Model Maintenance
References: None |
Problem Statement: After upgrading to Office 2010, some users have experienced tremendous slowness in running/opening some particular report writer templates. What could be affecting the performance of Aspen Report Writer in Office 2010? | Solution: Check the template and see if it contains many formulas dependent on other cells.
For example, let us assume there are two worksheets, the first is called 'PIMS TAGS' while the second is called 'Key Indicators'. Suppose in a cell, let's say C5 in a second sheet 'Key Indicators', the formula is ='PIMS TAGS'!I15. If multi-threaded calculation is disabled, Excel will calculate the first sheet 'PIMS TAGS', then calculate the second sheet.
When Excel calculates cell C5 in 'Key Indicators', the dependent cell 'PIMS TAGS'!I15 has already been calculated, so Excel can calculate it by the dependent cell value. But if multi-threaded calculation is enabled, cell 'Key Indicators'!C5 and' PIMS TAGS'!I15 can be calculated in different processors. If Excel calculates 'Key Indicators'! C5 before 'PIMS TAGS'!I15, Excel needs to calculate the dependent cell first. In this case, PIMS TAGS'!I15 will be calculated twice. That is why multi-threaded calculation slows down the performance of the template.
The solution is:
Open the file in Excel and uncheck the Formulas option 'Enable multi-threaded calculation' in the Excel Options dialog, then click OK and save the workbook. The next time this workbook is opened, it should be much faster.
Keywords: performance, slowness, Enable, multi-threaded calculation, multi-threaded, formula, dependencies
References: None |
Problem Statement: After installing Aspen Report Writer V7.3.03 and Office 2010, calculating cells in a file after Excel start-up is very slow if the file contains Aspen Report Writer (ARW) functions to read any Results.mdb database. | Solution: To resolve this problem, open the file in excel and uncheck 'Formulas' option 'Enable multi-threaded calculation' in Excel Options dialog.A Then click OK and save the workbook. This will improve the speed of ARW resolving the cell values.
The slower performance usually happens when a template contains many formulas dependent on other cells. For example a cell C2 in a second sheet 'Key Indicators' depends on the first sheet 'TAGS' cell, the formula is ='TAGS'!B5. If multi-threaded calculation is disabled, Excel will calculate the first sheet 'TAGS', then calculate the second sheet. Therefore, when Excel calculates cell C2 in 'Key Indicators', the dependent cell 'TAGS'!B5 has already been calculated. Excel then calculates cell C2 by the dependent cell value.
However, when a multi-threaded calculation is enabled, cell 'Key Indicators'!C2 and 'TAGS'!B2 can be calculated in different processors. If Excel calculates 'Key Indicators'!C2 before 'TAGS'!B5, Excel needs to calculate the dependent cell first. In this case, 'TAGS'!B5 will be calculated twice. That is why multi-threaded calculation can slow down the performance of the template.
If a user's template doesn't contain too many formulas which depend on other cells containing ARW functions, multi-threaded calculation will boost the performance.
Keywords: multi-threaded, multi-threaded calculation, enable, disable, Report Writer, RWm ARW, Excel
References: None |
Problem Statement: I have materials out of balance in my PIMS model solution. What is the cause of this problem? | Solution: Materials go out of balance in a PIMS model solution because the model makes (or is forced to buy) materials that have no destinations. The PIMS Execution Log indicates material imbalances with messages like the following:
THE FOLLOWING MATERIALS OR UTILITIES ARE OUT OF BALANCE IN THE SOLUTION
MATERIAL XXX BY 3.800
where XXX is a material tag.
The causes of material imbalances can be structural, flow-related, quality-related, or (more rarely) economic.
A. Structural Causes
Structural problems are the easiest material imbalances to diagnose and correct. This is caused by material balance rows (VBAL or WBAL) with negative entries but no positive entries. In other words, materials can be purchased or produced in the model with no possibility of being sold or consumed.
To look for structural imbalances, run a Validation Summary report for your model. PIMS will indicate structural imbalances with the following message:
*** Warning. Incomplete Disposition for Stream XXX
Also, the Stream Disposition Map section of the Validation Summary report will indicate the imbalance with a minus sign (-) in the CHEK column.
To remedy a structural imbalance, you may try the following:
1. The material destination may have been omitted by mistake. If so, specify the correct destination, which may be a submodel, or a blend (via Table BLNMIX), or sales (via Table SELL).
2. If the material is a waste stream of little interest, you can always put it to sales (via Table SELL) at zero cost.
3. If the material represents a loss of volume or weight, try changing the material tag to LOS. This tag is reserved in PIMS to represent a loss, and should stop the Materials out of Balance messages.
B. Flow-related Causes
If the problem is not structural, it may be caused by flow constraints. You may investigate the following causes:
1. Supply-side constraints:
a. If the material is purchased, there may be a MIN or FIX constraint in Table BUY that forces the model to buy more than the model needs.
b. If the material is produced, the model may be forced to produce too much. Is there a bound or capacity that forces the model to produce an unrealistically large amount of the material? Is there an unrealistically large yield coefficient in a submodel?
2. Demand-side constraints:
a. If the material is sold, there may be a MAX or FIX constraint in Table SELL that prevents a realistic amount of sale.
b. If the material is consumed in a submodel, there may be a bound or capacity that prevents a realistic amount of consumption.
c. If the material is consumed in a blend, there may be a constraint that prevents a realistic amount of the product from being blended. Is there an unrealistically small MAX or FIX value for the blend in Table SELL? You might also check Table BOUNDS for bounds on column names starting with B.
C. Quality-related Causes
Having investigated structural and flow-related causes and still not solving the problem, you may need to investigate quality-related causes. These are surprisingly common and are sometimes difficult to find. Quality-related material imbalances fall into two categories:
1. The imbalanced material does not have enough of a good quality.
Examples of this include a gasoline blending component with an undefined octane property and a reformer feed with a 0 value for NPA.
2. Imbalanced material has too much of a bad quality.
Examples of this include too-high contaminant values (like SUL or metals) for blend components.
Another quality-related cause can be incorrect blend specifications in Table BLNSPEC.
D. Economic Causes
Economic causes for material imbalances are relatively rare, but we do observe them occasionally. Sometimes the model will dump a low-valued stream rather than send it to its correct destination. Please note that the model does not pay a direct penalty for this dumping, but pays indirectly through loss of revenue. If the stream has little economic value, the model sees little benefit to processing it. This may be related to a locally optimal solution.
A small material imbalance with an economic cause can usually be solved by fixing the material balance row, either via the FIXBAL setting or by using the FIX column in table ROWS.
Keywords: FIXBAL
out of balance
References: None |
Problem Statement: Best practice is to eliminate very small and very large coefficients in Aspen PIMS model data to avoid scaling issues within the optimizer. How do I identify such coefficients? | Solution: In general we recommend that coefficients smaller than 1e-6 or 1e-7 be avoided. Data of this nature sometimes creeps into the model via Assay data. Eliminating such small coefficients can improve model convergence by avoiding scaling issues within the optimizer.
You can identify such coefficients using the Matrix Analyzer with the following steps:
1. After running PIMS, open the initial matrix with the Matrix Analyzer tool
2. On the top menu, click on TOOLS | Program Options to open the dialog
3. On the General tab, check the box for Filter Tolerance
4. Select whether you would like to search for Large Coefficient, Small Coefficient, or Large Ratio and designate your filter threshold, then click OK
5. The right side of the Matrix Analyzer dialog will populate with the rows and columns in which there are coefficients outside your threshold.
You can now use this information to strategically eliminate the coefficients by replacing them with zero where appropriate.
Keywords:
References: None |
Problem Statement: Process Limits created through table PROCLIM are like blending specifications in the submodels; the matrix formulation is identical.
For each ZLIMXYZ or ZPRPXYZ row, three matrix rows are created: E, L and G rows to represent the calculation. See Solution 129960 for details on the formulation.
By default, all the columns that have coefficients in the ZLIMXYZ or ZPRPXYZ row will be included in the E row, which represents the denominator of the division that PROCLIM is essentially doing.
You can declare vectors FREE so that they do not appear in the E-row (denominator), thereby increasing the range of applications of table PROCLIM. See Solution 114406 for details on this situation.
Solution
An example will illustrate how to apply this principle (i.e. that FREE vectors do not appear in the E-row, or denominator).
Suppose that on the Cat Cracker feed pool you want to limit the ratio of Atmospheric Resid to the Gasoils. In this specific example we have 5 gasoils (LV1, LV2, HV1, HV2, DCG) and one Atmospheric Resid (AR2).
Therefore the limit we want to impose is on this ratio:
AR2 / (LV1+ LV2 + HV1 + HV2 + DCG)
By default, a PROCLIM formulation would limit the following ratio, i.e. it will include AR2:
AR2 / (LV1+ LV2 + HV1 + HV2 + DCG + AR2)
To be able to work with the first ratio, you need to declare column AR2 as FREE in the submodel, so that it does not get incorporated into the denominator.
The property that is used for the process limit is % of AR2, which is 0 for all gasoils and 100% for the AR2.
Enter a 1 in row FREE under column AR2 to declare that column as free (i.e. non bounded).
Note: Now that this variable is FREE, it could take negative activity. However, there is no economic incentive to do so, so there should be no problem with this formulation. To make sure that it will always be greater than zero, you can define a G row with a 1 in column AR2. Since there are no other entries in the G-row, this will force the column to have only positive activity.
In table PROCLIM, enter the limit:
This is the resulting matrix formulation:
As can be seen, the AR2 column is not included in the ELIMARG row, which represents the denominator, but it is included in the GLIMARG and LLIMARG rows, which represent the numerators of the ratio.
Keywords: Table PROCLIM
Process Limits
Free vectors
References: None |
Problem Statement: What are the advantages and disadvantages of using CV costing to drive internal LP or QP optimization? | Solution: In V7.1 and later versions of the Aspen DMCplus controller, there is an option to use CV costs in addition to LP costs to economically drive the controller toward the most profitable operating points.
Advantages:
· The cost can be directly set using the economic variables such as feeds, products and utilities.
· Eliminates the need for an external spreadsheet to calculate MV costs, thus keeping cost information self-contained within the controller.
Disadvantages:
· The size of the controller increases due to the addition of the economic variables.
· Care needs to be taken to ensure that the economic variables do not come into control at any time, because they are not in the controller for control purposes.
· It may be harder to determine what is driving an MV without a direct cost associated with that MV.
The way the objective function is written, it is possible to have both MV and CV costs in the controller. If they are consistent with each other, the controller will still give the same answer with the absolute value of the objective function twice as large. The problem comes in where the costs are not consistent. The controller will still choose the lowest cost solution, but the user may be confused when nothing changes in the solution after the 'wrong' cost is adjusted. Thus it is recommended to zero out the MV costs if CV costs are used.
Keywords: None
References: None |
Problem Statement: What are parallel groups in colinearity analysis tools (SmartAudit, colinearity in APC Builder) and how should they be used? | Solution: A controller with large RGAs could result in unpredictable behavior and/or LP internal error in run mode. AspenTech colinearity analysis tools, either SmartAudit or colinearity tool in APC Builder, are used to fix steady state gains of 2x2 sub-matrices with large Relative Gain Analysis (RGA) number in a controller's model.
In many applications, parallel MVs or redundant CVs are part of the model. Moreover, they usually exist in non-square matrix formulations (3x2, 4x2, 4x3, ...). An example of parallel MVs is multiple passes on a furnace; examples of redundant CVs include temperatures and compositions in a column.
From an engineering point of view, it is desirable that each of the non-square gain matrices be of a uniform gain ratio (only one degree of freedom). A parallel group in the Colinearity analysis tool is defined to treat the variables as a single MV/CV in the overall model. For example, if column overhead temperature and top product purity are parallel CVs present in the system, a parallel group comprising of these two CVs and all MVs that affect these CVs in a similar manner would be used to indicate to the controller that the two CVs are essentially identical. Thus when we repair the square matrices, the software modifies the pivoted MV/CV gain of a parallel group to fix the remaining 2x2 sub matrices in the model. After all the square matrices are fixed, the parallel group is fixed by populating the pivoted MV/CV repaired gain to the rest of the MVs/CVs in the group to achieve uniform gain ratio.
Process knowledge is essential in order to recognize parallel MVs or CVs and deciding the pivoting variable. Don't attempt to group unrelated MVs and CVs together if it does not make physical sense. This is true even in cases where the gain matrices show them to be related. This step can be skipped if no parallel groups exist in the model.
Keywords: SmartAudit
Colinearity
Parallel group
References: None |
Problem Statement: Why do I encounter errors while creating event using the edit in Excel functionality? | Solution: For example when a new event is created in Excel, the following error may pop up.
The above error is prompted when the user enters a sequence number while creating the event. APS will automatically create a sequence number for the new event; therefore, a sequence number should not be entered when creating a new event.
Keywords: Event ignored
Edit in excel
Edit in excel error
Not a valid event
References: None |
Problem Statement: What is the syntax to enter tag type fields in automation code? | Solution: For crude pipeline shipments, crude runs, and blend events the source tanks are entered in a comma delimited format where each tank identifier is followed by the volume fraction of the component. Note that each entry must be exactly 11 characters long including the comma. For example:
T102 0.733,
The above is the old format, which had to be followed before the introduction of long tags. With long tags, counting backwards from the comma to the end of the tag name, there should be exactly 6 characters; the tag name itself can have any number of characters. For example:
a.cId = KIR 1.00,
Here, if we count the characters backwards from the comma: 0, 0, ., 1, blank space, blank space, there is a total of 6 characters. This must always be 6 characters. From that point onwards, the tag can have any number of characters.
Similar syntax should be followed for the destination tanks of crude receipts and pipeline crude receipts.
Keywords: Problems with APS automation
Automation not working
Create event not working
References: None |
Problem Statement: When an APC application (DMCplus, IQ, etc.) encounters this error,
CIMIO_MSG_RECV_CHK, Error checking connections
CIMIO_SOCK_CHK_RECV_FAIL, Error peeking at socket
System Error, errno=2 No such file or directory
CIMIO_SOCK_RD_BROKEN_PIPE, Connection was terminated by peer
WNT Error=10054 An existing connection was forcibly closed by the remote host.
It usually means that the CIMIO for OPC server (the OPC client) has either crashed or is unresponsive to requests. Listed below is a systematic way of troubleshooting such problems. | Solution: · First clear out all the log files and start from the beginning.
o Make back-up copies of the cimio_msg.log (C:\Program Files (x86)\AspenTech\CIM-IO\log) and valid.err (C:\ProgramData\AspenTech\APC\Online\etc). Then delete the original valid.err and cimio_msg.log files. We are assuming that CIMIO for OPC is running on the APC server. If it is not, make sure you are on the right server.
o Open Task Manager and observe how many AsyncDlgp.exe are running. These are your CIMIO for OPC processes. Add the Process ID column so you can identify them.
· Launch CIMIO for OPC Startstop Utilities by navigating to C:\Program Files\AspenTech\CIM-IO\io\cio_opc_api and double-clicking on StartStop.exe.
· Stop the offending CIMIO for OPC server, CIOEVIQ in this example case, as shown below. Take a look at the Task Manager, you may or may not see a decrease in the number of AsyncDlgp.exe running (depending whether CIOEVIQ is still running or not).
· Restart the CIOEVIQ.
You should see a new AsyncDlgp.exe in the task manager. Note the PID of the new AsyncDlgp.exe
· Perform a test_api with a single tag on the CIOEVIQ logical device. Check the task manager to see if the AsyncDlgp.exe is still running.
· If everything is still OK, load the APC application. If you get the same error, the AsyncDlgp.exe for CIOEVIQ has crashed again.
· Examine the cimio_msg.log and valid.err for any clue on which tag caused the AsyncDlgp.exe to crash.
· Look in Event Viewer > Windows Logs > Application to see whether there is any error message indicating that AsyncDlgp crashed.
· If none of the logs shows the problem, you can try to remove all the DCS tags from the APC application and try to load it. Remember to restart CIMIO for OPC if it crashes. If it loads, you can then add the tags back in slowly and see which tag causes the problem.
· Alternatively, you can try to read all the tags of this APC application using test_api. See Solution 103099 for how to read multiple tags using test_api.
Keywords: CIMIO for OPC
Asyncdlgp
References: None |
Problem Statement: What is the best way to enable workspaces to be opened quickly? | Solution: For deployments with large number of workspaces and users, users will be able to open workspaces faster if
Users are defined to roles using domain groups instead of as individual users
Individual user accounts that have been disabled are removed from roles assignments
Local groups on the ABE server are used to define roles if a domain group can not be used.
Using groups instead of individual user accounts can reduce the size of the user list for each role.
Keywords: best practice, connect to workspace, server busy message
References: None |
Problem Statement: Our Aspen Petroleum Scheduler (Orion) database sometimes becomes corrupted. The problem is sometimes so severe that we need to create a new database.
Can you give any advice on how to avoid these problems? | Solution: We have the following suggestions:
Run Check Disk on the computer's hard drive to eliminate bad sectors. Defective disks can cause corrupted files.
Repair/Compact the MS Access, SQL Server, or Oracle database to reduce file size and maintain performance. We recommend that you do this on a weekly basis.
Review the interfaces that write external data into the Orion database.
Back up the Orion database, ideally on a daily basis, and at least on a weekly basis.
Periodically delete old data to keep the database to a manageable size. Old data should be deleted from the results tables, the initial inventory tables (CRDINV, TANKINV, PLINV, PARAMINV), and from the EVENTS tables. For MS Access databases, see Solution 119062, How does cleanup option work, for more information.
Define roles, responsibilities, and access level for all users of the Orion application.
Keywords: -Database maintenance
-Corrupt database
Cleanup database
References: None |
Problem Statement: Aspen Process Explorer (aka APEx or PE) as a product has been around since late 1996. It was built using Microsoft COM technology. Since its early days, its code has been re-architected several times to conform to ever-changing Microsoft standards and provide positive user experience in terms of robustness and speed.
Since those early days we've been focusing, among other things, on getting lots of data, fast, in a point-to-point, LAN based environment. However, with more and more customers running Process Explorer in a WAN environment, often with centralized role-based security and ADSA (Aspen Data Source Administrator) servers located in distant corporate offices, and lots of new features, there's been some degradation in performance, especially when it comes to displaying data from WAN-based data sources.
This Knowledge Base article provides suggestions that, if implemented correctly, may improve performance of Aspen Process Explorer. | Solution: · Try using a locally configured data source in Process Explorer. This will bypass the ADSA and additionally will remove 50% of the DCOM and DNS calls that Process Explorer makes to ADSA to find the IP.21 server. If you see distinct improvement then you know that there is an issue with how DCOM is working or with your DNS reSolution speed.
· Follow the DNS instructions below to see if your DNS management is up to expectations.
· Try using IP addresses instead of server names in your data sources (in ADSA). This will again separate the name resolution problem from your Aspen software.
· If name-to-IP-address resolution is a problem, add the relevant server/client names and IP addresses to your Hosts file. This should only be treated as a temporary measure until the root of the problem is discovered and corrected.
· Don't (or avoid) using <any> as a data source.
· Change trends to .apx files instead of .atplot files (yes, it will take two sets of central files until you have converted everyone).
· Install the latest version of PE, preferably the same as your Aspen InfoPlus.21 (IP.21) version (this should work better than a mixed bag of older versions of PE connecting to a more recent version of IP.21).
· Set UAC (User Account Control) to low Never notify (Here's a link for how to do this in Windows 7: http://windows.microsoft.com/en-us/windows7/turn-user-account-control-on-or-off )
· Set DEP (Data Execution Prevention) to Turn on DEP for essential Windows programs and services only (Here's a link for how to do this in Windows 7: http://windows.microsoft.com/en-us/windows7/Change-Data-Execution-Prevention-settings?SignedIn=1 )
Is your DNS management up to expectations?
Before actually deploying Manufacturing and Execution System (MES) applications, test the connectivity between relevant computers. Obviously, the computers that host MES applications must be able to communicate with each other, but it is sometimes less obvious that other computers are involved. For example, when an Aspen Process Explorer computer connects to an InfoPlus.21 host computer, a number of other computers may be contacted, such as computers hosting one of the following applications:
· ADSA
· Aspen Framework or Local Security Server
· DNS
· WINS
· Domain controllers (to authenticate users)
The primary tool used for testing basic connectivity is the ping utility. The ping utility sends packets to a user-specified target computer, waits for the target computer to reply, and then prints out a message for every response. To ping another host from a command prompt, you can use its IP address: ping 10.32.27.18 . You should also try pinging the target computer by specifying its host name (for example, molly) instead of IP address: ping molly . If the target computer responds, then the user knows that the lower level networking services and hardware are functioning. However, no response or a slow response (more than 0.5 seconds) indicates a problem. Ensure you test connectivity from both ends. The fact that computer A can quickly access computer B does not necessarily mean that computer B can quickly access computer A. It is possible that name resolution occurs quickly during testing because the name has recently been resolved and is still in the NetBIOS cache. Before testing connectivity, flush this cache by issuing the following command from a command prompt: nbtstat -R . The current contents of the cache can be viewed by using the following command: nbtstat -c . Note: The NSLOOKUP utility can be used to test forward and reverse name lookup using any DNS server.
Performance Considerations
Minimize the number of hops between an MES computer and any other computer (such as DNS or domain controller) that it accesses. Ideally, computers that communicate frequently should be on the same subnet and should have fixed IP addresses.
· Test basic connectivity between all relevant computers.
· Make sure host name reSolution is fast, both forward (name-to-address) and reverse (address-to-name). Proper DNS settings on each computer are especially important in this regard. Proper configuration generally results in name reSolution taking less than a second. Improper configuration can often result in name reSolution taking a minute or more.
· Ensure the NetBIOS name resolution is fast, both forward and reverse. In this case, the WINS settings are especially important.
· Add relevant server names and IP addresses to each other's Hosts files.
Keywords:
References: None |
Problem Statement: Prelude:
This tutorial provides an example to educate users about how to configure a Custom Library and to install the same in a Custom CAP file. There are no particular pre-requisites required for reading this document; though, prior working experience in Aspen SCM will aid in easier understanding; it is also recommended to review the Import Data and Data Viewer documents before reading this document. The version used in this tutorial is Aspen SCM v7.3.1.
Reasons for building Custom Libraries:
i. Upgrading a customer CAP base model to a new software version is simplified by using libraries that are created with appropriate implementation conventions.
ii. Customers with multiple similar models can share a single functional library. When new features are introduced for a single model, they are introduced to all models using the same shared library.
iii. If a patch is created to fix existing bugs gathered from different models, then it is easy to re-install libraries rather than edit each and every customer model.
iv. Some local sets may need replacement of entries while some may need entries appended to the already existing entries; these types of changes can be made easily using custom libraries.
v. It provides a clear separation between case data and custom code and prevents the code from inadvertent changes by the user. | Solution: Steps to configure a Custom Library:
i. In the Import Data and Data Viewer tutorials, there are some tables that were created and there were some existing tables that were populated to accomplish those steps. The objective of this tutorial is to create and customize a custom library with the sets and tables used in the above mentioned tutorials, so that the following elements are controlled appropriately:
Custom Features: _MONTHS, _YEARS, _IMPODER, _IMPODEC, _IMPODE.
Import Features: IMPITEMS, IMPLNG, IMPGRPS, IMPMAP, IMPCTL, IMPASCID, FMTFDEF.
Data Viewer Features: DVTABLES, DVCTL, DVATT, DVATTUSG, CNAVOVRR, CNAVOVER, DVTBLLNG.
ii. Go to Start | All Programs | AspenTech | Aspen Supply Chain Suite | Aspen SCM. This will open the Aspen SCM application. Download the case file attached to this KB. Go to File | Open, browse to the location where this file is stored, and click 'Open'.
iii. Please create and populate the above mentioned sets and tables as directed by the previous tutorials in CUSLIB. If there are any previous entries in these sets and tables, please null them out. _IMPODER should be left blank, since it will get populated when Import is run. FMTFDEF should only contain IMPORT2 row, since this is the only row in this table to be used in this entire exercise. Create two sets _MYMAC and _MYMAC2 and define the data as Macro. These macros should contain the rules and steps that need to be performed to make the Data import and Data Viewer configurations complete.
_MYMAC:
>RACTCONV
CSTPOVRR = NULL
CSTPOVRR = CSTPCUSR
CSTPOVER = CSTPCUST
VMACTION CGLOBAL:GLOBAL REFRESH
END
_MYMAC2:
>RDVCONV
VMACTION CGLOBAL:GLOBAL REFRESH
END
iv. In the Command Window type: LIBSYS; add EXAMPLE and 1 in the Code column; add EXAMPLE and Version 1 in the corresponding description. This is the name and the version number of the library.
Figure 1 LIBSYS
v. In the Command Window type: LIBFTRS; enter _FTREX, _IMPORT and _DVEX in the Code column and CUSTOM FEATURES, IMPORT FEATURES and DATA VIEWER FEATURES in the corresponding rows of the Description column. This creates features named CUSTOM FEATURES, IMPORT FEATURES AND DATA VIEWER FEATURES in the library EXAMPLE.
Figure 2 LIBFTRS
vi. In the Command Window type: LIBOBJS; in this set, enter the feature name first and comment it; e.g.: --CUSTOM FEATURES. Under the feature name, enter all the objects: e.g.: _MONTHS, _YEARS, _IMPODEC, _IMPODER and _IMPODE. To the existing set of objects, please add _MYMAC to IMPORT FEATURES and _MYMAC2 to DATA VIEWER FEATURES.
Figure 3 LIBOBJS
vii. In the Command Window type: LIBCTL; for all the row entries (LIBOBJS), the FEATURE, RLT, SETREPL, SETDESC and TABREPL columns will need to be updated.
FEATURE is the name of the feature that was created in step: v.
RLT defines an object as Remote (for objects that the end user is not expected to modify), Local (for objects which should be saved to the local CAP file and not the library) or Temporary (for objects which do not need to be saved with the case).
If a set is saved as a Local object, then there are further options to either:
o Replace the set completely (SETREPL) or
o Append new entries to the set (SETAPPND) during the library feature re-installation.
If the object is a table, then there is the option of replacing the table completely during the re-installation. You can choose these options and configure them by entering Y in the corresponding cells.
In this particular example, _IMPODE is a table that is going to contain data specific to each model; hence it is stored as a Local object. Since _IMPODE is a Local object, its column and row sets, _IMPODEC and _IMPODER, should also be Local objects. _YEARS and _MONTHS will typically be generated through logic; hence they are also stored as Local objects. _IMPODEC is an object which should typically be controlled by the Consultants, since all the rules and macros are written based on the column and the row sets; so SETREPL for this object is set to Y. All the objects in the IMPORT FEATURES and DATA VIEWER FEATURES are defined as Remote, since these objects are usually configured before the application is run by the end user. FMTFDEF is declared as a Local object and TABREPL is set to Y; this will make sure that only the IMPORT2 row (in CUSLIB) is replaced in the already existing FMTFDEF table in LIBEX; the other rows of FMTFDEF will remain intact.
Figure 4 LIBCTL
viii. In the Command Window type: LIBDPND; for the rows _FTREX, _IMPORT and _DVEX enter the VERSION as 1 and the MIMIVERS as 7.3.1, since this library is created in v7.3.1. In the SPECIALM column, for the _IMPORT row, add <_MYMAC; for the _DVEX row, add <_MYMAC2. With the addition of these macros, when the library is installed, the case is ready to import the data to _IMPODE, and that data can be viewed in the Navigation Bar under Data Management.
Figure 5 LIBDPND
ix. In the Command Window type: LIBMSG; in the Code column, enter the names of the features; the Description can be anything. In the Command Window type: LMMSGLNG; fill in the USENGLISH and UKENGLISH columns for all three rows with 'The Feature is installed!'. This message will appear in the Library Manager log immediately after the feature is installed. In some cases, there might be some manual steps that need to be performed after a feature is installed; those steps can be described here.
Figure 6 LIBMSG
Figure 7 LMMSGLNG
x. Go to File | Save to save the library.
The steps described above are the configurations that need to be done on the custom library; the following steps explain how to install a library in a CAP file.
xi. Go to File | Open | C:\Users\Public\Documents\AspenTech\Aspen CAPS\Plant Scheduler\PS_V7-3-1.cas; Go to File | Save As and browse to the desired location for the file to Save in and enter a desired File name. In this example, this file has been stored in C:\My Files as LIBEX.CAS. The following sequence is going to explain how to install the previously created CUSLIB.CAS custom library into LIBEX.CAS custom CAP.
xii. Go to Modeling | Tools | Library Manager; go to Home | Properties in the ribbon at the top. Since this is a new custom library, the path along with the file name needs to be entered in the Library Case File text box. By clicking the Add/Upgrade button, the new custom library will be linked to the custom CAP.
Figure 8 Library Manager
Figure 9 Library Linked - Informational Message
xiii. Once the custom library is linked, click on EXAMPLE 1 in the Linked Libraries list box. This will list all the features available in the library in the Available Features list box; click on the Re/Install button to install the features in the LIBEX.CAS. If there are any problems during the installation, that should show up as an Information Message and in the Library Manager Log.
Figure 10 Library Manager
Figure 11 Feature Installed - Informational Message
Figure 12 LIBMSG Contents appear in the Library Manager Log
xiv. Go to Developer | Catalog in the ribbon; you will be able to find all the sets and tables populated in the CUSLIB. The objects which were not in LIBEX would have been newly created, while the items previously created in LIBEX would have been replaced or appended as per the configurations in LIBCTL.
xv. In the Command Window type: IMPITEMS; you will be able to find all the entries that you had entered in CUSLIB. Right-click on the set and open the Set Attributes. Since this set was configured as Remote, you will find that the Load Case is 'G-CUSLIB.CAS', as opposed to 'Current Case', which it would have been had IMPITEMS been created in LIBEX.
Figure 13 Set Attributes of IMPITEMS
xvi. In the Command Window type: _IMPODE; you will be able to find the data that you had entered in CUSLIB.CAS. Right-click on the table and open its Attributes. Since _IMPODE was configured as a Local object in LIBCTL, it is stored with the current case, unlike the Remote objects, which are loaded from 'G-CUSLIB.CAS'.
xvii. With the help of installation macros, LIBEX.CAS is now setup to import _IMPODE and view it in the Data Viewer using a Navigation Bar entry. Go to Messages | Logs | Data Import | General Import Statistics in the Navigation Bar; you should be able to see DEMAND EXAMPLE row. Now go to Data | Run Step | Import Data | MY GROUP | DEMAND EXAMPLE in the Ribbon; this step will import _IMPODE into the SCM application. Go to Data Management | Data Example | Demand Example in the Navigation Bar; you should be able to view _IMPODE in the Data Viewer. If there are any problems in viewing the data, run >RDVCLEAN _IMPODE; if there are more problems, please check the configurations in CUSLIB and in LIBEX.
Figure 14 Run Step
Figure 15 Entry in Navigation Pane and _IMPODE in Data Viewer
Keywords: None
References: None |
Problem Statement: As the events are imported using EIU, sometimes staging tables and baseline tables can get very big in size. Which tables should the user look at in order to purge the data? | Solution: Following are the Baseline tables which need to be maintained from time to time:
CRDINV
TNKINV
PLINV
PARAMINV
TANK_SERVICE
Following are the Import Staging tables which need to be maintained from time to time:
ORION_MGR_CRDINV_IMPORT
ORION_MGR_TNKINV_IMPORT
ORION_MGR_EVDESTIN_IMPORT
ORION_MGR_EVENT_IMPORT
ORION_MGR_EVPROP_IMPORT
ORION_MGR_PARAM_IMPORT
ORION_MGR_PLANT_VALUES
ORION_MGR_PLINV_IMPORT
ORION_MGR_PLPROP_IMPORT
ORION_MGR_TNKSERVICE_IMPORT
Below are the scripts that can be used to setup the auto-clean up schedule for the event tables
DECLARE @DaysBack AS INT; -- number of days to go back in time
SET @DaysBack = 60; -- Note: Please setup the proper number of days that you want to keep the data
--DELETE HISTORY DATA FROM BASELINE TABLES
DELETE FROM CRDINV WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM TNKINV WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM PLINV WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM PARAMINV WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM TANK_SERVICE WHERE DATE_ < GETDATE() - @DaysBack;
--DELETE HISTORY DATA FROM IMPORT STAGING TABLES
DELETE FROM ORION_MGR_CRDINV_IMPORT WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_TNKINV_IMPORT WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_EVDESTIN_IMPORT WHERE MOVEMENT_ID IN (SELECT MOVEMENT_ID FROM ORION_MGR_EVENT_IMPORT WHERE STOP_DATE < GETDATE() - @DaysBack);
DELETE FROM ORION_MGR_EVENT_IMPORT WHERE STOP_DATE < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_EVPROP_IMPORT WHERE MOVEMENT_ID IN (SELECT MOVEMENT_ID FROM ORION_MGR_EVENT_IMPORT WHERE STOP_DATE < GETDATE() - @DaysBack);
DELETE FROM ORION_MGR_PARAM_IMPORT WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_PLANT_VALUES WHERE DATETIME < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_PLINV_IMPORT WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_PLPROP_IMPORT WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM ORION_MGR_TNKSERVICE_IMPORT WHERE DATE_ < GETDATE() - @DaysBack;
It is also recommended to clean the DB log and shrink the DB periodically
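As an illustration only, shrinking the database and its log with T-SQL could look like the following. The database name APSMODELDB and the log file name APSMODELDB_log are placeholders that must be replaced with your own names, and you should take a full backup and confirm your recovery-model requirements before running anything like this:
-- Placeholder names: replace APSMODELDB / APSMODELDB_log with your actual database and log file names
ALTER DATABASE APSMODELDB SET RECOVERY SIMPLE;   -- allows the transaction log to be truncated
DBCC SHRINKFILE (APSMODELDB_log, 1);             -- shrink the log file (target size in MB)
DBCC SHRINKDATABASE (APSMODELDB);                -- shrink the data files
ALTER DATABASE APSMODELDB SET RECOVERY FULL;     -- restore FULL recovery if your backup strategy requires it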
Keywords: Database,
Staging tables
Baseline tables
SQL Queries
model cleanup
References: None |
Problem Statement: In the Volume Sample model, there is a blend DSL with ALTTAGS DSX. However, the blend DSL has an incentive that is not applicable to DSX. This incentive is based on a minimum property specification NCTI=46.2, but the incentive threshold may not be equal to 46.2; let's assume it is 47.
How can it be implemented? | Solution: We can use Table ROWS, Table UTILSEL and a dummy submodel to accomplish this.
First, let's look at DSL related tables,
Table ALTTAGS,
* TABLE   ALTTAGS     Alternate Tags for Tierred Pricing
*         TEXT        ***
*
  DSX     DSL         Diesel
  SFB     SFA         Sulfuric Acid
Table SELL,
* TABLE   SELL        Product Sales
*         TEXT             MIN          MAX          FIX          PRICE
*                          ('000 BPD)   ('000 BPD)   ('000 BPD)   $/BBL
  DSL     Diesel           0.00000      22.00000                  55.02
  DSX     Export Diesel    0.10000                                54.71
In BLNSPEC,
* TABLE   BLNSPEC     Blended Product Specifications
*         TEXT                    DSL
*         RVP Max Spec, PSI
  NCTI    Cetane Index            46.2
The specification NCTI=46.2 is applied to all the DSL streams, including the ALTTAGS stream DSX. However, the incentive should apply only to DSL, excluding the ALTTAG DSX, that is,
CTI incentive = SELLDSL x (actual CTI - incentive threshold CTI) = SELLDSL x (actual CTI - 47)
We will use Table ROWS to implement this equation. However, Table ROWS cannot populate a recursed property, so we will have to use a dummy submodel SDSD. In addition, since CTI is a final blend property, we need to enter it in the PGUESS table to force the blend property to recurse.
* TABLE   PGUESS      Initial Property Estimates
*         SPG    SPV    API    SUL    PLN    CTI
*
  DSL                                        45
* TABLE   SUBMODS     Unit Submodel List
*         TEXT                      REPORT    COMBINE
*
  SDSD    dummy block for DSL
* TABLE   SDSD
*           TEXT                                     DSL      CTI
  EdrvDSB   drive SELLDSL activity to column DSL     -1
  ECTIDSB   col CTI = CTI prop * DSL activity        -999     1
*
  UBALDSB   calculate CTI incentive                  47       -1
* TABLE   ROWS        User Defined Rows
*           TEXT                                     FIX   MAX   FREE   SLACK   SELLDSL
  EdrvDSB   drive SELLDSL activity to column DSL                                1
*
We use Table UTILSEL to enter the incentive price,
* TABLE   UTILSEL     Utility Sales
*         TEXT               MIN    MAX    FIX    PRICE
***
*
  STM     Steam MLBS                              $2.00
  DSB     CTI incentive                           $0.20
*** End
The above configurations in the tables will generate the following equations,
EdrvDSB: 1.000000 * SELLDSL -1.000000 * SDSDDSL = 0.000000
ECTIDSB: -52.937500 * SDSDDSL +1.000000 * SDSDCTI -0.994820 * RCTIDSL = 0.000000
UBALDSB: 1.000000 * SELLDSB +47.000000 * SDSDDSL -1.000000 * SDSDCTI <= 0.000000
In the Full Solution report,
Product Sales
                             Units   Units/DAY   Minimum   Maximum   $/Unit   $/DAY       Marg Val   Weight
LPG    LPG                   BBLS    453         0                   37.000   16,775                 37
       LPG & Gases                   453                                      16,775                 37
LRG    Leaded Regular        BBLS    5,000       5,000     5,000     62.480   312,399     -1.005     607
URG    Unleaded Regular      BBLS    38,397      1,000     100,000   65.000   2,495,784              4,480
UPR    Unleaded Premium      BBLS    7,071       1,000     15,000    67.520   477,402                834
       Total Gasolines               50,467                100,000            3,285,585              5,921
JET    Kero/Jet              BBLS    12,803      10,000              59.960   767,668                1,651
DSL    Diesel                BBLS    19,205      0         22,000    55.024   1,056,717              2,551
DSX    Export Diesel         BBLS    100         100                 54.709   5,471       -0.770     13
       Total Distillates             32,108                                   1,829,856              4,215
Utility Sales
                             Units/DAY   Minimum   Maximum   $/Unit   $/DAY    Marg Val
STM    Steam MLBS            0           0                   2.000    0        -7.738
DSB    CTI incentive         114,104     0                   0.200    22,821   22,821
       Total Utilities Sales                                          22,821
The actual CTI for DSL = 52.9375 (it appears as -52.9375 in row ECTIDSB because the term is moved to the left-hand side of the equation).
Therefore,
The CTI incentive = SELLDSL x (52.9375 - 47)
Keywords: BLNSPEC
Property
Property Incentive
Incentive
Blend
References: None |
Problem Statement: Since event tables can grow very large over the course of time, which event tables should I look at to regularly archive/clean up my database? | Solution: These are some event-related tables in APS, and archiving these tables could be healthy for the application's performance.
ATORIONEventResources, ATORIONEventResourceDetails, ATORIONEventProps, ATORIONEventPipelines , ATORIONEventParams, ATORIONEventComments , ATORIONEventTanks , ATORIONEvents , EV_CRUDE_COMP, EV_DAILY_DEST_TANKS, EV_DAILY_SRC_TANKS, EV_CRUDE_COMP, EV_DEST_TANKS, EV_SOURCE_TANKS, EV_REC_PROPS, EV_PARAMS
Below are the scripts that can be used to setup the auto-clean up schedule for the event tables
DECLARE @DaysBack AS INT; -- number of days to go back in time
SET @DaysBack = 60; -- Note: Please setup the proper number of days that you want to keep the data
--DELETE HISTORY DATA FROM EVENT TABLES
DELETE FROM ATORIONEventResources WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEventResourceDetails WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEventProps WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEventPipelines WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEventParams WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEventComments WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEventTanks WHERE EVENT_XSEQ IN (SELECT X_SEQ FROM ATORIONEvents WHERE (STOP < GETDATE() - @DaysBack));
DELETE FROM ATORIONEvents WHERE STOP < GETDATE() - @DaysBack;
DELETE FROM EV_CRUDE_COMP WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_DAILY_DEST_TANKS WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_DAILY_SRC_TANKS WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_DEST_TANKS WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_SOURCE_TANKS WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_REC_PROPS WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_PARAMS WHERE SEQ IN (SELECT SEQ FROM EV_DAILY WHERE (DATE_< GETDATE() - @DaysBack));
DELETE FROM EV_DAILY WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM EVENTS WHERE STOP < GETDATE() - @DaysBack;
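Before scheduling the deletes, it can be helpful to check how many rows would be removed and how much space the main event table uses. A minimal sketch (the 60-day cutoff simply mirrors the script above):
-- Row count that the delete on ATORIONEvents would remove (example cutoff of 60 days)
SELECT COUNT(*) AS OldEventRows FROM ATORIONEvents WHERE STOP < GETDATE() - 60;
-- Space currently used by the main event table
EXEC sp_spaceused 'ATORIONEvents';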
Keywords: Database Archive, Event Archive, ATOrionEvents, SQL Queries, model cleanup, APS Performance
References: None |
Problem Statement: After installing Aspen Report Writer v7.2 and selecting Aspen Report Writer from the Excel menu, the following error messages pop up. The application can not be run. | Solution: This error could be caused because the Aspen Report Writer dll is not correctly registered.
Follow these steps to fix this problem:
1) From Excel, hit Alt + F11, which will open the VBA editor. Go to Tools | References and check whether there is one line that says:
Missing: AspenRpt. (This means that it is not seeing the dll).
2) Go to the C:\Program Files\Common Files\AspenTech Shared, and check whether the AspenRpt.dll is found there (if this is not the correct location, find it and use this address for the rest of the steps below)
3) Un-register and Re-register the dll:
-Open an MS-DOS session:
Start | Run | cmd | <enter>
-Change to the correct directory (whatever the name is in the machine):
c:\> cd C:\Program Files\Common Files\AspenTech Shared <enter>
-First un-register the .dll file:
C:\Program Files\Common Files\AspenTech Shared\> regsvr32 /u AspenRpt.dll <enter>
-Then re-register it:
C:\Program Files\Common Files\AspenTech Shared\> regsvr32 AspenRpt.dll <enter>
The confirmation message DllRegisterServer in AspenRpt.dll succeeded should appear.
4) Check in VBA whether the AspenRpt is now as a valid reference.
5) Recheck the Report Writer Add-In in Excel, and then try to run a template again.
Keywords: Installation
References: None |
Problem Statement: How do I archive/cleanup my publish tables periodically? | Solution: Over the course of time, publish tables can grow very large. Necessary maintenance steps need to be taken to ensure stable APS performance, because large database sizes lead to longer processing times and, in some cases, query timeouts (especially while publishing).
These are some publish tables which would be worth taking periodic backups of and making sure that the production model is compact, for best APS performance:
_BIASRESULTS, _CRDRUNS , _EVENTS, _APSEVENTMASTER, _PARAMS , _PL ,_RECEIPTS ,_RUNS , _SCHEDBIAS , _SERVICE , _STRMS ,_TANKS ,_ZSTRMS , _ZTANKS ,_ZRUNS, _ZPL ,_ZRECEIPTS
Below are the scripts that can be used to setup the auto-clean up schedule for these publish tables:
DECLARE @DaysBack AS INT; -- number of days to go back in time
SET @DaysBack = 60; -- Note: Please setup the proper number of days that you want to keep the data
DELETE FROM _BIASRESULTS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _CRDRUNS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _EVENTS WHERE STOP_DAY < GETDATE() - @DaysBack;
DELETE FROM _PARAMS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _PL WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _RECEIPTS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _RUNS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _SCHEDBIAS WHERE STOP < GETDATE() - @DaysBack;
DELETE FROM _SERVICE WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _STRMS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _TANKS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZPARAMS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZPERIODS WHERE PERIOD_END < GETDATE() - @DaysBack;
DELETE FROM _ZPL WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZRECEIPTS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZRUNS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZSERVICE WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZSTRM_MOVEMENTS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZSTRMS WHERE DATE_ < GETDATE() - @DaysBack;
DELETE FROM _ZTANKS WHERE DATE_ < GETDATE() - @DaysBack;
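If a single large DELETE causes excessive log growth or query timeouts, the same cleanup can be run in smaller batches. The sketch below shows the pattern for one table; _STRMS, the batch size of 50000 and the 60-day cutoff are only example values, and the same loop can be applied to the other publish tables:
DECLARE @DaysBack AS INT;
SET @DaysBack = 60;
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM _STRMS WHERE DATE_ < GETDATE() - @DaysBack;  -- delete in chunks
    IF @@ROWCOUNT = 0 BREAK;                                             -- stop when nothing is left to delete
END;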
Keywords: Publish tables, Model Cleanup, APS Performance
References: None |
Problem Statement: Let's say, for example, that an import has been made: the import log, validation log, and import statistics screens, if already open, need to be refreshed. Similarly, if a change has been made on the planning board and the reports of the planning board are already open, then they need to be refreshed. SCM has the capability to automatically refresh screens. This Solution explains how to trigger the refresh and how to set it up for custom screens.
Solution
The feature used to accomplish this requirement is the RACTREF rule.
Let's consider an example:
In the attached case file, there is a custom screen called “Test” under the custom group “After Commands”. The objective here is to refresh that screen as soon as the import action is completed. The following steps will help you to accomplish this configuration:
1. NAVCAT Set: Add this new Navigation Category to NAVCAT set. As a convention, the code of this set should match with CNAVOVER's XML keyword.
In this case file, CNAVOVER contains the following code:
<?xml version=1.0 encoding=utf-8?>
<NAVBAR>
<AFTER Caption=After Commands AccessControlDefault=True>
<Children>
<USER Caption=Test Type=CAPS_SCREEN Config=_TEST:SAMPLE/>
</Children>
</AFTER>
</NAVBAR>
So, in this case, AFTER should be added to NAVCAT set.
2. ACTNAV Table: Map this new category to the required action in this table.
The objective for this example is to refresh the screen under this category when the Data Import action is run. So an appropriate X is marked in ACTNAV table.
3. VMNAVR Set: Add the new screen's ViewModelID to this set.
This new screen's View Model ID is _TEST:SAMPLE_VM - this should be added to VMNAVR set.
4. VMNAV Table: This new screen should be mapped to the relevant section in this table.
X should be entered in the intersection of AFTER and _TEST:SAMPLE_VM at VMNAV table.
5. RCDATIMP Set: The following lines were taken from the CACTIONS table:
<?xml version=1.0 encoding=utf-8?>
<CONFIG>
<!-- Actions Configuration -->
<ACTIONS AccessControlSource=RBNAVMWK AccessControlPrefix=ACTIONS_>
<ImportALLGroup Caption==RBNAVLNG(ACTIONS_IMPORTALLGROUP,@) Command=>RCDIGRP ALL VisibleSource=CSTYLE(CSCEN_NOT_IN_COMP,1)/>
…
</ACTIONS>
</CONFIG>
As you can see, the rule that is called when a user clicks on the Import All Data option under the Run Action menu is RCDIGRP ALL.
If you look into the RCDIGRP rule, you will find that the RACTREF_REFRESH_SCREEN_AFTER_ACTION DATAIMPORT predicate is called:
RCDIGRP /* Import a group */
IF ACTIONLOGS_ITEMS_UPDATED_BY_PARAMETER DATIMP
AND ?GRP = %1
AND ?GRP IN IMPGRPS
OR ?GRP EQ
AND ?GRP = ALL
AND SELECT_ITEMS_BY_DATA_GROUP ?GRP
AND PROCESS_IMPORT_VALIDATION_CONVERSION_ROUTINES
AND RACTREF_REFRESH_SCREEN_AFTER_ACTION DATAIMPORT
THEN RCDATIMP_IMPORT_GROUP
This subroutine looks at the tables configured earlier and will refresh the appropriate screens using the VMACTION command.
Once the above configurations are performed, you can open the TEST screen and run the Import procedure from the Run Action menu - you would see that the Refresh is performed on the TEST screen.
If your model's screens take a long time to refresh and you want to avoid this wait time, then you can instead use the refresh icon to indicate the screens which need to be refreshed, so that the user can manually trigger the Refresh button on the Home tab of the standard ribbon. The procedure to implement this refresh icon is described in Solution # 139595.
Keywords: RACTREF
Auto-refresh
XML
References: None |
Problem Statement: There have been some misunderstandings about how Property Calculation Formula is designed to work. Below is a summary that is intended to clarify the functionality of this feature. | Solution: Property Calculation Formula (PCF) was originally developed as a replacement for Table INDEX. As such, it defines calculations that PIMS performs only at matrix generation or back-calculations that are done for reporting. There can be an exception to this when it is used in conjunction with ABML, however if ABML is not involved, PCF will only calculate the defined property at matrix generation. This means that it will not calculate properties at each recursion pass.
Let's take a look at an example where a user has RVP data in their ASSAY and BLNPROP tables and defines how to calculate RVI from RVP in Property Calculation Formula. At matrix generation time, PIMS will use PCF to calculate the RVI that corresponds to each RVP entry in ASSAYS or BLNPROP. The user should have PGUESS entries for RVI for the appropriate crude streams. In this case, PIMS calculates the RVI that corresponds to each individual RVP entry in Table ASSAYS. Then those values are used together with the other Table ASSAYS data to help PIMS select the optimal crude slate. Once that crude slate is selected, PIMS will calculate the RVI value of a given crude unit product based on the RVI values calculated at matrix generation time and the crude slate.
If the user is also recursing RVP, then this gives the appearance of a discrepancy between the final product RVP and RVI values. This is because the RVP is recursed based on the crude slate and the RVI is also recursed based on the crude slate. The RVI is not calculated from the recursed value of the RVP. This is the whole point of having an index because the RVP is known to not blend linearly. Therefore the recursed value for RVP is not an accurate value. RVI does blend linearly and therefore is the better representation.
If the RVP is not independently recursed, then PIMS will just recurse the RVI and at report time, it will use the PCF equation to back-calculate the corresponding RVP. This is the recommended way to handle such indices and keep the reported values consistent with each other.
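As a purely illustrative example (the actual relationship is whatever you define in Property Calculation Formula, so treat the exponent below as an assumption), a commonly used vapor pressure blending index has the form:
RVI = RVP ^ 1.25
With such a definition, PIMS recurses RVI linearly, and at report time it back-calculates RVP = RVI ^ (1/1.25) so that the reported RVP stays consistent with the recursed RVI.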
Keywords: None
References: None |
Problem Statement: What is the best way to manage Table PGUESS in my Distributed Recursion (DR) model? | Solution: The PGUESS table is required on the model tree. Whenever you run Aspen PIMS, the values in Table PGUESS are used as initial starting points for all the recursed properties. Therefore, changing the PGUESS table can result in a different calculation path. Whenever Aspen PIMS completes a run, it creates a file called !PGUESSnnn.xls where nnn is the case number. This file is a PGUESS table where the value for each recursed property comes from theSolution of the indicated case. This is commonly used to update the PGUESS table on the model tree.
Here are some guidelines for managing Table PGUESS:
Do not attach an Aspen PIMS generated PGUESS table, like !PGUESS001.xls, to the model tree without first renaming the file. If you attach the file without renaming it, then every time Aspen PIMS runs that case, the values in the file may change and this can impact the reproducibility of your results.
It is not recommended to purchase recursed streams in Table BUY. If you do this, then the properties for that purchased stream are fixed at the values for that stream in Table PGUESS. So if you have a recursed stream in Table BUY and you change Table PGUESS, then you could be making unintended changes to the properties Aspen PIMS associates with that purchased stream.
In Table PGUESS you can use 999's as entries for the property initial guesses. If you do this for streams that are in Table ASSAYS, then Aspen PIMS will use the ESTxxx rows in Table CRDCUTS to determine an initial crude mix and use that crude mix to generate the initial property guesses from the data in Table ASSAYS. Therefore if using 999's in Table PGUESS, it is best to minimize the specified rate in the Table CRDCUTS ESTxxx rows for any crudes that are unusual in their properties.
Table PGUESS also allows 999 entries for properties of streams that are not in Table ASSAYS. In this case, Aspen PIMS will take the average of the highest and lowest values found anywhere in the model data for that property and use it as an initial estimate. This means that if other tables are updated (typically BLNPROP, ASSAYS, SCALE) which establish a new highest or lowest for the property, then the initial value generated by the 999 in Table PGUESS will also change.
Since Table PGUESS provides the initial value for the recursed stream properties, the values should represent typical operations when making normal runs. When modeling an unusual scenario like a unit shutdown, it is not uncommon to update the PGUESS table to get better convergence since properties for unusual scenarios may be different than normal operations.
Keywords: PGUESS
References: None |
Problem Statement: Gasoline produced from the PIMS model is an ethanol-free product. To calculate reformulated gasoline emissions, we first use PIMS ABML to calculate the gasoline properties with ethanol addition using the RBOB or CARBOB correlation. Those calculated properties are then used for the emission calculation in the CARB correlation.
The following details how to use the ABML emission models and what to be cautious about. | Solution: Let's put them in 3 groups:
1. CARB2 (types 3 or 10 which is CARB 2 by itself or with CARBOB2) uses cnx, thc, pot in BLNSPEC and tag 'CARB' in ABML
2. CARB3 (types 4-9 which is CARB 3 with various combinations of CARBOB and CARBOB2) uses CNX, THC, POT in BLNSPEC and tag 'CARB3' in ABML
3. CARB4 (types 12-14 which is CARB 4, or amended CARB 3, with various combinations of CARBOB2) uses Cnx, Thc, Pot in BLNSPEC and tag 'CARB3M' in ABML
So there are 3 different CARB correlations that can be used - CARB, CARB3, and CARB3M.
Now, you do not have to explicitly put the CARB correlation(s) in the ABML table. XNLP will automatically add CARB to ABML on the fly and will create all of the correct tags. In order to make sure there is no confusion, the tag that is put into ABML on the fly is dependent on which type of CARB you are using.
So if you are using type 13, it will put 'Cnx' in ABML for correlation CARB3M and will change the tag in BLNSPEC from 'CNX' to 'Cnx' and give you warning W639. Note that you will see W639 with the same text whether or not it lowercases to 'Cnx' or 'cnx'. This is why you will also see an error message (E682) saying that you cannot use some combinations of types in the same model.
You cannot use type 1 or 2 with type 11 for EPA, type 3 with type 10, types 4 or 5 with any of types 6-9 UNLESS you have the correct mapping in table ABMLMAP to get around the aliasing of the tags. For instance, if you wanted to use both types 3 and 10, you would need to put correlation CARB in table AMBL with input tags that are different from the input tags to CARBOB2. Then in table ABMLMAP, for the type 3 blends, you would map the input tags for CARBOB2 to the input tags for CARB (i.e., skipping CARBOB2 and just using CARB). For the type 10 blends, you would map the output tags for CARBOB2 to the input tags for CARB, making sure CARBOB2 is evaluated for those blends.
Please refer to Solution 131651 'Understand ABML CARBOB and CARB model in PIMS' for all the types.
Keywords: CARB 2
CARB2
CARB 3
CARB3
CARB3M
CARB 4
CARB4
CARBOB
CARBOB2
ABML
RBOB
References: None |
Problem Statement: Basic XML knowledge in Aspen SCM is a prerequisite for reading this document. Solutions # 135832, 135995, 136528, 136585, 137107 and 138030 are recommended if you are looking to gain basic XML knowledge in Aspen SCM.
This article explains how to modify the rule which runs when you click on the Apply Changes button in the Ribbon or when you click on the Apply Changes option in the right-click contextual menu in the data viewer/editor.
Solution
When a custom editable screen is built, there might be a necessity to execute some additional commands while applying the changes made by the end user on the screen.
In the attached example, there is a sample screen called “Save Rule Example”. This is a very simple screen which displays an editable table. The Apply Changes and Cancel Changes buttons are linked to the table - i.e. they become enabled when a change has been made to the contents of the table. To learn how to link these buttons to the displayed content, please refer to Solution # 137533.
The following is the corresponding XML code for this screen:
<?xml version=1.0 encoding=utf-8?>
<CONFIG>
<RESTS
Header=Screen using Modified Save Rule
ViewModelID=:RESTS_VM>
<Views>
<TABLE
Type=PropertyView
DataSource=TABLESOURCE/>
</Views>
<Ribbon
ApplyDataChangesCommand=APPLYSCREENDATACHANGES
CancelDataChangesCommand=CANCELSCREENDATACHANGES/>
</RESTS>
<RESTS_VM>
<States>
<STATE1>
<Properties>
<TABLESOURCE
Type=Table
ValueSource=_SAMPLE/>
<APPLYSCREENDATACHANGES
Type=Command
DisableWhenNoChanges=True>
<After
A1=!Save/>
</APPLYSCREENDATACHANGES>
<CANCELSCREENDATACHANGES
Type=Command
DisableWhenNoChanges=True>
<After
A1=!Reset/>
</CANCELSCREENDATACHANGES>
</Properties>
</STATE1>
</States>
</RESTS_VM>
</CONFIG>
From the above XML code, it is evident that there is only one State Group configured for this screen - STATE1. That State Group encompasses the table, the Apply Changes command button and the Cancel Changes command button. With the current code, whenever a change is made to this table and the Apply Changes button is clicked, the changes get written out to the base table - _SAMPLE, since the Apply Changes button has the After command !Save. To learn more about the different After commands, please refer to Solution # 137770.
In this example, there are two ways in which an additional rule, apart from the Saving operation, can be executed:
Option 1:
A subsequent rule after !Save for the Apply Changes button, can be added:
…
<APPLYSCREENDATACHANGES
Type=Command
DisableWhenNoChanges=True>
<After
A1=!Save
A2=MSGBOX ESETUP/>
</APPLYSCREENDATACHANGES>
…
This way, the MSGBOX gets executed after the Save command runs.
The contents of ESETUP would look like:
Option 2:
A new attribute: SAVE can be added to STATE1:
…
<STATE1
Save=MSGBOX DSETUP>
<Properties>
<TABLESOURCE
…
The contents of DSETUP would look like:
If both these setup rules are executed together, MSGBOX DSETUP would get run first followed by MSGBOX ESETUP.
Keywords: None
References: None |
Problem Statement: Some users want to reset the X_SEQ value which shows up in the Event dialog as Sequence. What is the recommended way to do this? Can I reset the key value in the ATOrionKey table, look up the column ATORIONEVENTS and change the LAST_ID value to be the next one to be used? | Solution: If the method described above is used, you run the risk that, as you keep using it, you will eventually hit a sequence number that is currently in use. This will create problems and we strongly advise against it.
The safer way is to use the following procedures as a guideline to reset the EVENT sequence numbers.
You can use SQL queries to reset EVENT sequence numbers (X_SEQ in ATOrionEvents), detail procedures shown below,
1. Remove entry from the ATORIONKey table so APS can automatically populate it with the right value when needed, after the change in the sequence number.
Delete from ATOrionKey where TABLE_NAME='ATORIONEvents'
2. Change the values in the ATORIONEvents and the related tables
Get the minimum value in the ATOrionEvents table
select min(x_seq) from AtOrionEvents
If the minimum value was 10001 then use a value slightly less than that in other queries, then decrease the x_seq id in the following way:
update AtOrionEvents set x_seq=x_seq-10000
3. Change the start link and stop link value
update AtOrionEvents set start_link=start_link-10000 where start_link>0
update AtOrionEvents set stop_link=stop_link-10000 where stop_link>0
4. For recurring events
update AtOrionEvents set start_link=-(-start_link-10000) where start_link<0
update AtOrionEvents set stop_link=-(-stop_link-10000) where stop_link<0
5. For all other ATORIONEventsxxx tables
update AtOrionEventAdditive set event_xseq=event_xseq-10000
update AtOrionEventComments set event_xseq=event_xseq-10000
update AtOrionEventExternalSystemIds set event_xseq=event_xseq-10000
update AtOrionEventParams set event_xseq=event_xseq-10000
update AtOrionEventPipelines set event_xseq=event_xseq-10000
update AtOrionEventProps set event_xseq=event_xseq-10000
update AtOrionEventResourceDetails set event_xseq=event_xseq-10000
update AtOrionEventResources set event_xseq=event_xseq-10000
update AtOrionEventTanks set event_xseq=event_xseq-10000
Note:
All child tables follow the pattern shown for ATORIONEventTanks above. You can refer to OrionDBGen.mdb and look for the ATORIONEvent prefix under the table names to check for the other child tables, and then run queries similar to the one given for ATORIONEventTanks (see the sketch after this note).
Aspen has no plans to change the naming convention of the child Event tables in APS. In the event of a change, APS documentation will be appropriately updated.
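As an alternative to checking OrionDBGen.mdb manually, the child tables can also be listed directly from the APS database. This is only a convenience sketch; it simply searches for the ATORIONEvent prefix mentioned above:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_NAME LIKE 'ATORIONEvent%'   -- child event tables share this prefix
ORDER BY TABLE_NAME;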
6. Other Event related tables
DELETE FROM BLEND_SPEC WHERE E_SEQ < 10000+1 (actually min value + 1; this cleans up records in this table that do not belong to the events)
update BLEND_SPEC set E_SEQ= E_SEQ-10000
DELETE FROM _EVENTS WHERE event_seq < 10000+ 1
update _EVENTS set event_seq=event_seq-10000
delete from _EVENTS_MBO where event_Seq < 10000+1
update _EVENTS_MBO set event_seq = event_seq-10000
delete from _PL where event_Seq < 10000+1
update _PL set event_seq = event_seq-10000
delete from _ZPL where event_Seq < 10000+1
update _ZPL set event_seq = event_seq-10000
delete from AB_ADDITIVES where SEQ < 10000+1
update AB_ADDITIVES set SEQ = SEQ-10000
delete from AB_BLN_EVENTS where SEQ < 10000+1
update AB_BLN_EVENTS set SEQ = SEQ-10000
delete from AB_BLN_QUALITIES where SEQ < 10000+1
update AB_BLN_QUALITIES set SEQ = SEQ-10000
delete from AB_BLN_RECIPES where SEQ < 10000+1
update AB_BLN_RECIPES set SEQ = SEQ-10000
delete from AB_TANK_QUALITIES where SEQ < 10000+1
update AB_TANK_QUALITIES set SEQ = SEQ-10000
delete from AB_BCI_SUBMITTED where SEQ < 10000+1
update AB_BCI_SUBMITTED set SEQ = SEQ-10000
7. Change the value in the EVENTS table - this can be difficult because x_seq is an autonumber (IDENTITY) column.
This table has been phased out; if you are still using it, we will provide the SQL query to do this too.
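If you still use the legacy EVENTS table and its x_seq is an IDENTITY (autonumber) column, the usual approach is to reseed the identity rather than renumber existing rows. The sketch below only illustrates that approach and is not the official query referred to above; the value 12345 is a placeholder for the value after which new rows should continue:
-- Check the current identity value first
DBCC CHECKIDENT ('EVENTS', NORESEED);
-- Reseed so that the next inserted row gets 12345 + 1 (replace 12345 with your own value)
DBCC CHECKIDENT ('EVENTS', RESEED, 12345);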
Keywords: EVENT
sequence number
sequence numbers
reset EVENT ID
reset
EVENT ID
References: None |
Problem Statement: How to update a property inferential (IQ) based on an algebraic equation with an online analyzer. | Solution: There are two types of inferential predictions calculated in IQ: prediction based on a steady-state function and prediction based on a dynamics function. Typically, a steady-state prediction uses a lab sample to update its prediction bias, and the dynamics prediction uses an online analyzer to update its prediction bias. The lab sample method assumes that the process is at steady state, whereas the online analyzer method does not make this assumption.
The most common way of implementing a steady-state prediction is to use an algebraic equation derived from a steady-state model such as Aspen Plus or Aspen HYSYS. The prediction bias for steady-state predictions should typically be calculated using a lab sample. However, using an online analyzer to update the steady-state predictions provides faster bias updates and thus eliminates drift between the prediction and the measurements. The best practice when using an online analyzer to update a steady-state prediction calculated from an algebraic equation is to introduce dynamics to the model. This ensures the prediction bias is calculated properly by finding the delta between the dynamic analyzer measurement and the dynamic prediction from the model.
In an IQ application, the output filter and dead time can be used to add dynamics to the steady-state unbiased prediction (UPR). The result of this filter calculation is the unbiased dynamics prediction (UAZ). The UAZ is used, along with the valid analyzer reading, to calculate a prediction bias (PREDBIAS) every time a new analyzer reading is detected. The PREDBIAS is then used with the unbiased steady-state prediction (UPR) to calculate a biased steady-state prediction (BPR) at every prediction cycle.
The parameters required for output filtering are OUTFILTT1, OUTFILTT2, and OUTFILTDT under the PR module. Please note that if OUTFILTT1 and OUTFILTT2 are both zero, the output filter function is disabled even if OUTFILTDT has a value.
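Summarizing the relationships described above (a restatement only, assuming the typical additive-bias configuration):
UAZ      = output-filtered, dead-time-delayed UPR   (time constants OUTFILTT1/OUTFILTT2, dead time OUTFILTDT)
PREDBIAS = valid analyzer reading - UAZ             (updated whenever a new analyzer reading is detected)
BPR      = UPR + PREDBIAS                           (updated at every prediction cycle)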
Keywords: Algebraic equation
Output filter
Dynamics update
References: None |
Problem Statement: We have multiple refineries that are required to evaluate different scheduling scenarios during the current schedule time. What is the best practice to evaluate this kind of scenario in Aspen Petroleum Scheduler (APS)? | Solution: The Case Management feature allows users to create multiple versions (cases) as needed to represent the different operations of the refinery. These built-in case management capabilities enable schedulers to perform what-if analysis. The Case Comparison interface compares the trend data for multiple cases on a single trend chart.
The Schedulers can quickly simulate the Refinery Operation and see the impact of the changes across the entire Refinery.
The requirement that each Case use the same configuration means that you should include in the model any units that are included in any Case. For example, if you want to evaluate the economics of adding an isomerization unit to the refinery, then the model will include an isomerization unit. Run two Cases, one with the isomerization capacity set to zero, and the other with the capacity set to the maximum amount of available feed.
Event States (Case Management):
This task in the Case Comparison interface allows you to save the state of the current schedule and its associated trend charts. This is similar to taking a snapshot of the current schedule. Event States have no effect on the application database because they are saved in memory only. Therefore, once you exit the current session, reload data, or save data, all event states created during the current session are lost.
Case and Event States (trends) can be compared as shown:
For more details, review the APS Help file, Working with the Case Comparison Interface section that contains the following topics:
Accessing the Case Comparison Interface
Creating/Restoring/Deleting an Event State
Comparing Cases and States
Changing the Position of the Case Comparison Legend
Displaying/Modifying Trend Limits during Case Comparison
Zooming in on a Section of a trend chart during Case Comparison
Undoing the last zoom/all previous zooms during Case Comparison
Keywords: case management, scenarios, comparison
References: None |
Problem Statement: What are the advantages of using an IQ application for an analyzer predictor? | Solution: An analyzer predictor is used to predict a stream property that has an online analyzer. The predictor can reduce the delay due to transportation lag in the sample system or the location of the analyzer. The idea is to use the steady-state value of an open FIR model between the analyzer readings and primary variables in the system, such as pressure-compensated temperatures and reflux/feed ratio. An analyzer predictor can be implemented using DMCplus or IQ. The mechanics of the implementation can be found elsewhere.
The advantages of using IQ are:
- More flexibility in updating the analyzer bias (the prediction error of the FIR model), such as the option to filter the bias, to take only part of the error, or to use advanced update methods such as CUSUM and SCORE
- More complex calculations of the inputs and outputs
- A more intuitive interface for the operator and thus a more maintainable application
Keywords: IQ, analyzer predictor
References: None |
Problem Statement: What is the procedure to set up security to give different write privileges to each operator user? | Solution: Aspen Process Recipe (APR) uses AFW security to give different users various levels of privileges to the APR database. However, it is set up as an overall system security with different roles such as Administrator, Engineer, Supervisor and Operators. It does not distinguish between operators that can be assigned to different areas of the plant. Thus an operator assigned to the first line can access the recipe in the second line and perform a download on the second line, which might not be desirable. To address this issue, users can configure security directly in the IP.21 database instead of configuring the additional security in APR.
Consider the following APR example with two lines. The settings shown below indicate the configuration required to allow OperatorA to download only to line1_Flow and OperatorB to download only to line2_Flow.
The security permission for the line1_ flow record in IP.21 for this configuration would be set as follows:
OperatorA would have read and write permissions, while OperatorB would have just read permissions. The permission for Line2_flow would the exact opposite.
So now if OperatorB logs in and initiates a download to Line1_Flow, they would get the following error message:
But when OperatorB initiates a download to Line2_Flow, it downloads successfully.
In addition to the record specific configuration shown above, the administrator would also need to configure the permissions on the whole IP.21 database as follows:
KB 141963 provides some additional information on how to manage permissions for multiple records together.
Keywords: Security
Aspen Process Recipe
References: None |
Problem Statement: I am setting up the simulation flowsheet in Aspen Custom Modeler to test my model which I want later to export to Aspen Plus. However I am unable to specify correctly the input and output stream substream attributes. | Solution: We recommend generally to use the embedded Aspen Properties file when setting up the physical properties to be used in Aspen Custom Modeler simulation. This has the advantage that the properties file is stored inside the simulation file, and therefore there's no risk to delete the properties file by mistake, the update to a new version is made automatically, and a few more advantages.
This does not work when you set up a simulation which will require the substream attributes defined in Aspen Plus, as those are only available in Aspen Plus and not in Aspen Properties. Therefore the recommendation is to use an Aspen Plus simulation file, set up a dummy flowsheet with the required stream class definitions (particle size distribution, etc). Run the simulation and save as an Aspen Plus Document (*.apw). This will generate the required *.appdf file which you can use in the properties initialization in Aspen Custom Modeler (under Component Lists). Make sure to keep a safe copy of the bkp file of your Aspen Plus simulation.
Setting up the properties in Aspen Custom Modeler
Example of substream attributes available in the Component List
From a modeling point of view you can review the SolidPort definition in the Modeler library: for example we can review this:
SubStreamName as SubStreamNamesType (Description:Substream Name, Valid:ComponentList.Option(SUBSTREAM-NAMES));
SubStreamNamesType is declared as a string parameter type, this allows the specification of the valid values which is a string set. This valid value string set is assigned the list of strings returned by ComponentList.Option(SUBSTREAM-NAMES), which is simply the list of substream names defined in Aspen Plus file. Similar syntax is used in the model (see the MyScreen example for details) to access the number of particle size classes and sizes.
Keywords: appdf, aprpdf, bkp, aprbkp, psd, substream-names
References: None |
Problem Statement: In most organizations, the server computers usually belong to a secured network domain guarded by a firewall.
The client machines belong to the business LAN. The business LAN usually resides on a different domain that does not have a trust relationship with the server domain.
In such cases, to access the Aspen MES applications residing on the Web Server such as Web21 and A1PE, the users on the business LAN who need access to the server must use their Windows credentials.
Due to company security policies, granting user privileges might be difficult.
The following workaround can be adopted to allow users of the business LAN to access the Web based applications. | Solution: Aspen MES Web Server (32-bit and 64-bit) use the Basic/Windows authentication methods to authenticate the users.
Both the Basic and Windows Authentication methods require the business users to enter a valid domain\username and password. If the credentials do not correspond to a valid Windows user account, the browser gets directed to an HTTP 401 authentication error page.
1. Hence the first step is to disable the Basic and Windows Authentication methods and enable Anonymous Authentication
Open IIS Manager ->Sites -> Default Web Site
Locate Process Explorer
Click on Authentication -> Disable Windows Authentication and Basic Authentication -> Enable Anonymous Authentication
Next, Locate Web21 and make the same changes.
2. The second step is to grant Read/Write privileges to the IIS_IUSR group for the respective folders on the Web Server.
Locate C:\inetpub\wwwroot\AspenTech. Right-click on ProcessExplorer -> Security
Ensure that the IIS_IUSR group has full Control.
Similarly, ensure that the IIS_IUSR group has Full Control to the Web21 folder as well.
3. The third step is to create a user that belongs to the IIS_IUSR group on the Web Server. Create a dummy user say netuser with a password on the Web Server.
Add the netuser to IIS_IUSR group.
Once the netuser user is created, all the Business LAN users can use the netuser account to access A1PE and Web21
Keywords: Domain
Web Server
Firewall
References: None |
Problem Statement: This Knowledge Base article (KB) is the fifth in a series of articles under the topic 'Linear Programming using Aspen Supply Chain Management'. This series is intended for users who do not have any background in LP or in Aspen Supply Chain Management (SCM) programming; the pre-requisites for reading this KB are the previous articles in the 'Linear Programming using Aspen Supply Chain Management' series, stored in Solutions # 135232, # 135398, # 135871 and # 136072. The screenshots in this document were created using Aspen SCM version 8. At the end of this tutorial, users will be able to formulate and solve simple Linear Programming problems in Aspen SCM.
Example Problem:
Similar to the example discussed in the previous Solution # 136072: A company is doing aggregate planning and has the demand forecast for the next 6 months. In addition to the Inventory holding cost and starting Inventory, this problem also gives the number of Working Days in a month, the cost for hiring and training a Worker, the cost for laying off a Worker, the payroll cost for a Worker and the amount of Time required for a product. Instead of a variable Production cost, a fixed Production cost is listed here. Hence the objective of this problem is to find out the optimal inventory, workforce level and total cost of operation for the horizon. Since workforce is involved here, it is best to solve this problem using Integer Programming.
                          JAN     FEB     MAR     APR     MAY     JUN
Demand Forecast           1600    3000    3200    3800    2200    2200
Inventory Holding cost    $2/unit/month
Hiring and Training cost  $300/worker
Layoff cost               $500/worker
Labor Hours               4 hours/unit
Payroll Cost              $4/hour
Starting Inventory        1000 units
Work days                 20 days/month
Work hours                8 hours/day/employee
Initial Workforce         80 employees
Unit Price                $40/unit
Solution
I. Algebraic Formulation:
a. Find out the decision variables:
Apart from the production and the inventory, the number of workers on payroll and the numbers of hires and layoffs are also decision variables for this problem. Hence there are 6 variables for each of these factors, declared as follows:
P[1] -> AMOUNT OF UNITS TO PRODUCE IN MONTH 1
.
.
P[6] -> AMOUNT OF UNITS TO PRODUCE IN MONTH 6
I[1] -> AMOUNT OF UNITS IN INVENTORY AT THE END OF MONTH 1
.
.
I[6] -> AMOUNT OF UNITS IN INVENTORY AT THE END OF MONTH 6
E[1] -> NUMBER OF EMPLOYEES AT THE START OF MONTH 1
.
.
E[6] -> NUMBER OF EMPLOYEES AT THE START OF MONTH 6
H[1] -> NUMBER OF EMPLOYEES HIRED AT THE START OF MONTH 1
.
.
H[6] -> NUMBER OF EMPLOYEES HIRED AT THE START OF MONTH 6
F[1] -> NUMBER OF EMPLOYEES LAID OFF AT THE START OF MONTH 1
.
.
F[6] -> NUMBER OF EMPLOYEES LAID OFF AT THE START OF MONTH 6
b. Formulate the Objective function:
The objective is to reduce the total cost by multiplying the respective costs and the decision variables. One variable to remember while writing the objective function is I[0] - the Starting Inventory, i.e. the Amount of Units in Inventory at the End of Month 0 - which is given in the question as 1000 units. Hence, the objective function in this problem is to:
MINIMIZE 40*{ P[1] + P[2] + P[3] + P[4] + P[5] + P[6] } + 2*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] } + (20*8*4)*{ E[1] + E[2] + E[3] + E[4] + E[5] + E[6] } + 300*{ H[1] + H[2] + H[3] + H[4] + H[5] + H[6] } + 500*{ F[1] + F[2] + F[3] + F[4] + F[5] + F[6] }
c. Identify the constraints:
Production in Month 1 along with the starting Inventory (i.e. Inventory at the end of Month 0) covers the Demand in Month 1 and the Inventory at the end of Month 1. Likewise, for the rest of the 5 months:
P[1] + I[0] = D[1] + I[1]
P[2] + I[1] = D[2] + I[2]
.
.
P[6] + I[5] = D[6] + I[6]
Production in Month 1 through Month 6 should be equivalent to the number of employees multiplied by the number of working days and working hours, divided by the labor hours per unit (20 days x 8 hours / 4 hours per unit = 40 units per employee per month).
P[1] = E[1] * 20 * 8 / 4
P[2] = E[2] * 20 * 8 / 4
.
.
P[6] = E[6] * 20 * 8 / 4
The total number of employees in Month 1 is equal to the starting number of employees minus the number of employees laid off at the start of Month 1 plus the number of employees hired at the start of Month 1.
E[1] = E[0] - F[1] + H[1]
E[2] = E[1] - F[2] + H[2]
.
.
E[6] = E[5] - F[6] + H[6]
d. Other Common Constraints:
The units produced and stored cannot be negative. Normally, this rule would result in additional constraints:
P[1] >= 0
P[2] >= 0
.
.
I[1] >= 0
I[2] >= 0
.
.
E[1] >= 0
E[2] >= 0
.
.
H[1] >= 0
H[2] >= 0
.
.
F[1] >= 0
F[2] >= 0
.
.
In this formulation, since the demand is positive, these numbers will never be negative. In addition to these constraints, the following constraints should be included, as they are already given in this problem:
I[0] = 1000
E[0] = 80
Hence, the algebraic formulation for this problem is:
MINIMIZE 40*{ P[1] + P[2] + P[3] + P[4] + P[5] + P[6] } + 2*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] } + (20*8*4)*{ E[1] + E[2] + E[3] + E[4] + E[5] + E[6] } + 500*{ F[1] + F[2] + F[3] + F[4] + F[5] + F[6] } + 300*{ H[1] + H[2] + H[3] + H[4] + H[5] + H[6] }
SUBJECT TO:
P[1] + I[0] = D[1] + I[1]
P[2] + I[1] = D[2] + I[2]
P[3] + I[2] = D[3] + I[3]
P[4] + I[3] = D[4] + I[4]
P[5] + I[4] = D[5] + I[5]
P[6] + I[5] = D[6] + I[6]
E[1] = E[0] - F[1] + H[1]
E[2] = E[1] - F[2] + H[2]
E[3] = E[2] - F[3] + H[3]
E[4] = E[3] - F[4] + H[4]
E[5] = E[4] - F[5] + H[5]
E[6] = E[5] - F[6] + H[6]
P[1] = E[1] * 20 * 8 / 4
P[2] = E[2] * 20 * 8 / 4
P[3] = E[3] * 20 * 8 / 4
P[4] = E[4] * 20 * 8 / 4
P[5] = E[5] * 20 * 8 / 4
P[6] = E[6] * 20 * 8 / 4
I[0] = 200
E[0] = 80
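As an optional cross-check of this formulation outside Aspen SCM, the same integer program can be written with the open-source PuLP library in Python. This is only an illustrative sketch (PuLP is not part of Aspen SCM); the data values are taken from the problem table above, with the 1000-unit starting inventory.

import pulp

months = range(1, 7)
demand = {1: 1600, 2: 3000, 3: 3200, 4: 3800, 5: 2200, 6: 2200}
start_inventory = 1000            # starting inventory from the problem data table
start_workforce = 80
units_per_worker = 20 * 8 / 4     # work days * work hours / labor hours per unit = 40
payroll_per_worker = 20 * 8 * 4   # = 640 $/worker/month

prob = pulp.LpProblem("aggregate_planning", pulp.LpMinimize)
P = pulp.LpVariable.dicts("P", months, lowBound=0, cat="Integer")   # production
I = pulp.LpVariable.dicts("I", months, lowBound=0, cat="Integer")   # ending inventory
E = pulp.LpVariable.dicts("E", months, lowBound=0, cat="Integer")   # employees
H = pulp.LpVariable.dicts("H", months, lowBound=0, cat="Integer")   # hired
F = pulp.LpVariable.dicts("F", months, lowBound=0, cat="Integer")   # laid off

# Objective: production + holding + payroll + layoff + hiring costs
prob += (40 * pulp.lpSum(P[t] for t in months)
         + 2 * pulp.lpSum(I[t] for t in months)
         + payroll_per_worker * pulp.lpSum(E[t] for t in months)
         + 500 * pulp.lpSum(F[t] for t in months)
         + 300 * pulp.lpSum(H[t] for t in months))

for t in months:
    prev_inv = start_inventory if t == 1 else I[t - 1]
    prev_emp = start_workforce if t == 1 else E[t - 1]
    prob += P[t] + prev_inv == demand[t] + I[t]      # material balance
    prob += E[t] == prev_emp - F[t] + H[t]           # workforce balance
    prob += P[t] == units_per_worker * E[t]          # production tied to workforce

prob.solve()
for t in months:
    print(t, P[t].value(), I[t].value(), E[t].value(), H[t].value(), F[t].value())
print("Total cost:", pulp.value(prob.objective))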
II. Formulate the problem using tables:
In this section, the above developed algebraic program will be converted directly into the corresponding tabloid program. The ultimate aim will be to achieve this algebraic formulation in the MATX table. The following formulation will be a good representation of how this algebraic formulation is going to be modified to fit into SCM:
MINIMIZE 40*{ P[1] + P[2] + P[3] + P[4] + P[5] + P[6] } + 2*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] } + 640*{ E[1] + E[2] + E[3] + E[4] + E[5] + E[6] } + 500*{ F[1] + F[2] + F[3] + F[4] + F[5] + F[6] } + 300*{ H[1] + H[2] + H[3] + H[4] + H[5] + H[6] }
SUBJECT TO:
-I[0] = -200
P[1] + I[0] - I[1] = 1600
P[2] + I[1] - I[2] = 3000
P[3] + I[2] - I[3] = 3200
P[4] + I[3] - I[4] = 3800
P[5] + I[4] - I[5] = 2200
P[6] + I[5] - I[6] = 2200
E[1] - F[1] + H[1] = 80
E[2] + F[2] - H[2] - E[1] = 0
E[3] + F[3] - H[3] - E[2] = 0
E[4] + F[4] - H[4] - E[3] = 0
E[5] + F[5] - H[5] - E[4] = 0
E[6] + F[6] - H[6] - E[5] = 0
P[1] - 40 * E[1] = 0
P[2] - 40 * E[2] = 0
P[3] - 40 * E[3] = 0
P[4] - 40 * E[4] = 0
P[5] - 40 * E[5] = 0
P[6] - 40 * E[6] = 0
One disadvantage of this modeling method is that the Starting Inventory value has to be entered as -200. A detailed explanation of why this is done is provided in the COEF table section.
a. Open Aspen SCM:
Save the file 'lpcourse.cas' available in the attachment of this Solution. Start Aspen SCM, go to File | Open and point to the location where you saved 'lpcourse.cas'.
b. COL Set:
The Production, Inventory, Employees, Hired and Fired variables are categorized under PROD, INVE, EMPL, HIRE and FIRE respectively. It is entered in the Code section of the COL set.
c. COLS Table:
Create two sets: TIME with 1 through 6 in the Code section and TIMESS with 0 through 6 in the Code section. Please make sure that TIME is a subset of TIMESS. Both these sets denote the months.
TIME should be mentioned in the FLD2 column for all the variables except INVE. The reason is that INVE has a Month 0 variable (the Starting Inventory) in the formulation, while the others do not.
Algebraic formulation's objective function:
MINIMIZE
40*{ P[1] + P[2] + P[3] + P[4] + P[5] + P[6] } -> PROD
+ 2*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] } -> INVE
+ 640*{ E[1] + E[2] + E[3] + E[4] + E[5] + E[6] } -> EMPL
+ 500*{ F[1] + F[2] + F[3] + F[4] + F[5] + F[6] } -> FIRE
+ 300*{ H[1] + H[2] + H[3] + H[4] + H[5] + H[6] } -> HIRE
The decision variables P, I, E, H and F should be entered in the TABL column.
d. ROW Set:
You can identify three groups of constraints from the algebraic formulation. For the Material and Workforce constraints there are two entries each to adapt the algebraic formulation to Aspen SCM: MAT.BAL and NEXT.BAL represent the Material constraints, while WOR.BAL and WOR.NEXT represent the Workforce constraints. The EQ.WOR constraint represents the Production constraints.
Algebraic formulation's constraints:
-I[0] = -200 -> MAT.BAL
P[1] + I[0] - I[1] = 1600 -> MAT.BAL and NEXT.BAL
P[2] + I[1] - I[2] = 3000 -> MAT.BAL and NEXT.BAL
P[3] + I[2] - I[3] = 3200 -> MAT.BAL and NEXT.BAL
P[4] + I[3] - I[4] = 3800 -> MAT.BAL and NEXT.BAL
P[5] + I[4] - I[5] = 2200 -> MAT.BAL and NEXT.BAL
P[6] + I[5] - I[6] = 2200 -> MAT.BAL and NEXT.BAL
E[1] - F[1] + H[1] = 80 -> WOR.BAL
E[2] + F[2] - H[2] - E[1] = 0 -> WOR.BAL and WOR.NEXT
E[3] + F[3] - H[3] - E[2] = 0 -> WOR.BAL and WOR.NEXT
E[4] + F[4] - H[4] - E[3] = 0 -> WOR.BAL and WOR.NEXT
E[5] + F[5] - H[5] - E[4] = 0 -> WOR.BAL and WOR.NEXT
E[6] + F[6] - H[6] - E[5] = 0 -> WOR.BAL and WOR.NEXT
P[1] - 40 * E[1] = 0 -> EQ.WOR
P[2] - 40 * E[2] = 0 -> EQ.WOR
P[3] - 40 * E[3] = 0 -> EQ.WOR
P[4] - 40 * E[4] = 0 -> EQ.WOR
P[5] - 40 * E[5] = 0 -> EQ.WOR
P[6] - 40 * E[6] = 0 -> EQ.WOR
e. POL Set:
POL set has two sections:
i. Column Section:
In the Column Section, the decision variables P, I, E, H and F are declared.
ii. Row Section:
In the Row Section, the Right Hand Side (RHS) of the constraints: EZ, DV, SI and ER are declared.
Algebraic formulation's constraints:
-I[0] = -200 -> DV and SI
P[1] + I[0] - I[1] = 1600 -> DV
P[2] + I[1] - I[2] = 3000 -> DV
P[3] + I[2] - I[3] = 3200 -> DV
P[4] + I[3] - I[4] = 3800 -> DV
P[5] + I[4] - I[5] = 2200 -> DV
P[6] + I[5] - I[6] = 2200 -> DV
E[1] - F[1] + H[1] = 80 -> ER
E[2] + F[2] - H[2] - E[1] = 0 -> EZ
E[3] + F[3] - H[3] - E[2] = 0 -> EZ
E[4] + F[4] - H[4] - E[3] = 0 -> EZ
E[5] + F[5] - H[5] - E[4] = 0 -> EZ
E[6] + F[6] - H[6] - E[5] = 0 -> EZ
P[1] - 40 * E[1] = 0 -> EZ
P[2] - 40 * E[2] = 0 -> EZ
P[3] - 40 * E[3] = 0 -> EZ
P[4] - 40 * E[4] = 0 -> EZ
P[5] - 40 * E[5] = 0 -> EZ
P[6] - 40 * E[6] = 0 -> EZ
From the algebraic formulation we can see that E[0] is treated more like a constant and will be declared in the POLI table. That's why E did not need the TIMESS domain, while I required it.
f. POLI Table:
POLI Table has two sections:
i. Column Section:
In the Column section, the CST column should specify the coefficient that has to be multiplied with the decision variables in the objective function. A new table, INVCOST, needs to be created for the Inventory decision variables. Although the holding cost is constant, the initial inventory should not be charged for this planning period; hence this table should have TIMESS as the Row Set, with 0 in the first row and 2 in all the other rows.
As mentioned earlier, this model should be solved using Integer programming. So, UI (Upper Integer) should be entered in the MIN column for all the variables. This forces all the variables to settle on close-to-optimal integer values.
Algebraic formulation's objective function:
MINIMIZE
40*{ P[1] + P[2] + P[3] + P[4] + P[5] + P[6] } -> PROD
+ 2*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] } -> INVE
+ 640*{ E[1] + E[2] + E[3] + E[4] + E[5] + E[6] } -> EMPL
+ 500*{ F[1] + F[2] + F[3] + F[4] + F[5] + F[6] } -> FIRE
+ 300*{ H[1] + H[2] + H[3] + H[4] + H[5] + H[6] } -> HIRE
ii. Row Section:
In the Row section, the SENSE for each of the constraint groups should be entered as E or EQ, since all the constraints in the algebraic formulation use an equals sign. The RHS column should contain constants or sets for all three constraint groups. A new table DEMAND with TIME as the Row Set should be created and populated based on the data given in the problem. Demand for Month 0 is declared as the negative of SI.
Algebraic formulation's constraints:
-I[0] = -200 -> DV and SI
P[1] + I[0] - I[1] = 1600 -> DV
P[2] + I[1] - I[2] = 3000 -> DV
P[3] + I[2] - I[3] = 3200 -> DV
P[4] + I[3] - I[4] = 3800 -> DV
P[5] + I[4] - I[5] = 2200 -> DV
P[6] + I[5] - I[6] = 2200 -> DV
E[1] - F[1] + H[1] = 80 -> ER
E[2] + F[2] - H[2] - E[1] = 0 -> EZ
E[3] + F[3] - H[3] - E[2] = 0 -> EZ
E[4] + F[4] - H[4] - E[3] = 0 -> EZ
E[5] + F[5] - H[5] - E[4] = 0 -> EZ
E[6] + F[6] - H[6] - E[5] = 0 -> EZ
P[1] - 40 * E[1] = 0 -> EZ
P[2] - 40 * E[2] = 0 -> EZ
P[3] - 40 * E[3] = 0 -> EZ
P[4] - 40 * E[4] = 0 -> EZ
P[5] - 40 * E[5] = 0 -> EZ
P[6] - 40 * E[6] = 0 -> EZ
g. ROWS Table:
FLD1 column of the ROWS table should specify the name of the group of constraints - A and W. A will represent the Material Balance constraints and W will represent the Workforce.
MAT.BAL constraint should specify TIMESS as the domain, which covers Production and Inventory of Month 0 to Month 6. NEXT.BAL will add on to this constraint. This will have the domain as *TIME, with * being the mask, so that it is separate from TIMESS. This will cover Inventory of Month 0 to Month 5. A table, TIMN, should be created with TIMESS and TIME as Row and Column Sets. Contents of TIME act as the constraints while TIMESS contents act as the variables. So EZ should be populated for all the diagonal variables, so that I[0] will appear in A1 constraint, I[1] will appear in A2 constraint and so on.
WOR.BAL should have TIME as the domain, which covers E, F and H variables. WOR.NEXT will add on to WOR.BAL, with the domain as *TIN. TIN should be a new set created with Months 2 through 6, which covers E variable. Similar to TIMN, TINM should be created and similarly populated.
EQ.WOR should have TIME as domain, which will cover all the Production constraints.
Algebraic formulation's constraints:
-I[0] = -200 -> DV and SI
P[1] + I[0] - I[1] = D[1] -> DV
P[2] + I[1] - I[2] = D[2] -> DV
P[3] + I[2] - I[3] = D[3] -> DV
P[4] + I[3] - I[4] = D[4] -> DV
P[5] + I[4] - I[5] = D[5] -> DV
P[6] + I[5] - I[6] = D[6] -> DV
E[1] - F[1] + H[1] = E[0] -> ER
E[2] + F[2] - H[2] - E[1] = 0 -> EZ
E[3] + F[3] - H[3] - E[2] = 0 -> EZ
E[4] + F[4] - H[4] - E[3] = 0 -> EZ
E[5] + F[5] - H[5] - E[4] = 0 -> EZ
E[6] + F[6] - H[6] - E[5] = 0 -> EZ
P[1] - 40 * E[1] = 0 -> EZ
P[2] - 40 * E[2] = 0 -> EZ
P[3] - 40 * E[3] = 0 -> EZ
P[4] - 40 * E[4] = 0 -> EZ
P[5] - 40 * E[5] = 0 -> EZ
P[6] - 40 * E[6] = 0 -> EZ
h. COEF Table:
The COEF table contains all the coefficients for the constraints. For MAT.BAL, PROD should have +1 and INVE should have -1. This represents P[1] and I[1] for the A1 constraint. Since P[0] is not included in the model, I[0] alone appears in the A0 constraint and hence -I[0] forms the first constraint. To compensate for this negative sign, the Starting Inventory should be made negative in DEMAND. For NEXT.BAL, INVE should have +1, representing I[0] for A1.
For WOR.BAL, +1 should be entered for E and F, while -1 should be entered for H. For WOR.NEXT -1 should be entered for E variable.
In the EQ.WOR constraint, P should have +1 and E should have -40.
III. Generation & Solution
After the model is formulated, the next step is to generate the model and find the Solution.
a. Generation:
The Generation step enumerates all the decision variables across the corresponding domains. It is executed by typing GEN in the command line.
As a result of GEN, an information dialog box opens and also a variety of tables are generated. This dialog box is the place to look for errors, if any. Detailed message on errors can be found by typing ERROR in the command line. The tables generated with the GEN command, can be checked for consistency in the formulation:
i. MATX Table:
This table helps to confirm the coefficients of constraints in the table formulation with the algebraic formulation.
Algebraic formulation's constraints:
-I[0] = -200 -> A0
P[1] + I[0] - I[1] = D[1] -> A1
P[2] + I[1] - I[2] = D[2] -> A2
P[3] + I[2] - I[3] = D[3] -> A3
P[4] + I[3] - I[4] = D[4] -> A4
P[5] + I[4] - I[5] = D[5] -> A5
P[6] + I[5] - I[6] = D[6] -> A6
E[1] - F[1] + H[1] = E[0] -> W1
E[2] + F[2] - H[2] - E[1] = 0 -> W2
E[3] + F[3] - H[3] - E[2] = 0 -> W3
E[4] + F[4] - H[4] - E[3] = 0 -> W4
E[5] + F[5] - H[5] - E[4] = 0 -> W5
E[6] + F[6] - H[6] - E[5] = 0 -> W6
P[1] - 40 * E[1] = 0 -> RQ1
P[2] - 40 * E[2] = 0 -> RQ2
P[3] - 40 * E[3] = 0 -> RQ3
P[4] - 40 * E[4] = 0 -> RQ4
P[5] - 40 * E[5] = 0 -> RQ5
P[6] - 40 * E[6] = 0 -> RQ6
ii. RHSX Table:
This table can be used to verify the right hand side of the corresponding constraints.
Algebraic formulation's constraints:
-I[0] = -200 -> A0
P[1] + I[0] - I[1] = D[1] -> A1
P[2] + I[1] - I[2] = D[2] -> A2
P[3] + I[2] - I[3] = D[3] -> A3
P[4] + I[3] - I[4] = D[4] -> A4
P[5] + I[4] - I[5] = D[5] -> A5
P[6] + I[5] - I[6] = D[6] -> A6
E[1] - F[1] + H[1] = E[0] -> W1
E[2] + F[2] - H[2] - E[1] = 0 -> W2
E[3] + F[3] - H[3] - E[2] = 0 -> W3
E[4] + F[4] - H[4] - E[3] = 0 -> W4
E[5] + F[5] - H[5] - E[4] = 0 -> W5
E[6] + F[6] - H[6] - E[5] = 0 -> W6
P[1] - 40 * E[1] = 0 -> RQ1
P[2] - 40 * E[2] = 0 -> RQ2
P[3] - 40 * E[3] = 0 -> RQ3
P[4] - 40 * E[4] = 0 -> RQ4
P[5] - 40 * E[5] = 0 -> RQ5
P[6] - 40 * E[6] = 0 -> RQ6
iii. SENX Table:
This table defines the sense of both the constraint groups.
Algebraic formulation's constraints:
-I[0] = -200 -> A0
P[1] + I[0] - I[1] = D[1] -> A1
P[2] + I[1] - I[2] = D[2] -> A2
P[3] + I[2] - I[3] = D[3] -> A3
P[4] + I[3] - I[4] = D[4] -> A4
P[5] + I[4] - I[5] = D[5] -> A5
P[6] + I[5] - I[6] = D[6] -> A6
E[1] - F[1] + H[1] = E[0] -> W1
E[2] + F[2] - H[2] - E[1] = 0 -> W2
E[3] + F[3] - H[3] - E[2] = 0 -> W3
E[4] + F[4] - H[4] - E[3] = 0 -> W4
E[5] + F[5] - H[5] - E[4] = 0 -> W5
E[6] + F[6] - H[6] - E[5] = 0 -> W6
P[1] - 40 * E[1] = 0 -> RQ1
P[2] - 40 * E[2] = 0 -> RQ2
P[3] - 40 * E[3] = 0 -> RQ3
P[4] - 40 * E[4] = 0 -> RQ4
P[5] - 40 * E[5] = 0 -> RQ5
P[6] - 40 * E[6] = 0 -> RQ6
iv. POLX Table:
This table can be used to verify the coefficients of the objective function of the tabloid formulation (CST column) with the algebraic formulation.
Algebraic formulation's objective function:
MINIMIZE 40*{ P[1] + P[2] + P[3] + P[4] + P[5] + P[6] } + 2*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] } + 640*{ E[1] + E[2] + E[3] + E[4] + E[5] + E[6] } + 500*{ F[1] + F[2] + F[3] + F[4] + F[5] + F[6] } + 300*{ H[1] + H[2] + H[3] + H[4] + H[5] + H[6] }
b. Solution
To solve the model, the XPRESS solver should be used; it can be invoked with the XPRESS command. To specify Maximization or Minimization of the objective function, open the CXPRESS control table and change the MNMX value accordingly. Here it is MIN, as discussed in the algebraic formulation. Once the solve is complete, a variety of tables are generated. These can be checked for the Solution:
i. COLX Table:
The X column represents the optimal value of the decision variables. The XCST column specifies the cost that each of these decision variables contribute to the objective function. Note that all the variables are fixed on integers.
ii. OBJX Table:
The OBJECTIVEFUNCTION column provides the value of the objective function, i.e. the total minimum cost of operation.
Keywords: None
References: None |
Problem Statement: If one of my Excel input spreadsheets for PIMS has recalculated and some cells have errors (like #N/A, #REF, etc) it can cause unexpected results that are very hard to diagnose. Is there a way to find cells with errors in a very large Excel spreadsheet? | Solution: The following macro code will search a workbook and display all the cells that contain errors in a sheet called Errors.
Option Explicit

Sub ListErrors()
    ' Lists every cell containing an error value (#N/A, #REF!, etc.) on a sheet
    ' named "Errors": column A = sheet name, column B = cell address.
    Dim ws As Worksheet
    Dim RNG As Range, Cell As Range
    Dim NR As Long

    Application.ScreenUpdating = False

    ' Create the "Errors" sheet if it does not exist, otherwise clear it
    If Not Evaluate("ISREF(Errors!A1)") Then
        Worksheets.Add(After:=Worksheets(Worksheets.Count)).Name = "Errors"
    Else
        Sheets("Errors").UsedRange.Clear
    End If

    NR = 1
    On Error Resume Next   ' SpecialCells raises an error when a sheet has no error cells
    With Sheets("Errors")
        For Each ws In Worksheets
            If ws.Name <> "Errors" Then
                Set RNG = Nothing
                Set RNG = ws.Cells.SpecialCells(xlFormulas, xlErrors)
                If Not RNG Is Nothing Then
                    For Each Cell In RNG
                        .Range("A" & NR) = ws.Name
                        .Range("B" & NR) = Cell.Address
                        NR = NR + 1
                    Next Cell
                End If
            End If
        Next ws
        .Activate
    End With
    Application.ScreenUpdating = True
End Sub
Keywords: Excel
error
References: None |
Problem Statement: What is the best way to troubleshoot network issues? | Solution: These are some existing tools and tests that customers can perform to troubleshoot network issues:
1. PathPing test: To check for dropped packets and routing issues.
a) Open a command window and type:
pathping <remoteHostName>
b) Do the same test using the IP address (to help check that the host name can be resolved)
pathping <remoteHostIPaddress>
2. Large file copy test: Copy a large file (250MB) between two machines to see if the performance is acceptable.
If not...
a) Check NIC configuration settings (Speed/Duplex) on both computers and all Routers/Switches between them, they should be consistent.
b) Check for broken clips on network cables or other signs of damage.
c) Update NIC drivers to the latest version (a common solution to high-traffic issues).
3. Use a third-party diagnostic tool like Wireshark to see what type of errors are occurring and work with IT to correct the problem.
4. Check the DNS tables for consistency; name resolution done differently on multiple DNS servers can be responsible for slow response.
For firewall considerations or to include confirmation of port availability, please refer to Solution 123707.
Keywords: network, configuration, connectivity, diagnostic, ping, traceroute, error resolution
References: None |
Problem Statement: What is the benefit of having the CV's grouped instead of having each and every CV in its own rank group? | Solution: The CV ranking method ensures that the more important CV constraints (lower rank) are satisfied first regardless of the extent of violations in the less important (higher ranking) constraints. Therefore, the optimizer will work on solving/satisfying all CVs belonging to the same CV rank group first and then move on to the next rank group. Once a rank group is solved, the constraints belonging to the particular rank group are as treated as hard constraints by the optimizer. For CVs that belong to the same rank group, the constraints are relaxed based on the weights specified for each CV in the group. So, higher weight to a CV within a CV rank will result in lower give-up on the constraints of that particular CV.
In case each and every CV is assigned to its own rank group, DMCplus would solve the CVs in the order of their rank, satisfying the lower rank CV first followed by the next higher rank CV. This might result in a situation where major constraint violation might occur for the less important CVs, which might not always be the desirable result. Grouping CVs in a rank group allows to work around this problem by relaxing the constraints between the CVs belonging to a specific rank, depending on the weighting factor specified.
The benefit of a lot of LP rank groups is that the order of give-up is well-known to the user. ECEs and gains and GMULTs will not affect the order. Many real projects with LP rank groups tend to use a lot of them.
There are a couple of drawbacks of multiple LP rank groups:
1. The solution from the steady state optimizer may not be as accurate. The rank group optimization problem is an under-determined optimization problem that can be challenging. This can be exacerbated if there are a very large number of rank groups. If accuracy is a concern, there is a more accurate optimization algorithm available in DMCplus that can be accessed by setting EPSMVPMX equal to 8 (in version 2006.5 or later). The entry dictionary has some more details about this.
2. The solution can take longer to run. Since it is usually the move plan that takes the majority of calculation time, the performance penalty is usually not severe.
3. It can make maintenance of the controller more difficult and harder to understand. This is not necessarily true for most cases, but some have mentioned this as a reason for not using a lot of rank groups.
However, it would not be reasonable to put a lot of constraints into an LP rank group and have to worry about ECEs and gains and GMULTs. It does not seem like a numerically stable formulation; however, many users and even experienced ones do it. It makes more sense to use a QP rank group in that case.
The following recommendation can be made based on the above description:
1. Avoid LP rank groups. Groups with multiple constraints should use QP rank groups.
2. When ordering is well-understood, use as many rank groups as required.
Keywords: CV Rank Group
ECE
GMULT
LP
References: None |
Problem Statement: What is the recommended path for upgrading from Aspen DMCplus Controller 5.0 to Aspen DMCplus Controller V7.3? | Solution: Use PRTUpdate, attached to thisSolution and the steps below.
To use:
1. Use DMCplus build to create a new ccf file for the application reading the existing mdl file.
2. Set the PRTSWC on the existing DMC controller to a value of 1 to generate a current PRT file.
3. Put the PRT file and ccf file in the same directory and run PRTUpdate
It has 4 options of PRT file types. DMCi is the one most like DMC version 5.3
4. The PRTUpdate transfers all of the limits and tuning parameters from the PRT to the ccf file.
5. Run simulate to verify and re-tune the application. One of the things that have changed significantly are the CV ranks.
6. Use DMCplus Build to update all of the database connections. This can be done using a template or the tag entry wizard.
Note: DMC 5.0 does not have a ccf file; it contains a DMC configuration file (usually a cfg file, though it may have been called ccf).
While this cfg file is similar to a ccf file, it is not the same. It contains much of the same information, though, and should be used as a reference while creating the new DMCplus controller configuration. The PRTUpdate will migrate much of the tuning and limits from the PRT file to the new DMCplus ccf file. The tag connections in the cfg file should also be used as a reference when making the new connections in the DMCplus ccf file.
7. If there are customizations in the existing controller these will have to be migrated into ccf calculations.
Note: The calc engine in the DMC 5.0 product and the calc engine in the DMCplus product are similar, and you should be able to use the calcs from the DMC 5.0 controller as a reference to recreate the calcs in the DMCplus ccf file. It is also possible that the DMC 5.0 controller contains custom calculations that were not implemented in the calcs; this version of the DMC controller had entry points for custom FORTRAN routines that could be used for calculations. DMCplus does not allow this type of custom FORTRAN calculation. DMCplus has other mechanisms to handle these: standard transforms and CCF calculations. If the DMC 5.0 controller contains such custom FORTRAN calculations, they will need to be implemented using standard calculations or ccf calcs, or moved out of the DMCplus controller.
Uninstall the DMC 5.3 software and install the Aspen APC Online software or install on a separate server. Since the hardware requirements have changed it is likely that an updated server will be required.
Deploy the updated application. Re-commission the application.
Keywords: None
References: None |
Problem Statement: Some properties of a stream are a complex function of two or more other properties from the same or a related stream.
For example, the Cetane Index of a stream can be expressed as a function of SPG (or API) and T50 of the same stream:
CTI = -420.34 + 0.016*(141.5/SPG-131.5)^2 + 0.192*(141.5/SPG-131.5)* lg(T50) + 65.01*(lg(T50))^2 - 0.0001809*T50^2
The Non Linear Equation facility available with the XNLP platform allows to model this so that it can be effectively used as any other regular property, for blending specifications or for submodel pooling calculations. | Solution: The attached model shows the described structure.
To model this, we will create a Property Recursion structure in a submodel to transform an activity of a vector into a quality. The vector's activity will actually represent the property's equation value. XNLP is required for this structure to work.
In this particular example, we will model a dummy property, TST for the Reformate stream (RFT).
The property's equation will be: TST = LOG(SPGRFF*N2ARFF), where RFF is the Reformer's feed stream.
The recursion structure set up in submodel SRFT is:
The activity of column ONE, is fixed to 1, therefore the activity of the dummy pool NLE is 1 also.
Then we recurse quality TST for pool NLE. The activity of column PRP (column's Matrix name is SRFTPRP) will be calculated through the non-linear equation to be equal to LOG(SPGRFF*N2ARFF).
An initial guess in table PGUESS is required for property TST, stream NLE.
The set up of the equation is done in the Non Linear Equations branch of the model tree:
The actual equation is shown below:
The equation says that SRFTPRP - lg(QSPGRFF*QN2ARFF) = 0, i.e. the activity of column
SRFTPRP = lg(QSPGRFF*QN2ARFF)
The variables QSPGRFF and QN2ARFF already exist in the model; they represent the SPG and N2A properties of stream RFF.
Then, in table PCALC we calculate the TST value from NLE to RFT:
The property TST for stream RFT can now be used as any regular property. In this particular model, it is used for a blending Specification for stream URG.
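For a quick standalone check of the Cetane Index correlation quoted in the problem statement (outside PIMS), it can be evaluated directly in Python. In this sketch the function name and sample inputs are illustrative only, and it assumes T50 is the mid-boiling temperature in degrees F and lg is the base-10 logarithm.

import math

def cetane_index(spg, t50):
    # CTI correlation from the problem statement; t50 assumed in deg F, lg = log10
    api = 141.5 / spg - 131.5
    lg_t50 = math.log10(t50)
    return (-420.34 + 0.016 * api**2 + 0.192 * api * lg_t50
            + 65.01 * lg_t50**2 - 0.0001809 * t50**2)

print(round(cetane_index(0.85, 500.0), 1))   # illustrative inputs only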
Keywords: Property
Properties
XNLP
Nonlinear Equation
References: None |
Problem Statement: Aspen Fleet Optimizer services randomly stop processing data. | Solution: A few Aspen Fleet Optimizer clients have reported cases where the AFO services appear to be running but stop processing data. The fix for this particular issue has been stopping the services and then simply restarting them. In order to try to avoid any future unexpected data stoppages, we would highly recommend a planned weekly automated stop and start of all the AFO services.
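One possible way to automate the weekly stop and start is a small script run from Windows Task Scheduler. The sketch below is only an illustration, written in Python around the standard net stop / net start commands; the service names are placeholders and must be replaced with the actual AFO service names registered on the server.

import subprocess
import time

AFO_SERVICES = ["AFO Service 1", "AFO Service 2"]   # placeholders only, not real names

for name in AFO_SERVICES:                  # stop all services
    subprocess.run(["net", "stop", name], check=False)
time.sleep(30)                             # allow time for a clean shutdown
for name in reversed(AFO_SERVICES):        # start them back up
    subprocess.run(["net", "start", name], check=False)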
Keywords: None
References: None |
Problem Statement: What values should be entered for the TYPE, ID, ID1, ID2 and ID3 fields depending on the type of event in automation code? | Solution: The values for the TYPE, ID, ID1, ID2 and ID3 fields depend on the type of event as shown in the following table:
Description | Type | ID | ID1 | ID2 | ID3
Unit Parameter Event | 1 | Parameter values | Unit ID | - | -
Unit Mode Event | 2 | Parameter values | Unit ID | - | -
Crude Receipt Event | 11 | Crude comp. | Dest. Tank IDs | - | Ship. System
Crude Transfer Event | 12 | - | Source tank IDs | Dest. tank ID | Ship. System
Crude Run Event | 13 | Parameter values | Source tank IDs | Crude unit ID | -
Product Transfer Event | 14 | - | Source tank IDs | Dest. tank ID | -
Product Shipment Event | 15 | Dest. tank IDs | Tank ID | PL | Ship. System
Pump Event | 16 | Product no. | Pump unit ID | - | Ship. System
Blend Event | 17 | Blend composition | Dest. tank ID | Blender ID | Template name
Crude PL Shipment | 18 | Dest. tank IDs | Pipeline ID | Source tanks | 2nd Pipeline
Product Receipt Event | 19 | Stream properties | Dest. tank ID | - | Ship. System
Pipeline Crude Receipt | 20 | Crude comp. | Dest. tank ID | Pipeline ID | Ship. System
PL Receipt Event | 30 | Pipeline ID | Dest. tank IDs | - | -
Crude Run by Tank | 31 | - | Event no. of crd. run | Charge tank ID | -
PL Shipment by Tank | 32 | - | Event no. of ship. | Source tank ID | -
Crude Receipt by Tank | 33 | Event no. of receipt | Dest. tank IDs | Crude comp. | -
Crude Receipt (Trs. MD) | 34 | Event no. of receipt | - | - | Ship. System
Crude PL Shpmt to 2nd PL | 35 | Dest. tank ID | Sec. pipeline ID | - | -
Pipeline Product Receipt | 36 | Event no. of ship. | Dest. tank ID | - | -
Product Receipt by Tank | 37 | - | Dest. tank ID | - | -
Blend Component | 38 | - | Component Tank | - | -
Crude Transfer Event | 39 | - | Source Tank ID | - | -
Keywords: Automation
Event ID
Create Event through automation
References: None |
Problem Statement: After I solve for optimization in Aspen Supply Chain Planner, I am looking at the MIMIXPRS.LOG - what is it telling me? | Solution: The first section of the MIMIXPRS.log, ?Setting MIMI/XPRESS Control variables?, lists out the settings from the CXPRESS table that are configured at the time of model creation and while some settings are system generated. Model specific settings can be adjusted from the CXPRESS table as needed.
The second section of the log ?Xpresslog message? gives details about the optimization of the linear program. Below is sample of the MIMIXPRS.log:
Reading Problem MIMIXPRS
Problem Statistics
286626 ( 0 spare) rows
445617 ( 0 spare) structural columns
1278507 ( 0 spare) non-zero elements
Global Statistics
0 entities 0 sets 0 set members
Presolved problem has: 71013 rows 177170 cols 509692 non-zeros
Its Obj Value S Ninf Nneg Sum Inf Time
0 861350944.1 D 2157 0 419164351.7 1
100 550270345.3 D 2033 0 419164351.7 1
...
30000 -41428370.76 D 692 0 648207682.2 29
30100 -41435650.92 D 666 0 363253552.1 29
Uncrunching matrix
31443 -41472736.22 P 0 0 .000000 31
Optimal solution found
========== MIMI log message (16:36:13) ==========
Solution status:
Optl sltn found (rtn=1) (-41472736.221244)
Let's go through the sample log.
Problem Statistics
286626 ( 0 spare) rows -This represents the number of rows in the model.
445617 ( 0 spare) structural columns - This represents the number of columns in the model.
1278507 ( 0 spare) non-zero elements - This represents the number of non-zero coefficients in the model.
Presolved problem has: 71,013 rows 177,170 columns 509,692 non-zeros
Presolve will first attempt to simplify the problem by detecting and removing redundant constraints and tightening variable bounds. In some cases, infeasibility may even be determined at this stage, or the optimal solution may be found.
In our sample log we see that Presolve has reduced the number of rows to 71,013 from 286,626, the number of columns to 177,170 from 445,617 and the non-zeros to 509,692 from 1,278,507.
The reduction in the number of rows, columns and non-zero intersections greatly reduces the amount of time and memory needed to solve the LP.
Below is a sample of number of iterations the solver generated before it came to the optimalSolution. We will go through each column and describe the column name and the data portrayed.
Its Obj Value S Ninf Nneg Sum Inf Time
0 861350944.1 D 2157 0 419164351.7 1
100 550270345.3 D 2033 0 419164351.7 1
...
30000 -41428370.76 D 692 0 648207682.2 29
30100 -41435650.92 D 666 0 363253552.1 29
...
31443 -41472736.22 P 0 0 .000000 31
Its - the number of iterations. In this sample the number of iterations starts at 0 and continues through 31,443. The solver took 31,443 iterations to solve the problem.
Obj. Value - on the first iteration the solver found the objective value to be 861,350,944.1 (not the optimal solution), but on the last iteration the solver found the optimal solution, -41,472,736.22.
Ninf - The number of infeasibles - in the first iteration there were 2,157 infeasible constraints in the solution. The solver continued on for another 31,443 iterations and finally found an optimal solution with zero infeasible constraints. Note: the objective value at iteration zero was actually better than the optimal solution objective value, but it was not feasible, so the process continued.
Nneg - The number of variables not at the optimal level. In our sample log, all variables are at their optimal level.
Sum Inf - The sum of the infeasibles. All of the infeasibles are added together. In the first iteration the sum is 419,164,351.7 while the last iteration the sum is zero.
Time - The cumulative amount of time it took the solver through this iteration. The entire solution process of 31,443 iterations took 31 seconds to solve.
In our sample log the solution status is telling us that an optimal solution has been found, with a return code of 1 (optimal solution found) and an objective value of -41,472,736.221244.
Solution status:
Optl sltn found (rtn=1) (-41472736.221244)
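If the iteration history needs to be reviewed programmatically (for example, to plot the objective value or the sum of infeasibilities over the solve), the iteration lines can be extracted with a short script. This is a minimal Python sketch, assuming the log file is named MIMIXPRS.LOG and follows the column layout shown above.

import re

# Matches iteration lines of the form: Its  Obj Value  S  Ninf  Nneg  Sum Inf  Time
pattern = re.compile(
    r"^\s*(\d+)\s+(-?[\d.]+)\s+([DP])\s+(\d+)\s+(\d+)\s+(-?[\d.]+)\s+(\d+)\s*$")

iterations = []
with open("MIMIXPRS.LOG") as log:
    for line in log:
        match = pattern.match(line)
        if match:
            its, obj, phase, ninf, nneg, suminf, secs = match.groups()
            iterations.append((int(its), float(obj), phase,
                               int(ninf), int(nneg), float(suminf), int(secs)))

for row in iterations:
    print(row)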
Keywords: Optimization log
References: None |
Problem Statement: The new Aspen SCM GUI interface is built using XML. This knowledge base article demonstrates how to build custom screens in the new Aspen SCM GUI by programming in XML. It is a follow-up to the previous Solution #136585, 'XML Tutorial 4 - How to build or edit a sample User Management Screen'. | Solution: The case file containing the previous Solution can be found in the attachment.
Example
Keywords: None
References: None |
Problem Statement: This Knowledge Base article is the first in a series of articles under the topic 'Advanced Linear Programming using Aspen Supply Chain Management'. The prerequisites for this article are the previous articles written under the series 'Linear Programming using Aspen Supply Chain Management', which contains simple textbook-type problems formulated and solved using Aspen SCM; that series is covered under Solutions # 135232, # 135398, # 135871, # 136072 and # 136349. | Solution: In this series, the magnitude of the problems discussed is comparable to real model implementations. This article discusses a sample scenario and how the linear program was formulated to model it; the algebraic representation of the model is presented here.
Scenario
Let's consider a terminal where products are received in pipelines from different refineries. These products are loaded into cargos on a daily basis. The terminal has tanks which store inventory. The terminal's schedulers need to know how much inventory they should store to meet the demand and how to schedule the loading optimally.
Solution
The schedulers are willing to work within specific limits so as to avoid run out risks and holding costs during loading operations. So, they currently have alerts when the inventory reaches different levels. The linear program should be modeled in such a way that all these alerts are avoided to the maximum possible extent. In order to do this, penalties are setup every time the inventory exceeds a specific limit. The sum of these penalties form the objective function and the goal is to reduce these penalties as much as possible.
Domains
There are 3 basic domains that are used throughout this Solution:
P - Time periods in days during which inventory planning needs to be done
M - Materials received from the refineries and loaded into cargos
N - Each Nomination can contain multiple products in different proportions to be loaded into a cargo
Variables
The following are the variables used in this formulation:
I - Tank Inventory level
IX - Amount of committed Inventory in-transit
LA - Amount of Material to be loaded into cargos
NL - Binary Variable to determine whether a particular Nomination was chosen to be loaded on a particular Time period
VB - Penalty for decreasing below Minimum Working Volume
VA - Penalty for exceeding Maximum Working Volume
VAB - Binary penalty variable for indicating that the inventory has exceeded 60% of working volume
VAC - Variable for calculating sum of VABs
VAP - Binary penalty variable for indicating that the inventory has exceeded 60% for consecutive days
ZIN - Penalty for decreasing below the Minimum Inventory level of the Tank
ZIX - Penalty for exceeding the Maximum Inventory level of the Tank
Constraints
Inventory Balance Constraint
The inventory balance constraint is to determine the inventory for the next period based on the current availability and demand.
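In the index notation introduced above, one way to write this balance for every material M and time period P is:
I[M,P] = I[M,P-1] + IX[M,P] - SUM over N of LA[M,N,P]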
Verbal translation: Sum of inventory available (I) at the end of previous time period (P-1) and the inventory expected to arrive during this time period (IX) minus the amount of loading activity that is going to take place during this time period (LA) will yield the next time period's inventory levels. Notice that the LA is summed across all Nominations (N) i.e. every single nomination will have multiple products - this equation is going to use the amount of M used across multiple nominations for the time period P.
Nominations Constraint
A Nomination should not be split across multiple time periods.
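In the same notation, this can be written for every nomination N as:
SUM over P of NL[N,P] = 1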
Verbal translation: A nomination N can be loaded on any time period. This constraint says that if the nomination N has been loaded in time period P, then it should not be considered for any other time period. NL, being a binary variable which can take values 0 and 1, will be marked as 1 for time period P when loading Nomination N; all other time periods for this nomination will assume 0. Hence, summation of all NL Variables for a particular N across all time periods should be equal to 1. This constraint also forces all nominations to be loaded in some time period or the other.
Maximum Loading Constraint
A maximum of 6 ships can be loaded in one time period.
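One way to write this for every time period P is:
SUM over N of NL[N,P] <= 6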
Verbal translation: NL will assume the value of 1 when the nomination N is loaded at time period T. There can only be a maximum of 6 nominations that can be loaded in time period P. Hence, summation of NL values across all nominations for time period P should be less than or equal to 6.
Minimum Working Volume Constraint
Inventory of a product should not go below Minimum Working Volume.
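With MINWV[M] used here as an illustrative name for the Minimum Working Volume of material M, this can be written for every M and P as:
I[M,P] + VB[M,P] >= MINWV[M]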
Verbal Translation: If the Inventory of Material M decreases below the Minimum Working Volume, then the penalty variable VB should turn positive so as to compensate for the difference between current Inventory level I and Minimum Working Volume of product M at time period P. The penalty cost is linear i.e. $ / VB is defined in the objective function.
Maximum Working Volume Constraint
Inventory of a product should not go above Maximum Working Volume.
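With MAXWV[M] used here as an illustrative name for the Maximum Working Volume of material M, this can be written for every M and P as:
I[M,P] - VA[M,P] <= MAXWV[M]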
Verbal Translation: Similar to the Minimum Working Volume constraint, exceeding Maximum Working Volume limit should also kick in a penalty. This penalty VA will be linear on the objective function.
Target Inventory Level Constraint
Schedulers would like to maintain the inventory below 60% of the Working Volume.
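With MINWV[M] and WV[M] used here as illustrative names for the minimum volume and the total working volume of material M, one way to write this for every M and P is:
I[M,P] <= MINWV[M] + 0.6*WV[M] + 25000*VAB[M,P]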
Verbal Translation: VAB is the penalty variable. Reading the equation without that variable, would indicate that Inventory should be less than or equal to minimum volume plus 60% of the total working volume. Notice that unlike VB and VA, VAB is not a linear penalty. It needs to apply the penalty if the level is exceeded. Hence VAB is modeled as a binary variable. The reason for multiplying 25000 is to make sure that this equation does not go to infeasibility. 25000 is only a qualitative decision - it can be any large number.
Consecutively above Target Constraint
The number of consecutive time periods above 60% of working volume should be minimal.
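One way to write the two equations described below, for every material M and time period P, is:
VAC[M,P] = VAB[M,P-1] + VAB[M,P]
VAC[M,P] - VAP[M,P] <= 1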
Verbal Translation: There are two equations discussed here. The first equation would sum up the VAB values for the current period P and the preceding period P-1 and assign it to VAC. VAP is a binary variable which should trigger when VAC value is 2. The second equation accomplishes this requirement.
Minimum Inventory Constraint
Inventory of a product must never get below the Minimum Inventory possible on the Tank.
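With TANKMIN[M] used here as an illustrative name for the minimum inventory of the tank, this can be written for every M and P as:
I[M,P] + ZIN[M,P] >= TANKMIN[M]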
Verbal Translation: This equation is supposed to be a hard constraint which makes sure that the inventory I is always above the Tank's minimum. But if, for some reason, an integer solution is not found (an infeasible solution), it is highly unfavorable to the end user. So, to avoid this situation, a penalty variable ZIN is added to this equation. This penalty is linear and has a very large objective function coefficient, so that this situation is avoided to the maximum extent possible.
Maximum Inventory Constraint
Inventory of a product must never get above the Maximum Inventory possible on the Tank.
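With TANKMAX[M] used here as an illustrative name for the maximum inventory of the tank, this can be written for every M and P as:
I[M,P] - ZIX[M,P] <= TANKMAX[M]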
Verbal Translation: This equation is also supposed to be a hard constraint, but for the reason mentioned above there is a very large penalty on the ZIX variable.
Loading Amount Constraint
The Nominations to be loaded on a given day should be decided.
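With NOM[M,N] used here as an illustrative name for the nominated amount of material M in nomination N, this can be written for every M, N and P as:
LA[M,N,P] = NOM[M,N] * NL[N,P]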
Verbal Translation: This equation is used to determine when the nomination N is getting picked. NL is a binary variable which triggers when the Nomination N is picked on a particular time period P. On all other times, it is zero. NL multiplied with the Nomination Value provides the Loading Amount value.
Minimum Ending Inventory Constraint
At the end of the scheduling time period, there should at least be some inventory.
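With PLAST denoting the last time period and ENDMIN[M] used here as an illustrative name for the required minimum ending inventory, this can be written as:
I[M,PLAST] >= ENDMIN[M]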
Verbal Translation: This equation is to make sure that the model does not assume that the plant is shutting down at the end of the scheduling window. So the inventory of material M for the last scheduling time period P should be at least the value specified in this equation.
Maximum Ending Inventory Constraint
At the end of the scheduling time period, the inventory should be less than some value.
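Similarly, with ENDMAX[M] used here as an illustrative name for the maximum allowed ending inventory:
I[M,PLAST] <= ENDMAX[M]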
Verbal Translation: This equation is to make sure that the model does not hold up a large amount of inventory for material M at the last time period P.
Cumulative Inventory Constraint
The inventory should at least be 70% of the next time period's requirements to make sure that loading operations are not disrupted.
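One way to write this for every material M and every time period P except the last is:
I[M,P] >= 0.7 * SUM over N of LA[M,N,P+1]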
Verbal Translation: Inventory for material M and time period P should be greater than 70% of the sum of nominations of Material M to be loaded in the next time period P+1. Please note that the last time period is ignored for this constraint, since there is no P+1 time period; this situation is handled through the previous Ending Inventory constraints.
Objective Function
As mentioned earlier, the objective function will be primarily to reduce the penalties incurred if exceeding any of the specified inventory levels.
Minimize
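With c.VB, c.VA, c.VAB, c.VAP, c.ZIN, c.ZIX and c.NL used here as illustrative names for the penalty and loading cost coefficients, the objective can be sketched as:
SUM over M and P of { c.VB*VB[M,P] + c.VA*VA[M,P] + c.VAB*VAB[M,P] + c.VAP*VAP[M,P] + c.ZIN*ZIN[M,P] + c.ZIX*ZIX[M,P] } + SUM over N and P of c.NL*NL[N,P]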
Verbal Explanation: The sum of all penalty variables VB, VA, VAB, VAP, ZIN, ZIX multiplied with suitable coefficients make up the objective function. The primary objective is to minimize the entire function. In addition to the penalty variables, NL is also added thus making sure that the cost of loading is minimized to the extent possible.
Please note that this Solution is written for just one terminal. If there are multiple terminals, the same formulation can be expanded by adding an additional domain, say L, appropriately.
Keywords: Optimization
LP
Terminal
References: None |
Problem Statement: How is the execution frequency of an IQ application set? How can a user change this frequency? | Solution: The execution frequency of an IQ application is set based on the Model file properties. In case of IQ using an IQR file as a Model file, the IQ execution frequency is set based on the sampling frequency of the data used to generate the IQR model file. For example, if the data file used for generating the model has data spaced by 30 sec, the IQ execution frequency would be 30 seconds by default.
A user can increase or decrease this default value of execution period by changing the Base Time entry when importing the data file through the Data Specification Dialogue window in IQModel. For the above example, setting a Base Time value of 60 sec (1 minute) would force the IQ to only import every other data point from the list instead of importing all the data points. On the other hand, specifying a Base Time entry of 15 sec for the same example would result in IQ adding blank timestamp lines to the data file being used for model generation. Essentially, the data file after processing would look something as below-
7/17/2013 06:30:00Â Â Â Â Â Â Â Data values-----in columns
7/17/2013 06:30:15Â Â Â Â Â Â Â (Blank line)
7/17/2013 06:30:30Â Â Â Â Â Â Â Data values-----in columns
7/17/2013 06:30:45Â Â Â Â Â Â Â (Blank line)
7/17/2013 06:31:00Â Â Â Â Â Â Â Data values-----in columns
Note the additional blank rows in between, each with a timestamp.
In either case, changing the Base Time (increase or decrease) will have an effect on the execution frequency for the IQ application. Note that setting the Base Time value to a number smaller than the sampling frequency of the data does not make any difference when working with a steady state inferential application. However, it is not recommended to reduce the Base Time to a number lower than the sampling frequency for dynamic inferentials. This is because, when running an inferential at a faster rate than the sample data used to build the model, the predictions will suffer, since the model does not contain the high frequency response data necessary for them. Therefore, under such conditions, it is recommended that the model be re-identified using data collected at the desired inferential execution frequency, instead of using the Base Time to manipulate the execution frequency.
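The resampling behaviour described above can be illustrated with a short script. This Python sketch is not part of IQModel; it only mimics, for a 30-second source file, what a Base Time of 60 sec (keep every other point) or 15 sec (insert blank timestamp lines) does to the data.

from datetime import datetime, timedelta

source = {datetime(2013, 7, 17, 6, 30, 0) + timedelta(seconds=30 * i): "Data values"
          for i in range(3)}                      # 30-second samples, as in the example

def apply_base_time(samples, base_time_sec):
    start, end = min(samples), max(samples)
    out, t = [], start
    while t <= end:
        out.append((t, samples.get(t, "(Blank line)")))   # blank line when no sample exists
        t += timedelta(seconds=base_time_sec)
    return out

for stamp, value in apply_base_time(source, 60):   # keeps every other 30-s point
    print(stamp, value)
for stamp, value in apply_base_time(source, 15):   # adds blank timestamp lines in between
    print(stamp, value)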
FYI- The execution frequency is specified under the variable name Inf_Period when looking at the IQR file through a text editor. It is not recommended that the IQR file be changed manually using a text editor. Changing the Inf_Period without changing the model period (set internally based on the sampling interval of the data used) introduces error in the predictions because the inferential is predicting based on the original model period, but is being given data values that are out of sync, at a different interval.
Keywords: IQ Execution
Inf_Period
Base Time entry
References: None |
Problem Statement: What is the correct procedure to add a new Material in Aspen Petroleum Scheduler (APS) and Aspen Refinery Multi-Blend Optimizer (MBO)? In MBO, the material does not show up when added to the MATERIAL table from the Model tree, until after you close out and reopen MBO.
Follow up procedure:
1) Go to the Material table (inside the MBO model)
2) Add the material 'NEW'
3) Open an event (Material Service Event). 'NEW' does not show up in the material drop-down.
4) If I close out and go back in, 'NEW' shows up in the drop-down list. | Solution: The way that APS and MBO work is that they do not read the MATERIALS table unless the simulator is reloaded.
As a result, the newly added material does not get added to the internal array structure in APS, but it is saved in the Database.
In MBO, if the screens are switched, the Material table will be read again.
Alternatively, the best practice is to add Materials using the Model -> Materials dialog. This adds the material to the internal array, and the new material is visible in the drop-down:
Keywords: -Material Service Event
-Material
References: None |
Problem Statement: What is AspenTech's guidance for use of Virtual Environments for V7.2 Aspen Manufacturing Suite (AMS) products? | Solution: AspenTech supports the use of all aspenONE V7.2 manufacturing applications in VMware and Microsoft Hyper-V.
AspenTech Customer Support and Development Engineering will provide support and defect fixes (when appropriate) for all AspenTech Production Management and Execution (PM&E) and Advanced Process Control (APC) applications for V7.2 used on VMware and MS Hyper-V. In addition, AspenTech supports the use of Aspen Process Explorer with Microsoft's App-V application virtualization technology.
For V7.2, the supported version of VMware is ESX Server 3.5 Update 4. The Microsoft virtualization technology AspenTech supports is Hyper-V 6. The App-V version supported for Aspen Process Explorer is App-V 4.5. Please see the AspenTech document aspenONE V7 Information on Coexistence, Operating System Support, and Virtualization for more detail (http://support.aspentech.com/webteamasp/KB.asp?ID=129477).
While AspenTech supports the use of the AspenTech Manufacturing Suite applications in these virtual environments, this support is dependent on the virtual environments themselves being implemented using the best practices recommended by Microsoft and VMware. Likewise, the combination of hardware and virtualization configuration must be in a manner supported by the virtualization supplier.
Virtual environments, by their very nature, require additional resources such as memory and CPU overhead, over and above those required by normal operating systems and physical hardware. Likewise, networking, storage requirements, and other I/O considerations can vary depending on hardware type and usage. Therefore, AspenTech recommends that users should be very thorough in their planning for the use of an aspenONE application in a virtual environment.
For technical papers providing providing best practices information in a virtual environment, refer to the following web sites:
VMware Best Practices Resources
http://communities.vmware.com/community/vmtn/vsphere/esx
Microsoft Best Practices Resources
http://technet.microsoft.com/en-us/virtualization/dd565807.aspx
Microsoft App-V Application Virtualization Resources
Microsoft internal Web site Microsoft TechNet.
http://technet.microsoft.com/en-us/appvirtualization/default.aspx
Customers may, after reviewing the virtualization supplier's best practice guidance, and still not being able to determine if they may experience significant performance degradation, contact AspenTech customer support for more guidance.
Keywords: Virtual Machine
VMware
Hyper-V
Virtualization
App-V
Operating System Support
References: None |
Problem Statement: Best Practices for using transforms in Aspen Inferential Qualities - IQ - How to use transformation inside Aspen IQ. | Solution: This document will address questions regarding the use of transforms in Aspen Inferential Qualities. This document will be updated, it is recommended to revisit to be aware of additions.
1. The question is about how to use transformation inside IQ config.
In this application, there is an online analyzer. A linear equation of Pressure and Temperature is used to calculate the approximated composition; then a piecewise linear transform is used to convert the approximated value to better predict the composition. The same is done in the DMCplus model to establish the relation between the transformed value and the MVs. Which prediction parameter (BPR or BPRX) should be used in the DMCplus controller? Normally BPR is used when no transformation is used. Are the tuning parameters also based on the transformed variable?
In this case, the ccf is not needed to perform the anti-transform (i.e., set XFORM to PWLN) for this CV, correct? Does IQ use the transformed value to compare with the analyzer reading?
You should use the BPR parameter and define the transform in both IQ and DMCplus. This way the DMCplus CV measurements will be in the engineering units and track the actual lab or analyzer property. You can define the DMCplus tuning (ECEs) in engineering units by setting XFRMECEC entry to 1. IQ will perform the bias calculation in transform units.
2.) Does DMCplus controller read directly from IQ context?
DMCplus controller does not read directly from IQ context. You write out the BPR value to a DCS/IP21 tag and connect the same tag to the DMCplus CV measurement (DEP) entry.
3.) Does IQ ANTI-transform the BIAS parameter from the BIASX parameter at the end of processing? If so, then BPR can be used as a CV in DMCplus as long as the XFORM is used in the ccf.
IQ maintains the PREDBIAS value in engineering units and the PREDBIASX value in transform units. When transforms are specified all the bias calculation is done in transform units and the results are anti-transformed. One exception to this is when the bias update source is Manual, here we read the bias value in engineering units from PREDBIAS entry and calculate PREDBIASX.
In order to have measurements, limits and tuning parameters in both IQ and DMCplus that are easy to understand and in engineering units, we need to define the transform in both places.
Keywords: None
References: None |
Problem Statement: This Knowledge Base article (KB) is the fourth in a series of articles under the topic 'Linear Programming using Aspen Supply Chain Management'. This series is intended for users who do not have any background in LP or in Aspen Supply Chain Management (SCM) programming; the prerequisites for reading this KB are the previous articles in the 'Linear Programming using Aspen Supply Chain Management' series, stored in Solutions # 135232, # 135398 and # 135871. | Solution: The screenshots in this document were created using Aspen SCM version 8. At the end of this tutorial, users will be able to formulate and solve simple Linear Programming problems in Aspen SCM.
Example Problem:
A company wants a high level, aggregate production plan for the next 6 months. Projected orders for the company's products are listed in the table. Over the 6-month period, units may be produced in one month and stored in inventory to meet some later month's demand. Because of seasonal factors, the cost of production is not constant, as shown in the table.
The cost of holding an item in inventory for 1 month is $4/unit/mo. Units produced and sold in the same month are not put in inventory. The maximum number of units that can be held in inventory is 250. The initial inventory level at the beginning of the planning horizon is 200 units; the final inventory level at the end of the planning horizon is to be 100. The problem is to determine the optimal amount to produce in each month so that demand is met while minimizing the total cost of production and inventory. Shortages are not permitted.
The aggregate planning problem is interesting, because not only does it represent an important application of linear programming, it also illustrates how multi-period planning problems are approached.
Aggregate planning data:
Month | Demand (Units) | Production cost ($/Unit)
1 | 1300 | 100
2 | 1400 | 105
3 | 1000 | 110
4 | 800 | 115
5 | 1700 | 110
6 | 1900 | 110
Solution
I. Algebraic Formulation:
a. Find out the decision variables:
The problem is to find the optimal quantities of production in every month, such that the total cost is reduced. The total cost is dependent on the cost of production and cost of holding inventory. Hence 6 variables each in production and inventory are the decision variables and these are declared as follows:
P[1] -> AMOUNT OF UNITS TO PRODUCE IN MONTH 1
.
.
P[6] -> AMOUNT OF UNITS TO PRODUCE IN MONTH 6
I[1] -> AMOUNT OF UNITS IN INVENTORY AT THE END OF MONTH 1
.
.
I[5] -> AMOUNT OF UNITS IN INVENTORY AT THE END OF MONTH 5
b. Formulate the Objective function:
The objective is to minimize the total cost, obtained by multiplying the production cost with the number of units produced and adding the holding cost multiplied by the number of units stored. One variable to remember while writing the objective function is I[6] (the Amount of Units in Inventory at the End of Month 6), which is given in the question as 100. Hence, the objective function in this problem is to:
MINIMIZE 100*P[1] + 105*P[2] + 110*P[3] + 115*P[4] + 110*P[5] + 110*P[6] + 4*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] }
c. Identify the constraints:
Production in Month 1 along with the starting Inventory (i.e. Inventory at the end of Month 0) accounts for the Demand in Month 1 and the Inventory at the end of Month 1. Likewise, for the rest of the 5 months:
P[1] + I[0] = D[1] + I[1]
P[2] + I[1] = D[2] + I[2]
.
.
P[6] + I[5] = D[6] + I[6]
d. Other Common Constraints:
The units produced and stored cannot be negative. Normally, this rule would result in additional constraints:
P[1] >= 0
P[2] >= 0
.
.
I[1] >= 0
I[2] >= 0
.
.
In this formulation, since the demand is positive, these numbers will never be negative. In addition to these constraints, the following constraints should be included, as they are already given in this problem:
I[0] = 200
I[6] = 100
Hence, the algebraic formulation for this problem is:
MINIMIZE 100*P[1] + 105*P[2] + 110*P[3] + 115*P[4] + 110*P[5] + 110*P[6] + 4*{ I[1] + I[2] + I[3] + I[4] + I[5] + I[6] }
SUBJECT TO:
P[1] + I[0] = D[1] + I[1]
P[2] + I[1] = D[2] + I[2]
P[3] + I[2] = D[3] + I[3]
P[4] + I[3] = D[4] + I[4]
P[5] + I[4] = D[5] + I[5]
P[6] + I[5] = D[6] + I[6]
I[0] = 200
I[6] = 100
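As an optional cross-check of this formulation outside Aspen SCM, the same linear program can be written with the open-source PuLP library in Python (PuLP is not part of Aspen SCM); the data values are taken from the problem table above.

import pulp

months = range(1, 7)
demand = {1: 1300, 2: 1400, 3: 1000, 4: 800, 5: 1700, 6: 1900}
prod_cost = {1: 100, 2: 105, 3: 110, 4: 115, 5: 110, 6: 110}
hold_cost, start_inv, final_inv, max_inv = 4, 200, 100, 250

prob = pulp.LpProblem("aggregate_plan", pulp.LpMinimize)
P = pulp.LpVariable.dicts("P", months, lowBound=0)                   # production
I = pulp.LpVariable.dicts("I", months, lowBound=0, upBound=max_inv)  # ending inventory

# Objective: production cost plus inventory holding cost
prob += (pulp.lpSum(prod_cost[t] * P[t] for t in months)
         + hold_cost * pulp.lpSum(I[t] for t in months))

for t in months:
    prev = start_inv if t == 1 else I[t - 1]
    prob += P[t] + prev == demand[t] + I[t]   # material balance
prob += I[6] == final_inv                     # required final inventory

prob.solve()
print([P[t].value() for t in months], pulp.value(prob.objective))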
II. Formulate the problem using tables:
In this section, the above developed algebraic program will be converted directly into the corresponding tabloid program.
a. Open Aspen SCM:
Save the file 'lpcourse.cas' available in the attachment of this Solution. Start Aspen SCM, go to File | Open and point to the location where you saved 'lpcourse.cas'.
b. COL Set:
The Production variables are categorized under PROD and the Inventory Variables are categorized under INVE. It is entered in the Code section of the COL set. The description section is just to provide additional information about the set entries.
c. COLS Table:
PROD and INVE should have TIME as domains, since separate production and inventory decision variables are required for each of the Months.
Algebraic formulation's objective function:
MIN 0*P[0] + 100*P[1] + 105*P[2] + 110*P[3] + 115*P[4] + 110*P[5] + 110*P[6] -> PROD
+ 0*I[0] + 4*I[1] + 4*I[2] + 4*I[3] + 4*I[4] + 4*I[5] + 4*I[6] -> INVE
Hence a new set called TIME containing the Starting amount (accounted as Month 0) and all the 6 Months should be created and that should be listed in FLD2. FLD1 is reserved for declaring the decision variable (it is declared as P for Production and I for Inventory, here). The decision variables P and I should be entered in the TABL column.
d. ROW Set:
From the algebraic formulation, it is clear that two groups of constraints are required. In each of these groups there are two entries to adapt the algebraic formulation to Aspen SCM: INV.BAL and INVS.BAL represent the final and starting Inventory values, while MAT.BAL and NEXT.BAL represent the balance constraints. The reason for having two rows for the balance constraints will be explained in the ROWS section. For now, they can be thought of as two constraints that add together to form the algebraic balance constraint.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] -> MAT.BAL and NEXT.BAL
P[1] + I[0] - I[1] = D[1] -> MAT.BAL and NEXT.BAL
P[2] + I[1] - I[2] = D[2] -> MAT.BAL and NEXT.BAL
P[3] + I[2] - I[3] = D[3] -> MAT.BAL and NEXT.BAL
P[4] + I[3] - I[4] = D[4] -> MAT.BAL and NEXT.BAL
P[5] + I[4] - I[5] = D[5] -> MAT.BAL and NEXT.BAL
P[6] + I[5] - I[6] = D[6] -> MAT.BAL and NEXT.BAL
I[0] = 200 -> INVS.BAL
I[6] = 100 -> INV.BAL
e. POL Set:
POL set has two sections:
i. Column Section:
In the Column Section, the decision variables P and I are declared.
ii. Row Section:
In the Row Section, the Right Hand Side (RHS) of the constraints: E, DV, SI and FI are declared.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] -> DV
P[1] + I[0] - I[1] = D[1] -> DV and E
P[2] + I[1] - I[2] = D[2] -> DV and E
P[3] + I[2] - I[3] = D[3] -> DV and E
P[4] + I[3] - I[4] = D[4] -> DV and E
P[5] + I[4] - I[5] = D[5] -> DV and E
P[6] + I[5] - I[6] = D[6] -> DV and E
I[0] = 200 -> SI
I[6] = 100 -> FI
f. POLI Table:
POLI Table has two sections:
i. Column Section:
In the Column section, the CST column should specify the coefficient that multiplies each decision variable in the objective function. Two new tables, PROCOST and INVCOST, need to be created for the Production and Inventory decision variables respectively. Both tables should have TIME as the row set, since the cost varies from Month to Month. INVCOST has the same cost for all six months, but the starting inventory should carry zero cost; so INVCOST should be populated with 0 in the first row and 4 in all the other rows. Similarly, the starting production should not be costed; so the first row of PROCOST should be 0, while the rest of the entries should be filled as given in the problem. Another caveat is the maximum inventory that can be stored at any point in time. It is given as 250 and hence should be entered in the Inventory (I) row and MAX column.
Algebraic formulation's objective function:
MIN 0*P[0] + 100*P[1] + 105*P[2] + 110*P[3] + 115*P[4] + 110*P[5] + 110*P[6] → PROD
+ 0*I[0] + 4*I[1] + 4*I[2] + 4*I[3] + 4*I[4] + 4*I[5] + 4*I[6] → INVE
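For reference, the two cost tables written out as plain vectors (a simple illustration only, with index 0 corresponding to Month 0):

# PROCOST and INVCOST as described above: Month 0 carries zero cost
PROCOST = [0, 100, 105, 110, 115, 110, 110]
INVCOST = [0, 4, 4, 4, 4, 4, 4]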
ii. Row Section:
In the Row section, EQ should be entered as the SENSE for each of the constraint groups. The RHS column should contain sets holding the right-hand sides of all the constraints in the respective constraint groups. A new table DEMAND, with TIME as the row set, should be created and populated based on the data given in the problem.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] → EQ
P[1] + I[0] - I[1] = D[1] → DV and E
P[2] + I[1] - I[2] = D[2] → DV and E
P[3] + I[2] - I[3] = D[3] → DV and E
P[4] + I[3] - I[4] = D[4] → DV and E
P[5] + I[4] - I[5] = D[5] → DV and E
P[6] + I[5] - I[6] = D[6] → DV and E
I[0] = 200 → SI
I[6] = 100 → FI
g. ROWS Table:
The FLD1 column of the ROWS set should specify the name of each group of constraints. Since there are two groups, they can be named A and B: A represents the balance constraints and B the inventory constraints. The domain over which each constraint should be enumerated is specified in the remaining FLD columns. The MAT.BAL constraint should be expanded over the Months, so the TIME set is entered in its FLD2 column. The next constraint, NEXT.BAL, is required because the algebraic balance constraint contains two inventory terms: one for the current period and one for the previous period. So a new set TIN should be created as a subset of TIME and entered in the FLD2 column of NEXT.BAL. Make sure to add a mask * in front of TIN to instruct SCM to treat it as a unique set. In the TABL column of this row, a new table named TIMN should be created. This is the incidence table that defines the relationship between the TIME and TIN sets, and it should have TIME and TIN as its row set and column set. Since the previous period's inventory variable is required in each period's constraint (for example, I[0] in the Month 1 constraint), the table should be filled so that each Month in TIN is paired with the Month before it.
Note that this is the reason why EQ had to be declared in the POLI table, even though it does not appear to be required when looking only at the algebraic formulation. DV should be entered in the TABL column of the MAT.BAL row.
The inventory constraints, INV.BAL and INVS.BAL, should have TIS and TIX respectively in their FLD2 columns. Each of these is a new set, containing Month 0 and Month 6 respectively. This instructs SCM to generate I[0] and I[6] as separate constraints. The TABL column should contain the corresponding RHS defined in the POLI table.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] → EQ
P[1] + I[0] - I[1] = D[1] → DV and E
P[2] + I[1] - I[2] = D[2] → DV and E
P[3] + I[2] - I[3] = D[3] → DV and E
P[4] + I[3] - I[4] = D[4] → DV and E
P[5] + I[4] - I[5] = D[5] → DV and E
P[6] + I[5] - I[6] = D[6] → DV and E
I[0] = 200 → SI
I[6] = 100 → FI
h. COEF Table:
The COEF table contains all the coefficients for the constraints. For MAT.BAL, PROD should have +1 and INVE should have -1. This represents P[1] and I[1] for Month 1. For NEXT.BAL, INVE should have +1 representing I[0] for Month 1. For INVS.BAL, INVE should have +1 to generate I[0] as the constraint. For INV.BAL, INVE should have +1 to represent I[6], as required by the algebraic formulation.
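The split into MAT.BAL and NEXT.BAL can be hard to picture. The small numpy sketch below (illustrative only, not Aspen SCM syntax) assembles the coefficients exactly as described above: MAT.BAL contributes +1 on PROD and -1 on INVE for the current Month, and NEXT.BAL contributes +1 on the previous Month's INVE (the pairing encoded by the TIMN incidence table). Each resulting row is then the algebraic balance constraint.

import numpy as np

n = 7                              # Months 0..6
A = np.zeros((n, 2 * n))           # columns 0..6 = P[0..6], columns 7..13 = I[0..6]

for t in range(n):
    A[t, t] = 1.0                  # MAT.BAL: +1 * P[t]
    A[t, n + t] = -1.0             # MAT.BAL: -1 * I[t]
    if t >= 1:                     # NEXT.BAL applies to Months 1..6 (the TIN subset)
        A[t, n + t - 1] = 1.0      # +1 * I[t-1], the pairing stored in TIMN

print(A)                           # row t reads: P[t] + I[t-1] - I[t]   (row 0: P[0] - I[0])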
III. Generation & Solution
After the model is formulated, the next step is to generate the model and find the solution.
a. Generation:
The Generation step enumerates all the decision variables across the corresponding domains. It is executed by typing GEN in the command line.
As a result of GEN, an information dialog box opens and a variety of tables are generated. The dialog box is the place to look for errors, if any; detailed error messages can be displayed by typing ERROR in the command line. The tables generated by the GEN command can be checked to confirm that the formulation is consistent:
i. MATX Table:
This table helps confirm that the constraint coefficients in the table formulation match the algebraic formulation.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] → A0
P[1] + I[0] - I[1] = D[1] → A1
P[2] + I[1] - I[2] = D[2] → A2
P[3] + I[2] - I[3] = D[3] → A3
P[4] + I[3] - I[4] = D[4] → A4
P[5] + I[4] - I[5] = D[5] → A5
P[6] + I[5] - I[6] = D[6] → A6
I[0] = 200 → B0
I[6] = 100 → B6
ii. RHSX Table:
This table can be used to verify the right hand side of the corresponding constraints.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] → A0
P[1] + I[0] - I[1] = D[1] → A1
P[2] + I[1] - I[2] = D[2] → A2
P[3] + I[2] - I[3] = D[3] → A3
P[4] + I[3] - I[4] = D[4] → A4
P[5] + I[4] - I[5] = D[5] → A5
P[6] + I[5] - I[6] = D[6] → A6
I[0] = 200 → B0
I[6] = 100 → B6
iii. SENX Table:
This table defines the sense of both the constraint groups.
Algebraic formulation's constraints:
P[0] - I[0] = D[0] → A0
P[1] + I[0] - I[1] = D[1] → A1
P[2] + I[1] - I[2] = D[2] → A2
P[3] + I[2] - I[3] = D[3] → A3
P[4] + I[3] - I[4] = D[4] → A4
P[5] + I[4] - I[5] = D[5] → A5
P[6] + I[5] - I[6] = D[6] → A6
I[0] = 200 → B0
I[6] = 100 → B6
iv. POLX Table:
This table can be used to verify the coefficients of the objective function in the tabular formulation (CST column) against the algebraic formulation. Since a value is specified in the MAX column of the POLI table for the decision variable I, it is also displayed in the POLX table.
Algebraic formulation's objective function:
MIN 0*P[0] + 100*P[1] + 105*P[2] + 110*P[3] + 115*P[4] + 110*P[5] + 110*P[6] → PROD
+ 0*I[0] + 4*I[1] + 4*I[2] + 4*I[3] + 4*I[4] + 4*I[5] + 4*I[6] → INVE
b. Solution
To solve the model, you can use either of the two solvers available within SCM: CPLEX and XPRESS. These solvers are called through the OPT and XPRESS commands respectively. To specify maximization or minimization of the objective function, open the CCPLEX or CXPRESS control table and change the MNMX value accordingly; here it is MIN, as discussed in the algebraic formulation. Once solving is complete, a variety of tables are generated. These can be checked for the solution:
i. COLX Table:
The X column represents the optimal value of each decision variable. The XCST column specifies the cost that each of these decision variables contributes to the objective function.
ii. OBJX Table:
The OBJECTIVEFUNCTION column provides the value of the objective function, i.e. the minimum total cost of production and storage.
iii. ROWX Table:
The ROW SLACK column gives the difference between the left-hand side and the right-hand side of every constraint. Since all the constraints here are equalities, the slack is zero.
Keywords: None
References: None |
Problem Statement: How is global optimization used in Aspen Fleet Optimizer? | Solution: The Global Region utility allows you to create and maintain global regions as well as perform Global Optimization on them. Global Optimization allows you to balance a shift's available resources (such as transport hours and product allocation) with demand (order hours) across all terminals within a given Global Region. This "pre-optimization" for all terminals and customers helps to ensure that each customer is assigned to an appropriate terminal (and group) based on available resources and demand. For each Global Region, the system calculates the difference between demand and available resources. If resource demand exceeds supply for a terminal, terminals are balanced in order, starting with the terminal whose demand (order hours) exceeds available resource hours by the greatest amount. Some customers assigned to a resource-deficient terminal as their primary terminal can be reassigned (load balanced) to an alternate primary terminal (within the same Global Region) for that optimization.
Keywords: optimize, global, groups
References: None |
Problem Statement: For a simulation involving RadFrac column types, we cannot access the Column Targeting tool and graphs. Is it a limitation of the license that we have? | Solution: Column Targeting does not require an extra license. To access Column Targeting, confirm that the corresponding option under Analysis – Analysis Options has been checked in the RadFrac block.
Keywords: Column Targeting
References: None |
Problem Statement: Sometimes the same Cim-IO server PC could be running more than one Cim-IO for OPC interface and there is a need to stop and start only one of them. | Solution: Here are three different ways to shut down an Aspen Cim-IO for OPC server without shutting down all servers (a scripted example follows the list below).
1. If the device was configured with the I/O wizard, the server can be shut down and restarted from the Aspen InfoPlus.21 Administrator. Right-click the I/O device and select Stop logical device.
2. Run ..\AspenTech\CIM-IO\io\cio_opc_api\StartStop.exe and fill in the form with the details of the server to shut down. This utility can also be used to restart the server.
3. Run ..\AspenTech\CIM-IO\code\cimio_shutdown.exe in a command window and pass it the LogicalDeviceName.
NOTE: If Store and Forward is activated for that server, the Scanner, Store, and Forward processes may also need to be shut down.
1. Stopping the logical device from the I/O section of the Administrator (option 1 above) will also shut down the Store and Forward processes.
2. Run ..\AspenTech\CIM-IO\code\cimio_sf_shutdown.exe in a command window and pass it /S=service name or /D=LogicalDeviceName.
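If these shutdowns need to be scripted, the hypothetical Python sketch below wraps the two utilities described above with subprocess. The install folder and the logical device name IOOPC1 are assumptions for illustration only; substitute your actual CIM-IO code directory and your device or service names.

import subprocess

# Assumed install location; adjust to match your CIM-IO installation
CIMIO_CODE = r"C:\Program Files\AspenTech\CIM-IO\code"

# Shut down only the Cim-IO for OPC server of one logical device (IOOPC1 is hypothetical)
subprocess.run([CIMIO_CODE + r"\cimio_shutdown.exe", "IOOPC1"], check=True)

# If Store and Forward is active for that device, stop its processes as well
subprocess.run([CIMIO_CODE + r"\cimio_sf_shutdown.exe", "/D=IOOPC1"], check=True)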
Keywords:
References: None |
Problem Statement: Our company is implementing Aspen Fleet Optimizer (Aspen Retail) or is upgrading to the latest version of the product. We have finished testing and passed acceptance tests, and are ready to go live with the product. In order to make sure we have a successful go live, what are the things that I need to plan? | Solution: We recommend that you use the following checklist to prepare your go live:
Before go live:
Ensure that your company's technical experts who are familiar with Aspen Fleet Optimizer, key DBA(s), and IT resources are scheduled to be on standby to support the go live event.
Prepare a detailed plan for the go live day including roles and responsibilities of people within your company, and the steps and timelines to switch over to the live production system. See Solution 127519.
Provide this go live schedule and plan to AspenTech Support to alert them on the upcoming go live events
Define with AspenTech the data transfer mechanism (such as data dumps) in case it is needed when AspenTech Support is called to troubleshoot an issue during go live. During the go live period, time is of the essence; a predefined mechanism allows troubleshooting to start quickly.
Coordinate your internal support resources and AspenTech's support or project management so that AspenTech can be ready to support you.
Define the communication channels and contact points within your company and between your support and AspenTech support.
Perform Upgrade on production system and switch over.
Provide updates on the progress to AspenTech throughout the day at pre-determined intervals.
After go live:
Take a production database dump right after the go live and send it to AspenTech Support immediately so that any reported issues can be examined and replicated quickly if needed.
Monitor end users' operations for the following day to address any unexpected results.
Perform database cleanup as appropriate after a few days. Have your DBA review database performance for 2-3 days following the upgrade, as the additional new database tables tend to increase the data-file sizes and may exceed the optimal capacity of the server.
Keywords: Upgrade
Go live
Support
References: None |