Problem Statement: When I try to open Aspen Adsorption the following error appears: I am unable to explain the reason for this error message. Why am I not able to open Aspen Adsorption normally? Why does this error message not appear when I open an existing simulation file?
Solution: This error message says that the temporary directory in the default location is not available. The default directory is located in the C:\Users\<user_account_name>\Documents\AspenTech folder and is called Aspen Adsorption V8.x (depending on the installed version). If the user removed or moved this folder by mistake, Aspen Adsorption will show this error message about the missing working folder. To resolve this issue, the user can perform either of the actions listed below:
1. Click OK and select another folder from the file browser where Aspen Adsorption will keep all temporary files for unsaved files.
2. Create an Aspen Adsorption V8.x folder in the default location C:\Users\<user_account_name>\Documents\AspenTech, then close this error message window and run Aspen Adsorption again.
Keywords: Missing path, unavailable working folder, Aspen Adsorption
References: None
Problem Statement: How do you model a dynamic volume gas pump (without a user subroutine)?
Solution: The following are the steps to model a dynamic gas pump in Aspen Adsorption without any subroutine. This is merely an alternative procedure; one can instead use a subroutine or flowsheet constraints to define a user constant-volume-based performance curve.
1. Define the pump as shown below:
2. Open the pump Specify table and add the upper limit as one of the columns (right-click the table, go to Properties | Attributes tab and move Upper from the left column to the right column using the -> button).
3. Set the pump fixed volume to a number close to the upper limit.
4. Run the program initialization step to get an initial estimate of work and power (by default, efficiency and Vinlet are fixed). The estimates given here are maxima; a pump that can do that much work or requires that much power will be able to operate at volumetric flows up to the maximum point.
5. Check that the program has accepted the values for the initial run by freeing Vinlet and fixing power (if you want to work with a fixed power value; you can also fix work instead).
6. Now you can run with the volume as a free variable, while the power is fixed at something close to the maximum value.
Keywords: Adsorption, gas pump, volume, initialize
References: None
Problem Statement: AspenTech strongly recommends stopping all Aspen applications before rebooting the system. This ensures that all applications are shut down cleanly and close their files. However, users sometimes reboot the Aspen Cim-IO server without first stopping the Aspen Cim-IO interface, which is what closes the Cim-IO Store file. In some instances customers report that the transfer records in the Aspen InfoPlus.21 system will not collect data until they are manually activated.
Solution: Under normal conditions, stopping Microsoft Windows sends a shutdown message to the Aspen Cim-IO Manager Service. This special message causes the Aspen Cim-IO Manager Service to start a graceful shutdown of the Aspen Cim-IO server processes in the background. Usually Microsoft Windows gives active services 20 seconds or less to stop; after that time, Windows kills the remaining tasks before shutting down. The problem is that it may take more than 20 seconds to stop the Aspen Cim-IO interface processes, which means that the processes may not have stopped completely and the Cim-IO Store file may not have been closed correctly, which may corrupt it and render it useless. The articles "//support.microsoft.com/kb/146092" and "//technet.microsoft.com/en-us/library/cc976045.aspx" describe how to increase the shutdown time so that services can close properly. Adjusting the registry entry WaitToKillServiceTimeout (a value in milliseconds, found under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control) to a larger number gives the Aspen Cim-IO server processes a better chance to stop gracefully before Windows shuts down. Just how long a clean Aspen Cim-IO server shutdown takes depends on many local conditions and settings. Time one or two clean shutdowns under normal conditions and then enter a number equal to twice that time in the registry.
Keywords: cim-io, shutdown, transfer
References: None
Problem Statement: Some customers have reported that when many (different) Cim-IO devices are installed, the CIMIO_MSG.LOG can report "Maximum number of facilities exceeded". There may also be a Dr. Watson error generated. Initially the Cim-IO interface will seem to be running. The cause of the problem is the contents of the cimio_errors.def file in the C:\Program Files\AspenTech\CIM-IO\etc folder: it is using more than the 20 facilities associated with it. The cimio_errors.def file combines the error.def files of the kernel and the installed interfaces. It converts the error codes from the interface into understandable error messages in the CIMIO_MSG.LOG file.
Solution: The workaround is to remove one or more entries from the cimio_errors.def file (after making a backup), then restart the Cim-IO system. The consequence of this action is that error codes from the affected interface will be displayed in the CIMIO_MSG.LOG without the English explanation. However, users can open the related interface-specific error definition file and convert the error code to text.
Keywords: maximum facilities exceeded
References: None
Problem Statement: Starting with version 2004, AspenTech added a feature to the Aspen Cim-IO Core called Aspen Cim-IO Redundancy. This allows you to configure two Aspen Cim-IO servers on redundant nodes. If communication with the first is lost, the Aspen Cim-IO client will switch to the second; if communication with the second is lost, the Aspen Cim-IO client will switch back to the first. The Aspen InfoPlus.21 external task that monitors this communication loss is called TSK_DETECT. In version 2004, TSK_DETECT wrote its messages to TSK_DETECT.out. Now, TSK_DETECT directs its messages to the file CIMIO_MSG.LOG. You may see messages in this file saying that Get transfer records have been re-activated, e.g.:
21-MAY-10 08:44:10.9 : Device : IOMODNET : Re-activated : MOD_5_DI : DB Time : 21-MAY-10 08:44:10. Last Update : 21-MAY-10 08:42:40
What exactly does this mean and why did it happen, especially as you may see this even though there has not been a switch?
Solution: These 'warnings' are generated by a feature of TSK_DETECT that is not directly associated with a server switch. Whether a switch has occurred or not, TSK_DETECT monitors every Get transfer record, trying to determine whether the transfer record has activated within an acceptable period since the last activation. If TSK_DETECT decides that the transfer record has failed to activate, TSK_DETECT activates the transfer record and writes a message similar to the one above. The activation time-out value is given by the formula IO_TIMEOUT_VALUE * 2 + IO_FREQUENCY, where both of these values are obtained from the transfer record itself. For example, a Get record with an IO_FREQUENCY of 60 seconds and an IO_TIMEOUT_VALUE of 15 seconds will be re-activated if more than 90 seconds have passed between record activations. The message:
21-MAY-10 08:44:10.9 : Device : IOMODNET : Re-activated : MOD_5_DI : DB Time : 21-MAY-10 08:44:10. Last Update : 21-MAY-10 08:42:40
means TSK_DETECT activated the IOMODNET transfer record MOD_5_DI at 21-MAY-10 08:44:10. The previous activation occurred at 21-MAY-10 08:42:40, more than 90 seconds earlier.
Keywords: None
References: None
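Written out for the example record above, the re-activation window that TSK_DETECT applies is:

re-activation window = IO_TIMEOUT_VALUE * 2 + IO_FREQUENCY = 15 * 2 + 60 = 90 seconds

so any gap between activations longer than 90 seconds triggers the re-activation and the corresponding message.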
Problem Statement: When trying to start Aspen Calc, the following error is encountered and Aspen Calc does not open:
Solution: This happens because the machine on which Aspen Calc is being launched already has another instance of Aspen Calc running in the background.
1. Launch Windows Task Manager.
2. Select the Processes tab.
3. Tick the checkbox beside "Show processes from all users".
4. Select AspenCalc.exe.
5. Click the "End Process" button.
6. Launch Aspen Calc again.
NOTE: There may be a situation where, after killing the AspenCalc.exe process in Windows Task Manager, Aspen Calc starts but does not open up; it simply shows the Aspen Calc splash screen. In this case, please check the hard drive and free up disk space if it is low.
Keywords: Run-time error '5' Invalid procedure call or argument
References: None
Problem Statement: What are the possible reasons that can cause a lab sample to fail the validation step in Aspen IQ?
Solution: A bad lab value for an inferential is accompanied by a message explaining why the value is marked bad. This information can be deduced from the "Combined Status Message" for the particular lab sample entry in PCWS. In the example shown below, the lab value is marked as bad due to a decrement outlier status (Bad. Dec. Outlier). The validation process in IQ screens the incoming lab data for valid range (minimum and maximum), rate-of-change limits, and outliers. A lab value (PLBULAB) is flagged as bad in the validation message (PLBUVLDNUM) if:
· It exceeds the lab maximum limit (LVMAXLIM).
· It is lower than the lab minimum limit (LVMINLIM).
· The steady-state value at sample time (PLBUILSS) is lower than the lab minimum percentage steady-state limit (LVMINPCTSS) and LVMINPCTSS is greater than zero.
· The lab value has moved lower in one iteration than the lab decreasing rate-of-change limit (LVROCLIMD) and LVROCLIMD is greater than zero.
· The lab value has moved higher than the lab increasing rate-of-change limit (LVROCLIMI) and LVROCLIMI is greater than zero.
· The lab sample is older than the maximum number of hours for lab bias update (LVMAXHOURS), in which case the validation message number for the last good lab value (LBUVLDNUM) is flagged as bad status. This old-lab-sample check applies only when LVMAXHOURS is greater than zero; it is bypassed when LVMAXHOURS is less than or equal to zero.
· The absolute difference between the lab value (PLBULAB) and the biased prediction at lab sample time (PLBUIBPR) exceeds the lab outlier error limit (LVOLRLIM) and LVOLRLIM is greater than zero. The lab sample is considered an outlier if this is the only failed check. Each subsequent outlier in the same direction will cause the lab outlier counter in that direction (LVOLRCNTI for increasing or LVOLRCNTD for decreasing) to be incremented, and the current lab sample will be rejected. When the counter for the number of outliers in the same direction reaches the respective limit set in LVOLRLIMI or LVOLRLIMD, all of the outliers are determined to be valid values and the statuses for each value will be changed to "Good." If a "Good" lab is received before the counter limit is reached, then previously flagged outliers will stay "Bad." If the outlier count limit is set to zero, then the "Bad" statuses will always remain and never be flagged "Good."
The new lab bias value (LBIASNEW) is available for validation via the validation section.
Note: See the complete list of Lab Combined Status values in the PCWS help files.
Keywords: IQ Config PCWS Lab value
References: None
Problem Statement: When trending an IP_CalcDef record in Aspen Process Explorer, the level column in the legend shows "Unknown Level (-1)" while the status column shows "Unknown Status (-32768)".
Solution: This problem is due to the calculation returning a constant number, which does not have a status and a level associated with it. By default, Aspen Calc will denote these as Unknown Status and Unknown Level. The TestCalc calculation above has the following calculation script:
IF (ATCAI < 8) THEN 0 ELSE 1
Without having to create any new tag, the calculation script can be changed to the following so that it returns a level and status value:
IF (ATCAI < 8) THEN ATCAI * 0 ELSE ATCAI * 0 + 1
Because the result is now derived from the tag ATCAI, it inherits that tag's status and level.
Keywords: Shared Calculation On Demand calculation
References: None
Problem Statement: How does Aspen InfoPlus.21 (IP.21) verify Aspen Local Security (ALS) or Aspen Framework (AFW) security?
Solution: IP.21 is a securable application that uses the ALS or AFW security servers. Roles created on the security server can contain both individual domain accounts and domain groups. The roles can then be applied to the IP.21 database and granted permissions such as read, write and so forth. When a user logs into an IP.21 client attempting to view the data from a secured IP.21 system for the first time, several actions occur to validate the request. For default installations of version 6.x and above, those steps are as follows:
1. The desktop machine requests data from IP.21, identifying the user logged onto that machine. This happens when the remote user first connects to the IP.21 API server.
2. The AFW Security Client service on the IP.21 server executes 'pfwauth.dll' to get a list of AFW roles and role members from the Aspen Security server. ('pfwauth.dll' contains all security access checking routines. This method of launching the security routines is referred to as "Out of Process" because 'pfwauth.dll' is running within a service, as opposed to running within the secured application.)
3. Once the list of AFW roles is obtained by this service, the process then connects to the nearest domain controller (typically running Active Directory) and attempts to retrieve a list of all domain groups the requesting user account belongs to. NOTE: The AFW Security Client service needs permission to resolve group memberships in Active Directory. Since this service typically runs under the LocalSystem account, it may be necessary to change the login account of the service to a domain account with 'Read Group Permission' in Active Directory.
4. Once the list of the user's domain groups has been obtained from the domain controller and compared to the list of AFW roles, IP.21 will grant or deny access to the specific data requested by the client. Upon successful access, the IP.21 server will also cache the connected user information in the User Roles table. (This table is viewable through the IP.21 Manager.)
NOTE: If the "UseServerADSI" registry key (accessible through AFW Tools) is set to 1, the AFW Security Client service will contact the ALS server for AFW role information, but it will delegate the action of looking up group membership from the domain controller/Active Directory server to the Local Security server. (All clients prior to AMS version 5 delegated the group lookup to the Local Security server.) Therefore, the account running the ALS server will be the account requiring permission to 'Read Group Membership' in the Active Directory server. As of AMS version 5, the default value of the "UseServerADSI" key is zero, so IP.21 will have the AFW Security Client service perform the lookup on the domain controller/Active Directory server. For more information on the "UseServerADSI" key, please see solution 110390. For more information on configuring the ALS server to use a specific account, please see solution 113233.
The above steps define the process of AFW security verification for a first-time connection to IP.21. If the user has already connected to IP.21, the system is able to obtain the user's role information from a cache. Note: The AFW Security Client service will attempt to refresh cached role information at a predefined interval of 5 minutes for users who have previously connected to IP.21.
Troubleshooting: If users are unable to view data from a secured IP.21 database, and they belong to a role that should have permission, the problem likely lies with the functioning of the AFW Security Client service. Steps to troubleshoot are:
1. Invoke the User Roles dialog from the IP.21 Manager (go to the Actions menu and select "User Roles") to see if the user has previously connected and can view his/her roles.
2. Verify that the AFW Security Client service is running.
3. If the AFW Security Client service is running, verify that the account this service runs under has permission to read group membership in the Active Directory server. Typically the service is configured to use the LocalSystem account, which probably does not have any permissions in the Active Directory server.
4. Try changing the login account of the AFW Security Client service to a domain account and restart it. Is access granted?
5. Has the AFW role membership for this user recently changed? If so, note that the AFW Security Client service only attempts to refresh its list of roles every 5 minutes. You can restart the service to initiate the update of the cache.
6. Has the desktop user's account recently been added to or removed from a domain group? If so, the domain controller replication process may take some time, so the domain controller being contacted with the request may not yet see the changes. Use the AFW Security Client Tool to verify that the user can be resolved to the correct role.
For more information on using AMS products in conjunction with Active Directory, see solution 113224. For information on how the various AMS clients verify security, please see the following knowledge base articles:
113222 - How does SQLPlus verify AFW security?
113223 - How does Web.21 verify AFW security?
113231 - How does Process Explorer verify AFW security?
113233 - What account is running the Aspen Local Security Server?
Keywords: access denied, the user's permission does not allow the operation
References: None
Problem Statement: Users who make use of HistoryBackupDef records to back up their history files will find that the amount of disk space taken up by these backup files is very large. Maintenance has to be carried out regularly to avoid running out of hard disk space. It is actually possible to automate the process of deleting redundant history backup files, maintaining only the more recent backup copies at any time. The explanation below shows how this is done.
Solution: The POST BACKUP COMMAND and LAST ACTIVE COMMAND fields in HistoryBackupDef records are used to run batch files immediately after the backup operation is complete. There are 2 batch files located in the \h21\etc\ folder: system_cleanup.bat and active_cleanup.bat.
The active_cleanup.bat file performs the following functions:
- copy or back up the newly copied active fileset to a permanent backup location (e.g. a tape device)
- delete older copies of the backup active filesets, maintaining only a pre-determined number of backup files at any time
This batch file accepts 3 parameters: the path of the directory where the active archives are saved, the name of the repository, and the number of active archive backups that should be kept. You will need to add the active_cleanup.bat file in the LAST ACTIVE COMMAND field in the repeat area, e.g.:
%h21%\etc\active_cleanup.bat D:\IP21_DAILY_BKUP\ACTIVE TSK_DHIS 2
This command will keep the 2 most recent active fileset backups for TSK_DHIS. For the other repositories, you will need to copy the same command into their respective LAST ACTIVE COMMAND fields, and modify the backup folder and repository name parameters. To copy the active fileset to a permanent backup location, remove "REM" from the statement:
REM copy %1\%current%\*.* TO_MY_PERMANENT_BACKUP_DEVICE
and replace the last parameter with the backup location.
The function of the system_cleanup.bat file is similar:
- copy or back up the newly copied system files to a permanent backup location (e.g. a tape device)
- delete older copies of the system backup files, maintaining only a pre-determined number of backup files at any time
This batch file accepts 2 parameters: the path of the directory where the history system files are saved, and the number of history system file backups that should be kept. The system_cleanup.bat is normally used in the POST BACKUP COMMAND field because it backs up the system files, which apply to all your repositories, e.g.:
%h21%\etc\system_cleanup.bat %h21%\backups\system 5
We do not have any batch files for the changed and shifted filesets. This is because we cannot predict which filesets will be changed or shifted. Thus, you will have to manually maintain these backup files to ensure they do not occupy too much disk space.
Keywords: POST BACKUP COMMAND, LAST ACTIVE COMMAND, system_cleanup.bat, active_cleanup.bat, repository, backup
References: None
Problem Statement: What are the concerns when choosing between the POLYPCSF and PC-SAFT property methods?
Solution: Aspen Plus / Aspen Polymers has two PC-SAFT based property methods, POLYPCSF and PC-SAFT. POLYPCSF is a homopolymer version while PC-SAFT is a copolymer version. Below are a few things to take note of:
1. Both models share the same pure and binary parameters, but each has its own databank, and the two databanks should NOT be used at the same time in the same simulation. Consequently, the POLYPCSF databank should be selected only when POLYPCSF is the property method, and the PC-SAFT databank should be selected only when PC-SAFT is the property method.
2. Another difference is that PC-SAFT contains the association and polar terms while POLYPCSF does not. For any component that may have association or polar effects (e.g. methanol and other alcohols), only PC-SAFT should be used. For hydrocarbons, both are equivalent.
Help for these two property methods can be found in the online help of Aspen Plus. On the Aspen Plus menu bar: Help | Help Topics | Aspen Polymers Help | Aspen Polymers Plus.
Keywords: PC-SAFT, polymer property method
References: Equation of State Models
Problem Statement: 1. According to the Intel compiler web page ( http://www.intel.com/cd/software/products/asmo-na/eng/compilers/fwin/278834.htm), there are two versions of the compiler available: the Standard Edition, and the Professional Edition. Will the Standard Edition be sufficient for Aspen Plus/ACM user Fortran? 2. The Intel Compiler web page indicates that the Standard Edition includes the Intel Debugger. On the other hand, there is a note under the "New Features in Depth" section that states: "Note: Use of Intel Visual Fortran Compiler 9.1 for Windows* requires purchase of Microsoft development tools. Please see System Requirements for details." Will I need to purchase MS Development Tools in order to compile and debug user Fortran for Aspen Plus and Aspen Custom Modeler applications?
Solution:
1. The Standard Edition is sufficient unless the user uses the IMSL library.
2. Yes, the Intel libraries rely on the linker and C libraries from Microsoft Developer Studio.
Keywords: Fortran user subroutine
References: None
Problem Statement: In my simulation, I get Input Warning 1116: For which configurations is the Advanced Calculation Method not available?
Solution: The Advanced Calculation Method is available for most shell types except Kettle and Flooded Evaporator. If either the Kettle or the Flooded Evaporator configuration is selected, the simulation will be solved with the Standard Method.
Keywords: Vaporizer type, K shell
References: None
Problem Statement: In a simulation created in Aspen Shell & Tube Exchanger, the shell thickness is entered as input and the OD is calculated. When the simulation is exported to Aspen Shell & Tube Mechanical, the shell thickness input form is deactivated. Why is this form deactivated?
Solution: When exporting a simulation from the thermal to the mechanical program, some of the inputs remain deactivated to preserve the tubesheet layout. To activate this form, follow these steps:
1. Go to Input | Exchanger Geometry | Tubesheet Layout and, for the tube layout option, select "Create a new layout". The options to modify the elements that affect the tubesheet layout will now be active.
2. Go to Input | Exchanger Geometry | Shell | Shell Cylinder. This tab will now be active and the shell cylinder thickness can be altered.
Keywords: Tubesheet layout, shell cylinder thickness
References: None
Problem Statement: How can we modify the segment types of pipelines (short/long) after setting them in the pipeline configuration wizard?
Solution: You can modify them by using an UPDATE database query on the PIPELINE_SEGMENTS table and its PL_TYPE column. The syntax to change all the segments to long would be:
UPDATE PIPELINE_SEGMENTS SET PL_TYPE=1
Note that the default segment type setting is SHORT.
Keywords: Pipeline Segments, Update Query, PL_TYPE
References: None
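If only certain segments should change, the same UPDATE can be narrowed with a WHERE clause. This is a minimal sketch only: the key column used to identify a segment (PL_ID here) is a hypothetical name, and PL_TYPE=0 is assumed to mean SHORT; verify both against the actual PIPELINE_SEGMENTS schema before running it.

UPDATE PIPELINE_SEGMENTS
SET PL_TYPE = 0          -- assumed code for SHORT
WHERE PL_ID = 101;       -- hypothetical key column and value

A quick SELECT on PIPELINE_SEGMENTS before and after the update is a cheap way to confirm the change.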
Problem Statement: What are the differences in the algorithm for the different settings of the RadFrac Absorber option: (a) Yes and (b) No?
Solution: The standard RadFrac convergence uses the inside-out algorithm originally developed by Boston and Sullivan. On RadFrac's Convergence | Advanced sheet, the option for Absorber can be changed from No (the default) to Yes. Absorber=Yes invokes an algorithm option for absorber/stripper calculations. Absorber=Yes is allowed for RadFrac only when there is no condenser, condenser and reboiler duties are specified, and the Standard algorithm is used for the simulation. There are two standard inside-out algorithms:
1. Absorber=No uses the original inside-out algorithm by Boston and Sullivan with several extensions (e.g. 3-phase columns) and refinements.
2. Absorber=Yes uses the inside-out variant intended for the absorber/stripper calculations described above.
Keywords: Inside-Out, Stripping factor, Absorber
References: Boston, J. F., and Sullivan, S. L. Jr., A New Class of
Problem Statement: How do you set up and perform a sensitivity analysis in Equation-Oriented (EO) mode? An example would be very beneficial.
Solution: Sensitivity analysis is available in Aspen Plus in both Sequential Modular (SM) mode and EO mode, but the results are presented in different forms. In SM mode, a sensitivity analysis is carried out by repeatedly executing the blocks involved at all combinations of values of the varied variables, and the results are tabulated for the variables specified by the user under those conditions. In EO mode, on the other hand, sensitivity analysis is done by calculating the Jacobian matrix, in which the partial derivative of each dependent variable with respect to every independent variable is evaluated at the current steady-state condition. There is no functionality in the Graphical User Interface to run several EO cases automatically in a similar fashion as in SM mode, although using a script to run several cases automatically in EO is possible. To perform an EO sensitivity analysis, the user must specify a set of independent and dependent EO variables. Before EO variables can be accessed, the user has to synchronize the model for the EO solution strategy. The independent variables have to be constant (user-set) variables, while the dependent variables have to be calculated ones. If the sensitivity has to be computed for a constant variable with respect to a calculated one, this is also possible, but a specification swap has to be performed first. The following procedure describes how to set up and perform a sensitivity analysis to study the impact of changing feed temperature and reflux ratio on the column condenser duty and methanol purity in the distillate (the files can be opened in Aspen Plus 2006.5 or higher):
1. Open EO_example_starter.bkp and run it in SM mode.
2. In the control panel, change the solution strategy from Sequential Modular to Equation Oriented to synchronize the model.
3. Go to EO Configuration | EO Sensitivity and click New to create an EO sensitivity analysis. Accept the default name ES-1 (or enter a new name).
4. In the configuration sheet, click the field under independent variable, then click the "..." button at the end to bring up the EO variable list. Select FEED.BLK.TEMP (temperature of the feed stream) and COLUMN.BLK.REFL_RATIO_MOLE (column mole reflux ratio) as the independent variables. Similarly, select COLUMN.BLK.COND_HDUTY (column condenser duty) and COLUMN.DIST.STR.METHANOL (mole fraction of methanol in the distillate) as the dependent variables.
5. Go to the Results sheet and click the Calculate Sensitivity button to obtain the Jacobian matrix. Please note, EO synchronization needs to be done first in order to activate the Calculate Sensitivity button. The result is shown in the table below:

                             COLUMN.BLK.COND_HDUTY [BTU/hr]   COLUMN.DIST.STR.METHANOL
FEED.BLK.TEMP [F]            -1113.2                          -0.013911
COLUMN.BLK.REFL_RATIO_MOLE   -1.8693E7                        0.55363

Keywords: EO sensitivity
References: None
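For reference, the numbers in the table above are the entries of the Jacobian that the Calculate Sensitivity button evaluates, with y the dependent (calculated) variables and x the independent (user-set) variables:

$$ J_{ij} = \left.\frac{\partial y_i}{\partial x_j}\right|_{\text{steady state}} $$

For example, the entry -1113.2 says that, to first order around the current operating point, a 1 F increase in FEED.BLK.TEMP changes COLUMN.BLK.COND_HDUTY by about -1113 BTU/hr.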
Problem Statement: How do you create an Aspen Properties Enterprise Database (APED) databank with extra security?
Solution: The Aspen Properties Enterprise Database by default protects its content by encrypting the property values. There are additional security measures that have been implemented to further protect the database from unintended usage:
- The database can be used only within a client company.
- The database can be used within a 3rd party vendor of a client company with restricted access.
This document describes these additional security measures that you can use.
I. Database Accessible Only Within a Client Company
This database security measure was implemented to secure the content of the database and protect the intellectual property of the company, such that the database can be used only within the users' own company. The database will not function at a different company. In this manner, the proprietary data of a company will not inadvertently be used in an unauthorized manner. This security is tied to the software license string of the given client company as returned by the License Manager. See section "Database Security String" for more details. To create a database with this security, follow these steps:
1. From Aspen Properties Database Manager, select Aspen Physical Properties Databases. Click the right mouse button and select Create a New Database.
2. Follow the steps to create the database, such as by importing legacy files. In the dialog box that appears, enter the Login Name, Password and Database Name. In this example, PROCESSC is used.
3. Click Next to specify extra security.
4. Check the "Add extra Security to this database" checkbox.
5. Enter the database creator password. It is extremely important that you note down this password and keep it in a secured location. You will need it if you want to create a restricted database for a 3rd party (see next section) or if you need to back up the database using a different security string, in case the security string of your company (as returned by the License Manager) has been changed.
6. Click Next to proceed to the next steps of the database creation process. See Help for the Aspen Properties Database Manager for complete instructions.
The database thus created can be used only within your company.
II. Database Accessible by 3rd Party with Restricted Access
This database security measure was implemented to secure the content of the database and protect the intellectual property of the company when the database is shared with an outside vendor, such as a technology partner or an engineering company. The database must be shared with a 3rd party because it is used in a process model (e.g., an Aspen Plus simulation model) and the process model is required by the 3rd party to execute a project. The requirement is that the 3rd party must be able to use the database in the simulation calculations, but should not be able to view the content of the database in the Aspen Plus User Interface and Aspen Properties Database Manager. In addition, the database cannot be used outside of the 3rd party company. To achieve this level of security, the database must be configured in such a way that:
a. It works only within a certain organization. A security string of the intended (3rd party) organization is required. This string can be obtained from the Aspen Properties Database Manager by using RMB on Aspen Physical Properties Databases | All Tasks | View Database Security String (see section "Database Security String" for more details). The string must be used when creating the restricted database.
b. The option to allow the database to be used with AspenTech simulation products such as Aspen Plus and Aspen Properties is enabled.
c. The option for AspenTech viewers such as Aspen Properties Database Manager and Aspen Plus User Interface is disabled.
The restricted database can be created from a secured user database (source) by creating a "backup" of the source with additional Access and Security settings. The source database must have been created with extra security as described in "I. Database Accessible Only Within a Client Company". To create a restricted database from a secured source database, with added security to protect its content from viewing and modification, follow these steps:
1. From the Aspen Properties Database Manager, select Aspen Physical Properties Databases and start the database backup wizard using RMB.
2. The Backup Database Wizard appears. Enter the Login Name and Password and select the source database that you want to back up (in this example, PROCESSA).
3. Select (check) the "Change Security Settings" checkbox. Note that if the selected database does not have the extra security as described in section I, this checkbox is not available. Enter the database creator password that you provided when creating the original secured database.
4. Note down the backup file directory and the name of the backup file that will be created once you finish with this wizard. In this example, the file is: C:\Documents and Settings\All Users\Application Data\AspenTech\APED V7.0\PROCESSA
5. Click Next.
6. Specify the database Access and Security settings on the form that appears.
7. Give access to the Aspen Plus/Aspen Properties Calculation Engine. Do not give access to the Aspen Properties Database Manager and User Interface.
8. Give access to the specified vendor company. Enter the vendor validation string from the vendor (obtained by the vendor from the Aspen Properties Database Manager by using RMB on Aspen Physical Properties Databases | All Tasks | View Database Security String; see section "Database Security String" for more details).
Important Notes:
1. You can use this feature to provide a secured database to AspenTech Customer Support so that they can diagnose and reproduce your problems.
2. If you check "Accessible within my company", this database can also be used within your company with the same Access Settings. This is useful for:
a. testing the database before giving it to a 3rd party
b. deploying the secured database within your own company
c. deploying the secured database within your own company, but at a different location that has a different security string
Deployment of the Restricted Database
The backup copy of the secured database has been created in the location noted in step 4 above. Copy this file and give it to your vendor, and tell them the name of the database (in the example above, the database name is PROCESSA). They must restore it using the following steps:
1. Place the backup file on a local drive (e.g., c:\temp\). Start Aspen Properties Database Manager. Select Aspen Physical Properties Databases, use RMB | All Tasks | Restore Database to start the database restore wizard.
2. The Restore Database Wizard appears.
3. Provide the Login Name, Password and Database Name. Use the database name provided by the creator. In this example, it is PROCESSA. Note that this database must not already exist at the vendor's site.
4. Click the Browse button to go to where you placed the backup database in step 1 (e.g., c:\temp\).
5. Select the file and click Open to return to the Restore Database Wizard dialog box.
6. Click OK to start the database restore process.
7. Click Close. The restored database will automatically be registered in the Aspen Properties Database Manager and will appear in the available database tree view. The registered database is now ready for use in the simulation.
8. However, this database is not accessible (viewable) in the Aspen Properties Database Manager. Clicking on the PROCESSA node will result in the following error:
Database Security String
The database security string is a key element of the database security. It is tied to the software license string of your company as returned by the License Manager. To see this security string, from the Aspen Properties Database Manager, select Aspen Physical Properties Databases and use RMB | All Tasks | View Database Security String. The dialog box that appears shows the security string for your company. As an example, the information for AspenTech is:
Important notes:
- You do not need to supply this security string when creating a secured database for use within your company. This information will be obtained automatically. You cannot alter this string.
- Your company may have different security strings for different locations. For example, the US location may have a different string than the Asian or European location. If this is the case, a database created at one location cannot be used at another location. To allow a database that is created at one location to be installed and used at another location, you must back up the database using the steps described in section "II. Database Accessible by 3rd Party with Restricted Access", using the security string of the target location. The backup database can then be restored (installed) at the target location. Be sure to set the appropriate database access and security settings while backing up the source database.
Keywords: None
References: None
Problem Statement: This Knowledge Base article provides steps to resolve the error "404 - File or directory not found", which may be encountered when trying to view Tag Details in aspenONE Process Explorer for tags defined by a custom definition record.
Solution: The DETAIL_DISPLAY_REC attribute within the IP_DiscreteDef definition record defines the Tag Details page; ipdscret is the default page name for IP_DiscreteDef. It is possible to plug in other ASP implementations for Tag Details based on the tag's definition record. If the value of DETAIL_DISPLAY_REC is different from the default, the page named in DETAIL_DISPLAY_REC will be displayed. Similarly, if a custom definition record is created with DETAIL_DISPLAY_REC specified, the page with this name will be called; if no page with that name exists on the web server, the above-mentioned error message appears. To have the standard Tag Details page come up for these custom discrete tags, please use the Aspen InfoPlus.21 Administrator to change the value of DETAIL_DISPLAY_REC to a space within your custom definition record.
Note: ipanalog is the default Tag Details page for the IP_AnalogDef definition record.
Keywords: None
References: None
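Where several custom definition records need the same fix, the change could also be scripted from Aspen SQLplus rather than done one record at a time in the InfoPlus.21 Administrator. A minimal sketch, assuming the custom definition record is named MyCustomDef (a placeholder) and that DETAIL_DISPLAY_REC is reachable as a fixed-area field through the DefinitionDef table (worth verifying on your system):

-- Reset the Tag Details page of a custom definition record to a space
-- so the standard page is used. 'MyCustomDef' is a hypothetical name.
UPDATE DefinitionDef SET DETAIL_DISPLAY_REC = ' '
WHERE NAME = 'MyCustomDef';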
Problem Statement: Variables whose calculations involve user-defined variables get stuck at a BAD value after a temporary loss in communication causes the variable to go BAD and then return to GOOD after a while.
Solution: The APC Builder application uses the AspenCalc engine to perform its calculations. This provides a more robust calculation facility than the previous DMCplus calculation engine. One of its features is that calculation variable statuses are now passed automatically along with the calculation. Take, for example, a simple filtering calculation involving the user-defined variables DEPOLD and FILTER (the filter factor):
Input Calc:
If FIRST_RUN = 0 Then
DEP = (1 - FILTER) * DEPOLD + FILTER * DEP
End if
Output Calc:
DEPOLD = DEP
Since DEPOLD is calculated from DEP in the output calc section, when the DEP status goes BAD, the value is passed along with the status to DEPOLD. Thus, both DEP and DEPOLD have BAD statuses. When the DEP status returns to GOOD, the calculation uses DEPOLD (with the bad status) to calculate DEP during the input calculation stage, even before DEPOLD's status can be reset to GOOD (input calcs are run before output calcs). Due to this, the DEP status gets set to BAD and remains like that forever. The only way to reset the DEP status is to reload the controller, so that FIRST_RUN is 1 and DEPOLD is reset to the unaltered DEP. The calculation above needs to be modified to check for such conditions and make sure that the DEPOLD status is reset appropriately. The example code given below only performs the filtering if both DEP and DEPOLD are good; if they are not, DEPOLD is simply set to track the unaltered DEP through the output calc:
Input Calc:
If (FIRST_RUN = 0) and ((DEP.level = Qlevel_good) and (DEPOLD.level = Qlevel_good)) Then
DEP = (1 - FILTER) * DEPOLD + FILTER * DEP
End if
Output Calc:
DEPOLD = DEP
In summary, care must be taken to ensure BAD statuses are handled correctly, to keep a variable's status from being locked in the BAD state forever.
Keywords: Aspen Process Controller, Aspen Calc, Variable Status
References: None
Problem Statement: General operating procedures for Aspen Exchanger Design and Rating (Aspen EDR).
Solution: Most of the Aspen EDR programs follow these general operating procedures:
1. Start Aspen EDR. On your desktop, click the Aspen Exchanger Design & Rating shortcut icon, or click Start | Programs | AspenTech | Exchanger Design & Rating V7.x.x | Exchanger Design & Rating User Interface.
2. Select the required EDR program. On the Aspen EDR Design System dialog box, do one of the following: on the New tab, click the checkbox next to the EDR program you want to use and click OK; or click the Existing tab, select the file you want to open, and click OK.
3. Enter the required data. Use the Navigation Tree or click the Next button on the toolbar to display the required input forms and sheets, and enter the data.
4. Run the problem. On the toolbar, click the Run button, or on the menu bar click Run | Run Program.
5. Review the results. Use the Navigation Tree to display the results.
6. Save the input data at any time. On the toolbar, click the Save button, or on the menu bar click File | Save.
7. Print a hard copy of the results, if desired. On the toolbar, click the Print button, or on the menu bar click File | Print. In the dialog box, check the boxes next to the desired output, and click Print.
8. Update the file with the current geometry. On the menu bar, click Run | Update file with Geometry.
9. Transfer design information to other programs, if desired. On the menu bar, click Run | Transfer. In the dialog box, check the box next to the desired program and click OK.
10. Exit Aspen EDR. On the menu bar, click File | Exit. The program prompts you to save changes; click the appropriate button.
Keywords: procedure, EDR, general, workflow
References: None
Problem Statement: How to GET and PUT a value to an Aspen Cim-IO device directly from Aspen SQLplus.
Solution: Here are two sample procedures to get and put values from/to an Aspen Cim-IO device from Aspen SQLplus.
Sample procedure to get a value from a Cim-IO device:

Procedure CimIOGet ( NumberOfTags integer, TagAndValue)
local x integer;
-- First generate an answer file
set output 'c:\CimIOSQLGet.txt';
write '9';                              -- Test GET
write 'IOOPC';                          -- Device name
write '1';                              -- Unit number
write CAST(NumberOfTags AS CHARACTER);  -- Number of tags
write '1';
write '10';
write '1';                              -- Timeout
write '100';                            -- Frequency
write '-1';
write '1';                              -- List ID
for x = 0 to NumberOfTags-1 do
  write TagAndValue[x,0];               -- The IO tagname
  write '1';                            -- Data type
  write '1';                            -- Device type
end;
write '';                               -- the return after the response
write 'x';                              -- exit cimio_t_api.exe
set output 'c:\CimIOSQLGetResult.txt';
system 'c:\progra~1\aspentech\cim-io\code\cimio_t_api < c:\CimIOSQLGet.txt';
system 'del c:\CimIOSQLGet.txt';
set output default;
select line from 'c:\CimIOSQLGetResult.txt' where linenum between 102 and (101 + (NumberOfTags * 9));
system 'del c:\CimIOSQLGetResult.txt';
end;

Sample procedure to put a value to an Aspen Cim-IO device:

Procedure CimIOPut ( NumberOfTags integer, TagAndValue)
local x integer;
-- First generate an answer file
set output 'c:\CimIOSQL.txt';
write 'a';                              -- PUT
write 'IOOPC';                          -- Device name
write '1';                              -- Unit number
write CAST(NumberOfTags AS CHARACTER);  -- Number of tags
write '1';
write '10';
write '1';
write '-1';                             -- List ID
write '1';
for x = 0 to NumberOfTags-1 do
  write TagAndValue[x,0];               -- The IO tagname
  write '1';                            -- Data type
  write '1';                            -- Device type
  write CAST(TagAndValue[x,1] AS CHARACTER); -- The value
  write '1';                            -- Output type
end;
write '';                               -- the return after the response
write 'x';                              -- exit cimio_t_api.exe
set output 'c:\CimIOSQLPutResult.txt';
system 'c:\progra~1\aspentech\cim-io\code\cimio_t_api < c:\CimIOSQL.txt';
system 'del c:\CimIOSQL.txt';
set output default;
select line from 'c:\CimIOSQLPutResult.txt' where linenum between 118 and (118 + (NumberOfTags * 9));
system 'del c:\CimIOSQLPutResult.txt';
end;

Sample query calling the Get and Put procedures (values from the get are returned in the output window):

local TagAndValueList;
redim(TagAndValueList,10,1);
TagAndValueList[0,0] = 'Bucket Brigade.Real4';
TagAndValueList[0,1] = 67.89;
TagAndValueList[1,0] = 'Bucket Brigade.Real8';
TagAndValueList[1,1] = 98.76;
-- The parameters are (number of tags, array of tagnames and values)
CimIOPut ( 2, TagAndValueList);
CimIOGet ( 2, TagAndValueList);

Keywords: None
References: None
Problem Statement: The Aspen SQLplus Query Writer has a query timeout limit of 32767 seconds (9 hrs, 6 min, 7 sec). For certain queries, this can be too restrictive.
Solution: To run a query with no upper time limit, simply set the timeout (Query > Timeout...) to zero. Keywords: timeout time out References: None
Problem Statement: How can you delete an occurrence from a GET record repeat area using Aspen SQLPlus?
Solution: Deleting a particular occurrence from a GET record's repeat area requires several steps, since all "delete" means in this context is decreasing the number of occurrences by 1. If the occurrence you want to delete is not the last one, follow these steps to delete occurrences using SQLplus. Please note that it is prudent to turn off record processing for the get record before you change anything in the repeat area.
1. First, determine the occurrence number of the record you would like to delete. (This step is more of a check than anything else. You may want to repeat it at the end to verify that the occurrence was in fact removed.) PLEASE NOTE: In this example, we want to delete occurrence number 2, and the "io_value_record&fld" field was left blank for all other occurrences in order to more easily see what changes are being made.
select OCCNUM, io_tagname, "io_value_record&&fld" FROM "testgetrecord".1;
OCCNUM io_tagname io_value_record&fld ---------------- -------------------
Keywords: None
References: None
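As a rough sketch of the remaining work described above (shift every later occurrence up by one, then reduce the occurrence count), the following SQLplus fragment illustrates the idea. The occurrence-count field name ("#_OF_GET_OCCS") and the definition record name (IoGetDef) are assumptions here, so check them against your GET definition record; also, only io_tagname is shifted below, so repeat the same update for every other repeat-area field (e.g. "io_value_record&&fld"):

local x integer;
local n integer;
-- Current number of occurrences in the repeat area.
n = (SELECT MAX(OCCNUM) FROM "testgetrecord".1);
-- Overwrite occurrence 2 (the one being deleted) by shifting 3..n up by one.
for x = 2 to n - 1 do
  UPDATE "testgetrecord".1
  SET io_tagname = (SELECT io_tagname FROM "testgetrecord".1 WHERE OCCNUM = x + 1)
  WHERE OCCNUM = x;
end;
-- Finally drop the last (now duplicated) occurrence by decrementing the count.
-- "#_OF_GET_OCCS" is an assumed field name; verify it on your system.
UPDATE IoGetDef SET "#_OF_GET_OCCS" = n - 1 WHERE NAME = 'testgetrecord';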
Problem Statement: The version of component-sum.kb supplied with version 11.1 is not able to perform the calculations/operations listed below:
- Calculate phase component data
- Calculate the following stream properties: API gravity, standard volumetric flow for vapour and liquid, liquid phase specific gravity, liquid phase volumetric flow and liquid phase mass-based Cp
- Import dry stream properties from PRO/II (the attached Provision.lisp and materials.lisp have been enhanced to include mappings for dry stream properties and are necessary for this)
Solution:
1. Close all Aspen Zyqad applications.
2. Save the new component-sum.kb into C:\Program Files\AspenTech\Working Folders\Aspen Zyqad 11.1\projects\libraries\Kbs.
3. Replace material.lisp and provision.lisp in C:\Program Files\AspenTech\Working Folders\Aspen Zyqad 11.1\Examples\prototypes with the new versions.
4. Recreate the prototype store using install-prototypes.lisp in Development Application -> Prototypes menu -> Create Prototype Store From KB menu -> Compile knowledge base -> select component-sum.kb and compile.
5. Reload Aspen Zyqad and reopen the Workgroup.
Note: The new kb enables the liquid1 and vapor phase component data (when available) to be calculated in the same way as bulk components. To achieve this, the component names must be copied from the bulk components to each phase's components using the rule "Copy Component Names to Phases". This needs to run only once per case after the component groupings have been established in a workgroup (i.e. after import of simulation data into a case). Although this could be included in the "Sum all piping system mass flows" rule, it would add significant redundant data manipulation at every data transfer and so is not recommended. The attached file will be used as the basis for version 12.0.
Keywords: Component-sum.kb, phase component data, extended stream properties, import, dry stream, Pro-II
References: None
Problem Statement: "MPF Lock Attempt Timed Out" and "Failed to Lock Controller - Skipping" MPF lock error messages. The challenge with MPF lock errors is not knowing what is causing the lock: is it the online application, the viewsrv process, or possibly a filled-up message queue?
Solution: Here is a possible scenario for the MPF lock attempt timeout:
1. A DMCplus controller is running, locking the context during the read-through-validation phases of the controller cycle.
2. A DMCplus View session is running, displaying information from the controller context.
3. A user attempts to enter a value via View for a variable that has a database connection. View must build (one time) an IDB for the controller and validate all the tags in the CCF, an action similar to a controller load. This results in an attempt to lock the controller context for some period of time, maybe 5 seconds, maybe 20, depending on how many tags are in the CCF and how long the validation takes.
4. The DMCplus controller runs again during this time and waits up to 5 seconds for the exclusive lock on the controller context. If the lock is not obtained, the following message is generated for that cycle; the controller should try again at the next cycle.
MPF LOCK ATTEMPT TIMED OUT
FAILED TO LOCK CONTROLLER - SKIPPING
Aspen IQ / Aspen Inferentials also uses MPF regions and can experience the same type of locks on the MPF files as DMCplus. The Aspen IQ viewsrv can also lock up the files while updating. It is also possible that the MPF message queue could be locked or full and therefore unavailable for update by the control system process. In the case of a locked message queue, you can use the MPF system to clear out the messages. At a DOS prompt:
MPF_Manage
will present the MPF subsystem and a list of MPF options.
MPF_Manage msglist -r -o c:\messages.txt
will copy all of the messages from the message queue to the text file messages.txt on the C drive. Check the contents of the file for the messages; these are the same messages seen in the PCWS, in the .eng file, or when looking at messages from Manage. When you are confident the messages are contained in the text file, issue the following to clear out the message queue:
MPF_Manage msgclear
This should resolve any locks on the message queue. These actions to address the message queue will not disrupt the running controllers.
You might run the ACOBASE Shutdown, which will stop all of the online applications. This would release any locks, and then the ACOBASE Startup could be used to restart the apps. You could also try stopping and starting the applications one at a time, until the lock is released. You might also use dmcpview_shutdown.bat to stop the dmcp_Viewsrv and then use dmcpview_Startup.com to restart it. This can also be done with the iq_viewsrv; there is an iqview_shutdown.cmd and an iqview_start.cmd. These are all found in the AspenTech Shared directory. Stopping and restarting the viewsrv processes in this manner will not disrupt the running applications.
Keywords: DMCplus, MPF, Timeout, Lock
References: None
Problem Statement: When exporting a datasheet to Excel, all the numbers are recognized as text. Why?
Solution: If the General format is used, the exported data will be in text format. Open the datasheet from the Datasheet Definer and define the field as numerical in Excel.
Keywords: datasheet export, text field in datasheet
References: None
Problem Statement: After adding Aspen InfoPlus.21 to Microsoft SQL Server as a linked server, you can use Microsoft SQL Server tools to access and view Aspen InfoPlus.21 data.
Solution: Before Aspen InfoPlus.21 can be added as a linked server, an ODBC data source must be created using the AspenTech SQLplus ODBC driver:
1. On the machine where SQL Server is installed, open the ODBC Data Source Administrator (Start | Settings | Control Panel | Administrative Tools | Data Sources (ODBC)).
2. Select the System DSN tab and click the Add button.
3. In the new data source window, select the AspenTech SQLplus driver and click Finish.
4. Click the Advanced button and uncheck "Use Aspen Data Sources (ADSA)".
5. Enter a name and description for the ODBC data source. In "TCP/IP host", enter the Aspen InfoPlus.21 server name; the default "TCP/IP Port" value is "10014". Click OK to save the data source.
6. Click "Test" to confirm that the connection works.
After creating the data source, create the linked server with the following steps (Microsoft SQL Server 2005):
1. Run Microsoft SQL Server Management Studio and click "Server Objects".
2. Right-click "Linked Servers" and select "New Linked Server".
3. Enter the name of the linked server and select the "Other data source" option.
4. In the Provider name list, select "Microsoft OLE DB Provider for ODBC Drivers".
5. In the data source field, enter the name of the ODBC data source created earlier. Enter a product name; the other fields can be left blank. Click OK to save the new linked server.
Keywords: Linked Server, Microsoft OLE DB Provider for ODBC Drivers, CN-Solution
References: None
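Once the linked server exists, Aspen InfoPlus.21 data can be queried from SQL Server tools with OPENQUERY. A minimal sketch, assuming the linked server was named IP21 in step 2 above and that standard IP_AnalogDef records exist on the InfoPlus.21 side; adjust both names to your configuration:

-- Pass a SQLplus query through the linked server and return the result set.
-- 'IP21' is an assumed linked server name.
SELECT *
FROM OPENQUERY(IP21, 'SELECT name, ip_description, ip_value FROM ip_analogdef');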
Problem Statement: In a GET transfer record you may receive the error: "USR_GET_RECEIVE, Error receiving a GET reply". This means that the communication between the Aspen Cim-IO server and client was successful in general, but this specific GET request failed. Summarized: "Unable to connect to specific tag xxxxxx".
Solution: Since this is a very generic error message, various issues can cause it. The objective of this solution is to narrow down where the error resides. Common causes are:
1. The communication between the Aspen Cim-IO server and the Aspen Cim-IO client is not successful.
2. The communication between the Aspen Cim-IO server and the external device is not successful.
3. The tag name has not been specified as required by the interface (check the interface-specific Aspen Cim-IO User's Guide).
4. A tag is being addressed at the interface which doesn't exist (a typo has been made).
General questions for determining where the problem resides:
1. Has this been working before? Until when, and what changed? Can you stop and start the Aspen Cim-IO server and client?
2. Do other transfer records work correctly? Do these address the same unit in the DCS/PLC?
3. Does the communication to other tags at this DCS/PLC work? How are these 'successful' tags addressed?
4. Do you get the error message for every tag/data point in the transfer record? Re-initialize the transfer record by switching it OFF and ON.
For resolving this issue, more specific (error) information is required, which can be obtained as follows:
- The cimio_msg.log on the Aspen Cim-IO client and server will provide more detailed messages. In many cases, the error code will be followed by a more meaningful error message. Chapters 11 and 12 of the Aspen Cim-IO User's Guide provide more details.
- The Aspen Cim-IO test utility will test low-level communication, bypassing the Aspen InfoPlus.21 core components. The Aspen Cim-IO test utility is described in the Aspen Cim-IO User's Guide. When doing the test on both the Aspen Cim-IO client and server side, please do so for various tags and compare the resulting messages. Based upon this, you can conclude whether you are suffering from an Aspen Cim-IO server, client or Aspen InfoPlus.21 issue.
Keywords: USR_GET_RECEIVE, Error receiving a GET reply, CIMIO
References: None
Problem Statement: The error codes from SQLPlus are short and don't always give enough information to diagnose the problem.
Solution: Error codes with long descriptions are listed below.

Error Code  Error Message
Acttsk  "Error activating \"%s\":%s"
AsnInv  "Invalid target for assignment: %s"
Bar  "Invalid concatenation operator"
Bin_Const  "Invalid character in binary constant: \"%s\""
ByExp  "There must be at least two expression selected before a BY clause"
ByMax  "Too many columns selected from SELECT..BY table"
BySet  "BY expression must be a set function"
CallParams  "Invalid number of parameters passed to function \"%s\""
Col_Dcol  "Column \"%s\" exists in more than one table"
Col_Dpseudo  "Pseudo column \"%s\" exists in more than one table"
Col_Dtab  "Table \"%s\" is not unique"
Col_Fid  "Error reading field ID:%s"
Col_Ind_Name  "Field \"%s\" not found"
Col_Indirect  "Invalid indirect column"
Col_Inv  "Invalid column"
Col_Ncol  "Column \"%s\" not found"
Col_Ncolvar  "Column or Variable \"%s\" not found"
Col_Npseudo  "Pseudo column \"%s\" not found"
Col_Ntab  "Table \"%s\" not found"
Col_Repeat  "Column \"%s\" is in a different repeat area"
Connect  "Failed to connect to link %s:\n %s"
CreateView  "Error writing VIEW: %s"
CvBit  "Invalid BIT value: \"%s\""
CvDelta  "Invalid Delta time value: \"%s\""
CvField  "Invalid FIELD value: \"%s\""
CvInt  "Invalid INTEGER value: \"%s\""
CvReal  "Invalid REAL value: \"%s\""
CvRecord  "Invalid RECORD value: \"%s\""
CvRowid  "Invalid ROWID value: \"%s\""
CvSelect  "\"%s\" invalid value for select record %s"
CvTime  "Invalid TIMESTAMP value: \"%s\""
DbNotFound  "Database \"%s\" not found in SETCIMDATABASES"
DbParam  "Invalid database parameter: \"%s\""
DbsFile  "Failed to open SETCIMDATABASES"
DelHist  "Historical repeat area not allowed in DELETE"
DelRec  "Error deleting record \"%s\": %s"
DelSingle  "Single table expected for DELETE"
Err_AgFieldId  "Invalid value for FIELD_ID column"
Err_AgPeriod  "Invalid value for PERIOD column"
Err_AgRequest  "Invalid value for REQUEST column"
Err_FileFun  "FILE function can only be used in the FROM clause of a query"
Err_InsertAs  "INSERT..AS must use a definition record"
Err_InvBracket  "Unexpected \"(\""
Err_NoAggr  "AGGREGATES table not available on this version of InfoPlus.21"
Err_NoProcRS  "Procedure \"%s\" is used in the context of a query but has no result set"
Err_SubMonth  "Result timestamp out of range"
Exp_Degree  "Too many expressions in list"
Exp_Flds  "Error reading field:%s"
Exp_List  "Number of expressions on each side of \"=\" are different"
Exp_Type  "Expecting \"%s\" expression"
Expecting  "Expecting %s"
External  "%s"
FailedLink  "Database link \"%s\" failed"
FileWrite  "Failed to write to file - Check disk full."
FmtDate  "Invalid date when converting CHARACTER to TIMESTAMP"
FmtFmt  "Invalid format item \"%s\" when converting CHARACTER to TIMESTAMP"
FmtMiss  "%s field missing when converting CHARACTER to TIMESTAMP"
FmtMonth  "Invalid month: \"%s\""
FmtRange  "%s field out of range when converting CHARACTER to TIMESTAMP"
FmtTime  "Invalid time when converting CHARACTER to TIMESTAMP"
ForVar  "FOR variable must be a local INTEGER"
FunDup  "Function \"%s\" is already declared"
GlobalDup  "Variable \"%s\" is already declared"
Grp_Col  parameter of a set function"
Grp_Fun  "Set function \"%s\" not allowed as the parameter of another set function"
Grp_Group  "Set function \"%s\" not allowed in a GROUP BY list"
Grp_On  "Set function \"%s\" not allowed in a ON condition"
Grp_Where  "Set function \"%s\" not allowed in a WHERE condition"
Hex_Const  "Invalid character in hexadecimal constant: \"%s\""
Id_NewLine  "Closing double quote missing for: %s"
InsCreate  "Error creating record:%s"
InsDefInv  "Definition record not allowed in repeat area INSERT"
InsDupDef  "Duplicate DEFINITION specified in INSERT"
InsFree  "No free record ID after %d for INSERT or CREATE"
InsHisWrite  "Error inserting into history:%s"
InsIncr  "Error increasing repeat area count:%s"
InsInv  "Invalid item in INSERT column list: %s"
InsInvHsn  "Invalid HISTSEQNUM value"
InsInvId  "NULL or invalid RECID value in INSERT"
InsInvOccnum  "Invalid OCCNUM value"
InsMoreCols  "More columns than values specified in INSERT"
InsMoreVals  "More values than columns specified in INSERT"
InsNoDef  "DEFINITION not specified or invalid in INSERT"
InsNoFree  "No free occurrence for INSERT"
InsNoName  "NAME not specified in INSERT"
InsNoSeq  "OCCNUM
InsRdFmt  "Error reading format field:%s"
IntoCol  "Expecting column name in INTO list"
IntoDef  "SELECT...INTO table cannot be a definition record"
IntoFix  "Fixed area field \"%s\" cannot be used in an repeat area INTO list"
IntoHist  "SELECT...INTO cannot specify historical repeat area fields"
IntoMoreCols  "More columns than values specified in SELECT...INTO"
IntoMoreVals  "More values than columns specified in SELECT...INTO"
IntoSc  "SELECT...INTO table is not a local table"
Inv_Char  "Invalid character: %s"
Inv_Cond  "Invalid condition: %s"
Inv_Exp  "Invalid value expression: %s"
Inv_Int  "Invalid integer: \"%s\""
Inv_Macro  "Invalid macro variable"
Inv_Minutes  "Number of minutes out of range: %s"
Inv_Real  "Invalid real number: \"%s\""
Inv_Seconds  "Number of seconds out of range: %s"
InvBind  "Bind variables (\"?\" or \":var\") are only allowed in API programs"
InvJoinTok  "Invalid join operation"
LinkNoComma  "Comma missing in \"%s\" line in SETCIMLINKS"
LinkNotFound  "Link \"%s\" not found in SETCIMLINKS"
LinksFile  "Failed to open SETCIMLINKS"
Max_Macro  "Macro variable name too long"
Max_Name  "Name too long: %s"
Max_Number  "Number too long: %s"
Max_Param  "Parameter number too long"
Max_String  "Character constant too long: %s"
MaxStr  "Maximum string length exceeded"
Memory  "Memory Full"
Mrdboccs  "Error reading memory repeat area:%s"
MultiOrder  "Multiple ORDER BY clauses cannot be applied to a single query"
NestView  "View \"%s\" is included in its own definition or is too deeply nested"
Network  "Network failure on link %s:\n %s"
No_Field  "Record and Field \"%s\" not found"
No_Fun  "Unknown function: \"%s\""
No_Record  "Record \"%s\" not found"
Nodisk  "Disk history not running"
NoRowid  "Complex remote UPDATE or DELETE requires row identification field"
NotField  "\"%s\" is not a valid field name record"
NotTable  "\"%s\" is not a valid TABLE"
NotUpdatable  "VIEW or TABLE is not updatable"
NotView  "\"%s\" is not a valid VIEW"
OpiFun  "Failed to find function \"opi_register\" in OPI driver \"%s\""
OpiLoad  "Failed to load OPI driver \"%s\""
OpiSlot  "No free slot for OPI driver \"%s\""
Order_Int  "ORDER BY item %d out of range"
Out_Line  "Maximum output line length exceeded"
OutFile  "Failed to open \"%s\" for output"
Overlap  "OVERLAPS needed a pair of values"
ProcNest  "FUNCTION declarations cannot be nested"
QLine  "Error writing OUTPUT_LINE from \"%s\":%s"
QNumLines  "Error reading #OUTPUT_LINES from \"%s\":%s"
QParam  "Column name expected for QLEVEL"
QSetcim  "Column of QLEVEL
QType  "Integer
Rdbvals  "Error reading fixed area fields:%s"
Read_File  "Error reading file \"%s\""
ReadOnly  "Database write not allowed in read only mode"
Remote  "Remote database error on link %s:\n %s"
Rhisdatax  "Error reading historical repeat area:%s"
SelHsn  "HISTSEQNUM can only be selected from an historical repeat area table"
SelOcc  "OCCNUM can only be selected from a repeat area table"
Seloccs  "Error reading repeat area fields from format record:%s"
Selrdb  "Error reading fixed area fields from format record:%s"
Star_Ids  "Error reading field IDs:%s"
Star_Names  "Error reading field names:%s"
Star_Type  "Invalid data type: %d"
StartFile  "START failed to open file \"%s\""
StartLine  "Error reading QUERY_LINE from \"%s\":%s"
StartNLine  "Error reading #QUERY_LINES from \"%s\":%s"
Stop  "Query stopped by user"
Str_NewLine  "Closing quote missing for: %s"
String_Max  "Maximum string length exceeded"
Subq  "Sub-query expected after SOME
SysFile  "SYSTEM failed to open temporary file"
SysPipe  "SYSTEM failed to create pipe"
SysRead  "SYSTEM commands and file writes disabled"
SysSub  "SYSTEM failed to create sub process"
Tab_Break  "BREAK cannot appear after CALCULATE in a SELECT"
Tab_Correspond  "The are no corresponding columns in the query expressions"
Tab_Degree  "Each row in a VALUES list must contain the same number of expressions"
Tab_File  "Text file table \"%s\" not found"
Tab_Flds  "Error reading definition record (ID=%d) repeat area fields:%s"
Tab_Id  "InfoPlus.21 record \"%s\" does not exist"
Tab_Id  "SETCIM record \"%s\" does not exist"
Tab_Name  "Table name expected"
Tab_Nflds  "Error reading definition record (ID=%d) fixed fields:%s"
Tab_Query  "%s is not valid as a query expression"
Tab_Repeat  "Record \"%s\" does not have %d repeat areas"
Tab_Type  "%s is not valid as a table expression"
Tab_Union  "Each sub-expression in a %s must have the same number of columns"
Timeout  "Query timeout"
Timerange  "Timestamp \"%s\"
Timestamp  "Invalid timestamp \"%s\""
TmpDup  "Temporary Table \"%s\" is already declared"
TmpMem  "No free memory to INSERT into temporary table"
TmpName  "Temporary Table \"%s\" is not declared"
TransName  "Unknown SQL transport name \"%s\" in SETCIMLINKS"
UpdInv  "Cannot update %s"
UpdMix  "Fixed area field \"%s\" cannot be updated in a repeat area table"
UpdQuality  "Value field must be set if quality function is set in UPDATE"
UpdShort  "Value out of range for short integer in \"%s\""
UpdUnuse  "Error making record \"%s\" unusable:%s"
UpdUsable  "Error making record \"%s\" usable:%s"
UpdWrite  "Error writing to \"%s\": %s"
User  "%s"
ValueSubq  "Value subquery can only select a single value"
VarNest  "Local variable declarations must be at the start of a FUNCTION"
ViewCols  "VIEW columns do not match VIEW query"
Where_Type  "JOIN USING column has different data types in the joined tables"
Keywords: None
References: None
Problem Statement: The file "valid.err" does not exist immediately after installing Aspen Process Controller Online on a new PC. At what point is this file created?
Solution: The file "valid.err" will be generated in the following folder after an error happens. The file location varies with the platform OS:
Windows Server 2003 R2 SP2: C:\Documents and Settings\All Users\Application Data\AspenTech\APC\Online\etc
Windows Server 2008 R2: C:\ProgramData\AspenTech\APC\Online\etc
Keywords: valid.err, Windows Server 2003 R2 SP2, Windows Server 2008 R2
References: None
Problem Statement: I am trying to open any datasheet from the Excel Datasheet Editor, but the program fails to run properly within the Datasheet Explorer.
Solution: The Excel Datasheet Editor fails to run because the AZExplorer program is running in compatibility mode, as shown below. The option 'Run this program in compatibility mode for:' should be un-checked in order to avoid this issue. To locate the AZExplorer.exe file and change this option, follow this procedure:
1. Go to the following directory: C:\Program Files\AspenTech\Basic Engineering VX.X\UserServices\bin
2. Select and right-click the 'AZExplorer.exe' file.
3. Select 'Properties' from the right-click menu list.
4. Once the Properties dialogue is open, go to the 'Compatibility' tab and un-check the 'Run this program in compatibility mode for' option.
5. Once this option is un-checked, click on the 'OK' button.
6. The issue of opening a datasheet from the Explorer will now be solved.
Keywords: Excel Datasheet Editor fails to run, Compatibility Mode, AZExplorer.exe.
References: None
Problem Statement: Trouble viewing Connectors in the Stockpile, resulting in a freeze in the Zyqad PFD. This is fixed in Zyqad 11.1 SP1 but may occur with older databases. The problem is that the full path to the connector symbols could be set as the default-symbol attribute; if the Symbol Path was then changed (e.g. a change of server), it could become invalid. The Stockpile (which uses the default-symbol) would then search the Symbol Library for the symbol, which would slow it down so much that it could seem to have frozen.
Solution: Compile the attached kb and load it into the workgroup. Run the rule "Fix Connection Default Symbol" to correct the default symbol path. The rule assumes the symbol files for connectors have the same name and location as in the standard Symbol Library. If these have been changed, the entries in the table in the kb file will need to be edited to reflect the settings for the workgroup. There is also a report (connection-report) included in the kb file that can be used to check the default settings for the connections if it is suspected that this problem may have occurred.
Keywords: Zyqad PFD, Stockpile, Default-Symbol, Symbol Path, Connector
References: None
Problem Statement: How can I change the properties of primary-connection objects that have already been placed on the PFD, all at once via a rule?
Solution: Please find the attached rule that will prompt the user to select the class (i.e. primary-connection, secondary-connection, etc.). Then the rule will prompt the user to select the desired line width and color.
Installation Instructions:
Compile the KB source file (*.kb) with the Development Application Tool:
a. Start the Development Application.
b. Select the Prototypes => Load Prototype Store menu option and from the file selection dialog select a prototype store file: c:\?\aspentech\working folders\aspen zyqad\projects\libraries\prototypes\prototypes.prot
c. Select the KB => Compile Knowledge Base menu option and then from the file selection dialog select the source KB file. The compiled file is written to the same directory as the source file and will have the same name but with the extension '.fsl'. The compiled file should be moved to the project's library directory (i.e. C:\Program Files\AspenTech\working folders\aspen zyqad\projects\libraries\kbs).
Then, there are 2 methods of adding a compiled file to a project:
a. Add a KB entry to a project's .CFG file. Each project has a configuration file (.CFG) which contains a section resembling:
[kbs]
KB = "mgmt" "filters" "naming" "sort" "component-sum" "hcst" "holds" "masbal" "pipingsystems" "SPPID-Export"
KB = "connection-size-color"
Each entry represents a compiled KB file which will be loaded into the relevant project at startup. This method is used on production projects when the KBs are stable.
b. Load the file through the Trace Window. A compiled KB file can be added to a running project from the Trace Window of any connected client machine. The steps involved are summarised:
1. From the Aspen Zyqad client select the Tools => Trace Window menu option.
2. From the Trace Window select the Load => Load File menu option. In the dialog box displayed, enter the full path to the compiled file relative to the server machine (e.g. C:\Program Files\AspenTech\working folders\aspen zyqad\projects\libraries\kbs\filter).
In this way, a KB file can be loaded into a running project without having to close the project and reopen it. This can speed up development times dramatically.
Running the Rule:
A rule can be launched from either the main Aspen Zyqad window (Run => Rule) or the Drawing Application (Tools => Rule). The desired rule can then be selected from the list displayed (i.e. "Change the width/color for connections").
Note: If you run this rule with the diagrams open, you will see the changes occurring live, or you can just fire it off from the workbench. I did notice that when run from the PFD the question boxes do not display all of the dumb text properly in 10.3. This is a problem that has been fixed for 11.1. Please keep in mind that this is just a sample of a couple of graphic elements that can be manipulated via the rule language. These and any number of changes to the graphics can be effected by writing similar rules.
Keywords: Rules, Rule Language, Graphic Element, PFD, RAD, connection object
References: None
Problem Statement: What can cause an out of memory error with regard to an Aspen Process Controller application?
Solution: There is one "legitimate" way you can run out of memory: if you have configured the history retention interval too long. For each application that is running, the application history is kept in memory. You can control how much history is kept in the Configure This Server dialog. If it is keeping only a few days of history, then that is most likely not the problem. If it is set to something much longer, it might be an issue. It is the History Retention period in this dialog that will impact memory usage. As a rough guide, you can estimate the memory requirement for the application history by multiplying together the following factors:
(# Days) x (Cycles per Day) x (# Applications) x (Variables per App) x (Historized Entries per Variable) x (Storage per Entry)
# Days: 20 in this case
Cycles per Day: 1440 for a 1-minute controller
# Applications: site-specific
Variables per App: site-specific
Historized Entries per Variable: a rough guess would be 20; it varies with the type of variable and the type of controller (FIR, MIMO, MISO)
Storage per Entry: varies, but most common would be 20 bytes
By way of example, two controllers running once a minute with about 40 variables each would yield: 20 x 1440 x 2 x 40 x 20 x 20 = 921,600,000 bytes = 878 MB. This is probably too big. All history is held in memory by the RTEService.exe program. (You can use the Task Manager to see how much memory it is consuming.) Although the system may not have run out of memory, the heap size for a single .NET application cannot exceed 1 GB, and may be less depending on fragmentation. A good rule of thumb would be to limit the history (estimated by the above technique) to about 500-600 MB. We would also recommend running "perfmon" (Control Panel -> Administrative Tools -> Performance Monitor) and tracking the memory usage of the various RTEApplication.exe and RTEService.exe programs running on the online server. You can set the collection interval at every 10 minutes so that the time window is long enough (12-16 hours) to see a trend in the memory usage. We typically look at Private Bytes and Virtual Bytes for each process.
Keywords: None
References: None
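The estimate above can be written compactly. A sketch in LaTeX follows; the symbols D, C, A, V, H, S are shorthand introduced here for the six factors, not AspenTech notation:

% Rough application-history memory estimate
\[
M \approx D \times C \times A \times V \times H \times S
\]
% Worked example from the text: D=20 days, C=1440 cycles/day, A=2 applications,
% V=40 variables, H=20 historized entries, S=20 bytes per entry
\[
M \approx 20 \times 1440 \times 2 \times 40 \times 20 \times 20
  = 921{,}600{,}000 \ \text{bytes} \approx 878\ \text{MB}
\]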
Problem Statement: How does Aspen Plus calculate ideal gas enthalpy, entropy and Gibbs energy?
Solution: For a pure component:
Ideal gas enthalpy: HIG = DHFORM + Integral (CPIG dT) from Tref to T, where Tref is 25 C
Ideal gas entropy: SIG = (DHFORM - DGFORM)/Tref + Integral (CPIG/T dT) from Tref to T
Ideal gas Gibbs free energy: GIG = HIG - T * SIG
Note that it is important to check the rules of extrapolation for the ideal gas heat capacity (CPIG) model outside the lower and upper temperature limits. For example, the CPIG model at T<Tmin results in CPIG=0; however, this is not the case when the CPIGDP model is used. See Solution 114978 for more details about extrapolation.
For mixtures:
Ideal gas enthalpy: HIG for mixtures is the mole fraction average of the pure component ideal gas enthalpies (HIG)
Ideal gas entropy: SIG for mixtures = mole fraction average of SIG - R * SUM (xi * ln xi)
Ideal gas Gibbs free energy: GIG for mixtures = HIG for mixture - T * SIG for mixture
The reference state for thermodynamic properties is the elements in the standard state at 25 C and 1 atm.
Keywords: None
References: None
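The same relations in equation form, as a LaTeX restatement of the text above (the quantities correspond to the Aspen Plus parameters DHFORM, DGFORM, and CPIG):

\[
H^{ig} = \Delta_f H^{\circ} + \int_{T_{ref}}^{T} C_p^{ig}\,dT, \qquad T_{ref} = 298.15\ \mathrm{K}
\]
\[
S^{ig} = \frac{\Delta_f H^{\circ} - \Delta_f G^{\circ}}{T_{ref}} + \int_{T_{ref}}^{T} \frac{C_p^{ig}}{T}\,dT,
\qquad G^{ig} = H^{ig} - T S^{ig}
\]
% For a mixture with mole fractions x_i:
\[
H^{ig}_{mix} = \sum_i x_i H^{ig}_i, \qquad
S^{ig}_{mix} = \sum_i x_i S^{ig}_i - R \sum_i x_i \ln x_i, \qquad
G^{ig}_{mix} = H^{ig}_{mix} - T\,S^{ig}_{mix}
\]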
Problem Statement: Can the Target value or variable bounds in a RadFrac Design Specification be manipulated using a Sensitivity block or Calculator (Fortran) block?
Solution: Yes - RadFrac's internal design-spec variables are available to the flowsheet define variables used by Calculator, Design-Spec and Sensitivity Analysis blocks. The list of define variables for RadFrac is rather long, but you can find the variables quickly if you drag the scroll button down to about 50%, and then slowly scroll down until you find:
LB - Lower bound for the RadFrac VARY (associated with the Design-Spec)
UB - Upper bound for the RadFrac VARY (associated with the Design-Spec)
VALUE - The target value for the RadFrac Design-Spec (located about 5 items below the LB/UB items)
NOTE: The values used for the above 3 variables MUST always be in SI units.
Below is a sample of accessing the RadFrac design-spec target value via a Sensitivity block's Input / Vary sheet. Specify the manipulated variable as follows:
Type: Block-Var
Block: Block ID of the RadFrac block
Variable: VALUE
ID1: Design specification number (i.e. 1 for the 1st spec in this RadFrac block, etc.)
Keywords: RadFrac, Design Spec, Sensitivity
References: None
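The same target can also be set from outside Aspen Plus through its COM automation interface. Below is a minimal VBA sketch; CreateObject, InitFromArchive2, FindNode, and Run2 are standard Aspen Plus automation calls, but the node path string and the block/file names are assumptions for illustration only. Verify the exact path with the Variable Explorer before relying on it.

' Minimal sketch: set a RadFrac design-spec target (VALUE) via COM automation.
' The node path and file name below are illustrative, not guaranteed.
Sub SetRadfracDesignSpecTarget()
    Dim aspen As Object
    Dim node As Object

    Set aspen = CreateObject("Apwn.Document")      ' start an Aspen Plus document
    aspen.InitFromArchive2 "C:\temp\column.bkp"    ' hypothetical backup file

    ' Hypothetical path to design-spec 1 target of RadFrac block B1
    Set node = aspen.Tree.FindNode("\Data\Blocks\B1\Input\VALUE\1")
    If Not node Is Nothing Then
        node.Value = 0.995                         ' remember: SI units
    End If

    aspen.Engine.Run2                              ' rerun the simulation
End Sub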
Problem Statement: How do I get information on internal column fluids such as trays, strippers and pumparounds? Where do I connect/specify a Pseudo Stream?
Solution: A Pseudo Stream can be used to track the composition, temperature, pressure and flow of any column fluid (on a tray, in a stripper, or in a pumparound). Start by connecting a material stream to the Pseudo Stream connection port (usually located on the right center of most column icons). You can then either hit the Next button or navigate to the PetroFrac / Report form in the data browser. Use the Pseudo Streams sheet to specify the location. The data entry form is vastly improved in Aspen Plus 10. There are separate dialogs for trays, strippers, and pumparounds. The procedure for connecting and specifying a Pseudo Stream:
1. Connect a material stream to the Pseudo Stream port of the PetroFrac block. Let's name the block B1 and the stream S1.
2. In the data browser, go to Blocks\B1\Report, then click the Pseudo Streams tab to see the specification form.
3. In the View field, select stream S1.
4. Select one of the three possible choices: Main column internal flow, Pumparound flow, or Stripper flow.
5. According to the selection in step 4, the specifications are:
Main column internal flow: Stage, Phase
Pumparound flow: Pumparound ID, Conditions
Stripper flow: Stripper ID, Source*, Stripper Stage, Phase, Draw Stage, Conditions
* Depending on the Source selection, some fields may be active or inactive. The prompted message indicates the action.
Pseudo Streams are handled as material streams except that they do not actually participate in the PetroFrac block material balance calculation. They are used for reporting purposes only. The user can attach multiple Pseudo Streams, with each representing a different internal flow.
Keywords: petrofrac, pseudo, pseudostream, internal flow, stream, column, pumparound, stripper, stage, composition
References: None
Problem Statement: The NRTL and UNIQUAC property models can describe vapor-liquid equilibrium (VLE) and liquid-liquid equilibrium (LLE) of strongly nonideal solutions with the use of binary interaction parameters. Many binary parameters for VLE and LLE, from literature and from regression of experimental data, are included in the Aspen Physical Property System databanks. Is it possible to use either the NRTL or UNIQUAC models with two different sets of binary parameters (regressed from different data) in the same calculation run?
Solution: You can use separate data sets for the NRTL (or UNIQUAC) binary parameters to model properties or equilibria at different conditions. It is also possible to use one data set for VLE and a second data set for LLE (use NRTL and NRTL-2, or UNIQUAC and UNIQ-2). The property methods are identical except for the data set number they use. For example, you can use these property methods in different flowsheet sections or column sections. The attached example file (NRTL-2.bkp) models a butanol - water separation system. The chosen global property method is NRTL, for which binary parameters are retrieved from the VLE-IG databank (i.e. parameters regressed from vapor-liquid equilibrium data). This property method is specified on the Data\Properties\Specification\Global sheet, and will be used by all of the unit operations in the simulation. However, in this simulation there is a decanter to separate butanol and butyl-acetate from the water phase. For this liquid-liquid separation, it would be more accurate to use binary interaction parameters (for NRTL) that have been regressed from liquid-liquid equilibrium data rather than vapor-liquid equilibrium data. Binary parameters regressed from liquid-liquid equilibrium data for these components are available in the LLE-ASPEN databank. It is possible to configure the decanter to use the NRTL property method with the LLE-ASPEN databank, and leave the remaining unit operations (RadFrac columns) to use NRTL with the VLE-IG databank. In the Data\Properties\Specification\Global sheet, the base method is specified as NRTL. Change this base method to NRTL-2, which uses NRTL with a second set of binary parameters. In the Data\Parameters\Binary Interaction\NRTL-2 sheet, select the source databank to be LLE-ASPEN for the mixtures butanol/water and butyl-acetate/water. The Data\Parameters\Binary Interaction\NRTL-1 sheet still contains the mixture parameters from the VLE-IG (default) databank. Change the global base method back to NRTL. In the decanter block, select the Properties folder (Data\Blocks\Decanter\Properties) and on the Phase Property sheet, select the property method to be NRTL-2 for both liquid phases. With this configuration, the binary parameters retrieved from the LLE-ASPEN databank will be used with the NRTL property method to calculate the decanter separation. By leaving the default global property method as NRTL, the parameters retrieved from the VLE-IG databank will be used for the remaining two RadFrac columns.
Keywords: NRTL-2, UNIQUAC-2, binary parameters, VLE, LLE
References: None
Problem Statement: How does move resolution work in Smart Step/Calibrate mode?
Solution: When in SmartStep/Calibrate mode, the engine will calculate a MOVRES based on the CV Test Margin and the model gain matrix in such a way that the MV will be able to make a move should the relevant CV be one Test Margin outside the limit. The engine will then use the smaller of the user entered value and the calculated value. When in SmartStep/Calibrate mode, if a CV violates its constraint for a sustained time period (1/6 of TTSS), the Soft Move Resolution will be reset to 0 and will stay at 0 for a certain time period (1/6 of TTSS). MV has a Hard Move Resolution if and only if its MOVRES value is an integer and both its Operator Upper and Lower Limits (ULINDM/LLINDM) are integers; if one of them is not an integer, it is treated as a Soft Move Resolution which means that the engine can modify this number internally, if needed. Keywords: SmartStep Calibrate Move Resolutions References: None
Problem Statement: How can I create a phase plot in Aspen Custom Modeler?
Solution: To display the variation of two variables against each other, a phase plot can be used. The starting step is to create a time series plot. Then the variables to be plotted against each other should be dropped onto the plot form. Right-click on the plot form, select Properties, and then select the Variable tab. Change the default x-axis selection to a specified x-axis variable and then, in the field next to it, type the variable's name (or copy and paste it). Click the "Apply" or "OK" button to change the plot to a phase plot.
Keywords: phase plot, plot
References: None
Problem Statement: In air coolers, when specifying the fouling for the tube side, as displayed in the screen shot, and then upon running the simulation we may see different rounded up values for the fouling coefficient in the result summary | performance sheet.
Solution: The fouling on the tube side is not rounded up. The difference between input and output is due to the fouling definition. As the online help shown below explains, the input fouling is based on the tube ID, but the output is based on the bare tube OD.
Keywords: Fouling factor, TEMA
References: None
Problem Statement: Aspen Plus document (APW) files are not upwardly compatible so they must all be converted into backup (BKP) files. What is the easiest way to convert a large number of APW files into the BKP format - is there any automated tool?
Solution: As documented in the Aspen Plus 10.2 System Management Guide at page 1-2, you can convert a file from the APW format to the BKP format in the Graphical User Interface (GUI) using the Export command from the File menu. The "Export to BKP" command is also available from the commands you get when clicking the right mouse button (RMB) on an APW file in the Windows Explorer. Another possibility is to call APWN.EXE with the option /b and the name of the APW file as argument. The attached t.bat batch file provides a shortcut for that operation (assuming Aspen Plus 10.2 is installed in the default folder on the D: drive): t.bat c:\test\ATE.APW. The command in the batch file is: "D:\Program Files\AspenTech\Aspen Plus 10.2\GUI\xeq\apwn.exe" /b "%1". NOTE: you need to adapt that if Aspen Plus is installed in a different location on your system. Still another possibility is using the COM interface. The attached VBScript (for Aspen Plus 10.2) can be run from the command line (save as t.VBS); it takes as single argument the name of the APW file with complete path and extension: t.vbs c:\test\ATE.APW. To use this script, you need to be running Windows 2000 or you need to have installed the scripting host (see http://www.microsoft.com/msdownload/vbscript/scripting.asp). If you need to convert many APW files at a time (as is the case when you are preparing to migrate to a later version of Aspen Plus), you can prepare a script to call the BATCH file or the VBS command repeatedly. The file apwbackfu.bat will convert all of the APW files in a given directory. Note that it is also possible to save a BKP file automatically when saving an APW file, by selecting "Always create backup copy" on the General sheet of the Tools / Options dialog.
Keywords: Upgrade, Migration, mmbackup, mmrestore
References: None
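Since the attachments are not reproduced here, the following VBA sketch shows the general shape such a batch-conversion script can take. It is an illustration only: the folder path is hypothetical, and whether InitFromFile2 or InitFromArchive2 is the right open call for your file type and version should be checked against the automation documentation.

' Sketch: convert every .apw file in a folder to .bkp via the Aspen Plus
' COM interface. Folder path and open method are assumptions.
Sub ConvertApwFolderToBkp()
    Dim fso As Object, f As Object
    Dim aspen As Object
    Dim folder As String

    folder = "C:\test\"                            ' hypothetical folder
    Set fso = CreateObject("Scripting.FileSystemObject")

    For Each f In fso.GetFolder(folder).Files
        If LCase(fso.GetExtensionName(f.Name)) = "apw" Then
            Set aspen = CreateObject("Apwn.Document")  ' Aspen Plus document
            aspen.InitFromFile2 f.Path                 ' open the APW file
            aspen.SaveAs folder & fso.GetBaseName(f.Name) & ".bkp"
            aspen.Quit                                 ' release the engine
            Set aspen = Nothing
        End If
    Next f
End Sub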
Problem Statement: What is the procedure to reset the "Number Steps Performed" when using Calibrate in an APC/RTE application?
Solution: This solution applies only to applications deployed in the RTE platform (APC Builder). It applies to both Advanced Process Controllers and DMC3 controllers.
1. In the Edit Calculations window, move to User Entries.
2. Under General Variables, create a new User Entry called resetstep; the data type is an Int32, so it is an integer number. The default IO flags for user entries are IsInput and IsTuningValue. Change IsTuningValue to IsOperatingValue to avoid any overwriting during redeployment.
3. In the Input Calculations section, create a calculation called Resetstep.
4. This particular controller has 12 MVs, so it is necessary to create 12 parameters MVStep# to link to the Number of Steps Performed of each MV.
5. The calc script is as follows:
'This code will reset the number of steps performed for MVs
if resetstep = 1 then
resetstep = 0
MVStep1 = 0
MVStep2 = 0
MVStep3 = 0
MVStep4 = 0
MVStep5 = 0
MVStep6 = 0
MVStep7 = 0
MVStep8 = 0
MVStep9 = 0
MVStep10 = 0
MVStep11 = 0
MVStep12 = 0
end if
6. The calculation environment will create each one of the parameters. Bind the resetstep parameter to the resetstep User Entry and bind each MVStep# parameter to the Number Steps Performed of each MV (SS_NumberStepsPerformed).
7. Redeploy the controller.
8. Click on the application name; almost at the bottom is the resetstep user entry.
9. Change the entry value to 1 to initiate the reset calculation.
10. Wait one cycle and all counters are now 0.
Keywords: Reset, Number Steps Performed, Calibrate, DMC3, APC Builder, RTE
References: None
Problem Statement: How to adjust decimal digits shown in the process flowsheet variables?
Solution: To adjust the format of variables, go to File | Preferences | Formatting | Edit Variable Formats. In the Variable Formats dialog, select the "Result" object and then the desired variable you want to change. Then adjust the decimal digits by double-clicking on the variable or pressing the Format button. After adjusting your preferences, click OK to save your changes.
Keywords: fixed point, preferences, decimal digits, process flowsheet variables
References: None
Problem Statement: How to Print PFD in Aspen Flare System Analyzer V7.3
Solution: In order to print the PFD, go to the Process Flowsheet tab and click Print Preview. The print preview interface should show up. Go to File -> Page Setup and you should be able to set up the orientation and paper size. To print the Pressure/Flow Summary, go to the aspenONE button -> Print, check the desired options, and go to Preview again.
Keywords: Printer, Print preview, PFD
References: None
Problem Statement: Why does the equivalent length vary?
Solution: A change in equivalent length is possible; it happens in pipes that have fittings. Aspen Flare System Analyzer reports the equivalent length of a pipe as the specified pipe length plus the equivalent length of fittings etc.: L = Lpipe + Leq. The equivalent length term Leq is calculated from: Leq = (K*D)/f, where K = A + B*Ft; D = pipe diameter (the internal diameter in ft); f = friction factor (displayed under View | Result | Pressure/Flow Summary). The friction factor calculation is based on the Reynolds number and other pipe information. As the friction factor can vary from scenario to scenario, the equivalent length can vary as well.
Keywords: equivalent length, Reynolds number, loss coefficient.
References: None
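In equation form, as a direct restatement of the relations above:

\[
L = L_{pipe} + L_{eq}, \qquad
L_{eq} = \frac{K\,D}{f}, \qquad
K = A + B\,F_t
\]
% f depends on the Reynolds number, so L_eq changes whenever the flow
% conditions (and hence Re and f) change between scenarios.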
Problem Statement: What are the Set and Auto options on the Relief valve conditions tab?
Solution:
- Set: Calculates the data shown in the input box based on the values it depends upon. For example, relieving pressure = 1.10 x MAWP; clicking the Set button calculates it and sets it to that value. It is not updated later on.
- Auto: Changes the value whenever any changes are made on the Conditions tab. It gets updated any time a change is made on the relief valve conditions tab. For example, if you change the valve type from conventional to pilot, the changes will be updated in the corresponding places.
Keywords: Set, Auto
References: None
Problem Statement: How can I model a predominantly liquid-only relief system in Flarenet?
Solution: For a liquid-only scenario, usually there is no flow from the Separator Drum. Since there is no flow, Aspen FLARENET treats the composition of those lines as the same as the (liquid-only) composition in the Separator Drum. This can cause flash failures in the program. It can also result in high back pressures due to the static liquid head in the vessel. As a workaround, add a small gas flow so there is some flow to the stack. There should be a sufficient amount of gas to make this effective. Alternatively, a less soluble gas like hydrogen can be used. Finally, use the "Isothermal Gas" pipe correlation for pipes downstream of the separator drum, since this correlation does not include the static head term. If PH flash failures are noticed, this can be caused by downflow head recovery in pipes further upstream. Essentially, the pressure downstream of the separator drum is set to atmospheric, so any liquid-filled pipes with an elevation >10 m will have sub-atmospheric pressures, and this is the cause of the flash failures. To avoid this problem you can set the "Ignore DownFlow Head Recovery" option to "Yes" for all pipes (Build | Pipes, select all pipes | Edit); then either option above should work fine (i.e. a small gas flow or the isothermal gas method for pipes downstream of the separator drum).
Keywords: Liquid-filled relief, PH flash failure, high back pressures
References: None
Problem Statement: I am designing my body flanges with ANSI as the Design standard. However, when I run the simulation, the code calculations under Code Calculations | Body Flanges do not appear. How do I see the code calculations for ANSI body flanges?
Solution: ANSI flanges are predesigned, so no calculations are needed. If you still want to see the code calculations click on Update Geometry after running. This will copy the ANSI geometry as input. Then change the Design Standard to Optimized in the Body Flanges | Individual Standards form. Note that in version 8.0 and 8.4, Update Geometry is under the Run menu. Now the program will use the ANSI flange geometry and provide calculations. Please consider that you will receive warnings because ANSI geometries and Appendix 2 do not agree. Keywords: ANSI, body flanges References: None
Problem Statement: How do I activate the ADSA Client Config Tool from a Windows command prompt?
Solution: Open a command window and execute the following command at the prompt.            C:\Windows\SysWOW64\regsvr32.exe /i /n /s "C:\Program Files (x86)\Common Files\Aspentech            Shared\Adsa\AtDsaLocator.dll" Note: Depending on the Operating System the exact paths in the command may be different. Keywords: References: None
Problem Statement: As explained in solution 123069, quality statuses for a CIM-IO for OPC interface are defined following the OPC standard. This solution provides the deeper definitions of these statuses, which can be useful to determine whether an issue with a tag is related to an OPC problem or to the CIM-IO configuration.
Solution: The following information is taken from the OPC DA standard; the complete standard can be consulted on the OPC Foundation web page at https://opcfoundation.org/developer-tools/specifications-classic. The quality flag is represented by 8 bits arranged as QQSSSSLL. The first two bits (QQ) are used by CIM-IO to set the value displayed by the IO_Data_Status field. These values, defined by the OPC standard, are as follows. The Uncertain status is mapped in CIM-IO as Suspect. Each of these statuses also provides a substatus value (the next four bit locations, SSSS) which gives more information about the status of the tag value; this value is displayed by the IO_Data_Status_Desc field. The substatuses defined by the OPC standard are as follows, for the BAD quality status, for the Uncertain (Suspect) quality status, and for the Good quality status. CIM-IO will also use Bad Tag as a substatus of a Bad status when the tag name could not be located on the OPC server; this means the tag does not exist, the tag name is not properly written, or the tag path syntax needs to be different. The OPC standard allows the use of custom status messages; for other tag statuses and substatuses not specified in this list, please consult your specific OPC vendor for the details of what the status means.
Keywords: CIM-IO for OPC, Quality status, IO_Data_Status, IO_Data_Status_Desc
References: None
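Per the OPC DA specification, the quality byte can be decoded with simple bit masks: bits 7-6 are the quality (QQ), bits 5-2 the substatus (SSSS), and bits 1-0 the limit field (LL). A minimal VBA sketch of that decoding is shown below; the Good/Bad/Uncertain codes come from the OPC DA specification, the Uncertain-to-"Suspect" mapping follows the CIM-IO convention described above, and the function name is ours, not part of any AspenTech library.

' Decode an OPC DA quality byte (QQSSSSLL) into its three fields.
Function DescribeOpcQuality(ByVal quality As Integer) As String
    Dim qq As Integer, ssss As Integer, ll As Integer
    Dim qName As String

    qq = quality And &HC0          ' bits 7-6: quality
    ssss = (quality And &H3C) \ 4  ' bits 5-2: substatus
    ll = quality And &H3           ' bits 1-0: limit status

    Select Case qq
        Case &HC0: qName = "Good"
        Case &H40: qName = "Uncertain (Suspect in CIM-IO)"
        Case &H0:  qName = "Bad"
        Case Else: qName = "Unknown"
    End Select

    DescribeOpcQuality = qName & ", substatus=" & ssss & ", limit=" & ll
End Function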
Problem Statement: How do you determine whether a port is being blocked by a firewall, given that Aspen ME Systems products require certain port(s) to be 'opened' in the firewall to allow bi-directional communications? An example of this is Cim-IO, which uses a set of four TCP ports to establish bi-directional communications between the Cim-IO server and client systems. If the Aspen log files for Aspen Cim-IO and Aspen InfoPlus.21 indicate that there is no communication between the Cim-IO server and the Aspen InfoPlus.21 server (Cim-IO client) for the ports designated in the Windows services file:
Explore 47637/tcp
Explore_SC 47638/tcp
Explore_ST 47639/tcp
Explore_FW 47640/tcp
then it is possible that these ports are not opened in the firewall and this is the cause of the issue.
Solution: The key to diagnosing the issue is to use the Windows netstat command to find whether connections for the communications ports have a LISTENING or ESTABLISHED status; you can do this by issuing the following commands at the DOS prompt:
netstat -an |find /i "listening"
or
netstat -an |find /i "established"
By matching the TCP ports listed in the Windows services file with the TCP ports in the output of the netstat command, and by confirming the ports have a LISTENING or ESTABLISHED status, you will be able to determine whether the ports are being blocked by a firewall. Remember to check the ports on each computer that is involved in the bi-directional communication.
Keywords: firewall, TCP/IP, services, sockets
References: None
Problem Statement: (Japanese) What standard conditions are used for Standard Gas Flow?
Solution: By default, the standard conditions used to calculate the Standard Gas Flow differ depending on the temperature unit used in the simulation. Please see the attached PDF file for details.
Keywords: 標準状態, Standard Condition, Standard Gas Flow, Japanese, 日本語
References: None
Problem Statement: Sulfur recovery is a core process in both midstream and downstream oil & gas industries. In Aspen HYSYS V9, Sulsim technology (by Sulphur Experts) has been fully integrated within Aspen HYSYS. In Aspen HYSYS V9, you can configure, simulate, and optimize the Sulfur Recovery Unit (Claus Process) to recover elemental sulfur from gaseous H2S, COS, CS2, and SO2. Since these gases are harmful to the environment and can affect petroleum product quality, achieving sulfur removal targets and satisfying regulatory requirements across a range of feeds and conditions are critical for operations.
Solution: The integrated Aspen HYSYS Sulsim product includes a new Sulsim (Sulfur Recovery) property package. When you add a Sulsim (Sulfur Recovery) property package to your simulation, the component list will automatically include all supported components, allowing you to easily integrate your sulfur recovery simulation with other gas plant processes. Keywords: Sulfur Recovery, Claus Process, Sulsim, Sulphur Recovery, SRU, Modified-Claus Process, Challenged Feed, Sulfur, Sulphur, H2S, COS, CS2, Reaction Furnace, Catalytic Converter, Tail Gas, Flare, Incinerator, Hydrogenation Bed, Waste Heat Exchanger, Condenser, Titania, Alumina, Selective Oxidation Converter, Sub-Dewpoint, Amine References: None
Problem Statement: What is the estimated cost for the refrigeration gas plant in HYSYS?
Solution: The attached file is a simplified version of a refrigerated gas plant. One of the requirements is that the sales gas hydrocarbon dew point should not exceed 15 degrees C at 6000 kPa. The cost estimation is done on the HYSYS side and the procedure is given below:
1) Activate the economic evaluation in HYSYS from the toolbar. You will need to choose the standard basis file for the units. If you prefer IP units you can choose US-IP or default. Parallel to the current HYSYS file, a project directory with the same name is created. Under this project directory, a scenario file with the default name "scenario1" is created automatically. Later on you can also open the file "scenario1" from Aspen Economic Evaluation.
2) Load data.
3) Map the equipment.
4) Size the equipment.
5) Evaluate cost.
6) View/edit equipment summary data.
For help on how to do the economic evaluation on the HYSYS side, see the solution at the link below:
http://support.aspentech.com/webteamcgi/SolutionDisplay_view.cgi?key=126587
In the summary you will see the operating cost and total capital cost for this refrigeration plant. Some of the equipment will be considered as quoted items by Aspen Economic Evaluation. You can edit the material cost from the economic evaluation side. You can also open the file "scenario1" under the project directory parallel to the current HYSYS file. You will see that the equipment is loaded, mapped and sized. You can run the evaluation from the Aspen Economic Evaluation side too, and compare the cost for the equipment.
Keywords: Economic, evaluation, cost, estimation
References: None
Problem Statement: Example of ActiveX Automation with EDR Shell&Tube.
Solution: The attached Excel VBA example illustrates how to interface EDR with your own program. This example populates some data in EDR Shell&Tube, runs the simulation and reads back some results. For simplicity the example uses a simulation file, also attached, that already contains most of the inputs, but the values entered by code show the process. In order to identify variable names, please use the "variable list" control. Please also make sure that the corresponding references to the EDR server components are correctly set in the Excel VBA editor. This is required in this example since it uses early binding.
Keywords: EDR, ActiveX Automation.
References: None
Problem Statement: How can I predict the low flammability limit for aqueous solutions?
Solution: Aqueous mixtures are often vulnerable to flammability problems. The flammability analysis must cover a broad range of compositions and temperatures, and experimental measurements to define safe ranges are time consuming and expensive [1]. Flammability analysis can be performed in Aspen Plus. An approach based on the work of Merck technologists is presented [2]. The calculation procedure in this application is to first compute the vapor composition of the aqueous solution in physical equilibrium with air, and then to react the organic component in the vapor with the available oxygen. The combustible concentration shall be maintained at or below 25 percent of the lower flammable limit (LFL) [3]. The predicted LFL will be the concentration at which the adiabatic flame temperature gets above 1400 K.
Fig 1. Flowsheet used to calculate flammability limits
Basic assumptions:
1. Vapor in equilibrium with liquid at time of combustion
2. Liquid and vapor at time of combustion at same temperature
3. During combustion no heat transfer to liquid or surroundings (adiabatic)
4. Combustion at constant volume
5. Vapor mixture will support combustion (i.e., a spark will propagate) if its adiabatic flame temperature exceeds a threshold value, typically 1400 K
Unit operation models:
- A FLASH block can be used to compute the VLE equilibrium between the aqueous solution and air
- An RGIBBS reactor can be used to model the adiabatic combustion
Design specs used in this simulation:
DS-1: Mass fraction of O2 in stream RX-OUT is 0 (tolerance 0.01); manipulated variable: molar flowrate of stream AIR
DS-2: Temperature of stream RX-OUT is 2060.334 F (tolerance 1); manipulated variable: temperature of the FEED stream
Calculator used in this simulation:
C-1: Correlates the flow of ethanol and the flow of water
A sensitivity study is also created to generate the predicted lower flammability limit curve.
Keywords: Flammability limits
References:
[1] Paul M. Mathias, Applied thermodynamics in chemical technology: current practice and future challenges, Fluid Phase Equilibria, Volumes 228-229, February 2005, Pages 49-57
[2] J. Sharkey, G. Gruber, D. Muzzio, Prediction of the flammability range for chemical systems using Aspen, presented at AspenWorld 2002, 27 October - 1 November, Washington, DC, 2002
[3] NFPA 69, Standard on Explosion Prevention Systems (1997 Edition)
Problem Statement: A key application of dynamic simulation is in the area of process safety. It is important to understand how the process will respond to a wide variety of emergency scenarios and what actions you should take when these events occur. Using Aspen HYSYS Dynamics, you can define and run a nearly endless set of emergency scenarios to obtain the safest design possible. In this example, we will explore one emergency scenario and the key steps involved in setting up the model.
Solution: Aspen HYSYS Dynamics can be used to run a wide set of emergency scenarios. In this example, we will explore one emergency scenario using a number of unit operations and the built-in HYSYS Spreadsheet. Keywords: Dynamic simulation, dynamics, transient, HYSYS, HYSYS Dynamics, safety, process safety, fire, pressure relief, spreadsheet, API 521 References: None
Problem Statement: How can I estimate the Cubic Expansion Coefficient for Thermal Expansion using Aspen HYSYS?
Solution: In the HYSYS PSV Sizing environment, the user can set up different scenarios. For the Thermal Expansion scenario, the Cubic Expansion Coefficient for the reference stream is required. In this scenario, there is a table showing the recommended values, which are taken from the API 521 standard, as shown below. Aspen HYSYS can be used to estimate this value by flashing a liquid stream at two different temperatures, calculating the actual volume difference between the two conditions, and finally dividing the relative volume change by the temperature difference, which gives an estimate of the Cubic Expansion Coefficient for this stream. This can be done in a spreadsheet, as shown below. In the example, the temperature difference is set to 5 degrees C, but the user can decrease or modify this difference for their own cases. Please note that the example case illustrated here was built in Aspen HYSYS V9; however, the same method applies to other versions.
Keywords: PSV Sizing, Cubic Expansion Coefficient, Thermal Expansion.
References: None
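The finite-difference estimate described above corresponds to the usual definition of the cubic (volumetric) expansion coefficient. Written out, with V1 the actual volume at the lower temperature T1 and V2 the volume at T2:

\[
\beta = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P
\approx \frac{V_2 - V_1}{V_1\,(T_2 - T_1)}
\]
% e.g. with T2 - T1 = 5 C as in the example spreadsheet; a smaller
% temperature step gives a more local estimate of beta.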
Problem Statement: Manipulation of Fluid Package Tabular Properties via VBA automation
Solution: The variables used for the tabular properties have not been exposed as automation interfaces in Aspen HYSYS, so users cannot access them directly using standard Automation procedures. However, the backdoor method can be used with the monikers of those variables for this purpose. This example illustrates how to use this method to set up tabular properties. To identify monikers, note that the original example was written in a VB.NET style; the code below is shown in VBA syntax (in VB.NET, drop the Set keywords and use {...} array literals). It will:
- Activate tabular properties
- Activate CPL
- Clear the table
- Input temperature data
- Perform the regression

Dim hyCase As HYSYS.SimulationCase
Dim BD As HYSYS.BackDoor
Dim intObjVar As HYSYS.InternalObjectVariable
Dim rv As HYSYS.RealVariable
Dim rfv As HYSYS.RealFlexVariable

'hyCase is assumed to already reference the open simulation case
Set BD = hyCase

'Activate tabular properties
Set rv = BD.BackDoorVariable("FluidPkgMgr.300/CompList.300(Basis-1):Boolean.303").Variable
rv.Value = 1

'Activate CPL
Set rv = BD.BackDoorVariable("FluidPkgMgr.300/CompList.300(Basis-1)/TabularMgr.300:Boolean.203.6").Variable
rv.Value = 1

'Clear table
Set intObjVar = BD.BackDoorVariable("FluidPkgMgr.300/CompList.300(Basis-1)/TabularMgr.300/ComponentCurve.202.6:PropCurve.201.0").Variable
Set BD = intObjVar.Object
BD.SendBackDoorMessage "ClearTable"

'Temperature data
Set rfv = BD.BackDoorVariable("FluidPkgMgr.300/CompList.300(Basis-1)/TabularMgr.300/ComponentCurve.202.6/PropCurve.201.0:ExtraData.107.[]").Variable
rfv.Values = Array(274.1, 278.7, 301.3)
'CP values: the same but :ExtraData.106.[]
'Pressure values: the same but :ExtraData.108.[]

'Regress
Set intObjVar = BD.BackDoorVariable("FluidPkgMgr.300/CompList.300(Basis-1)/TabularMgr.300/ComponentCurve.202.6:PropCurve.201.0").Variable
Set BD = intObjVar.Object
BD.SendBackDoorMessage "Regress"

Keywords: Tabular properties, Automation, VBA/VB.NET
References: None
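The snippet assumes hyCase already points at the open case. One common way to obtain it from Excel VBA is shown below; this attach pattern is standard HYSYS automation, though whether you take the active document or open a file by path is up to you (the file path shown is hypothetical).

' Attach to a running HYSYS instance and grab the active case.
Dim hyApp As HYSYS.Application
Dim hyCase As HYSYS.SimulationCase

Set hyApp = GetObject(, "HYSYS.Application")   ' running instance
Set hyCase = hyApp.ActiveDocument              ' currently open case
' Alternatively: Set hyCase = GetObject("C:\temp\mycase.hsc")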
Problem Statement: How to create a case study in Aspen Plus using ActiveX interfaces.
Solution: We use data series from Excel as input for Aspen Plus models. Aspen Plus performs steady state calculations, so time dependent values will be treated as discrete data for different case scenarios, the same as when we set up a sensitivity analysis. There are two alternatives to accomplish this: using Aspen Simulation Workbook, or programmatically via VB.NET or VBA. For the first option, Aspen Simulation Workbook, there is no need for programming knowledge and it might be more convenient for most users (a getting started guide can be found in solution 138912). For the second option, please find enclosed an example using VBA macros in Excel. The code illustrates how to input a series of data from Excel, in this case the temperature of a Heater block, and run the simulation for each case, reading back to Excel the results of the variables we want to monitor, in this example the molar fraction and the recycle flow rate. To see the code, press Alt + F11.
Keywords: VBA, VB.NET, ActiveX Automation, Sensitivity analysis.
References: None
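The attached workbook is not reproduced here, but the loop at its core typically looks like the hedged sketch below. The node paths, block and stream names (B1, S1), the file path, and the worksheet layout are assumptions for illustration; check the actual paths with the Variable Explorer in your own file. GetObject, Tree.FindNode, and Engine.Run2 are standard Aspen Plus automation calls.

' Sketch: drive an Aspen Plus case study from Excel.
Sub RunCaseStudy()
    Dim aspen As Object, i As Long
    Set aspen = GetObject("C:\temp\model.bkp")     ' hypothetical model file

    For i = 2 To 11                                ' ten cases in rows 2-11
        ' Write the case input: Heater block temperature (path assumed)
        aspen.Tree.FindNode("\Data\Blocks\B1\Input\TEMP").Value = Cells(i, 1).Value

        aspen.Engine.Run2                          ' run this case

        ' Read back one result as a stand-in for the monitored variables
        ' (mole fraction, recycle flow, ...); path assumed
        Cells(i, 2).Value = aspen.Tree.FindNode("\Data\Streams\S1\Output\TEMP_OUT\MIXED").Value
    Next i
End Sub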
Problem Statement: A key application of dynamic simulation is in the area of process safety. It is very important to understand how the process will respond to a wide variety of emergency scenarios and what actions you should take when these events occur. Using Aspen HYSYS Dynamics, you can define and run a nearly endless set of emergency scenarios to obtain the safest design possible. In this example, we will explore one emergency scenario and the key steps involved in setting up the model.
Solution: Aspen HYSYS Dynamics can be used to run a wide set of emergency scenarios. In this example, we will explore a blocked outlet emergency scenario. Keywords: Dynamic simulation, dynamics, transient, HYSYS, HYSYS Dynamics, safety, process safety pressure relief, strip charts, process control, PID, blocked outlet, relief valve References: None
Problem Statement: How do we identify when choked flow occurs through a control valve in Aspen HYSYS?
Solution: Choked flow is a limiting condition which can occur through a valve when the mass flow rate no longer increases as the downstream pressure decreases further, while the upstream pressure is fixed. For homogeneous fluids, the physical point at which choking occurs under adiabatic conditions is when the exit stream velocity reaches sonic conditions. For ideal gas flow, choked flow occurs when the ratio of the downstream pressure to the upstream pressure falls below a defined critical pressure ratio, (2/(k+1))^(k/(k-1)), where k is the heat capacity ratio. HYSYS is able to predict the downstream pressure at which choking occurs for both single and multiphase inlet streams. The downstream pressure at which choking occurs through the valve is governed by the user's selection of the valve sizing calculation method. In the attached simulation, the valve is sized using the universal gas sizing method. Through a case study the user can analyze the downstream pressure at which choking will occur. The independent variable is the downstream pressure exiting the valve and the dependent variable is the mass flow rate. In the figure shown below, the choking point corresponds to the downstream pressure value at which the gradient of the mass flow rate curve becomes zero.
Keywords: Choking, Control Valve, Universal Gas Sizing
References: None
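The ideal-gas choking criterion referenced above, written out:

\[
\frac{P_{down}}{P_{up}} \le \left(\frac{2}{k+1}\right)^{\frac{k}{k-1}}
\quad\Rightarrow\quad \text{flow is choked}
\]
% where k = Cp/Cv. For air (k approximately 1.4) the critical
% pressure ratio is about 0.528.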
Problem Statement: Optimizing a feed stage location using an NQ curve.
Solution: An NQ curve is used to optimize the feed stage location and number of stages while minimizing the objective function selected by the user. To use it, go to Blocks -> NQ Curves and click the New button. Here you can specify lower and upper limits for the number of stages, the feed stream name, as well as the objective function. The objective function can be Qreb-Qcond, Qreb, Mole-Rr, Mass-Rr or Stdvol-Rr. A Design Spec needs to be present. Attached is an example of generating an NQ curve for a feed containing an ethane/isobutane mixture. The NQ curve for the above process is shown below. As the graph suggests, the objective function (Qreb-Qcond) decreases with an increase in the number of stages, but stops decreasing (after stage 27) when there is no significant change in heat load with a further increase in the number of stages.
Keywords: NQ curve, Distillation
References: None
Problem Statement: This case shows simple functionality of the Cause&Effect Matrix (CEM) object of HYSYS Dynamics to simulate Emergency Shutdown systems (ESD) The Inputs of the CEM are the High High Level Switch and the Low Low Level Switch (red trend lines at 80% and 15%). Both are implemented through Indicators (PID controller object) A third input is the ESD reset, which is manipulated by the user. The Outputs are the inlet feed valve (to Fail Position) and the pump (switched off). A failure in the Level transmitter of the LIC-100 is simulated. Blue dotted trendline is LIC-100 transmitter, Cyan trendline is real level % in vessel.
Solution:
1. Start the integrator.
2. Go to LIC-100, section "PV conditioning", and check "Fixed Signal". The level seen by the LIC-100 controller is now frozen.
3. Go to the feed controller FIC-100 and change the SP to 180 T/h.
4. Observe how the cyan line (real level) goes up, since the level controller is blind.
5. When the level reaches 80%, the CEM automatically shuts down the feed valve and the level goes down.
6. When the level reaches 15%, the CEM automatically shuts down the pump to prevent cavitation. Everything is now in a safe position.
Optional:
7. The instrumentation department corrects the level transmitter, so check the "None" button of LIC-100 (section "PV conditioning").
8. Press the "On" button of the ESD Reset in the CEM window. This will open the feed valve again.
9. When the level is higher than 15%, press the "On" button of the ESD Reset in the CEM window again. This will start up the pump.
10. Everything should then return to normal conditions.
Keywords: dynamics; Cause And Effect Matrix (CEM); ESD
References: None
Problem Statement: Is it possible to model a vapor-liquid-liquid-liquid (4-phase) system?
Solution: Yes, it is possible using an RGibbs block. It is not possible with other blocks since they do not have the data structure available for four phases. The Polymer Plus User Model Library includes a 4PhaseFlash User2 block that uses a special 4-phase flash. This special model is not necessary for a system of purely conventional components. See the attached example file created in Aspen Plus V7.3 for a system of water, isopropyl-alcohol, n-octane, and potassium bromide. This file will run in Aspen Plus V7.3 and higher. The example is for demonstration only. These components do form four phases; however, data for the phase composition was not available to compare the results in more detail. Binary parameters can be regressed from data for a more accurate model. Note that the enthalpy of a single stream that has 3 liquid phases will not be calculated correctly since the number of phases in the stream will be incorrect. It is possible to remedy this problem by checking "Use 4-phase convergence algorithm to solve 3-phase flash" on the Setup \ Simulation Options \ Flash Convergence sheet. This issue does not apply to this example since all of the streams are single phase; therefore, the enthalpy calculations are consistent.
Keywords: kbr, 4-phase, three liquid, 3 liquid, VLLLE, IPA, C3H8O, N-OCTANE, C8H18
References: None
Problem Statement: Most distributed control systems use the ISA Standard Form of the PID, in which the controller has specific tuning-setting units and dimensionless weighting terms associated with the controller gain. However, the HYSYS PID controller algorithm does not allow the user to alter the structure of the PID. Furthermore, the HYSYS controller output samples the previous time step, and this can lead to PID behavior that does not replicate the behavior of plants that use distributed control systems. Knowledge base solution 143221 demonstrates the operation of the PID controller using the ISA Standard Form. This knowledge base solution documents the tuning of a PID controller that implements the ISA Standard Form.
Solution: The HYSYS PID control algorithm does not give users the flexibility to alter the PID structure so that individual terms target the PV rather than the error with respect to the SP. By altering the structure of the PID, the user gains further flexibility for various load and setpoint responses. The ISA Standard Form PID user variable attached to this solution offers three dimensionless terms: alpha, beta, and gamma. These give the user the flexibility to alter the structure of the PID and target terms that give a faster setpoint response or alter the rise time and the overshoot of the PV. The ISA Standard Form structures are listed below:
1. PID action on error (β = 1, γ = 1) (the HYSYS PID structure)
2. PI action on error, D action on PV (β = 1, γ = 0)
3. I action on error, PD action on PV (β = 0, γ = 0)
4. PD action on error (β = 1, γ = 1) (with no I action)
5. P action on error, D action on PV (β = 1, γ = 0) (with no I action)
6. ID action on error (β = 0, γ = 1) (with no P action)
7. Two degrees of freedom controller (β and γ adjustable from 0 to 1)
For the fastest possible setpoint response, structures 1 and 2 are used. If preventing overshoot is more important than minimizing rise time, structure 3 is used. If the ability to customize the balance between fast rise time and minimum overshoot for a setpoint response is needed, structure 7 is used. Keywords: HYSYS Dynamics, User Variable, PID References: None
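As a companion to the structures tabulated above, the following minimal discrete-time sketch shows how the β and γ weights enter the ISA Standard Form calculation. This is an illustration only, not the attached HYSYS user variable: all names are invented, the derivative term is unfiltered, and anti-windup is omitted.

def isa_pid_step(sp, pv, state, kc, ti, td, beta, gamma, dt):
    """One execution of an ISA Standard Form PID.

    The integral term always acts on the true error (sp - pv); the
    proportional and derivative terms act on beta- and gamma-weighted
    setpoints, which is what distinguishes the structures listed above.
    state is an (integral, previous_derivative_input) tuple.
    """
    integral, prev_d_in = state
    integral += (sp - pv) * dt            # I action always on error
    p_in = beta * sp - pv                 # beta = 0 -> P action on PV only
    d_in = gamma * sp - pv                # gamma = 0 -> D action on PV only
    derivative = (d_in - prev_d_in) / dt
    output = kc * (p_in + integral / ti + td * derivative)
    return output, (integral, d_in)

# Structure 1 (the HYSYS PID): beta = 1, gamma = 1
# Structure 3 (I on error, PD on PV): beta = 0, gamma = 0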
Problem Statement: How can I study the effect of cooling or heating on a closed vessel?
Solution: Such a study can be performed in Aspen HYSYS Dynamics; the attached example file shows how. A vessel with 100% liquid level is added, the negative duty stream provides cooling, and all the valves are closed. Once you start the solver, the pressure and temperature in the vessel fall over time. Note: To study the effect of heating in the same way, provide a positive duty instead. Keywords: closed vessel, closed tank, cooling, heating References: None
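The behavior described above follows directly from the energy balance on a closed, constant-volume vessel:

$$\frac{dU}{dt} = \dot{Q}, \qquad V, m = \text{const}$$

With a negative duty the internal energy, and hence the temperature, falls; at fixed mass and volume the pressure falls with the temperature. For the heating case the signs simply reverse.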
Problem Statement: Is there an example featuring the calculation of sensitivities in Aspen Custom Modeler?
Solution: A simple example (sensitivity.acmf) featuring a differential equation is attached. 1. Create the model with the following equation: dy/dt = y - (4*k1 + 3*k2) 2. The aim of the example is to calculate the sensitivities, as well as the normalized sensitivities, of y with respect to both k1 and k2 (Ref.: Aspen Modeler manual, Chapter 4). In the enclosed simulation, a script named "sensitivity" has been created. Invoking it clears any previous sensitivity results and activates the sensitivity calculation. The sensitivities are reported in the messages window after a steady-state run is performed:

Dim ACMApp
Set ACMApp = Application
ACMApp.Simulation.ClearSensitivities
ACMApp.Simulation.EnableSensitivities
ACMApp.Simulation.AddSensitivityVariable "B1.y"
ACMApp.Simulation.AddSensitivityParameter "B1.k1"
ACMApp.Simulation.AddSensitivityParameter "B1.k2"

'Run the simulation
ACMApp.Simulation.Runmode = "Steady State"
ACMApp.Simulation.Run(True)

'Get sensitivities
Dim sens1, sens2, Nsens1, Nsens2
sens1 = ACMApp.Simulation.GetSensitivityValue( "B1.y", "B1.k1" )
Nsens1 = sens1*B1.k1.Value/B1.y.Value
sens2 = ACMApp.Simulation.GetSensitivityValue( "B1.y", "B1.k2" )
Nsens2 = sens2*B1.k2.Value/B1.y.Value
ACMApp.Simulation.DisableSensitivities

Application.Msg "Result"
Application.Msg "y " & B1.y.Value
Application.Msg "Sensitivity of y to k1 " & sens1
Application.Msg "Sensitivity of y to k2 " & sens2
Application.Msg "Normalized sensitivity of y to k1 " & Nsens1
Application.Msg "Normalized sensitivity of y to k2 " & Nsens2

Results are displayed at the end of the run in the simulation messages window. Keywords: ACM, Sensitivity, example, Normalized, EnableSensitivities, GetSensitivityValue, ClearSensitivities, AddSensitivityVariable, AddSensitivityParameter References: None
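As a check on the example above, the reported values can be verified by hand. At steady state the model equation gives

$$0 = y - (4k_1 + 3k_2) \quad\Rightarrow\quad y = 4k_1 + 3k_2$$

so the sensitivities are $\partial y/\partial k_1 = 4$ and $\partial y/\partial k_2 = 3$, and the normalized sensitivities are $4k_1/y$ and $3k_2/y$.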
Problem Statement: Where can I find a list of the examples which are available in the Aspen Adsorption library?
Solution: The following list collects the examples loaded in the Aspen Adsorption example library. These files can be accessed in two ways: 1. Through the following path: C:\Program Files (x86)\AspenTech\Aspen Adsorption V8.4\Examples 2. By clicking on File | Open Library in Aspen Adsorption
AirDrySep - Air separation using adsorption and distillation: This example demonstrates the separation of air using a combined adsorption and distillation process. The adsorption system is used to dry the air before feeding into the cryogenic distillation section. The simplified distillation section was created using Aspen Plus and exported as a flow driven Aspen Plus Dynamics flowsheet. The resulting Aspen Plus Dynamics flowsheet is then imported into a hierarchy block within the Aspen Adsorption flowsheet and connected to the dryer product. Before execution of this example, both Aspen Plus and Aspen Plus Dynamics must be licensed and installed with Aspen Adsorption.
AirPure - Air breakthrough: This example considers the breakthrough curves for a binary adsorbent within a fixed bed system. The process simulated is the adiabatic adsorption of water and carbon dioxide on a single bed of 4A molecular sieve. To run this example (after loading), select the dynamic run mode and execute.
AirTSA - Air drying using TSA: This example process simulates a Thermal Swing Adsorption (TSA) process used to dry a humid gas feed stream. The water in the feed is adsorbed onto an alumina adsorbent at high pressure, 10 bar and 16 C.
Aromatic - Aromatics breakthrough: This example process simulates the adsorption of benzene and cyclohexane from a gas stream using a single fixed bed of activated carbon under adiabatic conditions.
BiFlow - Bi-directional flow example: Example to demonstrate the bi-directional flow capabilities of the Aspen Adsorption gas bed model.
C4Dehyd - 1-Butene dehydrogenation: This example looks at how Aspen Adsorption can be used directly to simulate the heterogeneously catalysed reaction of 1-butene to butadiene on a Cr2O3/Al2O3 catalyst.
C8Arom - C8 aromatics liquid phase breakthrough: Multicomponent breakthrough investigation of a mixture of C8 aromatics (m-xylene, p-xylene, ethylbenzene and isopropylbenzene) on Faujasite zeolite in a packed bed.
CSS_DiffusionModels - Surface/Pore/Combined diffusion models illustration (dynamic mode on CSS models): This Aspen Adsorption flowsheet presents three non-isothermal chromatography column responses. There are three simulation components, and each column uses the same equilibrium properties based on the Generalized Equilibrium Model (GEM); the parameters are provided by means of flowsheet constraints. The first column (Combined) employs the Diffusion Combined Model as the adsorption kinetics. The second column (Pore) employs the Diffusion Pore Model as the adsorption kinetics. The third column (Surface) employs the Diffusion Surface Model as the adsorption kinetics.
CSS_VariableGeometry - Variable geometry bed simulation illustration (dynamic mode on CSS models): This Aspen Adsorption flowsheet demonstrates adsorption dynamics for variable frontal area columns. There are three simulation components, and each column uses the same equilibrium properties based on the Generalised Equilibrium Model (GEM); the parameters are provided by means of flowsheet constraints.
The first bed (Const_Db) uses a constant diameter column (the reference column); the second bed (Variable_Db1) uses a variable diameter column, from small to large; the third bed (Variable_Db2) uses a variable diameter column, from large to small.
EtOH - Ethanol adsorption onto activated carbon: This example of a liquid phase adsorption process simulates the breakthrough of ethanol in a column into which an aqueous stream is introduced. This stream contains 21 mol% ethanol, at a temperature of 30 C and atmospheric pressure.
FlowConst - Flowsheet constraint isotherm: Production of nitrogen rich gas from air using a two bed Pressure Swing Adsorption (PSA) process that uses a carbon molecular sieve as the adsorbent. This example demonstrates the single bed approach for the simulation of a full multibed process and the use of flowsheet level constraints to provide a user defined isotherm.
H2PSA - Hydrogen separation using PSA: This example process simulates hydrogen production from a gas stream comprising 75 mol% hydrogen and 25 mol% impurities at 15 bar and 21 C. A four bed eight stage Pressure Swing Adsorption (PSA) process is used with a 5A molecular sieve as adsorbent.
H2PSA3CSS - Hydrogen PSA, ternary component (cyclic steady state approach): Production of hydrogen rich gas from a gas stream comprising 60 mol% hydrogen, 30 mol% methane and 10 mol% carbon monoxide at (initially) 2 bar and 25 C. A two bed six stage Pressure Swing Adsorption (PSA) process is utilised on double layered beds (activated carbon and zeolite 5A). This example demonstrates the single bed cyclic steady-state approach for the simulation of a full multibed process using the steady-state run mode.
H2PSA3Dyn - Hydrogen PSA, ternary component (dynamic multibed approach): Production of hydrogen rich gas from a gas stream comprising 60 mol% hydrogen, 30 mol% methane and 10 mol% carbon monoxide at (initially) 2 bar and 25 C. A two bed six stage Pressure Swing Adsorption (PSA) process is utilized on double layered beds (activated carbon and zeolite 5A). This example demonstrates the use of a full multibed flowsheet using gCSS models in dynamic run mode.
H2PSA5CSS - Hydrogen PSA, five component (cyclic steady state approach): Production of hydrogen rich gas from a gas stream comprising 56.4 mol% hydrogen, 26.6 mol% methane, 8.4 mol% carbon monoxide, 3.1 mol% carbon dioxide, and 5.5 mol% nitrogen at 10 bar and 25 C. A two bed six stage Pressure Swing Adsorption (PSA) process is utilised on double layered beds (activated carbon and zeolite 5A). This example demonstrates the single bed cyclic steady-state approach for the simulation of a full multibed process using the steady-state run mode. It should be noted that essential design parameters such as equilibrium and kinetic information have been obtained from Industrial and Engineering Chemistry Research.
H2PSA5Dyn - Hydrogen PSA, five component (dynamic multibed approach): Production of hydrogen rich gas from a gas stream comprising 56.4 mol% hydrogen, 26.6 mol% methane, 8.4 mol% carbon monoxide, 3.1 mol% carbon dioxide, and 5.5 mol% nitrogen at 10 bar and 25 C. A two bed six stage Pressure Swing Adsorption (PSA) process is utilised on double layered beds (activated carbon and zeolite 5A). This example demonstrates the use of a full multibed flowsheet using gCSS models in dynamic run mode.
IonX_Cycle - Mg2+ removal using cyclic ion exchange: The example deals with the removal of Mg2+ ions in an aqueous magnesium sulphate solution by ion-exchange with a bed containing polystyrene sulfonate resin. The resin is initially in the NH4+ form. The process employs two ion exchangers: one in adsorbing mode, the other going through regeneration. This example covers the steps typically encountered in an ion exchange process cycle, including steps in which both ion exchangers are interacting.
IonX_Mg - Mg2+ ion breakthrough: The example deals with the removal of Mg2+ ions in an aqueous magnesium sulphate solution by ion-exchange with a bed containing polystyrene sulfonate resin. The resin is initially in the NH4+ form.
IonX_Mg_Ca - Mg2+ & Ca2+ ion breakthrough: This example deals with the removal of Ca2+ and Mg2+ ions in aqueous solution by ion-exchange with a bed containing polystyrene sulfonate resin initially in the Na+ form. Such a mixture is a typical feedstock for water softening processes, such as in the pre-treatment of sea water for sea water evaporation plants.
N2DynEst - Mass transfer parameter dynamic estimation: Estimation of nitrogen and oxygen mass transfer coefficients using measured data from a breakthrough run.
N2PSA - Nitrogen production (single bed approach): Production of nitrogen rich gas from air using a two bed Pressure Swing Adsorption (PSA) process that uses a carbon molecular sieve as the adsorbent. This example demonstrates the single bed approach for the simulation of a full multibed process.
N2PSACSS - Nitrogen production (cyclic steady-state approach): Production of nitrogen rich gas from air using a two bed Pressure Swing Adsorption (PSA) process that uses a carbon molecular sieve as the adsorbent. This example demonstrates the single bed cyclic steady-state approach for the simulation of a full multibed process using the steady-state run mode.
N2PSAMB - Nitrogen separation using PSA (rigorous multibed method): Production of nitrogen rich gas from air using a two bed Pressure Swing Adsorption (PSA) process that uses a carbon molecular sieve as the adsorbent. The example demonstrates the use of a full multibed flowsheet.
N2PSAVL - Nitrogen separation using PSA (single bed method, pressure ramps): Production of nitrogen rich gas from air using a two bed Pressure Swing Adsorption (PSA) process that uses a carbon molecular sieve as the adsorbent. This example demonstrates the single bed approach using pressure ramp units instead of valves for the simulation of a full multibed process.
N2SSEst - Isotherm parameter steady state estimation (1 component): Estimation of nitrogen isotherm parameters using static measurements.
NatGas - Natural gas mixture breakthrough: This example process simulates the adiabatic adsorption of carbon dioxide and ethane on a single bed of 5A molecular sieve. The objective of this example is to study the effects of the substantial temperature changes that accompany the process during breakthrough.
O2PSA - Oxygen separation using PSA: This example process simulates the production of an oxygen rich gas from air. The separation is achieved by two beds packed with 5A zeolite adsorbent using a four stage Pressure Swing Adsorption (PSA) process.
O2VSA - Oxygen separation using VSA: This example process simulates the production of an oxygen rich gas from air. This is a three bed Vacuum Swing Adsorption (VSA) process.
O2VSACSS - Oxygen production via Vacuum Swing Adsorption (cyclic steady-state approach): Production of an oxygen rich gas from air using a two bed Vacuum Swing Adsorption (VSA) process that uses NaX zeolite adsorbent. This example demonstrates the single bed cyclic steady-state approach for the simulation of a full multibed process using the steady-state run mode.
O2VSADyn - Oxygen production via Vacuum Swing Adsorption (dynamic multibed approach): Production of an oxygen rich gas from air using a two bed Vacuum Swing Adsorption (VSA) process that uses NaX zeolite adsorbent. This example demonstrates the use of a full multibed flowsheet using gCSS models in dynamic run mode.
ParamEst - Isotherm parameter steady state estimation (2 components): This example fits data for the two-component adsorption of methane and carbon monoxide on carbon molecular sieve. The isotherm model to be fitted is a partial pressure dependent Langmuir 1 model.
PID_Control - Nitrogen breakthrough (pressure controlled): This example shows the use of a PID controller for the control of the column pressure during a breakthrough investigation. The column feed is air, with nitrogen preferentially adsorbed onto 5A molecular sieve.
PXSMB7Z - Xylene separation: 24 layer Simulated Moving Bed (SMB) separation of p-xylene, o-xylene, m-xylene, n-octane, toluene and ethylbenzene using 1,4-diethylbenzene desorbent. The adsorbent assumed was a Ba-Y zeolite. The example demonstrates the use of the single bed approach to rapidly simulate a single layer at a time. The example also demonstrates the use of additional delay units to simulate a pumparound, and the use of the universal block model to create a user specified mixer model.
RapPSA - Nitrogen separation using RapPSA: This example simulates a two stage cyclic rapid pressure swing process used to produce nitrogen from air.
ReacAds - Methylcyclohexane dehydrogenation: This example looks at the dehydrogenation of methylcyclohexane to produce toluene. For this reversible reaction, a 5 mol% mixture of methylcyclohexane in an inert carrier gas is introduced into the bed. It is assumed that only the toluene is adsorbed.
StripAds - Toluene removal using air stripping and adsorption: This example demonstrates the removal of toluene (100 ppm wt, 120000 gph) from a water stream using a combination of air stripping and adsorption. The example also shows the linking of Aspen Plus Dynamics models to Aspen Adsorption models. The exit water stream will contain no toluene for a steady-state operation cycle of 1 year. Before execution of this example, both Aspen Plus and Aspen Plus Dynamics must be licensed and installed with Aspen Adsorption. Keywords: Aspen Adsorption, examples References: None
Problem Statement: Safety and process engineers are required to calculate projected emissions from tankage in major projects, for commissioning, and for revamp work. Additionally, emissions calculations and reporting may be required during audits or on a routine basis by governmental environmental agencies. In most cases, these emissions cannot be measured directly and must be calculated based on tank configuration, tank geometry, operation mechanics, and changes to process operating conditions.
Solution: Aspen HYSYS can be used to calculate organic liquid storage tank emissions losses under various conditions. Aspen Simulation Workbook (ASW) can also be used as a front end for quickly calculating flashing, working, and breathing open tank losses for those unfamiliar with HYSYS workflows. In this abridged application example, a user will simulate simple open tank flashing, working, and breathing losses using Aspen HYSYS and Aspen Simulation Workbook. The intent of the example is to demonstrate how users can configure HYSYS to calculate these types of emissions and customize those calculations based on the tank configuration and operations specific to their plant and region. Keywords: HYSYS, emissions, gas plant, tank farm, tank, tankage, open tank, flash, flashing, breathing, working, losses, ASW, Aspen Simulation Workbook References: None
Problem Statement: How do I use steam as the energy stream for the reboiler in my standard HYSYS column?
Solution: Follow these steps (the names and numbers below were used to modify the HYSYS example case TUTOR1.hsc, as attached): 1. Double click on the column to open the column runner and switch to the Parameters | Solver page. Switch the solver to Modified HYSIM I/O. 2. In the Basis environment, add water as a component (if not already in the component list). 3. In the Simulation environment, press RUN to rerun the column if required. 4. Once the column is converged, save the case, then enter the Column Environment. Write down the column bottom pressure (1413 kPa). 5. Delete the existing reboiler and energy stream from the PFD. 6. Add a Heat Exchanger to the column's PFD, specifying the following parameters, as per your system: Heat Exchanger Model: Calculated by Column (this will appear by default). Shell Side Inlet: column bottom stream ('To Reboiler'). Shell Side Outlet: 'Reboiler Out'. Tube Side Inlet: 'Steam In'. Tube Side Outlet: 'Steam Out'. Tube Side Pressure Drop: 50 kPa (set this to your exchanger's tube side DP). Shell Side Pressure Drop: 0 kPa (if you specify a nonzero number here, you will need to add a pump before the heat exchanger). 7. Define the 'Steam In' stream as saturated pure steam at 275 kPa (i.e. specify vapour fraction = 1.0, pressure = 275 kPa and a composition of pure water). Define the flow rate to be 1700 kg/h. 8. When the reboiler was deleted from the simulation, the column bottom pressure was lost, so now we must add it back. Open Column Runner | Parameters | Profiles and add the pressure we wrote down in step 4 to the bottom stage of the column. 9. Add a separator after the heat exchanger, with the vapour product returning to the bottom stage of the column (you can use the existing 'Boilup' stream), and the liquid product leaving the Column Environment as the liquid product (use the existing 'LiquidProduct' stream). 10. Return to the Column Runner. If you had any specifications that referred to the original reboiler or the liquid product, you will need to add these back at this time. In our example, we need to add the mole fraction of propane in the reboiler liquid specification again. The specification value is 0.02, and the specification must be a 'Stream' specification for the LiquidProduct stream (not a stage specification). Note that since the heat exchanger was installed in the Column Environment, its specifications appear on the Monitor page along with the specifications of the column. You may need to ensure that the 'Heat Balance' specification for the heat exchanger is checked as Active. 11. Return to the Main Flowsheet, and run the column if it has not run. The column should converge. 12. To view the steam streams on the main flowsheet, go to Design | Connections and add names in the External Stream column beside the internal stream names 'Steam In' and 'Steam Out'. NOTE: Most of this solution repeats portions of the Advanced Columns module of the 'Advanced Process Modeling using HYSYS' course. Keywords: duty stream References: None
Problem Statement: How do you use Aspen Properties and Excel to simulate a moving-bed dryer used to remove solvent (isopentane) from solid polymer (LLDPE) particles?
Solution: Many polymerization processes use a chemically inert solvent to facilitate heat removal inside the reactor vessel. For economic and environmental purposes the excess solvent must be recovered from the polymer and returned to the process. Nitrogen-purged moving-bed dryers are frequently employed for this purpose. Typically the performance of these types of drying operations is limited by the diffusion rate of the solvent inside the solid particles. The drying rate depends on several factors including the average particle diameter, drying column geometry, solvent loading, and the gas/solid flow ratio. In this example, the packed bed is discretized into twenty-five equal-sized elements. The evaporation rate of isopentane is calculated at each position in the bed using fugacity coefficients calculated by Aspen Properties. The pressure drop across the bed is calculated using vapor density and viscosity calculated by Aspen Properties. This dryer model assumes the polymer is fed to the dryer as a "dry cake"; in other words it assumes that the solvent is present in the polymer particles at concentrations below the saturation concentration. At higher concentrations, dryer performance is limited by heat-transfer rates instead of diffusion rates.

How To Use It
This example lets you analyze the influence of the following process variables on the dryer performance:
- Drying column diameter
- Drying column height
- Solids feed flow rate
- Gas/solids flow ratio
- Particle diameter
- Initial solvent loading
- Temperature
- Gas outlet pressure
In addition, the user can adjust the specified diffusivity of isopentane in the polymer and the presumed void fraction of the bed. This example assumes 40% void fraction based on loose packing of spherical particles. The model inputs are specified near the top of the Specifications sheet in the example. These are highlighted with light yellow backgrounds and bold text. Related calculations are displayed in the same area of the worksheet. This example requires a trial-and-error solution due to the countercurrent arrangement of the equipment. After entering the model specifications, use the "Run Model" button to run the Excel Solver to resolve the isopentane mass balance. The solver adjusts the outlet vapor flow rate of the solvent to close the mass balance. The Excel spreadsheet calculates the concentration profiles for isopentane in the solid polymer phase and the vapor phase. These are shown graphically in a plot, and the outlet concentration of isopentane in each phase is tabulated near the top of the sheet. The model also calculates the pressure drop across the packed bed. The pressure profile is shown graphically and in tabular form. After running the model the "Isopentane Mass Balance" should be closed (i.e., the "balance" field should be very close to zero). Use the "Save Cases" button to transfer a summary of the model inputs and results to the Report sheet. Each case is saved as a new row in the Report sheet. This mechanism makes it easy to tabulate the results from several simulation runs. The "Clear Case Report" button can be used to remove all of the saved results from the Report sheet. In this example the fixed bed is divided into 25 equal increments. The number of increments can be changed; however, the user must add or delete rows in the calculations to keep the spreadsheet consistent with the specified number of increments. The calculation area of the spreadsheet is located directly below the charts shown in the previous figure.
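The countercurrent trial-and-error scheme described above (and detailed in the next section) can be summarized in a short sketch. Everything here is illustrative: the flows, the lumped mass-transfer coefficient K_LDF, and the simple equilibrium ratio K_EQ are invented stand-ins for the Aspen Properties fugacity calculation used in the actual spreadsheet.

N_CELLS = 25      # equal bed increments, as in the spreadsheet
S = 1000.0        # polymer flow, kg/h (constant through the bed)
W_IN = 0.05       # isopentane mass fraction in the solids feed
G = 500.0         # nitrogen flow, kg/h (constant through the bed)
K_LDF = 0.8       # lumped mass-transfer coefficient per cell (assumed)
K_EQ = 0.5        # assumed equilibrium ratio: Ws = K_EQ * vapor mass fraction

def bottom_vapor(v_out):
    """March down the bed for a guessed solvent vapor flow leaving the top;
    return the solvent carried by the gas at the bottom (should be zero,
    since fresh nitrogen enters solvent-free)."""
    solid = S * W_IN / (1.0 - W_IN)   # isopentane in the solids, kg/h
    vapor = v_out                     # isopentane in the vapor at the top
    for _ in range(N_CELLS):
        wb = solid / (solid + S)              # bulk solid mass fraction
        ws = K_EQ * vapor / (vapor + G)       # interface fraction (assumed)
        evap = min(max(K_LDF * (wb - ws) * S, 0.0), solid)
        solid -= evap                          # solids lose solvent going down
        vapor -= evap                          # gas below this cell carries less
    return vapor

def solve_outlet_vapor(iters=60):
    """Bisect on the top vapor flow until the bottom gas is solvent-free,
    mimicking what the Excel Solver does with the Run Model button."""
    lo, hi = 0.0, S * W_IN / (1.0 - W_IN)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bottom_vapor(mid) > 0.0:
            hi = mid     # too much solvent assumed in the overhead gas
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(f"solvent in overhead gas: {solve_outlet_vapor():.3f} kg/h")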
What is it based on?
Each Aspen Properties Excel application obtains its content (components, property methods, model parameters, etc.) from an Aspen Properties file, in this case Drying.aprbkp. Drying.aprbkp is a Polymers Plus / Aspen Properties file that uses the Sanchez-Lacombe (POLYSL) property method. The key components in the simulation are the solvent, isopentane (ISOPENT), the linear low-density polyethylene polymer (LLDPE-O), and nitrogen (N2). The component list also includes the polyethylene repeat unit (E-SEG), which is required to define the structure of the polymer component. The current version of Properties Plus does not account for polymer component attributes. However, as shown through this example, it is possible to represent the polymer as an oligomer component. The segmental structure and molecular weight of the oligomer are defined using the number-average properties of the polymer. The pure component property parameter POLPDI is used to define the polydispersity index. The weight-average molecular weight is calculated from the polydispersity and the number-average molecular weight. The polymer in this example is a semi-crystalline solid. In Polymers Plus, the solid phase properties of polymers are represented using liquid-phase property routes; thus, the property functions used in this example refer to the liquid phase. It is generally believed that solvents are insoluble in the crystalline phase of the polymer. Further, the crystalline domains reduce the diffusion rate of solvents out of the polymer. In this example we assume that the diffusivity and phase equilibrium parameters for this system are apparent values, representing the behavior of the semi-crystalline polymer, as opposed to true values pertaining to pure amorphous phase polymer. This example assumes that nitrogen is completely insoluble in the polymer phase. Further, the polymer is assumed to be non-volatile. Thus the liquid and vapor phases are each treated as two-component systems. The calculation procedure in the Excel example is as follows:
- Solid-phase mass balance: The feed flow rates of polymer and isopentane are calculated based on user-specified values. These data appear in the first row of the calculation section, which represents the composition of the solids feed stream and vapor effluent stream. The flow rate of polymer is assumed to be constant throughout the dryer. The flow rate of isopentane at each axial location in the dryer (each row) is calculated by subtracting the evaporation rate of isopentane in each section from the feed flow rate of isopentane to the same section.
- Vapor-phase mass balance: The flow rate of nitrogen is assumed to be constant throughout the dryer. The isopentane vapor-phase mass balance is solved from the top of the dryer down (opposite the direction of flow). The Excel solver iterates on the isopentane vapor flow rate out of the dryer to close the vapor-phase mass balance.
- Evaporation rate: The evaporation rate of isopentane from the polymer is calculated from a diffusion-driven rate expression (the equation image is not reproduced here) that is proportional to the driving force Wb - Ws, with:
E = evaporation rate of isopentane (kg/h)
ε = void fraction of the bed
Dd = dryer diameter (m)
h = height of an incremental section of the bed (m), h = Hd/Nincrements, where Hd is the total bed height
ρs = solid density (calculated by Aspen Properties, kg/m3)
Dab = diffusivity of solvent in polymer (cm2/s), with an Arrhenius-type temperature dependence of the form $D_{ab} = D_{ab,298}\exp\!\left[-\frac{E_D}{R_g}\left(\frac{1}{T}-\frac{1}{298}\right)\right]$
Dab,298 = diffusion coefficient of solvent in polymer at 298 K
ED = diffusion activation energy (kcal/kmol)
Rg = gas law constant (kcal/kmol·K)
T = temperature (K)
Wb = mass fraction of isopentane in the bulk polymer
Ws = mass fraction of isopentane in the solid phase at the solid-vapor interface (at the surface of the particle)
The mass fraction of isopentane in the bulk polymer phase is calculated from the solid-phase mass balance equations. The mass fraction of isopentane at the surface of the particle is calculated based on phase equilibrium (assuming the mass-transfer resistance in the vapor phase is very small compared to the resistance in the polymer phase). At the surface of the particle, the fugacity of isopentane (C5) in the liquid phase must be equal to the fugacity of isopentane in the vapor phase, so
$$x_{C5} = \frac{\phi^v_{C5}\, y_{C5}}{\phi^l_{C5}}$$
where:
xC5 = mole fraction of isopentane in the polymer at the surface
yC5 = mole fraction of isopentane in the vapor phase
φvC5 = vapor-phase fugacity coefficient of isopentane (calculated by Aspen Properties)
φlC5 = liquid-phase fugacity coefficient of isopentane (calculated by Aspen Properties)
Appropriate conversion formulas are used to convert between mole fractions and mass fractions as needed. To simplify the calculation procedure, the liquid fugacity coefficient at axial position "i" is calculated using the surface concentrations at position "i-1". The surface concentration in the first cell is set equal to the bulk phase composition (i.e., we assume the particles start with a homogeneous composition). This assumption avoids the need to solve each row in the mass balance through an iterative technique.
- Pressure: The outlet pressure is specified. The model calculates the pressure drop across each incremental cell using the Ergun equation for packed beds, whose standard form is
$$\frac{\Delta P}{h} = \frac{150\,\mu\,(1-\varepsilon)^2\,U_0}{\varepsilon^3 d_p^2} + \frac{1.75\,(1-\varepsilon)\,\rho_g\,U_0^2}{\varepsilon^3 d_p}$$
where:
ΔP = pressure drop (Pa)
μ = vapor-phase viscosity (from Aspen Properties)
U0 = superficial velocity of gas (m/s)
ρg = vapor-phase density (kg/m3) (from Aspen Properties)
dp = particle diameter (m), a model input
gc = gravitational constant, 9.806 m/s2 (appears when the equation is written in engineering units)
- Temperature: This example assumes isothermal operation. This assumption is reasonable if the unit has sufficient heat transfer area to replace heat lost to the heat of vaporization of the solvent.

More Ways to Use It
Determination of Diffusion Coefficients
Diffusion coefficients for polymer/solvent systems depend on many factors, including the crystallinity of the polymer and the polymer morphology (for example the tacticity and branching frequency). In practice, reported diffusion coefficients are often unreliable. This spreadsheet can be used to estimate diffusion coefficients to match experimentally determined drying performance (either from an industrial dryer or from a lab-scale drying column).
Training
The Excel model is ideal for non-expert users. Therefore, it provides a cost-effective way to give a wide group of technologists a "feel" for how the moving-bed dryer behaves. The model can be used to study interactions between key operating conditions and the dryer performance.
Equipment Rating
This Excel model can be used to rate existing equipment.
For example, the model can be used to determine the maximum solids throughput achievable while maintaining constraints on the solvent concentration in the dried polymer.
Equipment Sizing
This model could be used to find the appropriate dryer diameter and height to meet solvent recovery requirements while meeting pressure drop constraints.
Suggestions to Extend It
Extension to Multi-component Drying
In this example we ignore all organic compounds other than the solvent. In practice, however, the polymer may contain other organic compounds, including unreacted monomer and reaction by-products. Several of these compounds are already defined in the Drying.aprbkp Aspen Properties file. The Excel spreadsheet could be modified to include additional volatile compounds. However, the mass balance for each additional volatile compound must be solved iteratively using the Excel Solver or, alternatively, using the Goal Seek function in Excel to minimize the mass balance errors.
Energy Balance
The example could easily be extended to calculate the overall heat duty of the dryer (based on the enthalpies and flow rates of the feed and product streams). Many industrial dryers operate in an essentially adiabatic manner. Heat to vaporize the solvent is supplied by preheating the purge gas. In these types of equipment the gas feed rate and temperature must be adjusted to ensure sufficient heat to dry the polymer. The model could be extended to include a cell-by-cell energy balance to calculate the temperature profile in the bed. This extension, however, is not trivial, since it requires an iterative solution to calculate the temperature at each point in the dryer based on the enthalpy balance across the cell.
Finer Resolution
The present example divides the dryer into 25 equal increments. The number of increments could be increased by changing the Nincrements cell and by adding additional rows to the calculation table near the bottom of the Specifications sheet. Although this will increase the resolution of the model, it will increase the time required to reach a solution. The Visual Basic macro "CopyCase" (used by the Save Case button) must be updated if additional rows are added to the calculation section of the spreadsheet. Keywords: Aspen Properties Excel example, Aspen Properties Excel template References: None
Problem Statement: RCSTR does not seem to have a built-in heat transfer capability. How do you model heat transfer in a CSTR with a jacket?
Solution: You can add a heat stream to a CSTR to model heating or cooling in a jacket. The CSTR can also use a Utility. See the attached file, which can be opened in Aspen Plus V7.3 and higher. A heat stream is an information stream with a duty. The duty is passed to the CSTR as one of its two thermodynamic specifications. The heat stream can come from a HEATER, or the duty can be specified in the stream. You can use a design specification to vary flows. In the example file there are three RCSTR blocks: one uses a Utility, one uses a heat stream, and one uses a heat stream connected to a HEATER block. Keywords: None References: None
Problem Statement: How is the enthalpy calculated for coal?
Solution: Properties of nonconventional components are calculated by the physical property system using component attributes. For nonconventional components, the enthalpy model has to be specified in the Properties environment under Methods | NC Props | Property Methods. HCOALGEN is a general enthalpy model for coal. It includes correlations to calculate the heat of combustion, the standard heat of formation, and the heat capacity. You can use different option codes to select which correlations to use. The example in this solution uses the following option codes: The first option code (6) specifies that the heat of combustion is a user-specified value. This value has to be specified with the HCOMB parameter; in this example a value of 13416 Btu/lb is used. The second option code (1) specifies the heat-of-combustion-based correlation to calculate the heat of formation. The third option code (2) specifies a cubic temperature equation to calculate the heat capacity. The attached Aspen Plus file "Coal_Enthalpy_Calculations_Example.bkp" contains a feed stream with coal, which is fed to an RYield reactor that simulates a pyrolysis reactor. The reactor yields are based on experiments at 1 atm. The attached Excel spreadsheet calculates the enthalpy of the stream based on the following steps: 1. Calculate the heat of combustion. In this case this is simply the value of the HCOMB parameter. 2. Calculate the heat of formation from the heat-of-combustion-based correlation (equation image not reproduced here), in which the w's are the weight fractions of all the oxidized elements on a dry basis. These are the attributes defined by ULTANAL and SULFANAL. 3. Calculate the sensible heat by integrating the cubic correlation for the heat capacity from 25 C to the stream temperature. 4. Correct for moisture content: the enthalpy of the stream is the mass-weighted combination of the dry-coal enthalpy and the enthalpy of the free moisture, where H_water is the enthalpy of water at the process temperature. Keywords: Solid, char References: None
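As a minimal sketch of steps 3 and 4 above, assuming a cubic heat-capacity correlation and the mass-weighted moisture blending just described (the coefficient values and function names are placeholders, not the HCOALGEN parameters; confirm the blending rule against the attached spreadsheet):

T_REF = 298.15  # K, corresponds to the 25 C reference temperature

def sensible_heat(T, a, b, c, d):
    """Analytic integral of Cp = a + b*T + c*T**2 + d*T**3 from T_REF to T (step 3)."""
    F = lambda t: a * t + b * t**2 / 2 + c * t**3 / 3 + d * t**4 / 4
    return F(T) - F(T_REF)

def stream_enthalpy(T, h_formation, cp_coeffs, w_moisture, h_water):
    """Dry-coal enthalpy plus the moisture correction of step 4.

    h_formation comes from the heat-of-combustion-based correlation
    (step 2); h_water is the enthalpy of water at the process temperature.
    """
    h_dry = h_formation + sensible_heat(T, *cp_coeffs)
    return (1.0 - w_moisture) * h_dry + w_moisture * h_water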
Problem Statement: Is it possible to model polyimides in Aspen Plus?
Solution: Polyimides are polymers that contain imide linkages, typically with the structure shown in the original image, where "R" and "X" are typically aromatic structures such as biphenyl, biphenyl carbonate, naphthalene, etc. Polyimides are produced by reactions between diamines and dianhydrides in a two-step reaction, generating water as a reaction by-product. Typically, polyimides are produced in a solution with a carrier solvent to keep the overall viscosity reasonably low.

Component Characterization
Aspen Polymers® represents polymeric components using a patented segment-based approach. Each polymer component is composed of one or more "segments" which represent characteristic repeat units, end groups, and branch points in the polymer chain. The physical property and reaction kinetics models track the flow rates of segments and the moments of the polymer molecular weight distribution. Using this technique, it is possible to characterize the average size and composition of the polymer molecules in the distribution. Polymer molecules can be segmented (divided into segments) in several different ways. The manner in which the polymers are segmented can have a major influence on model development, including:
- The number of physical property parameters required to predict final product properties and phase equilibrium;
- The accuracy of physical property estimations;
- The number of reactions which must be defined to fully represent the polymerization kinetics, especially for copolymers; and
- The level of detail of the model (for example, the ability to distinguish different types of similar reactions).
Based on these criteria, we characterize the polymer using segments based on the end groups and repeat units corresponding to the monomers used in the process. The segments are defined according to the monomers from which they derive. For example, the nitrogen atoms are grouped with the diamine series of segments and the oxygen atoms are always grouped with the dianhydride series of segments. This convention makes it easy to calculate the conversion of the monomers, the concentration of various types of functional groups, and the total imide fraction. This method of segmenting the polymer is designed to minimize the number of segments and reactions required to fully characterize the polymer. Additional segments can be added to the model to account for side reactions such as branch formation.

Components in the example model:
- PMDA: pyromellitic dianhydride (no databank ID)
- BPTDA: biphenyl tetracarboxylic dianhydride (no databank ID)
- DADMBP: 4,4'-diamino-3,3'-dimethyl biphenyl (no databank ID)
- POLYMER: polyimide (databank ID POLYMER; see the segments described below)
- SOLVENT: carrier solvent
- H2O: water (databank ID H2O)

The segments described below are associated with the diamine monomer and the pyromellitic dianhydride monomer. For simplicity, the BPTDA monomer is not considered in this example; however, a similar set of segments could be defined for BPTDA. After two sequential reactions the dianhydride groups are converted to two diacid groups. When these ring-opening reactions occur, the resulting diacid can be formed in two configurations: cis (acid groups on carbons 2 and 6) or trans (acid groups on carbons 2 and 5). These could be "lumped" into a single segment in the model or differentiated into two different segments, as in this example. The lumped model is simpler to develop since there are fewer components and reactions.
Differentiating the segments, as in this example, allows the model to keep track of the structure of the polymer in more detail. This could be important if the characteristics of the intermediate polyamic acid prepolymer (such as viscosity) depend on the relative amounts of cis and trans segments. Several "component attributes" are used to keep track of the properties of the polymer component. These include the moles of each type of segment in the polymer, the number-average chain length (DPN), the number-average molecular weight (MWN), and others. The segment series themselves are listed in the original tables "4,4' Diamino 3,3' Dimethyl Biphenyl Monomer and Derivative Segments" and "Pyromellitic Dianhydride and its Derivative Segments" (structure images not reproduced here).

Physical Properties
The physical properties of the monomers in this example are estimated from their molecular structures using standard features of Aspen Plus. The accuracy of the model predictions could be improved by fitting the property models to experimental data, especially for density and vapor pressure. The segments used in this example are not available in the standard segment databank in Aspen Polymers. However, we can define the functional groups in each segment using the Van Krevelen functional groups. The physical property models in Aspen Polymers use this structural data to estimate key physical properties such as density, enthalpy, heat capacity, and viscosity. The original tables "Van Krevelen Groups in the DADMBP Segment Series" and "Van Krevelen Groups in the PMDA Segment Series" (not reproduced here) document how each segment is divided into Van Krevelen functional groups such as ~NH~, ~NH2, ~N<, >CArH, >CAr, ~C=O~, ~COOCO~, and ~COOH. We use estimates for all of the properties except the transition temperatures of the imine repeat unit; these are drawn directly from the Van Krevelen reference and are entered into the model as user-specified property data.

Reaction Kinetics
Polyimides are typically produced in a two-step process. In the first stage, the dianhydride and diamine monomers are reacted in a solvent at relatively low temperatures to produce a polyamic acid precursor (the amidization reactions, shown in a figure in the original solution). The resulting polyamic acid is soluble and relatively easy to handle; for example, it can be cast into sheets or spun into fiber. These products are converted into polyimides through heat treatment or by catalytic dehydration. During this stage the amic acid sites undergo cyclization reactions to produce imide groups. This reaction releases water, which must be removed to reach high imide fractions and to prevent unwanted side reactions such as hydrolysis. In addition to the major cycloimidization reactions, there are many analogous reactions involving the amide groups in PMDA-E, DMBP-A-E, and DMBP-I-E segments. These additional reactions are not considered in this example since they represent a small fraction of the cycloimidization reactions; however, these reactions can be added to the model if necessary. The water generated in these reactions must be removed to reach high conversion and to avoid hydrolysis reactions, which include the reverse of the reactions above as well as further hydrolysis reactions that have been excluded here for simplicity.
The hydrolysis reactions themselves are not included in the example; the model can be extended to include them (the necessary segments are included in the example). We can imagine additional side reactions may occur, especially at higher temperatures. The model presented in this example can be extended to include additional components and side reactions. As a first approximation, we have assumed that the reaction rates are independent of the chain length, i.e., the reactivity of the functional groups is the same in the monomers and the corresponding polymer end groups. In reality, the reactions may slow down as the chains become very large, since the mobility of the end groups decreases with molecular size. For simplicity, we have treated each reaction as first order with respect to each reacting component and segment. This approach is reasonable as a first approximation since it guarantees that the predicted reaction rates are feasible (for example, the calculated reaction rate approaches zero if the concentration of any of the reactants approaches zero).

User Property Routine
The example includes a user property routine, USRPIM.F, which is used to calculate the following analytical properties (concentrations in mmol/kg):
- ACID: total concentration of acid groups (dry basis*)
- ANHYD: total concentration of anhydride groups (dry basis)
- AMIDE: total concentration of amide groups (dry basis)
- AMINE: total concentration of amine groups (dry basis)
- IMINE: total concentration of imine groups (dry basis)
- X-ANHYD: anhydride conversion (unitless)
- X-AMINE: amine conversion (unitless)
- X-IMINE: imine conversion (imine fraction, unitless)
*Dry-basis concentrations are concentrations in solvent- and water-free polymer (including monomers).
User prop-set properties can be included in the stream and block reports or used in sensitivity studies, design specifications, etc. These properties make it easier to interpret model results and to compare the model to measured analytical properties.

Model Results
In our example we simulate the production of polyimide film in a two-stage process. The flowsheet in this example is for demonstration purposes only; it does not reflect any particular industrial process. The monomers are fed in a 1:1 molar ratio in an appropriate solvent into a continuous stirred-tank reactor (CSTR). The reactor has a residence time of approximately two hours. It operates near room temperature and at slightly positive pressure. Polymer solution from this reactor flows to a multi-tube heat exchanger / reactor with a residence time of approximately thirty minutes. The polymer is heated from 25°C to 120°C in the heat exchanger. In these reactors the monomer is converted to a polyamic acid, with an amine end conversion of 94%. The polyamic acid is extruded into film and heat-treated to form polyimide film. In this example we simulate the heat-treating process using a batch reactor model. The model has a continuous feed of nitrogen and a continuous vent to simulate the drying of the film in an oven. The batch reactor model is convenient for this purpose because it allows us to specify the residence time, pressure, and temperature for the heat treatment. The imine fraction and other properties can be plotted as functions of time. A more detailed model could be developed in Aspen Custom Modeler to account for mass-transfer limited solvent and water loss from the surface of the film, fibers, or molded parts.
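To make the kinetic assumption stated above concrete, each reaction j can be written with a rate of the general form

$$r_j = k_j \exp\!\left(-\frac{E_j}{R\,T}\right) C_{A}\, C_{B}$$

where $C_A$ and $C_B$ are the concentrations of the two reacting species (component or segment) in reaction j. This is an illustrative rendering of the first-order-in-each-reactant assumption; the exact concentration basis and units follow the segment-based power-law model in Aspen Polymers.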
Polyimide Process Model
The base-case model results are summarized in the mass balance table of the original solution ("Mass Balance for Polyimide Process", not reproduced here). Nitrogen and other trace components are excluded for brevity. These results assume heat-treating at 310°C and 0.1 bar for one hour. Under these conditions the final product has a high imide fraction and high molecular weight. Cycloimidization is favored as the temperature increases. The example includes a sensitivity study in which the heat treatment temperature is adjusted over a range of temperatures. As shown in the original plot ("Predicted Imine Fraction vs Annealing Temperature", for 60 minutes annealing time), the imine fraction approaches unity at high temperatures.

Conclusions
This example demonstrates how to set up a simple model to predict mass and energy balances for polyimide processes using Aspen Polymers in the Aspen Plus environment. We used the Polymer-NRTL option set to predict physical properties. Special care was taken to define the segments in a manner that minimizes the number of segments and reactions defined in the model. Since the requisite segments were not available in the built-in databases, the structure of each segment was defined using Van Krevelen functional groups. The segment-based power-law model was used to define the reaction kinetics for polyamic acid formation and subsequent cycloimidization. We simulated the initial formation of the polyamic acid followed by heat treatment to form a polyimide film. The example includes a user prop-set property model to predict analytical properties such as the concentration of each type of functional group, the conversion of amine and anhydride groups, and the imide fraction. This model could be extended to include other analytical properties. This type of model could be used to evaluate the influence of operating conditions on process performance and final product quality. It could be an important tool in debottlenecking the process or developing new product grades. The Aspen Custom Modeler package could be used to extend this basic model to account for mass-transfer limitations and/or spatial variation within the film.

Keywords: polymers
References: The model presented here is intended to demonstrate the power and flexibility of Aspen Polymers. Reaction kinetic parameters and physical properties are based on estimates; further work would be required to develop a fully validated process model. This example is loosely based on the following sources:
Grenier-Loustalot, M.-F., F. Joubert, and P. Grenier. "Mechanisms and Kinetics of the Polymerization of Thermoplastic Polyimides. I. Study of the Pyromellitic Anhydride / Aromatic Amine System." J. Applied Polymer Science Part A: Polymer Chemistry, Vol. 29, 1649-1660 (1991).
Grenier-Loustalot, M.-F., F. Joubert, and P. Grenier. "Mechanisms and Kinetics of the Polymerization of Thermoplastic Polyimides. II. Study of 'Bridged' Dianhydride / Aromatic Amine Systems." J. Applied Polymer Science Part A: Polymer Chemistry, Vol. 31, 3049-3063 (1993).
Stoffel, N. C., E. J. Kramer, W. Volksen, and T. P. Russell. "Solvent and Isomer Effects on the Imidization of Pyromellitic Dianhydride-Oxydianiline-based poly(amic ethyl ester)s." Polymer, Vol. 34, No. 21, 4524-4530 (1993).
Ullmann's Encyclopedia of Industrial Chemistry (5th ed.), Vol. A21, VCH Publishers (1992).
Problem Statement: How are utilities specified?
Solution: A Utility is a feature introduced in Aspen Plus 12 that can be used to calculate the energy consumption of individual unit operations, energy costs, and/or how much utility of each type is used by the process (e.g., high pressure, medium pressure, and low pressure steam). Rather than actual material streams, Utilities are implemented as variable utilities in Aspen Plus: it is assumed that there is a large source of the utility available for use, and each unit operation computes its usage based on the extent of heating/cooling required by the block. You can assign a utility to any block where Duty or Power is either specified or calculated (except MHeatX). The utilities available are: Coal, Gas, Refrigerant, Water, Electricity, Oil, Steam, and General (of any specified composition). Specify the utility type, the price, and a specification: either a heating/cooling value or the inlet/outlet conditions of the utility.

How Utilities were modeled in Aspen Plus 11.1
In version 11.1, a utility was modeled using a Heater block in conjunction with Design Specs and possibly a Calculator block: 1. Define a second Heater to represent the utility side of the process, then connect the two Heater blocks with a heat stream. 2. Define the IN feed stream and the utility block conditions. 3. Calculate the required utility (IN) flow by using a Design Spec that varies the inlet flow to meet a specified outlet condition. Any cost calculations were performed through the Calculator block.

How Utilities are modeled in Aspen Plus 12.1
To calculate the required utility flow for a given process in Aspen Plus 12.1, no physical changes to the flowsheet are necessary. Simply follow these steps: 1. In the Utilities folder, create a new object, U-1. 2. Choose the utility type from the eight selections provided. 3. For utility cost calculations, enter either the Purchase price or the Energy price. 4. Set the Calculation option to Specify heating/cooling value (default) or Specify inlet/outlet conditions (set values on the State Variables form) and supply the related parameters. 5. Open the block's Input form and select the Utility sheet. 6. Choose U-1 from the Utility ID dropdown list. Keywords: None References: None
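In either calculation option above, the utility consumption reported by a block follows from a simple energy balance:

$$\dot{m}_{\text{utility}} = \frac{|\dot{Q}_{\text{block}}|}{\Delta h}$$

where $\Delta h$ is the specified heating/cooling value, or the enthalpy difference between the specified inlet and outlet utility conditions. The utility cost is then the flow multiplied by the purchase price, or the duty multiplied by the energy price.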
Problem Statement: Example of how to quickly import your company's MTOs into ACCE by using an Excel VBA macro.
Solution: This solution is a proof of concept showing how to automate the process of entering your company's MTOs into ACCE. The attached example code illustrates how to parse your piping MTOs and automatically populate the import/export spreadsheet template. Keywords: Export/Import feature, VBA automation, MTO data References: None
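The attached proof of concept above is written in Excel VBA; the same parse-and-populate flow is outlined below in Python for readers who prefer to prototype outside Excel. Every file name and column name here is hypothetical and must be mapped to your own MTO layout and to the import/export template exported from ACCE.

import pandas as pd

# Hypothetical file names: your raw MTO dump and the template exported from ACCE
mto = pd.read_excel("piping_mto.xlsx")
template = pd.read_excel("acce_import_template.xlsx")

# Hypothetical mapping from your MTO headers to the template headers
column_map = {
    "Line Size": "Diameter",
    "Material Code": "Material",
    "Qty (m)": "Length",
}

# Rename the MTO columns, keep only the mapped ones, and append to the template
rows = mto.rename(columns=column_map)[list(column_map.values())]
filled = pd.concat([template, rows], ignore_index=True)
filled.to_excel("acce_import.xlsx", index=False)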
Problem Statement: Best practices and guidelines for setting IO Flags for User Defined Entries in APC Builder
Solution: Engineers can add user-defined entries to an APC controller. These entries can be used to augment the interaction between the controller and the DCS, and they can be used as input, output, or scratch-pad variables for User Calculations. All entries, including user-defined entries, have a set of properties that govern how the APC Web Viewer and the APC Controller present and interact with those entries. One of these properties is IO Flags, which is explained in detail in this KB article. **Note: The IO Flags for all built-in parameters (measurement, setpoint, limits, etc.) are automatically set by the engine and cannot be changed by the user.

About the IO Flags
The IO Flags control whether the entry value is read from or written to an IO Tag mapped to that entry. Each entry in an APC controller has some combination of IO Flags set, or possibly no IO Flags set. The IO Flags also control whether value-change operator messages are displayed when the entry value changes, and whether changes to the entry value are historized in the online application history of the controller.

How reads happen in an APC controller
Any entry mapped to an IO Tag will be read at the start of the cycle if and only if that entry has the IsInput IO Flag set to TRUE.

How writes happen in an APC controller
In almost all cases an entry mapped to an IO Tag will be written to the DCS if and only if its value changes, unless it has the IsReadOnly IO Flag set. It is possible to force an unchanged value to write every cycle, but only if the IsConstant IO Flag is marked TRUE (see IsConstant and IsPublish in the next section).

The IO Flags and their meaning
IsConnectedDCSRead: This flag is automatically set or reset at runtime. A value of TRUE indicates that the entry is connected to an IO Source tag for reading.
IsConnectedDCSWrite: This flag is automatically set or reset at runtime. A value of TRUE indicates that the entry is connected to an IO Source tag for writing.
IsConnectedHistory: This flag is automatically set or reset at runtime. A value of TRUE indicates that the entry is connected to a local history store and will be historized by the RTE environment.
IsConstant: If TRUE, the entry is not changeable by reading an IO tag. This does not mean that the value of the user-defined variable cannot be changed online: a user-defined variable with IsConstant set to TRUE can still be changed by the engine and by user-defined internal calculations. IsConstant can be used in conjunction with IsPublish. IsConstant and IsInput cannot both be true at the same time.
IsHistorized: If set to TRUE, the entry will be connected to a local history store at runtime.
IsInput: If TRUE, the entry value is refreshed from an IO tag (if one is configured) at runtime. IsConstant and IsInput cannot both be true.
IsIoTagRequired: When set to TRUE, the application configuration software may require an IO tag to be configured for this entry. APC Builder automatically adds the user-defined variable to the Configure Connections page.
IsOperatingValue: If TRUE, the runtime entry value will be preserved when the application is redeployed, i.e. any change to the value of the user-defined variable made offline is disregarded when the application is redeployed. IsTuningValue and IsOperatingValue cannot both be true.
IsPublish: When set to TRUE, the entry value will be written to the IO Source (if there is an IO tag configured) at runtime. Can only be true if IsConstant is true.
IsReadOnly: If TRUE, the entry value is not changed by the application at runtime. Can only be TRUE if IsInput is TRUE.
IsReadWrite: If TRUE, the entry value may be changed by the application at runtime. Note: IsReadWrite is not used by this version of the software.
IsTuningValue: If TRUE, the entry will be replaced when the application is redeployed, i.e., any change to the value of the user-defined variable that is made offline replaces the value in the online controller when the application is redeployed. IsTuningValue and IsOperatingValue cannot both be TRUE.
LogChange: This parameter is not user-changeable. If TRUE, a message is logged whenever the entry is changed by a user at runtime.
IsOperatingValue vs IsTuningValue
The distinction between IsTuningValue and IsOperatingValue is that, on a redeployment of the application, an entry with IsOperatingValue receives its initial value from the already deployed version of the application, overriding whatever entry value had been set in APC Builder, whereas an entry with IsTuningValue receives its initial value from the new version of the application being deployed, i.e., the value as set in APC Builder.
***Each user-defined entry should have one of these flags set TRUE: IsConstant, IsOperatingValue, or IsTuningValue.
Mapping of DMCplus CCF entry keywords and APC IO Flag combinations
This section describes the minimum IO Flag settings needed to achieve the same effect as the various DMCplus CCF keywords. Not all keywords have a mapping (usually because they are handled some other way in APC controllers). You can set additional flags to achieve additional effects (such as logging changes or capturing a history of changes). Some IO Flags are mutually exclusive (see the description of the IO Flags in the previous section for examples).
Note: To ensure proper operation, each user-defined entry should have one of these flags set TRUE: IsConstant, IsOperatingValue, or IsTuningValue.
**By default, all user-defined entries have the IsInput and IsTuningValue flags enabled. It is strongly recommended that users review the IO Flag settings for all user-defined variables before deploying the controller online.
The table below summarizes the minimum required set of IO Flag settings needed to replicate the behavior of the most commonly used keywords from the traditional DMCplus environment (blank cells mean the flag is not part of the minimum mapping). A full list of DMCplus keywords, with additional details about each individual keyword, can be found in the glossary that follows the table.

Keyword   IsInput  IsConstant  IsReadOnly  IsPublish  IsTuningValue  IsOperatingValue
WRITE        F                     F
AWRITE       F          T          F           T            T
PWRITE       F                     F
LWRITE       F                     F
RDWRT        T                     F
READ         T                     T
LOCAL                                                                       T
CONFIG                  T                      T            T
CONS                    T                      T            T
Glossary: DMCplus Keywords
AWRITE - Intent: Always write this value to a tag in the PCS (Process Control System). Equivalent IO Flags: IsConstant, IsPublish, IsTuningValue. Note: any entry mapped to an IO Tag without IsConstant and IsPublish set TRUE is only written when its value changes.
BUILD - Intent: Reserved keyword for Build-only entries. Equivalent IO Flags: not applicable.
CALGET - Intent: Input Calculation (only allowed in the Calculation section). Equivalent IO Flags: not applicable (the input user calculations take the place of CALGET entries).
CALPUT - Intent: Output Calculation (only allowed in the Calculation section). Equivalent IO Flags: not applicable (the output user calculations take the place of CALPUT entries).
CONFIG - Intent: Configuration parameter set once at CCF load time. Equivalent IO Flags: IsConstant, IsPublish, IsTuningValue.
CONS - Intent: Constant value, set once at load time (not changeable). Equivalent IO Flags: IsConstant, IsPublish, IsTuningValue.
INIT - Intent: Initialize the value associated with the PCS (Process Control System) tag at initialization time. Equivalent IO Flags: not applicable; INIT is ignored in DMCplus.
LOCAL - Intent: Local value in DMCplus shared memory (changeable by View, Calculations and Transforms). Equivalent IO Flags: could be none; could be IsOperatingValue.
LWRITE - Intent: Low-priority write to a PCS tag after all other writes are finished. Equivalent IO Flags: not applicable; APC does not have multiple write lists.
PWRITE - Intent: High-priority write to a PCS tag before any other writes occur. Equivalent IO Flags: not applicable; APC does not have multiple write lists.
RDWRT - Intent: Read and write from/to a PCS tag (middle priority). Equivalent IO Flags: at a minimum use IsInput so that reads happen.
READ - Intent: Read only from a PCS tag (middle priority). Equivalent IO Flags: IsInput and IsReadOnly.
WRITE - Intent: Write only to a PCS tag (middle priority). Equivalent IO Flags: do not specify IsInput or IsReadOnly.
XFORM - Intent: Transform. Equivalent IO Flags: not applicable; transforms are specified in the controller configuration, not via IO Flags.
Keywords: APC Builder User-defined parameters IO Flags Communications References: None
Problem Statement: Can you use both synchronous and asynchronous scanning for a transfer record?
Solution: Nothing in Aspen InfoPlus.21 prevents you from adding a solicited IO transfer record (defined by IoGetDef, IoLongTagGetDef, IoLLTagGetDef, or IoGetHistDef) that has the field IO_ASYNC? set to YES to a scheduling record defined by ScheduledActDef. However, several clients have reported scanning problems that were solved either by removing the transfer record from the scheduling record or by setting the field IO_ASYNC? to NO. AspenTech strongly recommends using either synchronous scanning (i.e., using a ScheduledActDef record to schedule the transfer record) or asynchronous scanning (i.e., setting the field IO_ASYNC? to YES), but not both.
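To audit a system for transfer records configured for asynchronous scanning, an Aspen SQLplus query along the following lines can help (a sketch; it lists only IoGetDef records, so repeat it for the other transfer definition records as needed):

SELECT NAME, "IO_ASYNC?" FROM IoGetDef WHERE "IO_ASYNC?" = 'YES';

Any record returned here should then be checked against your ScheduledActDef scheduling records to ensure it is not also being scanned synchronously.
Keywords: synchronous asynchronous ScheduledActDef IO_ASYNC? References: None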
Problem Statement: For the Aspen InfoPlus.21 Product Family, AspenTech releases two types of patches. The first is commonly referred to as an Emergency patch. Each one is specific to a particular product and is made available to fix one or more specific problems that AspenTech has decided need to be addressed as soon as possible. When viewed from the AspenTech support website these patches have the word Engineering in their title, for example: Aspen InfoPlus.21 V7.3.0.4 Server Engineering Release IP130206Z (March 2013). The second is referred to as a cumulative patch and contains all fixes from all Engineering patches released since the specific version was shipped on DVDs, as well as often some other fixes. An example of this might be: Aspen InfoPlus.21 Server V7.3 Cumulative Patch4 (V7.3.0.4) - January 2013. Special Note: A Cumulative Patch is built using a Windows MSI installer package, whereas an Engineering patch is NOT an MSI patch. How does this affect a database administrator? How do they know which patches are available to be installed, and which ones should be installed, possibly at the same time?
Solution: There are two tools available to a database administrator. One is the monthly newsletter distributed by AspenTech, which, amongst other things, lists all patches released in the past few weeks. The other is the Aspen Support Website http://support.aspentech.com/ that makes all patches for all products available to qualified users. From the AspenTech Support website, clicking on the left side menu option 'Patches' brings up the page below. As indicated, it lists all current patches for all products. However, there is also a black button with white writing saying "aspenOne Update Center". This works in the same way as Microsoft Windows Update by scanning your system for all installed products. It then lists any cumulative patches that are available but not yet installed. Therefore, the recommendation is to first install all of the cumulative patches found by the aspenONE Update Center. However, this is not the final step. As mentioned above, only cumulative patches are MSI patches, and only MSI patches can be seen through the aspenONE Update Center. This means that even after updating to the latest cumulative patches, a product-by-product search still needs to be performed to pick up any Engineering patches that were released after the cumulative patches. SPECIAL NOTE: It is very important that all cumulative patches of the same version are installed at the same time. Several of them could be dependent on others. For example, AspenTech released several V7.3.0.4 cumulative patches, such as Aspen Process Explorer, Aspen Process Data, Aspen Calc, Aspen Production Record Manager, etc. If you look at the Release Notes for these patches, you will see that the instructions include the note that all patches should be installed together. Failure to do so can lead to unexpected results. Keywords: None References: None
Problem Statement: When using the AspenTech-recommended method of backing up history filesets via TSK_HBAK, the backed-up filesets may be written to the 'Active', 'Shifted', or 'Changed' directory, depending on their status at the time Tsk_Hbak executes. To avoid excess disk usage there is a cleanup utility for the 'Active' directory, but purposefully there is NOT one for the 'Shifted' or 'Changed' directories. It is therefore possible for the 'Shifted' directory to contain many fileset backups that also exist in the 'Changed' directory: a fileset is automatically backed up to the 'Shifted' directory when it shifts, and if there is a change at a later date, a newer copy is then saved in the 'Changed' directory. How can I avoid using excess disk space by having filesets in the 'Changed' directory while an older version remains in the 'Shifted' directory?
Solution: Inside the record defined by HistoryBackupDef are fields such as "Active_Location", "Shifted_Location" and "Changed_Location". The suggestion is to make the fields Shifted_Location and Changed_Location point to the same location. Every time there is a shift, Hbak copies the fileset to the Shifted_Location, and every time there is a change, Hbak copies the fileset to the Changed_Location. By making those two locations the same, Hbak overwrites the fileset generated by the shift with the fileset generated by the change. Similarly, if another backup needs to be made due to another change, it again overwrites the previous copy with the latest one.
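The two fields can be changed with the Aspen InfoPlus.21 Administrator, or with an Aspen SQLplus update along these lines (a sketch; the record name 'HistoryBackup1' and the path are illustrative and must be replaced with your own):

UPDATE HistoryBackupDef
   SET SHIFTED_LOCATION = 'D:\HistoryBackups\Filesets',
       CHANGED_LOCATION = 'D:\HistoryBackups\Filesets'
 WHERE NAME = 'HistoryBackup1';

Keywords: None References: None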
Problem Statement: After an unexpected shutdown, such as a power cut, or somebody rebooting without first doing a clean shutdown of the database, a restart of Aspen InfoPlus.21 may show that one or more of the Aspen History Repositories failed to restart. This is often indicated by viewing that repository from the Aspen InfoPlus.21 Administrator and seeing the repository and associated filesets with a red icon. What would be a recommended way to first try to recover?
Solution: All repositories have an 'active' fileset. This active fileset is continuously being accessed by applications such as Aspen Cim-IO trying to write to it, or Aspen Process Explorer, Aspen SQLplus, Aspen Calc, etc., reading data from it. Therefore we say that this fileset is always 'open'. With unexpected shutdowns, this fileset, as well as the recently closed filesets, has the potential for some form of minor corruption. So this is where we should start our recovery attempts.
HOWEVER: DO NOT begin the recovery by forcing a manual fileset shift. All this does is close a possibly already corrupted fileset, and when the repository does restart, performance may be adversely affected, especially if a large Event.Dat existed.
ALSO: At least in the initial stages, DO NOT delete the Cache.dat, the Event.dat, or the Save.dat. Deleting any one of them always results in lost data AND it rarely fixes the problem.
The first thing to attempt is to 'repair' the active fileset, as well as the two or three most recently closed filesets. To do this, open the Aspen InfoPlus.21 Manager, click on the 'Actions' drop-down menu, and choose the option at the bottom: "Repair Archive". Here you will be presented with a drop-down list of your repositories. Select the one you want to repair and click 'Next'. You will then see a list of all filesets in that repository, with the active one at the top and the rest in chronological order. In this example #3 is the 'active' fileset, and #2 then #5 are the most recent ones to be shifted out of, etc. The recommendation in this example would be to put a checkbox next to filesets #3, #2, and #5 (and maybe #4) and click on the 'Next' button. Accept the defaults to any questions that come up and WAIT. No matter how long it takes, it will definitely finish, and a log can be viewed.
Once finished, go to the Aspen InfoPlus.21 Administrator, right-click on the repository, and click 'Start'. If it still fails to start, try restarting Aspen InfoPlus.21. If it still fails to start, contact your local Aspen Support Group, ready to show them what you see in the Administrator, what you saw in the log from the Repair Archive, AND what you see in the Error.Log located in the specific repository's Windows directory. Keywords: References: None
Problem Statement: The MSMQ Publisher may give a warning about full queues if the subscriber(s) are not available. How can queue size be managed?
Solution: Replication is designed to self-monitor, adjusting its operation based on available resources, so ideally resource parameters should not be manually configured. However, should changes need to be made, the following information is available.
By default, the machine quota is typically set to 1 GB to be used for ALL MSMQ applications, including InfoPlus.21 Replication, and there are several factors to consider when managing MSMQ resources. See Microsoft article MS811056 for more detailed information.
First, note that InfoPlus.21 Replication does not use all the available machine quota to send messages. It only uses up to 10% of the MSMQ quota so that Replication will not overwhelm MSMQ and cause problems for other MSMQ applications in the system. If Replication's queues reach 10% of the quota, it writes warning messages to the log.
Since MSMQ can hold only a limited number of messages, a secondary backup-queue file is established to hold the messages for each subscriber. Replication configures MSMQ to hold its messages for only 3 days; after that, the messages are moved to a so-called dead-letter queue to be removed eventually. As soon as communication is established again, the publisher pulls saved messages from the backup file queue and sends them to the subscribers. Let's look in closer detail.
How TSK_PUBR determines if MSMQ is full
At the beginning of each cycle, TSK_PUBR has to determine if the MSMQ for each subscriber is full. By default, Microsoft MSMQ is typically configured with 1 GB of machine quota, i.e., working disk space. (Microsoft has recommended MSMQ not exceed 1.6 GB.) As noted above, this machine quota is for all queues in the system. To minimize the impact of replication on other MSMQ applications in the system, TSK_PUBR only uses up to 10% of the machine quota.
In addition, the maximum number of messages, which helps determine if the queue is full for each subscriber, is based on a number of factors:
- Assumption of a maximum tag update rate of 2000 tags per second
- Assumption of a maximum of 5 subscribers
- Assumption that MSMQ should only hold 10 minutes of updates per subscriber
- Actual message size in MSMQ is around 2 MB
As a result of the above assumptions, a maximum size of around 100 MB can be allocated for each subscriber; therefore, the maximum number of messages allowed is 50. To prevent exceeding 10% of the machine quota when there are more than 5 subscribers, the maximum number of messages allowed is reduced. The pseudo code to compute the maximum number of messages allowed is:
Max. Messages = 0.5 * Machine Quota / Number of Subscribers / 2MB
IF Number of Subscribers <= 5 AND Max. Messages > 50 THEN
    Max. Messages = 50
ENDIF
For example, with the default 1 GB quota and 5 subscribers, this works out to roughly 0.5 * 1024 MB / 5 / 2 MB = 51, which is then capped at 50.
A typical update requires 60 bytes. Replication typically holds up to 35000 (~2.1 MB) updates before sending them via MSMQ, so one message can potentially contain up to 35000 tag updates. Of course, if there are no pending updates, it will still send however many updates it already holds to the subscriber. (Text messages are bigger and usage-dependent.) Once the queue is full, new updates are put into a disk backup queue.
Processing of Failed Messages
There are some circumstances in which the attempt to send a message to a subscriber will fail. For example, the attempt will fail if the computer system hosting the subscriber is shut down and the MSMQ queue is full holding messages previously sent.
In addition, MSMQ or the communication channel may have irrecoverable errors which can also prevent messages from being delivered successfully. In such cases, TSK_PUBR may queue the message to a backup memory-mapped file dedicated to the subscriber, but it will simply discard the message if the MSMQ is full and the backup memory-mapped file resides on a hard disk that has less than one gigabyte of free space. Later on, TSK_PUBR will de-queue and attempt to re-publish such failed messages, provided they are not too old. TSK_PUBR simply discards messages that are more than 72 hours old. (Note: The default value of 72 can be changed by setting a command-line parameter.) If any messages are discarded, TSK_PUBR logs a message to the log file and increments the current error count that is subsequently written to the IP_#_ERRORS field.
Backup Disk File Size Limited by Available Hard Drive Space
We do not recommend adjusting the MSMQ queue quota size above 1.6 GB, as indicated in the Microsoft article referenced above. The secondary queue is limited by the available disk space. The publisher monitors whether the disk space is approaching the allowable free disk space (by default 1 GB). If the disk is approaching this limit, the publisher discards messages because there is no way to put more messages in the backup queue without the risk of filling up the hard drive. Keywords: overflow limit increase maximum best practice References: None
Problem Statement: AspenTech recommends that application programs or queries use the Aspen InfoPlus.21 Health Monitor to report statuses and process operating conditions. This solution describes how to configure the Aspen InfoPlus.21 Health Monitor and user written applications to allow users to quickly monitor the well being of the application using a traffic light scheme.
Solution: A user-written application is a query, external task, or other program that updates the Aspen InfoPlus.21 database regularly. The application can use the Aspen InfoPlus.21 Health Monitor to report on the status of the application or on some phase of the process. The Aspen InfoPlus.21 Health Monitor also ensures that the application makes regular updates.
AspenTech provides an example application health monitor record defined by IP21HealthDef named ExampleApplicationTest. Using the Aspen InfoPlus.21 Administrator, copy ExampleApplicationTest to create a health monitoring record for your application. For the sake of discussion, name the record AppMonitorTest. Change the 40-character description field (IP_DESCRIPTION) of AppMonitorTest to describe the purpose of the application. This descriptor will appear in the Health Monitor display.
The fields IP_YELLOW_LIMIT and IP_RED_LIMIT in AppMonitorTest are application update timeout limits measured in seconds. For example, if IP_YELLOW_LIMIT equals 120 (seconds) and IP_RED_LIMIT equals 300 (seconds), then the Aspen InfoPlus.21 Health Monitor will generate a yellow warning alarm if your application has not updated AppMonitorTest within the past two minutes and a red severe alarm if your application has not updated AppMonitorTest within the past five minutes.
Your application must update the field IP_VALUE in AppMonitorTest on a regular basis with a value of White (no monitoring), Green (no alarm), Yellow (Warning), or Red (Severe). In addition to updating IP_VALUE, your application may display informational messages by setting the field #STATEMENTS to the number of 60-character statements to show and by writing the messages to the field STATEMENT in that repeat area. The following screen capture shows AppMonitorTest configured as described above.
After configuring the application health monitoring record and changing your application to use the record, you must make AppMonitorTest available to the Aspen InfoPlus.21 Health Monitor. Add a reference to the repeat area #DATA_BASE_RECORDS in the record CustomAppHealthTests to include AppMonitorTest as shown in the following example:
In this example, a query wrote 'Green' to the field IP_VALUE in AppMonitorTest and 'No Alarms to Report' to STATEMENT[1] in the repeat area #STATEMENTS. This causes the Aspen InfoPlus.21 Health Monitor to display:
If more than two minutes elapse since the application updated AppMonitorTest, the Aspen InfoPlus.21 Health Monitor shows the following display:
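If the application is an Aspen SQLplus query, the update described above might look like the following sketch (the record name and message text are illustrative):

-- Report a healthy status and one informational message
UPDATE AppMonitorTest SET IP_VALUE = 'Green';
UPDATE AppMonitorTest SET "#STATEMENTS" = 1;
UPDATE AppMonitorTest SET STATEMENT = 'No Alarms to Report' WHERE OCCNUM = 1;

Keywords: Aspen InfoPlus.21 Health Monitor Custom application health tests IP21HealthDef IP21HealthGrpDef CustomAppHealthTests References: None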
Problem Statement: A mandatory step when upgrading an Aspen InfoPlus.21 system to a new machine, while preserving old history, is to copy the contents of what we call the 'dat' directory from the old to the new machine. This 'dat' directory contains three important files needed for interpreting history: Config.Dat, Map.Dat and Tune.Dat. The reason for this article is to point out that, as per our knowledge-base article, on a NEW system that did not have a pre-V7.2 version of Aspen InfoPlus.21, the 'dat' directory will NOT be located in the same directory structure location as it was on the old pre-V7.2 system. Copying the files to the wrong location could result in a loss of the ability to see old history after the upgrade. REMINDER: This is NOT a problem if upgrading on the same PC - only if upgrading to a new V7.2 system on a new machine.
Solution: Our knowledge-base article 129813 has a link to our V7.2 Installation Guide. It then points you to the Overview section, Chapter 2, pages 12-14 (PDF pages 18-20), where there is a section called "Directory Locations". You will notice on page 13 (PDF 19) the 'dat' directory location for a new V7.2 installation, listed next to the location for a V7.1 or earlier installation.
Specifically, for V7.1 and earlier:
<drive>\Program Files\AspenTech\InfoPlus.21\c21\h21\dat
Whereas for V7.2:
Windows Server 2003: C:\Documents and Settings\All Users\Application Data\AspenTech\InfoPlus.21\c21\h21\dat
Windows Server 2008: C:\ProgramData\AspenTech\InfoPlus.21\c21\h21\dat
Therefore you must copy the 'dat' directory from the V7.1 location above to the new V7.2 location (dependent on your operating system). Keywords: None References: None
Problem Statement: Should you allocate multiple CPUs for Aspen InfoPlus.21 VMWare virtual servers?
Solution: In a dual-socket, dual-core VMWare server, four processors are available for the virtual images on the server. If the server, for example, hosts 10 virtual images, only four of the 10 virtual machines can be active at a time, assuming each of the images has been allocated one CPU. The other six virtual machines have to wait for a physical CPU to go idle. When an image relinquishes its CPU time (perhaps waiting on disk I/O), VMWare can reallocate the CPU to another virtual machine.
A virtual image assigned to multiple cores cannot run until all the CPUs assigned to the image are available. If, for example, nine of 10 virtual images in a four-processor VMWare server have been assigned one CPU, and the tenth image has been assigned four CPUs, then the tenth image cannot run until the other nine single-CPU images are idle. Mixing single, dual, and quad-CPU virtual images on a VMWare server can create major scheduling problems.
CPU contention problems can often be identified by monitoring various CPU performance counters provided by VMWare. AspenTech strongly recommends customers familiarize themselves with the virtual counters and metrics available for monitoring the virtual environment.
When running Aspen InfoPlus.21 on a VMWare server, AspenTech recommends:
1. Allocating only one CPU to the virtual image containing Aspen InfoPlus.21
2. Not allocating multiple CPUs to any of the other virtual images running on the VMWare server
3. Learning the tools VMWare provides to monitor the virtual environment
Keywords: VMWare virtual ESX multiple References: None
Problem Statement: This best practices article discusses configuring Aspen InfoPlus.21 history repositories for optimal performance. Knowledge base solution 129111 (Configuring History Repositories for Optimal Performance (Summary)) summarizes the recommendations of this article.
Solution: Best practices for Aspen InfoPlus.21 history repository configuration include the following:
- Understanding History Repositories
- Comprehending History Data Flow
- Configuring the Number of Cache Buckets for each Repository
- Sizing Cache Buckets for each Repository
- Setting Archive Shift Criteria for each Repository
- Optionally Increasing the Event Queue Buffer Size
- Defining Multiple History Repositories
- Choosing the Correct Drive
Understanding History Repositories
A history repository consists of a history event queue, an archiver process named h21archive, a history cache, and multiple disk-resident archive file-sets. The history event queue allows the Aspen InfoPlus.21 database engine to pass history events to the archiver process for archiving. The archiver process dequeues history events and either drops them into memory-resident history cache buckets or stores them into archive file-sets.
An archive file-set has starting and ending timestamps defining the time range of the data within the file-set. Each archive file-set consists of three files: arc.dat, arc.key, and arc.byte. Arc.dat contains archive history records holding data for a particular point (history repeat area) for a specific time range. Arc.key has information that allows Aspen InfoPlus.21 to quickly find the location of archive history records in the arc.dat file. The file arc.byte holds history event data for very large history events.
The archive file-set with the most recent history records is the active archive or file-set. The active file-set grows in size as h21archive allocates additional space. The archiver process allocates additional space in chunks called history archive records varying in size from 256 bytes to 64 kilobytes, depending upon various criteria including the rate at which data is being archived and how much time remains before the next file-set shift. A file-set shift occurs when the active archive reaches a maximum size or time period.
Comprehending History Data Flow
Proper configuration of Aspen InfoPlus.21 history repositories requires an understanding of the basic flow of process data into history repositories. An Aspen InfoPlus.21 database record can contain one or more memory-resident history repeat areas. For example, a record defined by IP_AnalogDef has a history repeat area containing multiple occurrences of the fields IP_TREND_VALUE, IP_TREND_TIME, IP_TREND_QSTATUS, and IP_TREND_QLEVEL. A new history occurrence shifts into a repeat area when certain fields in the fixed area of the record change.
If archiving is enabled, Aspen InfoPlus.21 creates a history event package containing the history occurrence data and inserts the event package into the event queue of the history repository assigned to the history repeat area. The event package contains a point ID, a map index, a key timestamp, a key quality level, and data field(s). The point ID is an integer identifying both the Aspen InfoPlus.21 record and the history repeat area. The map index points to a location within the file map.dat defining the structure (number and types of data fields) of the history event.
The repository's archiver process dequeues the history event package and examines its key timestamp. If the event is more recent than any event previously received for the point, then the archiver simply drops the history event into a memory-resident cache bucket assigned to the point; otherwise, the archiver must store the history event directly into the appropriate archive on disk.
Direct storage to disk takes much longer. When a cache bucket fills, the archiver flushes the cache bucket into the active file-set. Best performance is achieved when:
- All history events for a history repeat area are queued in chronological order.
- Most history events are being stored in the active archive rather than some older archive.
- A cache bucket exists for every history repeat area.
- The cache buckets are sized large enough to minimize disk writes.
- The history event queue memory buffer is large enough to handle reasonable surges without overflowing to disk.
- Average history archive record sizes are larger (16K to 64K) rather than smaller (256 to 1024 bytes).
- The time between archive shifts is larger (> 7 days) rather than smaller (< 2 days).
The remainder of this document discusses best practices for achieving these goals.
Configuring the Number of Cache Buckets for each Repository
Use the Aspen InfoPlus.21 Administrator to specify the number of cache buckets for a selected repository by changing the field Number of Points on the Repository tab of the Properties dialog. The number of cache buckets allocated for a history repository should equal or exceed the number of history repeat areas assigned to the history repository; otherwise, the archiver will constantly be flushing and reassigning cache buckets. (A quick way to count these assignments is sketched at the end of this section.)
Sizing Cache Buckets for each Repository
Use the Aspen InfoPlus.21 Administrator to specify the cache bucket size for a selected history repository by modifying the field Cache Size on the Advanced tab of the repository Properties dialog. All of the history cache buckets assigned to a repository are the same size. By default, the cache bucket size is 256 bytes. This is sufficient for records defined by IP_AnalogDef because events generated for such records occupy 15 bytes. Use larger cache buckets (512 or 1024 bytes) to accommodate larger history events. For example, a 300-byte history event would not fit in a 256-byte cache bucket. Application developers creating definition records with large history occurrences should consider assigning the resulting records to history repositories with larger cache bucket sizes.
Consider increasing the cache bucket size from 256 to 1024 for repositories being updated at very rapid rates. This could significantly improve archiving performance. Do not increase the cache bucket size unnecessarily. The amount of memory allocated to a repository's cache buckets is the size of each cache bucket times the number of points allocated to the repository. Large cache buckets will also increase the amount of time spent saving the entire cache to disk every five minutes.
Setting Archive Shift Criteria for each Repository
Use the Aspen InfoPlus.21 Administrator application to specify the archive shift criteria by modifying the fields File Sets Size and Time Span on the Global tab of the repository Properties dialog. Normally most new process data is stored in the active file-set. An archive shift occurs when the active archive reaches a specified maximum size or if the difference between the active archive start time and a new history event exceeds the archive shifting period. Since the archiver process has a lot of extra work to do whenever a file-set shift occurs, it is better for performance to configure the archive shifting criteria so that the shift frequency is minimized. AspenTech recommends setting the file-set time span to at least five days and the File Sets Size to a number large enough to accommodate the time span setting.
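As an aid to the cache-bucket settings above, the following Aspen SQLplus sketch counts how many records are assigned to each repository (assuming tags defined by IP_AnalogDef, whose IP_REPOSITORY field holds the repository assignment; adapt it for other definition records):

SELECT IP_REPOSITORY, COUNT(*) FROM IP_AnalogDef GROUP BY IP_REPOSITORY;

The counts give a lower bound for the Number of Points setting of each repository.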
Optionally Increasing the Event Queue Buffer Size
The history event queue is an inter-process communication mechanism that allows the Aspen InfoPlus.21 database engine to pass history events to the archiver process. The history event queue is generally memory-resident; however, the queue can overflow to disk (event.dat) if the archiver is overloaded or paused. Use the Event Queue Summary dialog of the selected history repository to monitor the event queue. Normally the Disk Overflow numbers are zero. If they are not zero, then the event queue is overflowing. History event queue overflow can degrade performance because of the additional disk accesses required. The Aspen InfoPlus.21 database engine responds by throttling back any process inserting events into the history event queue. The Aspen InfoPlus.21 Administrator can be used to increase the Buffer Size of the history event queue from 100KB to as much as 1000KB. This is recommended on systems likely to experience significant surges of process data to be archived.
Defining Multiple History Repositories
Every Aspen InfoPlus.21 system has a history repository named TSK_DHIS that is automatically created, along with three archive file-sets, during installation. Additional history repositories should then be created as required to meet the process data archiving needs. The creation of additional history repositories is generally driven by one of the following considerations:
- Many Aspen InfoPlus.21 Tag Records
- Different History Occurrence Sizes
- Multiple Update Rates
- Several Data Sources
Many Aspen InfoPlus.21 Tag Records
Multiple history repositories should be created if an Aspen InfoPlus.21 system hosts many tags, especially if those tags are updated frequently. For example, suppose an Aspen InfoPlus.21 database contains fifty thousand tag records defined by IP_AnalogDef. Suppose also that archive shifts should happen every ten days. Finally, suppose that a history event is being stored every thirty seconds for each of the fifty thousand tag records. If all fifty thousand tags were assigned to the same repository, then the active archive would grow to more than 20 GB before the archive shift occurs. In contrast, maximum archive sizes on typical Aspen InfoPlus.21 systems generally range from 500MB to 5GB. Of course, the maximum archive size could be reduced by a factor of five by allowing archive shifts to occur every two days instead of every ten days; however, AspenTech discourages frequent archive shifting because of the performance penalties incurred during archive shifts and somewhat poorer history retrieval performance. AspenTech recommends distributing the fifty thousand tags amongst several different history repositories so that both archive sizes and archive shift periods stay within reasonable ranges.
Different History Occurrence Sizes
History repeat areas with significantly different history occurrence sizes should be assigned to different history repositories in order to make effective use of the history cache. The key point here is that all the cache buckets assigned to the same history repository must have the same size. If different cache bucket sizes are desired, then multiple history repositories are required.
Multiple Update Rates
History repeat areas with significantly different update rates should be assigned to different history repositories. Suppose some Aspen InfoPlus.21 tag records are being updated every 10 seconds while other Aspen InfoPlus.21 tag records are being updated every 10 days.
It is better to store the 10-second data and the 10-day data in different repositories.
Several Data Sources
History repeat areas associated with different data sources should not be assigned to the same history repository. For example, Aspen Technology recommends creating separate repositories for the tags associated with each Aspen Cim-IO server. In other words, do not mix data from multiple Aspen Cim-IO servers into one repository. It is OK to have several repositories storing data from the same Aspen Cim-IO server; however, do not combine data read from different Aspen Cim-IO servers into the same repository. Likewise, you should not store historical information for calculated or manual entry tags in the same repository containing Aspen Cim-IO data. Create a separate repository for all manual entry and calculated tags.
Creating separate repositories for the tags associated with each Aspen Cim-IO server avoids problems that can occur during store and forward situations. Suppose a repository receives data from two or more Aspen Cim-IO servers, say Aspen Cim-IO 1 and Aspen Cim-IO 2, and suppose Aspen Cim-IO 1 loses communication with Aspen InfoPlus.21 while Aspen Cim-IO 2 continues to send data to Aspen InfoPlus.21. Also, suppose a query updates tags stored in the same repository. During the communication outage, Aspen Cim-IO 1 stores data read from the process. While Aspen Cim-IO 1 stores data on the Aspen Cim-IO 1 server, the repository has a file-set shift. The beginning time of the new (active) file-set will be one microsecond greater than the timestamp of the last piece of historical data stored in the previous file-set.
When communications between Aspen InfoPlus.21 and Aspen Cim-IO 1 are restored, Aspen Cim-IO 1 forwards time-stamped data older than the beginning time of the active file-set. Each item being forwarded would be moved individually from the memory queue to the previous file-set, clogging the normal flow of history data from the real-time database to the repository. This situation will likely cause the queue to fill and force data into the repository's overflow file event.dat. Eventually, the Aspen InfoPlus.21 database engine will begin to throttle back all input into the history event queue.
This performance problem could have been avoided if there had been separate, dedicated repositories for Aspen Cim-IO 1, Aspen Cim-IO 2, and the calculated tags. During the time communications between Aspen InfoPlus.21 and Aspen Cim-IO 1 were down, all data flow into its dedicated repository would have stopped. After the file-set shift, the start time of the active file-set would follow the time of the last item written to the previous file-set. When communications were restored, forwarded data would have streamed normally through the repository's in-memory cache since the timestamps of the incoming data would be more recent than the beginning time of the active file-set.
Choosing the Correct Drive
AspenTech recommends installing history repositories and file sets on local drives to optimize performance and minimize the possibility of losing the connection to the repository drive. Using network drives seriously degrades performance. The archiver tasks constantly lock and unlock archive files, and a temporary network problem can cause the archiver to exit or hang since the remote files suddenly disappear. Because of these problems, AspenTech does not recommend using network or remote storage for history repositories.
SAN (Storage Area Network) and NAS (Network Attached Storage) devices are special-purpose computers designed as high-performance file servers. Using a dedicated gigabit fiber channel to access hard disk clusters, a SAN does not use the Local Area Network (LAN) as a NAS does. Using SAN devices for Aspen InfoPlus.21 history repositories is acceptable if the SANs are connected via reliable fiber-optic cables; any connection break can corrupt the database and cause data loss. AspenTech does not recommend using NAS devices for the same reasons it cannot recommend using network drives. Keywords: Shift Shifting File set Cache Event queue Buffer size Repository SAN NAS Network drive References: None
Problem Statement: AspenTech released new, more user-friendly versions of the Aspen Process Explorer Add-Ins taking advantage of the Microsoft 'Ribbon' technology in version 7.3. This article describes how to add the ribbon-friendly Aspen Process Explorer Add-Ins for Microsoft Excel.
Solution: On your Office 2007 or 2010 client, click the Microsoft Office button in Microsoft Excel and choose Excel Options. Select Add-Ins when the Excel Options screen opens. In the Manage field, choose COM Add-Ins and press the Go button. Finally, select Aspen InfoPlus.21 Configuration Excel 2007/2010 and Aspen Process Data Excel 2007/2010 and/or the Aspen Production Record Manager (formerly known as Batch.21) Add-Ins. Note: The Aspen InfoPlus.21 COM Add-Ins are located either in C:\Program Files\Common Files\AspenTech Shared\ExcelAddin or C:\Program Files (x86)\Common Files\AspenTech Shared\ExcelAddin. After choosing the Aspen Process Explorer COM Add-Ins, you should see the Aspen Configuration and Aspen Process Data tabs when you open a spreadsheet. Keywords: Excel Add-Ins COM References: None
Problem Statement: Occasionally an Aspen InfoPlus.21 database administrator needs to run a procedure to repair one or more history file sets. The most common method is to run the procedure from the Aspen InfoPlus.21 Manager via the drop-down menu option Actions -> Repair Archive. After selecting the required repository and fileset, a pop-up message will appear: How should this question be answered?
Solution: Unless there is an excellent reason, the answer should always be 'NO' Under normal conditions the Start Date of a fileset will match the End Date of the previous fileset as seen below. Therefore if doing a repair on fileset #2 below, it really doesn't matter how this question is answered. However, suppose for some reason fileset #1 has been temporarily moved away (no longer available to any historian process) and a repair is performed on fileset #2. If the question about extending end time is set to YES, then because #1 is not currently visible, the ending time of fileset #2 would be modified to fill the gap, and therefore changed from 24-NOV-12 14:46:55.3000 to 24-DEC-12 14:46:55.3000. The historian will continue to work this way until fileset #1 is restored. Now you would have the situation where the End Date of #2 overlaps the Start Date of fileset #1 by a whole month. If the answer to the question had been correctly set to NO then the gap would have remained and therefore there would not be any problem when restoring fileset #1. Keywords: None References: None
Problem Statement: Where can I get tips for supporting Advanced Process Control (APC) issues?
Solution: Download the attached "Tips for Troubleshooting APC Calls" for information on the most commonly reported APC issues, file locations, available log files, how to determine the version of the software being used, etc. Keywords: user guide, support, tips References: None
Problem Statement: How do we improve the time for saving Events in Aspen Petroleum Scheduler (Orion)?
Solution: In Orion there is no limit to the number of events you can save at one time. These are some things to try to speed up writing and reading of the Events table:
1. Try using the BULK_EVENT_WRITE keyword in conjunction with EVENTS_READ_USING_TMP_TBL in the CONFIG table.
2. ATORIONEVENTS is the master events table in Orion. There are around nine other event tables (ATORIONEVENTTANKS, ATORIONEVENTPROPS, etc.) that have a column called EVENT_XSEQ. This is a foreign key related to the X_SEQ column in the ATORIONEVENTS table, and it is how these tables are all linked back to ATORIONEVENTS. Index the EVENT_XSEQ column of any ATORIONEVENT table that holds a lot of records.
The combination of the temporary table and indexing should speed up the retrieval of the events from the database. Note that saving events has two steps: the first is to write, the second is to read back, so a read is always done at the end of a save. The main tables that need to be indexed are ATORIONEVENTTANKS, ATORIONEVENTPARAMS, and ATORIONEVENTPROPS; examine the other ATORIONEVENT tables to see which ones are large, as those are the candidates for indexing.
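As an illustration, indexes like the following could be created (a sketch in standard SQL; the index names are illustrative and the exact syntax depends on your database back-end):

CREATE INDEX IX_EVENTTANKS_XSEQ ON ATORIONEVENTTANKS (EVENT_XSEQ);
CREATE INDEX IX_EVENTPARAMS_XSEQ ON ATORIONEVENTPARAMS (EVENT_XSEQ);
CREATE INDEX IX_EVENTPROPS_XSEQ ON ATORIONEVENTPROPS (EVENT_XSEQ);

Keywords: Reading and writing Events, CONFIG keywords References: None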
Problem Statement: What is the meaning of the MULTIPATH message in the Aspen PIMS iteration log file? NOTE: PIMS will detect and automatically correct MULTIPATH scenarios. This article is meant to help users understand the concept of MULTIPATH scenarios in recursion. The documentation is academic, and you do not have to model a MULTIPATH TERMINATOR in PIMS. In other words, PIMS automatically creates a MULTIPATH terminator if multipath exists, and the additional structure described in this solution need not be added to your model.
Solution: When setting up recursion structure in a sub-model it is possible to create a situation where PIMS detects what is referred to as multiple path. The following example illustrates a multiple path problem.
We have a straight-run kerosene (KR1) produced on a crude unit. KR1 is a recursed pool and can be blended directly to Diesel Fuel (DSL). The recursed properties of KR1 include sulfur content (SUL). KR1 can also be hydrotreated in sub-model SKHT, where hydrotreated KT1 is produced from KR1. KT1 can also be blended to DSL. The properties of KT1 are related to the properties of KR1 through Table PCALC. The flow diagram would look like:
Table PCALC would be:
*TABLE PCALC
* ROWNAMES  TEXT                 SUL
*
KT1KR1      KT1 in terms of KR1  0.1
***
DSL has a minimum sulfur specification of 0.5 weight percent. KR1 has an initial sulfur content estimate in Table PGUESS of 0.6 weight percent. In Table PCALC we specified that the sulfur content of KT1 equals the sulfur content of KR1 multiplied by 0.1. Therefore, KT1 has an initial sulfur content of 0.06 weight percent. The SUL specification row of DSL in the matrix would look like:

          BWBLDSL    BKR1DSL   BKT1DSL
XSULDSL     0.5       -0.6      -0.06
          (SUL)SPEC  (SUL)KR1  (SUL)KR1(PCALC)

The multiple path problem arises from the fact that the SUL of KT1 is equal to the SUL of KR1 times 0.1. Since we created error when we guessed at the sulfur content of KR1, we also created error in KT1. Since KT1 is really KR1 multiplied by a constant, we created error for KR1 twice. The problem then becomes, "How do you distribute error from the recursed pool KR1 twice to the same row?"
Because KR1 is a recursed pool with SUL as a recursed property, PIMS will automatically generate a column in the matrix to distribute KR1 error to DSL. The matrix will look like:

          BWBDSL   BKR1DSL   BKT1DSL   RSULKR1
XSULDSL     0.5     -0.6      -0.06      -1

PIMS cannot generate a matrix with an extra RSULKR1 column because this is not allowed by the optimizer, i.e., two columns with the same name cannot be in the same matrix:

          BWBDSL   BKR1DSL   BKT1DSL   RSULKR1   RSULKR1
XSULDSL     0.5     -0.6      -0.06      -1        -1

Also, PIMS cannot generate a column RSULKT1 with an entry in the RSULKR1 row because there is no linkage of KT1 to KR1 for material balance and recursion balance rows. In other words, this structure would permit error to be distributed through the RSULKT1 vector, which may be totally out of proportion to the amount of KT1 material that is produced. Therefore, the distributive recursion matrix below is also incorrect:

          BWDSL   BKR1DSL   BKT1DSL   RSULKR1   RSULKT1
RSULKR1                                  1         1
XSULDSL    0.5     -0.6      -0.06      -1        -1

In order to solve the multiple path problem a new recursion pool must be introduced into the matrix. In our current example, this new pool can be inserted in one of two places. If the new pool is added upstream of the hydrotreater the flow diagram would look like:
The new pool is KX1 and is produced by recursing the material KR1 into KX1 through sub-model SMPT (MULTIPLE PATH TERMINATOR).
Table SMPT would look like:
*TABLE SMPT
* ROWNAMES  TEXT  KR1   KX1
*
VBALKR1           1
VBALKX1                 -1
*
RBALKX1           -1    1
RSULKX1           -999  999
***
An initial guess for KX1 properties is also included in Table PGUESS:
*TABLE PGUESS
* ROWNAMES  SUL
*
KX1         0.6
***
KX1 now replaces KR1 as the feed to sub-model SKHT, and the properties of KT1 are linked to KX1 rather than KR1:
*TABLE PCALC
* ROWNAMES  TEXT                 SUL
*
KT1KX1      KT1 in terms of KX1  0.1
***
The matrix from this PIMS model would look like:

          BWDSL   BKR1DSL   BKT1DSL   RSULKR1   RSULKX1
RSULKR1                                  1
RSULKX1                                 -0.5       1
XSULDSL    0.5     -0.6      -0.06      -0.5      -1

This allows error from KR1 to be distributed to the KX1 SUL property recursion row RSULKX1 and to the maximum SUL spec row for DSL, XSULDSL. The column RSULKX1 also distributes error from the KX1 pool to XSULDSL when KT1 is blended to DSL. Keywords: Multipath Multipath recursion Multipath terminator References: None
Problem Statement: How do I resolve a Visual Fortran run-time error that appears at the end of an Aspen PIMS run?
Solution: The solutions we have seen for this so far are some combination of turning off the following:
1. Indexing (both at the file/directory level and at the service level). Please refer to solution 134228. This works for the Windows 7 OS environment.
2. Encrypting File System (EFS). EFS is a feature included in the Windows OS that allows files to be stored in encrypted format.
3. Virus scanning on the model directory (especially On-Access type scans). Please refer to solution 134320.
Keywords: Visual Fortran run-time Error, Fortran, run-time, visual, Index, EFS, Encrypting, system, virus, scanning References: None
Problem Statement: This article describes the best practice that can be followed to create a generalized initial starting solution to minimize local optima, which are inherent to solving non-linear problems. It is still the recommended best practice to use multi-start to address problems with local optima.
Solution: There are various ways to minimize local optima using Aspen PIMS, each with pros and cons. The primary benefit of the generalized input solution method discussed in this article is that multi-start execution of only one case is required, which minimizes computer processing time. This can be useful if a large stack of cases is run at once. The potential drawback of relying only on an input solution is that it may get close to the global optimum most of the time, but there could still be cases where local optima are encountered.
The methodology for generating a generalized input solution file is to get an initial starting solution that solves the problem using non-zero activities for most of the variables in the model. This solution may not represent any particular real-life operating scenario; rather, it is used only as a starting solution for an Aspen PIMS-AO solve. The procedure follows:
1) Create a case by setting key variables to non-zero minimum values. This is often referred to as "igniting" the variables. The key variables are those found in Tables BUY, SELL, CAPS, and PROCLIM. The recommended minimum is at least 0.001.
2) Solve this case and see if the problem converges to a feasible solution. If a solution is not obtained, make sure the ignited variables have the proper disposition. The unit must also be active if the problem has an external model.
3) Now run this feasible case with multi-start and save the output solution.
4) The resultant solution file from the multi-start execution cycle can be used as a starting solution for all cases.
It may be difficult to get a feasible solution in step two. Sometimes unit limits, such as flow limits on furnaces, or quality specifications on some products may have to be relaxed to arrive at a feasible solution. For some cases, even with a better starting point, there is no guarantee the model will converge to the global optimum. The generalized input solution approach simply offers a higher probability of reaching the global optimum with a single run. Multi-start is still the simplest and most reliable way to generate the optimal solution.
Keywords: Local optima Fix local optima Best initial solution Creating initial starting solution PIMS-AO best practice References: None
Problem Statement: How do I copy Aspen SQLplus web-based reports from one server to another?
Solution: This article describes how to move Aspen SQLplus web-based reports from one server to another. The steps vary for automated and interactive reports. Interactive reports only require the XML files to be moved to the new server, but automated reports require the XML files and records defined by SqlReportDef in Aspen InfoPlus.21. The XML files define the report content, and the SqlReportDef records define the automated schedule used to generate the report. Automated reports are saved on the Aspen InfoPlus.21 server, but interactive reports are saved only on the web server.
Interactive Reports
1. Interactive reports are saved on the web server in the path C:\ProgramData\AspenTech\SQLplus\. This is the server hosting the SQLplus reporting web interface.
a. Private reports are in subfolders named after the user who created the report (e.g., C:\ProgramData\AspenTech\SQLplus\private\CompanyDomain.User123\).
b. Public reports are located under the public subfolder.
2. Copy the private subfolders and their contents to the same path on the destination server.
3. Copy the XML files from the public folder to the same path on the destination server.
Automated Reports
1. Automated reports are saved on the Aspen InfoPlus.21 server in the path C:\ProgramData\AspenTech\SQLplus\automated\. Depending on your system architecture, the InfoPlus.21 server and the web server may not be the same computer.
2. Copy the XML files within the automated folder from the source server to the same path on the destination InfoPlus.21 server.
3. If you want to also transfer the history of the automated reports, copy the files from the output subfolder to the same path on the destination server.
4. The XML files contain the content of the reports, but records within Aspen InfoPlus.21 are used to trigger the automated sending/printing.
5. Using the InfoPlus.21 Administrator, locate the records defined by SqlReportDef. There will be one record for each automated report.
6. Save the SqlReportDef records to a recload (.rld) file.
7. Load the recload file into the destination Aspen InfoPlus.21 database.
Notes:
· The XML files that define the report content also reference the ADSA data source of any Aspen InfoPlus.21 tags. The XML files will need to be updated if the new server uses a different data source.
· The above steps define how to transfer all reports to a new destination server. They could be adapted to replicate a single report by copying only the specific XML file and duplicating only the specific SqlReportDef record.
· The above steps can also be adapted to duplicate reports in the same system. In this case, the XML files would be renamed and their contents edited to reference the new report name (e.g., Report1 updated to Report2). The source SqlReportDef record would be duplicated and given the new report name. The SQL_REPORT_NAME field within the new SqlReportDef record must also be updated to the new report name so that the new record will trigger the new report.
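To confirm which automated reports exist before and after the move, a query along these lines can be run in Aspen SQLplus on each server (a sketch using the record and field names mentioned above):

SELECT NAME, SQL_REPORT_NAME FROM SqlReportDef;

Comparing the output on the source and destination servers is a quick check that every SqlReportDef record was transferred.
Keywords: web.21 SQL+ Reports Automated reports SqlReportDef XML References: None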
Problem Statement: What to do when the Aspen DMCplus controller shuts down with a QP Internal error.
Solution: QP internal errors can be investigated by using the ModelSVD tool, found in C:\Program Files\AspenTech\DMCplus Desktop\Utilities. It is also commonly recommended to consider resetting the parameter EPSMVPMX; additional options were made available for this parameter with an update to the 2006 version of the software. Consider resetting EPSMVPMX to 8 and report your results.
EPSMVPMX allows the user to select from several solver options. This version of the composite allows the engineer to choose the QP algorithm to use based on the value of EPSMVPMX:
1 - Original DMCplus 2.X QP algorithm
2 - Legacy interior QP algorithm
3 - Legacy interior QP algorithm, generating a debug file each cycle (don't use for long, as this will fill up your disk)
4 - Active set method; very robust but can be slow for Composite-size problems
5 - New interior point method
6 - New interior point method, switching to active set for the objective function rank (the motivation for type 6 is that it enables shadow prices)
7 - Do not use
8 - Use interior point for the objective function rank if the number of variables is greater than 300; otherwise use active set. This is recommended for Composite controllers.
Note: if there is an error in the optimization, a special binary file will be saved to the following location: AC Online\sys\etc\lpqp_error_file_1.bin. These files allow AspenTech to exactly reproduce the client's problem. The disk will not fill up, as only the last 25 files are saved.
Values of 14 through 18 have a special meaning. For example, if the value is 14, then the active set method is used (14 - 10 = 4), and a binary file is saved even if the optimization has no errors. These files are saved to AC Online\sys\etc\lpqp_no_error_file_1.bin. Note: the special binary file is also saved if the optimization takes more than 20 seconds.
Keywords: None References: None