Problem Statement: When I print out a report, all pages in the report are included. Is there a way I can customize the report without exporting to a file? Is there a way I can print only the page(s) I want? | Solution: It is possible to print a specific page of a report by using some of the features included in the Preview page. From the preview page, click on the printer icon in the toolbar and select the desired page number(s). For example, suppose you want to print a particular range of pages from the Messages form. In the print screen form (after you select Messages), click on Preview as shown in the first image below. Then use the arrow buttons on the toolbar to navigate to the desired page(s). Note the desired page number(s), then click on the printer icon and enter the page number(s) you wish to print (as in the second image).
Keywords: print report, customized, individual, specific
References: None |
Problem Statement: The term tear pipe is used in the manual; however, it is not defined. What is the definition of a tear pipe? | Solution: The concept of tear pipes only applies to looped divergent systems. The tear pipes are the pipes within the network where we iteratively adjust the flowrate to solve the pressure balance that defines the flow directions through the network.
In simple terms, think of a simple system in which a single flow splits, with downstream flows to two flare tips. Upstream of the splitter, the flows are known regardless of the pressure drops downstream of the split pipes. We must therefore specify/estimate one downstream flow in order to continue with the mass balance. This estimated flow is the tear flow for the tear pipe. The loop solver then adjusts this flow rate until the pressure balance equation across the splitter is solved, as sketched below.
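As an illustration of this iteration (a minimal Python sketch, not Aspen Flare System Analyzer's actual loop solver; the quadratic dp_branch pressure-drop model and all numbers are stand-in assumptions):

# Illustrative sketch only. A splitter feeds two branches that must reach
# the same downstream pressure. We "tear" branch 1: guess its flow, compute
# the pressure imbalance, and iterate with a secant update until it closes.
def dp_branch(flow_kg_s, k):
    # Hypothetical branch pressure drop, Pa (quadratic in flow for the demo)
    return k * flow_kg_s ** 2

def solve_tear_flow(total_flow, k1, k2, guess=1.0, tol=1e-8, max_iter=50):
    def residual(f1):
        # Pressure balance across the splitter: dP(branch 1) - dP(branch 2)
        return dp_branch(f1, k1) - dp_branch(total_flow - f1, k2)
    f_prev, f_curr = guess, guess * 1.1
    r_prev = residual(f_prev)
    for _ in range(max_iter):
        r_curr = residual(f_curr)
        if abs(r_curr) < tol:
            break
        # Secant step on the tear flow
        f_prev, f_curr, r_prev = (
            f_curr, f_curr - r_curr * (f_curr - f_prev) / (r_curr - r_prev), r_curr)
    return f_curr

f1 = solve_tear_flow(total_flow=10.0, k1=4.0, k2=1.0)
print(f"branch 1: {f1:.3f} kg/s, branch 2: {10.0 - f1:.3f} kg/s")  # ~3.333 / 6.667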
Keywords: divergent; looped; network; balance
References: None |
Problem Statement: What is the significance of the Rho V2 term as it relates to pipe erosion? | Solution: The current practice for eliminating erosional problems in piping systems is to limit the flow velocity (Ve) to that established by the recommended practice API RP 14E, based on an empirical constant (C-factor) and the fluid mixture density (rhom). The governing equation is as follows:
Ve = C / sqrt(rhom)
Note that the value for the constant C depends upon the piping material and is discussed in detail in API RP 14E.
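A minimal Python sketch of this formula (C = 100 is a commonly cited API RP 14E value for continuous service; confirm the appropriate C-factor against the RP for your application):

import math

def erosional_velocity(c_factor, rho_mixture_lb_ft3):
    # API RP 14E erosional velocity: Ve = C / sqrt(rho_m).
    # With C in its customary US units and rho_m in lb/ft3, Ve is in ft/s.
    return c_factor / math.sqrt(rho_mixture_lb_ft3)

print(erosional_velocity(100.0, 5.0))  # ~44.7 ft/s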
Keywords: Rho, RhoV2, V2, density, velocity, Rho V2
References: None |
Problem Statement: Does Aspen Flare System Analyzer change the pipe schedule in the design calculations? | Solution: The design algorithm maintains the pipe schedule specified for the pipe unless the user applies the pipe class. The pipe class can be selected from the Pipe Editor by setting Use Class = Yes. If the pipe class is used, then all the schedules included in the Pipe Class Editor are considered in the design calculations.
In summary,
- Design changes the nominal pipe diameter for the specified pipe schedule if Use Class is set to No.
- Design changes the nominal pipe diameter and the schedules included in the class if Use Class is set to Yes.
Keywords: Design, Pipe Schedule, Nominal Diameter, Pipe Class
References: None |
Problem Statement: The evaluation of the friction loss in valves and fittings involves the determination of the appropriate fittings K factor. In the literature, there are several correlations available for this. For example, Equivalent L/D method, Crane method, 2-K (Hooper) method and 3-K (Darby) method.
Which one is used in Aspen Flare System Analyzer? | Solution: The method used in Aspen Flare System Analyzer is the Crane method, not the 2-K method. The Crane method is given in Crane Co., Flow of Fluids Through Valves, Fittings and Pipe, Technical Paper 410, 1991.
K = Ft (L/D)eq
It recognizes that there is generally a higher degree of turbulence in the fitting than in the pipe at a given Reynolds number. This is accounted for by always using the fully turbulent value of Ft in the expression for the friction loss in the fitting (K = A + B Ft), regardless of the actual Reynolds number in the pipe.
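A hedged Python sketch of a Crane-style fitting calculation (the fully turbulent friction factor is taken from the rough-pipe limit of the Colebrook equation; the (L/D)eq and roughness values are illustrative assumptions, not program data):

import math

def fully_turbulent_friction_factor(roughness_m, diameter_m):
    # Rough-pipe (fully turbulent) limit of Colebrook:
    # 1/sqrt(Ft) = -2*log10(eps / (3.7*D))
    return (-2.0 * math.log10(roughness_m / (3.7 * diameter_m))) ** -2

def fitting_k(l_over_d_eq, roughness_m, diameter_m):
    # Crane-style loss coefficient K = Ft * (L/D)eq, always using the
    # fully turbulent friction factor regardless of the pipe Reynolds number
    return fully_turbulent_friction_factor(roughness_m, diameter_m) * l_over_d_eq

# 90-degree elbow, (L/D)eq = 30, 0.1 m commercial steel pipe (eps ~ 4.6e-5 m):
print(fitting_k(30.0, 4.6e-5, 0.1))  # ~0.49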
For a calculation example, please refer to Knowledge Base Solution ID 130184.
http://support.aspentech.com/webteamcgi/SolutionDisplay_view.cgi?key=130184
Keywords: friction loss, Crane method, two K method
References: None |
Problem Statement: I am expecting a vapor liquid mixture. Why am I getting vapor (or liquid) only? | Solution: 1. First, check the pressure you are looking at. You may expect a vapor-liquid mixture at the relieving pressure; however, the back pressure can be much lower than the relieving pressure.
2. If the pressure is the back pressure or a pressure anywhere in the network, then use a suitable VLE method such as Peng Robinson.
Keywords: Vapor, liquid
References: None |
Problem Statement: How are heat transfer calculations performed in Aspen FLARENET? | Solution: There are three components to the external heat transfer in Aspen FLARENET, calculated as follows:
1. The external heat transfer coefficient is calculated according to the ESDU 69004 standard. See attached document esdu69004.doc.
2. Pipe wall resistance. This calculation includes insulation and is material dependent.
3. The internal heat transfer coefficient is calculated from the Dittus-Boelter equation (Nu = 0.023 Re^0.8 Pr^n). This is in most standard Chemical Engineering texts, but the original reference is: Dittus, F.W. and Boelter, L.M.K., Heat Transfer in Automobile Radiators of the Tubular Type, Univ. of California, Berkeley, Publications in Engineering, Vol. 2 (1930), pp 430-.
Note that the Wall Temperature reported in the P/F Summary is the internal wall temperature, and the External Temperature is the outside wall temperature.
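As a rough illustration of item 3 (a minimal Python sketch with assumed example numbers; not the program's implementation), the Dittus-Boelter correlation can be evaluated as:

def internal_htc_dittus_boelter(re, pr, k_fluid, diameter, heating=True):
    # Dittus-Boelter: Nu = 0.023 * Re^0.8 * Pr^n,
    # with n = 0.4 for heating and n = 0.3 for cooling.
    # Returns h in W/m2.K given fluid conductivity k (W/m.K) and pipe ID (m).
    n = 0.4 if heating else 0.3
    nu = 0.023 * re ** 0.8 * pr ** n
    return nu * k_fluid / diameter

# e.g. Re = 1e5, Pr = 0.7 (a gas), k = 0.03 W/m.K, D = 0.2 m:
print(internal_htc_dittus_boelter(1e5, 0.7, 0.03, 0.2))  # ~30 W/m2.K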
Keywords: heat transfer coefficient
References: None |
Problem Statement: I get a warning in the flare because the temperature goes beyond the limit for the selected material. In reality we use a different material that is able to withstand those temperatures. How can I specify it? | Solution: It is true that we only provide Carbon Steel and Stainless Steel as material options in FSA. You can, however, modify the temperature limit for each material under the Home tab > Options > Warnings.
You probably will also need to change the roughness and thermal conductivity of the selected material on each pipe (under the Dimensions tab).
Keywords: AFSA flare material carbon stainless steel
References: None |
Problem Statement: How does Aspen Flare System Analyzer calculate the temperature at the outlet of a flare tip? | Solution: Aspen Flare System Analyzer calculates the outlet tip temperature by first calculating the enthalpy at the stagnation point of the fluid exiting the flare system.
For this calculation, the program assumes that the velocity of the gases after they exit the pipe system into the atmosphere is zero, and hence reports this velocity value at the end of the tip.
Once it has found the stagnation enthalpy value, it calculates the temperature based on a PH Flash.
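A minimal Python sketch of this two-step calculation (the ph_flash call is a hypothetical placeholder for a property-package flash; the numbers are illustrative):

def stagnation_enthalpy(h_static_j_kg, velocity_m_s):
    # Specific stagnation enthalpy: h0 = h + v^2 / 2.
    # With the atmospheric exit velocity taken as zero, all kinetic
    # energy at the tip is recovered as enthalpy.
    return h_static_j_kg + 0.5 * velocity_m_s ** 2

h0 = stagnation_enthalpy(h_static_j_kg=2.5e5, velocity_m_s=150.0)
# t_out = ph_flash(pressure=p_atm, enthalpy=h0)  # hypothetical PH flash call
print(h0)  # 261250.0 J/kg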
Keywords: Temperature, Flare Tip, Outlet, Velocity
References: None |
Problem Statement: Why can't I drag and drop annotations after they are created on the PFD? | Solution: It is possible you are in connecting mode. Please toggle the Arrange/Connect Mode button as shown in the picture.
After that, you should be able to move around the Annotations.
Keywords: Aspen Flare System Analyzer, Annotation, drag and drop
References: None |
Problem Statement: The help file for Batch.21 states:
Subbatch Instances
Limit number of Subbatch Instances: Check this box if you want to limit the number of subbatch instances that can be recorded for each subbatch. The standard number is 0 indicating an infinite number of instances.
If a specific number of subbatch instances is configured, however, and conditions are met to trigger a number of subbatch instances which exceeds the configured limit, the BCU unit goes into a Failed state.
The BCU log file for the unit in question will contain the following message:
4/4/01 4:09:45 PM : trigger mix_start_trig : subbatch action mix : MIX[automatic]
Failed to create automatic instance of level 2 subbatch MIX : B21BSC-50172: Sub-batch instance overflow | Solution: Customers should prevent triggering the add subbatch action more often than the subbatch instances allow (perhaps by using more complex triggers).
There is a kind of workaround to this behavior. The add subbatch action is specifically meant to make a new instance. If you don't use it, and just use add characteristic, the system will automatically make the subbatches it needs, but the subbatch instances will not go beyond 1.
For example, in the BCU unit, the trigger used for the start event doesn't need to have a subbatch added. Instead, a characteristic can be added that utilizes the subbatch from the drop-down list. Then, when this characteristic is recorded, the subbatch will automatically be created. If another instance of the characteristic is triggered, the subbatch will not be recorded again (a subbatch is only created if it doesn't already exist).
Keywords:
References: None |
Problem Statement: How does one import HYSYS sources in FLARENET? | Solution: In FLARENET, users can import Source data directly from HYSYS (including component information), but unfortunately this functionality does not extend to Pipe data (note that you will need to have both HYSYS and FLARENET installed on the machine you are working on in order to import Source data). The required procedure is described in detail in section 16.3 of the FLARENET User Guide, but a summary is as follows:
1. Open FLARENET and go to Files | Import Sources...HYSYS Sources.
2. Browse to your HYSYS case, then click Open.
3. Set the P/T Location; this is the pressure and temperature location for the source. If Upstream is selected, the relieving pressure and actual inlet temperature specification is copied from the source data. If Downstream is selected, the allowable back pressure and outlet temperature are copied.
4. Set the Component Data; choose whether to use the FLARENET or HYSYS component database. This determines whether or not to use the pure component definitions from FLARENET or from HYSYS. As an example, suppose the FLARENET model already has component A and the HYSYS model from which the data will be imported also has component A:
Use FLARENET will keep the component definition for A as that defined in the FLARENET database.
Use HYSYS will over-write the component data for A as original defined by FLARENET with the pure component definition of A as defined in the HYSYS model.
5. Choose the streams in the HYSYS case that you wish to import as Sources in FLARENET, set the Source type (relief or control valve) and click OK.
Note that if you are using FLARENET 3.50 or 3.51 there is a known issue related to the above. The problem and corresponding solution are described in Solution 111766.
Keywords: import, sources, source, HYSYS
References: None |
Problem Statement: How does Aspen FLARENET handle multiple inlets and outlets on the Flare Knockout Drum? | Solution: Currently it is possible to have up to 2 inlets into the drum and 1 outlet or vice versa.
If you wish to model an alternative arrangement you will need to combine streams using tee pieces along with the KO drum.
Note that Aspen FLARENET calculates the entrance and exit losses to the drum based on the diameters of the nozzles and drum. The pressure drop through the drum itself is not calculated, as it is usually very small in comparison to the other losses.
Keywords: multiple, knockout, drum, connections
References: None |
Problem Statement: Is there any way to input the liquid flow and vapor flow on a PSV in Aspen Flare System Analyzer? | Solution: Aspen Flare System Analyzer performs a rigorous flash calculation based on temperature and pressure to determine the phase fraction, so you cannot input the liquid and vapor flows separately. Aspen Flare System Analyzer will determine the two-phase split based on the relieving pressure and inlet temperature.
Keywords: Two phase flow, PSV, Flash.
References: None |
Problem Statement: How do I model a source flange in Aspen Flare System Analyzer? | Solution: In Aspen Flare System Analyzer (AFSA), there are two ways to model a source flange:
1. Input the flange diameter directly on the Source Editor | Condition page. If the flange has the same diameter as the tailpipe, you can leave it empty on this page.
When a value other than the tailpipe diameter is specified, you may see different pressures between the source outlet and the upstream of the tailpipe. This is due to a swage/fitting loss calculation to take enlargement pressure drop into account.
2. Add a dummy pipe with zero length but same diameter as the flange, then connect it to tailpipe with a Connector.
Under non-choking condition, the results such as back pressure generated from the two methods should be the same.
However, if the flow is choked in the flange, you may see different results, as AFSA does not handle the choking condition rigorously. In that case, the relieving flow rate and flange size should be revised.
Keywords: Flange, Pressure
References: None |
Problem Statement: Where can I locate all of the design constraints for Aspen FLARENET case files while in design mode? | Solution: The design constraints and their locations are as follows:
1. MABP (Maximum allowable back pressure) - on the valve property interface (Conditions tab)
2. Velocity - Scenario Editor (Build | Scenario | Edit | Constraint)
3. Mach number - Scenario Editor (Build | Scenario | Edit | Constraint)
4. Noise - Scenario Editor (Build | Scenario | Edit | Constraint)
Keywords: Constraints, Constraint, Noise, Mach, Velocity
References: None |
Problem Statement: Whenever I create a new object, it is automatically added to my PFD. Is there a simple way to place a newly created object exactly at the location I want? | Solution: To create an object in a specific location, right-click on the appropriate icon in the Toolbox (Object Palette) and continue to hold the mouse button down. Drag the object to the desired point on the PFD and release the mouse button.
Keywords: toolbox, exact point, exact place, add object, add equipment, object pallet
References: None |
Problem Statement: Unable to start calculations: No active source is found. What should I do? | Solution: If you are getting this message, you probably:
1. don't have any PSV or control valve connected to the network. You need at least one.
2. have all of the PSV and/or control valves ignored. You need at least one active.
3. have zero mass flow. Provide a flowrate.
4. have no components. Add components.
Keywords: active source, unable, start, calculation
References: None |
Problem Statement: In the Navigation Pane, under Views / Results, there are options to see the Compositions and Physical Properties results. However, these are greyed out, which means that they are not active.
Why are they greyed out and how can you activate them? | Solution: Under the preferences menu there is an option to save phase properties. You need to check this box to activate the composition and physical properties results.
This check box will save the phase properties. The disk space/memory requirements are significantly affected by this option, especially for large cases. It is advised to select this option only if you have a high-specification PC.
Keywords: composition, physical properties, save phase properties
References: None |
Problem Statement: Why do my flow maps show the flow regime as Unknown? | Solution: The flow regime selection in Flarenet is based on the flow map (View / Results / Flow Map). The decision about the type of flow is based on the flow map of Mandhane et al. (Mandhane, J.M., Gregory, G.A., Aziz, K., A Flow Map for Gas-Liquid Flow in Horizontal Pipes, Int. J. Multiphase Flow, Vol 1, pp 537-553, 1974). The map is based on the superficial gas and liquid velocities, which are calculated from the other methods/correlations used in the model.
If the superficial liquid and vapor velocities are outside the range of this plot (flow map), then the flow regime is shown as Unknown. In such a case the pressure drop is calculated based on the closest regime.
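For reference, the superficial velocities that locate a point on the map are simply each phase's volumetric flow divided by the full pipe cross-section. A minimal Python sketch (illustrative numbers):

import math

def superficial_velocities(mass_flow, vapor_mass_frac, rho_gas, rho_liq, diameter):
    # Superficial phase velocities (m/s): each phase's volumetric flow
    # divided by the FULL pipe cross-sectional area, as used by flow maps
    # such as Mandhane et al. (1974).
    area = math.pi * diameter ** 2 / 4.0
    v_sg = mass_flow * vapor_mass_frac / (rho_gas * area)
    v_sl = mass_flow * (1.0 - vapor_mass_frac) / (rho_liq * area)
    return v_sg, v_sl

# 10 kg/s at 80 wt% vapor, rho_g = 5 and rho_l = 800 kg/m3, 0.3 m pipe:
print(superficial_velocities(10.0, 0.8, 5.0, 800.0, 0.3))  # (~22.6, ~0.035) m/s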
Keywords: Flowmap, flow regime, superficial velocities
References: None |
Problem Statement: What does the bar in the lower left corner mean when it turns yellow even though the run converges? | Solution: It shows that your case has warning messages. You should go to View -> Results -> Messages to check them. If you have any questions about your results, you may want to check this page to see any error messages from the solver.
Keywords: Warning, convergent, View
References: None |
Problem Statement: What is the difference between modeling multiple relief valves under one icon as compared to modeling them separately? What assumptions are made about piping and connections when multiple RVs are modeled together? | Solution: The Number Of Valves option is included so that a single relief valve source can be used to model several relief valves connected to a common tailpipe. The value for the specified Number Of Valves is then used as follows:
A. The specified mass flow is divided equally between the specified number of valves, to give the mass flow per valve.
B. The total rated flow is calculated as the product of the rated flow for a single valve (see Solution ID #109493) and the specified number of valves.
C. The total flange area used in the swage calculation (between the multiple valve source and the downstream piping) is taken as the individual valve flange area (determined from the specified flange diameter) multiplied by the specified number of valves.
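A minimal arithmetic sketch of rules A-C in Python (illustrative numbers, not program code):

import math

def multiple_valve_source(total_mass_flow, n_valves, rated_flow_per_valve,
                          flange_diameter):
    flow_per_valve = total_mass_flow / n_valves           # rule A
    total_rated_flow = rated_flow_per_valve * n_valves    # rule B
    single_flange_area = math.pi * flange_diameter ** 2 / 4.0
    total_flange_area = single_flange_area * n_valves     # rule C
    return flow_per_valve, total_rated_flow, total_flange_area

# 3 valves sharing 30 kg/s, each rated at 12 kg/s, 0.15 m flanges:
print(multiple_valve_source(30.0, 3, 12.0, 0.15))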
Keywords: multiple, relief, valve
References: None |
Problem Statement: How do I prevent the error Incomplete connectivity for node on my model for two pieces of equipment that are fully connected? | Solution: This usually happens when the model you are running comes from an older version or has been imported with some errors. It is recommended to delete the graphic connections in the Process Flowsheet and then reconnect the two items. Once you do, go to Home > Simulation > Check Model; the message should disappear.
Keywords: Incomplete connectivity for node
References: None |
Problem Statement: How can I check how much vapor and liquid content there is in the pipes? | Solution: There are two ways to check the liquid and vapor content:
1) You can activate the vapor fraction on PFD by selecting vapor fraction in select variable to display with object names box.
2) You can activate save phase properties in Preferences and then Physical properties will be reported at upstream and downstream conditions for every object in Aspen FLARENET. To view, select View | Results | Physical Properties from the main menu and it will give you the vapor fraction.
Keywords:
References: None |
Problem Statement: Why do I see high pipe pressure drop with no flow? | Solution: Although the net flow is zero, Aspen FLARENET will calculate the static head due to the elevation change based on the equation:
static head = elevation change * density * g
To eliminate this effect, select the Static Head contribution as 'ignore'.
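A quick check of the magnitude of this term (SI units; illustrative numbers):

G = 9.80665  # m/s2

def static_head_dp(elevation_change_m, density_kg_m3):
    # Static head pressure change, Pa: dP = dz * rho * g.
    # A positive elevation change (flow upward) gives a pressure loss.
    return elevation_change_m * density_kg_m3 * G

# A 50 m stack holding gas at 2 kg/m3 shows ~981 Pa of static head
# even with zero net flow:
print(static_head_dp(50.0, 2.0))  # 980.665 Pa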
Keywords: pressure, drop, stack, elevation, static, head
References: None |
Problem Statement: Why do I see Mach No. greater than 1 at the outlet of the source? | Solution: The reason is that the user-specified relieving load is too high compared to the flange size.
It is true that when Mach No. = 1 the flow in the flange becomes choked, and it is not possible to increase the flow rate any further. Under this condition, the relief valve will fail to relieve at the relieving load specified by the user, and the choked flow condition governs the actual flow rate. This is a very dangerous situation for the plant.
Aspen Flare System Analyzer is primarily used to design the relief network and make sure the relieving load is always at its safe operating point. So it always simulates the relief network using the user specified flow rate in the relief valve.
Therefore, if the relieving load is high enough compared to the flange size, it is possible to see a Mach No. greater than 1. If this happens, the user should either reduce the flow or change the valve or flange size.
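For a quick order-of-magnitude check of the Mach number (a Python sketch assuming an ideal-gas speed of sound; Aspen Flare System Analyzer evaluates the sonic velocity rigorously, so treat this only as a screening estimate):

import math

R = 8.314  # J/mol.K

def mach_number(velocity, temperature_k, mw_kg_per_mol, gamma=1.3):
    # Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)
    c = math.sqrt(gamma * R * temperature_k / mw_kg_per_mol)
    return velocity / c

# 450 m/s of a MW-20 gas at 300 K:
print(mach_number(450.0, 300.0, 0.020))  # ~1.12 -> choked at the flange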
Keywords: Mach No., Choked Flow
References: None |
Problem Statement: Can I select the option of using mixture velocity or superficial velocity while checking for design constraints in a scenario? | Solution: Yes, from V7.2 onwards it is possible to select either the mixture velocity or the superficial velocity while checking for design constraints in a scenario.
Before V7.2, the user could select only the superficial phase velocity as the velocity constraint for a two-phase relief scenario. A new option is provided in the scenario editor for the specification of velocity constraints. Now you may select either the mixture velocity or the superficial phase velocity for a two-phase relief scenario.
This will affect the design algorithm in the following way:
The design diameters can be different if the mixture velocity (default value) is used.
If the superficial velocity option is selected the results will not be affected.
Keywords: mixture velocity, superficial velocity, velocity constraints, scenario
References: None |
Problem Statement: How can I add a vaporizer / heater / cooler / evaporator in Aspen Flare System Analyzer? | Solution: There is no vaporizer or heater or evaporator in Aspen Flare System Analyzer. You cannot add heat in a knockout drum either. However, you can use a dummy pipe and specify outlet temperature or duty to simulate a vaporizer / heater / cooler / evaporator. Specify a negative duty if you want to use duty specification for cooling.
Keywords: Vaporizer, evaporator, heater, cooler
References: None |
Problem Statement: How to add Viewport in Aspen Flare System Analyzer V9 | Solution: As part of the V9 user interface updates, adding a viewport in Aspen Flare System Analyzer (AFSA) is easier than it was in earlier versions.
A viewport will help you to navigate large networks and locate objects or sections of your flowsheet. With this feature, it is not necessary to zoom in and zoom out repeatedly.
To add a viewport in AFSA V9:
1. Go to the Process Flowsheet tab in the ribbon and select the Modify tab. Then, locate the Views section.
2. Find the portion of the PFD in which you are interested and customize it in the way you want to show it.
Tip: Use the Pan option (under Object Actions section) to move across the flowsheet and the Zoom In and Zoom Out options (under View tab | Zoom section) to modify the view.
3. Click on the Save Viewport button and define a name for the viewport. Then, click on Save.
4. Repeat the previous step as many times as you want to save different viewports. Then, use the Switch Viewport menu to move to the section or portion of the flowsheet that you wish to view.
Note: Use the Manage Viewport button to rename or change the order of the viewports.
Keywords: PFD, Viewport, Zoom, Manage Viewport, Views, Section, Flowsheet
References: None |
Problem Statement: How is composition calculated for individual components if molecular weight (MW) is selected as basis? | Solution: You can select Molecular Weight as your basis for stream composition in Aspen FLARENET. In this case, the composition used by Aspen FLARENET will be calculated from the relative composition of the two components with MWs on either side of your specified value, to give your specified stream MW.
For example, suppose you specify Methane (MW 16), Ethane (MW 30), Propane (MW 44) and Butane (MW 58) and then specify a stream MW of 20. The stream will be treated as being composed of Methane and Ethane, with the composition set such that the combined MW is 20.
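The bracketing calculation is a simple linear interpolation in mole fraction; a minimal Python sketch:

def mw_basis_composition(mw_target, mw_light, mw_heavy):
    # Mole fractions of the two bracketing components that reproduce the
    # specified stream MW: x*MW_light + (1 - x)*MW_heavy = MW_target
    x_light = (mw_heavy - mw_target) / (mw_heavy - mw_light)
    return x_light, 1.0 - x_light

# Methane (16) / Ethane (30) bracketing a specified MW of 20:
print(mw_basis_composition(20.0, 16.0, 30.0))  # (~0.714 methane, ~0.286 ethane)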
Keywords: Basis, MW, Composition
References: None |
Problem Statement: When you run a case, if you have an active scenario that is not being calculated, the status bar at the bottom left shows Edit in red color. In previous versions, this was in yellow with the label Done even if the current scenario was not being calculated. | Solution: The red status bar with Edit is the desired functionality. The status bar really applies to the current scenario on the flowsheet. Making it yellow would imply that the selected scenario has been calculated which might not be true.
In previous versions, if the case showed results, the status bar was displayed in yellow for all scenarios. This behavior was changed from the older versions to avoid confusion about which scenario is currently displayed. If the displayed scenario has a red status bar, this points out to the user that the scenario was not calculated, prompting them to check whether this is desired.
Keywords: Status bar, red, yellow, Edit, Done
References: None |
Problem Statement: What is Pressure Property? What is the difference between Pressure property and Static pressure? | Solution: Since Aspen Flare System Analyzer V8.0, there is a property located in the Pipe Editor > Summary called Pressure Property, as shown in the snapshot below.
Pressure Property is the same as the Static Pressure. The Static Pressure (Ps) is the pressure acting equally in all directions at a point in the fluid.
The physical properties at the inlet and outlet of a pipe are calculated at the static pressure condition. For this reason, the Static Pressure is called Pressure Property.
Keywords: pressure property, total pressure, static pressure
References: None |
Problem Statement: User receives errors in Aspen Flare System Analyzer when the model is run. The error messages indicate a PH Flash Failure error and a Speed of Sound error. In addition, the scenario can take a long time to solve. | Solution: The PH flash failure and speed of sound errors can appear when there are too many hypos in the component list. It is good practice to use no more than 20 components in Aspen Flare System Analyzer.
The user can check the composition of the hypos in the active sources. If the composition is small, it is recommended to lump the hypos together using the Combine button.
Keywords: PH flash failure, speed of sound error, hypo components
References: None |
Problem Statement: Aspen Process Explorer Excel Add-in returns #VALUE! if returning a data array larger than 5,461 elements. | Solution: For Excel 97 - Contact your Microsoft Support representative. Tell Microsoft you are experiencing the problem referred to in the knowledge base article Q216531. Tell them you know there is a hotfix for this problem and you require it.
Article Q216531 - http://support.microsoft.com/support/kb/articles/Q216/5/31.ASP
For Excel 2000 - Follow the resolution defined in the Microsoft knowledge base article Q250828. If you have further problems please contact your Microsoft Support representative.
Article Q250828 - http://support.microsoft.com/support/kb/articles/Q250/8/28.ASP
FAQ:
Q: After applying the solution, how large of an array can I have?
A: In Microsoft Excel 97, the new array size limit is 2^14 or 16384. Arrays with >= 16385 elements will exhibit the #VALUE! behavior. In Microsoft Excel 2000, the array size is limited to 16384 elements per dimension. It is then possible to have an array of 16384 x 16384. Please Note: Be careful when creating large arrays that may overload the system's memory.
Q: Has AspenTech tested the fixes provided by Microsoft to fix the array size problem under Microsoft Excel 97 and 2000?
A: AspenTech has done partial testing to ensure the fixes correct the problem. However, AspenTech has not fully tested the fixes. AspenTech is not responsible for any other problems and side effects that could be introduced by the use of these Microsoft fixes.
Q: What about Microsoft Office XP (Excel 2002)? Does the new Microsoft Excel version have the same array problem?
A: No, Microsoft Excel 2002 handles 64K array dimensions. Microsoft Excel 2002 does not require any action by the user.
Keywords: Microsoft Excel
Add-In
array size
References: None |
Problem Statement: Using Aspen Process Explorer (APEx) Graphics Editor, how may I display data of specific (normal) repeat area fields of records?
Note 1: This solution is applicable for custom and non-custom records.
Note 2: The number of repeat area values available to Graphics Editor is limited by the number of values entered in the fixed area reference to the repeat area. | Solution:
1. From APEx Graphics Editor, select Draw > Data Field
2. Drag it to create the data field box
3. Right-click on the box, select Properties
4. Under the Data Source tab, type the record name, occurrence number, and field name in the Tag Name box. (Leave the default VAL in the Field Name box.)
Example 1:
I have a QueryDef record named JUNK and my #Output_Lines is 10. However, the actual field name in the repeat area is output_line. If I want to display the 5th occurrence, I enter the following in the Tag Name box:
JUNK 5 output_line
Example 2:
I have an IP_AnalogDef record named BYOB. The reference to the repeat area for this type of record is IP_#_OF_TREND_VALUES. The number in this field is the maximum number of repeat area fields that could be displayed by Graphics Editor. Typically, there are only 2 occurrences listed in an IP_AnalogDef record. All other occurrences are written out to the history files. If you try:
BYOB 3 IP_TREND_VALUE
You will receive a message that there is no such data source. To correct this problem, change the number in IP_#_OF_TREND_VALUES to at least 3.
Keywords: graphics
PE
repeat areas
References: None |
Problem Statement: As users switch to Excel 2007, there have been several questions about Aspen PIMS and the use of Excel 2007 and older versions of Excel. | Solution: 1) If you would like to use Excel 2007 format files for input, then users who have an older version of Excel will not be able to run your model on their machines. It is not necessary to switch all input files to Excel 2007; some can be in older formats.
2) Some Excel 2003 and older files contain items that cannot be opened by Excel 2007 automation. This occasionally causes an error when the Aspen PIMS model is run for the first time on a machine with Excel 2007. This stops execution and indicates it cannot read a particular input table. Since this is an Excel automation issue, it cannot be fixed from within Aspen PIMS. To correct this, open the designated file directly using Excel 2007 and then re-save the file. When Excel saves the file, it converts the problematic items and resolves the issue. You can now go back to Aspen PIMS and run the model.
3) If you would like Aspen PIMS to create the output spreadsheets in Excel 2007 format, you can select this by right clicking on the model name, select Output Spreadsheet Format and select the desired Excel extension. The .xlsx format will only be available if Excel 2007 is installed on the machine.
4) Some Aspen PIMS output files like !PGUESS, !PDIST, etc. are created in old formats of Excel. Please note that per Microsoft, there are some Excel formats that cannot be saved by Excel 2007. If you open one of these files (like !PGUESS in Excel 2.1 format), then you will need to change the file format when saving it from within Excel 2007. Below is an excerpt from a Microsoft article describing the changes:
The following formats cannot be opened or saved in Excel 2007:
WK1 (1-2-3)
WK4 (1-2-3)
WJ3 (1-2-3 Japanese) (.wj3)
WKS (1-2-3)
WK3,(1-2-3)
WK1,FMT(1-2-3)
WJ2 (1-2-3 Japanese) (.wj2)
WJ3, FJ3 (1-2-3 Japanese) (.wj3)
DBF 2 (dBASE II)
WQ1 (Quattro Pro/DOS)
WK3,FM3(1-2-3)
Microsoft Excel Chart (.xlc)
WK1,ALL(1-2-3)
WJ1 (1-2-3 Japanese) (.wj1)
WKS (Works Japanese) (.wks)
The following formats may be opened, but not saved to in Excel 2007:
Microsoft Excel 2.1 Worksheet
Microsoft Excel 2.1 Macro
Microsoft Excel 3.0 Worksheet
Microsoft Excel 3.0 Macro
Microsoft Excel 4.0 Worksheet
Microsoft Excel 4.0 Macro
Microsoft Excel 97- Excel 2003 & 5.0/95 Workbook
Microsoft Excel 4.0 Workbook
DBF 3 (dBASE III)
DBF 4 (dBASE IV)
Keywords: Excel
2007
References: None |
Problem Statement: Given the batch handle, how can I retrieve the associated batch data using Visual Basic, and Aspen SQLplus?
This example provides a programmer with the specific code that queries the batch data. It does not contain the whole script that includes select statements and other related command words. The code listed below provides a programmer with the correct syntax to call batch data within Visual Basic and Aspen SQLplus. | Solution: Visual Basic Example
In Visual Basic, you can use the following as an example:
Dim BatchDataSource As AtBatch21ApplicationInterface.BatchDataSource
Dim Batch As AtBatch21ApplicationInterface.Batch
Dim Count As Long
Dim BatchHandle As String
Dim BatchXML As String
BatchDataSources.Refresh
Set BatchDataSource = BatchDataSources(1)
Count = BatchDataSources.Count
BatchHandle = "tzvetcoffc2.6622"   ' the batch handle must be a quoted string literal
Set Batch = BatchDataSources.GetBatch(BatchHandle)
Aspen SQLplus Example
In SQLplus, you can use the following as an example:
local datasources, datasource, batchqry, batch, i, j, k integer;
local batchhandle char(20);
datasources = new(AtBatch21ApplicationInterface.BatchDataSources);
batchhandle = 'tzvetcoffc2.6622';
datasources.refresh;
batch = datasources('tzvetcoffc2.6622').GetBatch(batchhandle);
Keywords: SQLplus
InfoPlus.21
References: None |
Problem Statement: When an administrator creates a new batch area, and immediately attempts to change the name of the default characteristics (such as START TIME, STOP TIME or UNIT), an error message B21BAI.60175 Specified characteristic is not defined is displayed. | Solution: Changing the names of the default characteristics is allowed, but they need to exist. When creating a new area, no additional characteristics exist yet, so it is impossible to reference them. The solution is to create the batch area, accept the default names initially, create your own customized characteristics, and then go back to the area properties and change the default characteristics to the new characteristic names you have just defined.
Keywords: START TIME
END TIME
UNIT
defined
characteristics
References: None |
Problem Statement: RBOB is blended offsite. There are final blend properties that must be met after the oxygenate is added. However, it is also desirable to report and perhaps even spec the properties pre-oxygenate. The RBOB correlation available through ABML can be used to accomplish this. | Solution: The RBOB correlation allows the user to specify the ethanol properties for the final blend. It then uses this information in conjunction with the pre-oxygenate properties to calculate the blend properties post-oxygenate. Below are the steps required to activate this correlation.
1) In table BLNSPEC, the blend must be specified as type = 11
2) Blend properties should be recursed and spec'd if necessary
3) Create ABML table using the ABMLCompleteMacro (or copy the ABML table from the attached model).
a) Define property names for post-oxygenate blend - this is the only place those names need to be entered
b) Correlation output properties must have different tags than the input properties.
4) Table SCALE should be updated to define reasonable ranges for the output variables that are being spec'd.
5) Table BLNSPEC should contain all desired specifications. This may be pre-oxygenate specs placed on the original property tags, or post-oxygenate specs placed on the correlation output property tags.
6) There is no need to enter the new output property tags into PGUESS and no need to change Table RFG for property mapping.
7) Reports will contain both sets of properties.
Keywords:
References: None |
Problem Statement: A new feature of v2004.2 Aspen Batch.21 is Golden Batch Profiling.
The Aspen Batch.21 Configuration manual page 8-5 describes the procedure for extending the batch-demo and configuring three profiling demos.
You are instructed to go to the Batch.21 Administrator and expand the tree down to Profiling. Then Right-Click to be able to load the Profiling demo.
At this stage you MAY get an error
Profiling License was not found | Solution: The large majority of Aspen Manufacturing licenses use 'installation only' license checking.
Golden Batch Profiling, however, needs to check a 'new' license, which is a RUNTIME license.
The license itself is called SLM_Golden_Batch_Profile
For troubleshooting purposes, you first need to confirm that the license exists on your license server.
Next you would do Start => Programs => AspenTech => Common Utilities => SLM Configuration Wizard. From here you must ensure that the PC is pointing to the correct License Server.
Finally you will need to restart the Aspen Batch.21 Services from the Control Panel \ Services and restart the Batch.21 Administrator if any changes were made during the steps listed above.
Keywords:
References: None |
Problem Statement: How can I limit the rate of a crude cut, even if it includes a swing cut? | Solution: Several methods are available to limit the draw of a crude cut:
- Method 1: Include a capacity row in Table Assays
- Method 2: Constrain the pool collector columns using a capacity constraint in Table Rows
- Method 3: Include a capacity constraint in both Tables Rows and Assays
Method 1: Include a capacity row in Table Assays
This method involves a capacity row inserted into Table Assays which picks up the yield coefficients of the cut of interest. This method is most useful if (1) the cut is pooled together from several logical crude units and we are interested in limiting the draw from only one of the units and (2) the cut does not include a contribution from a swing (Type 4) cut.
We will demonstrate the technique on a medium naphtha stream, tag MN1.
Define a capacity row in Table Assays.
If the assay table is shared by several crude units and you want to limit the draw from one of the crude units, then include an eighth character in the capacity row name. The eighth character should match the last character of the logical crude unit tag.
Include the capacity limit in Table CAPS.
Method 2: Constrain the pool collector column(s) using a capacity constraint in Table Rows
This method involves using Table Rows to pick up the pool collector column of the cut of interest with a capacity row. This method is useful if the cut includes the contribution of a swing cut. It is not useful if the cut is pooled together from several crude units, but the modeler wants to limit the draw only from one crude unit.
Method 2 has the advantage of ease of application - the modeler only needs to pick up one vector in Table Rows, and the structure does not have to be augmented if additional crudes are included in Table Assays. However, we do not normally recommend constraining pool collector columns.
Note: Do not use this method if using a multi-period model and if inventorying the crude cut.
We will again demonstrate the technique on a medium naphtha stream, tag MN1.
In Table ROWS, pick up the pool collector column by using its name (SCR1MN1 in this example) in a column. Introduce a capacity row to limit the activity of this column.
Include the capacity limit in Table CAPS.
Method 3: Include a capacity constraint in both Tables Rows and Assays
This method is an extension of Method 1. It is particularly useful if the cut to be limited includes the contribution from a swing cut. This method can be applied regardless of how the cut is pooled. A capacity row is included in Table Assays to pick up the yield of the heart cut. The capacity row also picks up the swing cut contribution in Table Rows.
As before, we will demonstrate the technique on a medium naphtha stream, tag MN1.
Define a capacity row in Table Assays.
Include this capacity row in Table Rows. The capacity row should intersect with the appropriate swing vector (SCR1HN+ in this example).
Finally, include the capacity limit in Table CAPS.
For additional information about limiting a crude cut on a volume basis in a weight-based model (including swing cut impacts) see Solution 127487.
Keywords: None
References: None |
Problem Statement: We get many calls from Setcim (or Setcim/InfoPlus-X) users who notice that their Process Explorer time and their database time differ by one hour. Sometimes they say that it stays wrong for around 6 months in the year. Sometimes they will say they just noticed it since we went on or off Daylight Savings Time.
The root of the problem is the way Process Explorer (including Excel Add-Ins) makes assumptions about Daylight Savings (DST) and Greenwich Mean Time (GMT) - also known as UTC. None of the other tools such as SQLplus, Engcon, DBMT, @aGlance, GCS etc. make the same assumptions, so it's always Process Explorer that appears to be 1 hour wrong.
As mentioned above, Process Explorer makes certain assumptions regarding the Setcim or InfoPlus-X database :
The 'internal' database timestamps are always relative to the same time base. That time base could be local summer time, it could be local winter time, it could be GMT time, it could be any time, so long as it is kept consistent all year through
Certain Database settings are configured so that these 'internal' times can be converted to local wallclock times so that they can be used directly in tools such as GCS, SQLplus, Engcon etc.
Other Database settings are configured so that these 'internal' times can be converted to GMT.
Process Explorer is the only tool that will take the 'converted to GMT' times and then use the local client PC settings to convert back to local times.
Every occasion we have seen where Process Explorer is 1 hour away from Setcim time is because the Database Administrator failed to use the recommended procedure for handling Daylight Savings changes. | Solution: Knowledge Base documents 100030 and 100031 give very thorough descriptions of the recommended procedures for handling the twice-yearly Daylight Savings changes (in most parts of the world).
They even contain a disclaimer that if done incorrectly then Process Explorer times will appear wrong by 1 hour.
With Process Explorer 3.1, there is a new Timezone feature that will allow users to compensate for different timezone settings.
Keywords: Process Explorer
PE
GCS
SQL
Setcim
IPX
DST
UTC
References: None |
Problem Statement: Is it possible to interrupt and resume batches using the BCU? | Solution: Allow interrupts for the batch area by right clicking on the area in the Batch Administrator, selecting the Interrupts tab, and checking the box allowing interrupts.
Specify names for the interrupt and resume time characteristics to be used.
Select the lowest sub-level which you want to be able to interrupt. For instance, if you want to interrupt the overall batch, leave the Subbatch Level at 1. If you want to interrupt a phase, set the number to 2.
Create the necessary interrupt and resume time characteristics using the Batch Administrator.
In the BCU, create triggers for interrupt and resume with the desired conditions.
Under the newly created triggers, add interrupt and resume characteristics. For the names to be available from the drop down menu for characteristics, it may be necessary to refresh the batch configuration (Server | Update Batch Configuration).
See the attached document for screen captures demonstrating the method above.
Keywords: None
References: None |
Problem Statement: The flare stack radiates intense heat which constitutes a potential hazard. It is generally necessary to have an area around the flare in which people do not normally work. Is it possible to calculate the flare heat radiation using Aspen Flare System analyzer and plot the heat over distance profile?
In addition, flare stacks can generate high levels of noise. Vendors' specifications should be checked to ensure that equipment complies with statutory noise levels. In Aspen Flare System Analyzer, is the noise calculated just from the friction loss or does it also include the flaring? | Solution: Aspen Flare System Analyzer does not do radiation calculations. Generally, the flame on a flare stack is several hundred feet long and has a heat release on the order of 10^7 Btu/h. The capabilities of Aspen Flare System Analyzer allow users to provide the flare tip diameter and pressure drop curves in order to perform the hydraulic calculations.
The noise calculation in Aspen Flare System Analyzer assumes that the noise is generated due to the friction on the pipes. Aspen Flare System Analyzer does not report noise dB values for any other nodes (such as Relief valve, Tee, Flare Tip etc) except for normal pipes.
Keywords: radiation, noise, flare stack.
References: None |
Problem Statement: Is it possible to combine two simulations which use different VLE methods into one simulation and have each case retain its original VLE method? | Solution: Assume that we have two cases: case A and case B. We want to import B into A to make a merged model within A. A is using Soave Redlich Kwong, and B is using Peng Robinson. The merged model will retain both cases' original VLE methods.
Follow these steps to merge the models:
1. Starting with case B, choose Peng Robinson as the VLE method under Home | Options | Methods. Verify that all the pipes have Model Default selected as the VLE method.
2. Export case B as an *.xls file.
3. Open case A. Go to the pipe editor under Home | Build | Pipe. Select all the pipes, go to Methods, and change the VLE method to Soave Redlich Kwong. This will override the default method selected under Home | Options | Methods.
4. In case A, import case B's *.xls file. B's VLE method (Peng Robinson) will be set as the default VLE method in A. However, in step 3, we ensured that A's pipes use Soave Redlich Kwong instead of the default method.
5. Go to Home | Options | Methods and verify that Peng Robinson has been set as the default VLE method.
Keywords: VLE method merge combine import export
References: None |
Problem Statement: What checks are done when using the 'Check Model' option via the Calculations | Check Model menu item? | Solution: The 'Check Model' Option (found via the Calculations | Check Model menu item) checks for:
1. Hypothetical components created in a version earlier than version 3.51a
2. Whether a Scenario is selected or not
3. At least one active source with non zero flow
4. For relief valve, rated flow > nominal flow
5. Negative or zero pipe lengths
6. Elevation change less than or equal to pipe length
7. Damaged connectivity
8. Sensible pipe diameter
9. Relieving pressure is greater than the back pressure [MABP]
Keywords: check model, check, model
References: None |
Problem Statement: What does the 'Valid' column represent when accessing the Show Data Table option from the Plot Area Context Menu? What does this Valid column mean, and how is it determined whether data is valid or invalid? | Solution: In Aspen Process Explorer Show Data Table there is a column called VALID. It shows the validity of data, whether the data is valid (good data) or invalid (bad data).
Process Explorer connects to multiple databases to get data, and each occurrence or sample of data is structurally different. Below is an example showing how data is being structured for Aspen InfoPlus.21 database and how it can be determined whether the value is valid or invalid.
In Aspen Process Explorer the Valid column is true (Valid) if the data source that Process Explorer (PE) is getting data from determines that the sample is valid. This value has always been stored with the Quality sample. Since PE can also get data from databases other than Aspen InfoPlus.21 (IP.21), there is no way to say that 2 (as defined in the QUALITY-LEVELS Select10Def record) always means bad data. The PE clients look at the valid property of a sample to determine if they can plot the sample. PE clients do not look at the level or status except to display the value to the user.
This value is the VALID value. A sample contains: TIME, VALUE, LEVEL, STATUS, and VALID. The Process Data component talks to each CSDS (Client-Side Data Server) which is translating PE calls into actual database api calls. Each database returns a TIME and VALUE. The CSDS then determines the value of VALID depending on the database it is translating (VALID is not a field in any of the databases we retrieve data from).
Aspen InfoPlus.21 example:
For Aspen InfoPlus.21, this is dependent on the value of the Level for each sample. For example, if the level is marked as BAD (IP_TREND_QLEVEL is BAD = 2) then the sample is marked as invalid and the sample cannot be plotted.
In IP.21, the CSDS receives the LEVEL and STATUS, and the CSDS then fills in the VALID value depending on the value of LEVEL (QLEVEL_BAD = 2 results in VALID = FALSE; QLEVEL_GOOD and QLEVEL_SUSPECT result in VALID = TRUE).
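A rough Python model of this mapping (the numeric codes for GOOD and SUSPECT are illustrative assumptions; only BAD = 2 is stated above, and this is not the CSDS source code):

QLEVEL_GOOD, QLEVEL_SUSPECT, QLEVEL_BAD = 0, 1, 2  # GOOD/SUSPECT codes assumed

def sample_is_valid(qlevel):
    # CSDS rule described above: only QLEVEL_BAD (= 2) makes a sample
    # invalid; GOOD and SUSPECT samples remain plottable.
    return qlevel != QLEVEL_BAD

print(sample_is_valid(QLEVEL_SUSPECT))  # True
print(sample_is_valid(QLEVEL_BAD))      # False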
Keywords: Valid
Column
VALID
References: None |
Problem Statement: Process Explorer doesn't recognize when the record attribute for IP_STEPPED was turned on. | Solution: The IP_STEPPED flag in the database was not being used to determine the default configuration of data in Process Explorer. This problem has been identified as QCI 123141.
A preliminary fix was provided in v 3.1 SP1. A selector record, which returns IP_STEPPED data in string format, is used to determine whether tag data has been configured as 'Stepped.' If this string begins with an S, Process Explorer now defaults to a 'Stepped' display of data.
This fix was not documented in the SP1 release notes and will be incorporated into v 4.0 documentation.
Keywords: PE, Process Explorer, Stepped, Interpolated
References: None |
Problem Statement: Every time a script that contains a trigger is run, it fails with the message Unable to create XML DOM object. Even simple triggers that record a single characteristic fail with the same error.
The batch may be visible when using the Batch Detail Display. It may also be possible to record characteristics and create instances of sub-batches manually. | Solution: Update the version of the Microsoft XML Parser to v3.0.
It is important to note that the Windows Installer is required to update the XML Parser. An older version of Windows Installer is included as part of IE version 5.5. However, the XML install file may not run under this version of the installer. The installer may need to be updated before upgrading to version 3.0 of the XML Parser.
Keywords: None
References: None |
Problem Statement: The BCU help file states only to implement processing gates when it is really necessary. So, what is the use of processing gates? How does it work with synchronized triggers? | Solution: In version 2.5 and before, each BCU unit did processing completely independently of any other unit, and in addition, each trigger within a unit did processing independently of any other trigger (with the one exception that sometimes two or more triggers might be waiting for new designator data to arrive).
This was possible because every piece of batch data was independent of any other piece of batch data; for example, recording characteristic A did not depend in any way on whether or not characteristic B already existed. (This changes in version 3, as we will see.)
However, there were rare BCU implementations, particularly involving custom commands and custom database records, where it was important to keep later units from processing a certain span of time before the earlier units finished with that span of time. For example, in one implementation, when the early BCU unit detected that a batch was starting, it launched a custom command that created a database record to hold extra information about that batch. The later BCU units launched custom commands that accessed these database records to add or modify the extra information.
The problem in this case is that there is no way to ensure that the early unit got executed for a given batch before the later units got executed. The later units might run first and encounter an error because there was no database record corresponding to the batch in question. (This changes in version 3, as we will see.)
There are two new features in version 3 to compensate for problems that would have been encountered under the version 2 logic.
The first feature is the Synchronize Triggers flag. This is found on the General tab of the Unit properties in the BCU Administrator GUI.
The Synchronize Trigger feature ensures that all trigger firings within a unit are executed in the time order that they occurred in actual history. This feature is necessary to prevent characteristics from possibly being recorded into the wrong subbatch instance. For example, say that trigger 1 firing indicates that a new Mix phase is starting, and that trigger 2 firing indicates that characteristic A should be recorded for the latest instance of the Mix phase.
If trigger 1 and trigger 2 can process independently, then trigger 1 might be executed several times in a row (e.g. from processing several hours' worth of data), which would create Mix phase instances 1, 2, and 3. When trigger 1 finishes and trigger 2 is processed over the same timespan, all three values of characteristic A will be recorded into the latest instance of the Mix phase -- instance 3.
Using the Synchronize Trigger option means that the trigger 1 will fire, creating Mix phase instance 1, then trigger 2 will fire, creating Mix[1] characteristic A, then trigger 1 will fire, creating Mix phase instance 2, then trigger 2 will fire, creating Mix[2] characteristic A, and so on.
The second feature is Processing Gates. Gates allow the advanced BCU implementer to synchronize processing between multiple units.
A Gate is quite simply a named timestamp. They are used by setting up a trigger so as to wait for a gate to open up, i.e. obtain an updated timestamp. A trigger that is waiting on a gate will not process any condition data points that are beyond the gate's timestamp value.
Updating a gate's value can be done manually by the user from the BCU Administrator, programmatically through the BCU Application Interface, or automatically by a BCU trigger in either the same or a different unit. Updating a gate's value implies the statement I have analyzed history up to and including this timestamp, and have finished all the work that I needed to do as a result. Anyone that has a dependency on me can safely continue processing up through this timestamp.
In the previous example of earlier and later units, the BCU implementer would set up the early unit's trigger to set a gate value after processing a data point, and would set up the first trigger in all the later units to wait on this gate. (This simple example also assumes that the triggers in the later units are Synchronized.) In this way, it is possible to ensure that the later units will never run out ahead of the early unit, and that the custom database record that they depend on will always exist.
Summarized:
Processing gates allow synchronized processing between BCU units when there are dependencies.
Example:
Here's how it works:
Set Gate Unit: Trig1 sets Gate1; Trig2 sets Gate2
WaitGateSync Unit: Trig1 waits for Gate1; Trig2 does not wait
WaitGateUnsync Unit: Trig1 waits for Gate2; Trig2 does not wait
When a trigger is waiting for a gate, then it cannot process any further forward in time than the gate it is waiting on. If triggers in a unit are synchronized, and only one trigger is waiting for a gate, both triggers will have to wait for the gate, because they are synchronized.
If triggers are not synchronized, and only one trigger is waiting for a gate, then the trigger not waiting for a gate will process on its merry way, regardless of what other units or triggers are doing.
A. All four triggers in the Wait for Gate units get the data that should cause them to fire at 6:00. The Set Gate triggers get the data that causes them to fire at 6:04.
B. Trig1 in the Synchronized Wait for Gate unit has to wait until 6:04 before it can fire (although its trigger firing timestamp will be 6:00), because it has a Wait for Gate setting. Trig2 in the Synchronized Wait for Gate unit has to wait on Trig1, due to synchronization.
C. Trig1 in the Unsynchronized Wait for Gate unit has to wait until 6:04 before it can fire (although its trigger firing timestamp will be 6:00), because it has a Wait for Gate setting. Trig2 in the Unsynchronized Wait for Gate unit will fire at 6:00, because it does not have to wait for anything to happen.
Please note:
As noted in the BCU Administrator help, little or no checking is done to prevent nonsensical situations involving gates -- for example, telling a trigger to both update and wait for the identical gate name (which will always cause the trigger to stop progressing). It is possible to envision many pathological states like race conditions, deadlocks, and so on, none of which are analyzed or detected by the BCU; hence the strong warnings in the help file.
Keep in mind as a final point that when a trigger fires, and it includes a Command, you have the option of checking the box Wait for command to complete before continuing. If setting a Gate immediately after that trigger, the timestamp of the Gate can be significantly affected. If the box is checked, no further BCU processing will happen until the command is complete. So your Gate timestamp will be delayed by that amount of time. With the box unchecked, the Command is fired and the BCU continues processing, so the Gate timestamp will be set at or near the trigger timestamp.
Keywords: synchronize triggers
processing gates
References: None |
Problem Statement: How to configure Batch.21 in concurrence with Store&Forward?
What happens to Batch.21 during a Store&Forward data-interruption (storing of data)?
What happens to Batch.21 after a Store&Forward data-interruption (forwarding of data)? | Solution: In concurrence with Store&Forward, do NOT use the Extrapolate to current time option for any tag (see Solution #104737).
IF the extrapolate option is checked, the BCU will assume that seeing no new data means that there is no new data. And because of this, the BCU continues with its scan cycles, which means it will permanently miss the buffered data.
DURING a data transfer interruption (storing of data), the BCU assumes that there is no new data for that period. AFTER a data interruption (forwarding of data), the BCU will only interpret the last datapoint at the current timestamp. It will not analyse the forwarded data, because the scan cycles have been continuing and the start-time of the BCU is the current time.
IF the extrapolate option is NOT checked, the BCU calculates interpolated values (using the previous and next values). DURING a data interruption (storing of data), the BCU will be awaiting the next value and consequently will not continue its scan cycle; the BCU status will be waiting for data and the BCU start-time will not change. AFTER a data interruption (forwarding of data), Cim-IO Store&Forward will forward the data in sequence, and the BCU will continue its analysis from the point where it waited, without missing batches.
Keywords: CimIO
Store & Forward
Extrapolate
Extrapolated
Interpolate
Interpolated
References: None |
Problem Statement: We recently had a customer installing Aspen Manufacturing Suite V4.0, including several layered products such as Batch.21. He noticed during the install of Batch.21 that the DOS window showed several warning or error messages, but they flashed by on the screen so fast that he didn't have time to note down what they said. However, it still looked as though it had installed correctly. Later he tried to complete the installation by loading the Batch.21 Server Components. He reached the Setup screen where he is prompted for Username/Password. When he clicked 'OK' he got the error
the file cannot be installed
This could be seen both in a pop-up window and at the bottom of the Setup GUI. He got this error whether he used a Network or a Local account. This user was on Windows 2000.
He looked at COM+ packages in Windows 2000.
He could see the Aspen Batch.21 package, but it was empty underneath - no roles or components. He tried an uninstall and the package went away.
Keywords:
References: None |
Problem Statement: Where should I declare a delta vector as FREE for a base-delta model? | Solution: By default all columns in a matrix will have only positive activity. Therefore a delta vector in a base-delta model must be declared free so it will allow for adjustments in either direction (positive or negative) based on the property balance calculation.
In the following SKHT submodel, the property balance row is ESULKHT. This pairs with material balance row ECHGKHT to determine what the sign of delta vector SUL will be. To allow for possible negative activities, it is required to free this delta vector. There are two places to declare it free.
The traditional way is by adding row FREE inside the base-delta submodel. Enter 1 under delta vector column SUL.
* TABLE     SKHT
*           TEXT                      BAS        SUL        KE1       KTF
***
*
FREE        Free up Adjustors                    1
*
VBALKE1     Combined Kerosene         1.0000
*
* VBALHYL   Hydrogen (FOE)            0.0061     0.0017
* VBALH2S   H2 Sulfide (FOE)         -0.0053    -0.0016
* VBALNC1   Methane (FOE)            -0.0020
* VBALNC2   Ethane (FOE)             -0.0001
VBALHYL     High Purity H2, kscf      0.1350     0.0376
VBALH2S     H2 Sulfide, kscf         -0.0547    -0.0165
VBALNC1     Methane, kscf            -0.0133     0.0000
VBALNC2     Ethane, kscf             -0.0004     0.0000
VBALNC3     Propane                  -0.0002
VBALNC4     N-Butane                 -0.0003
VBALHTK     Htd Kerosene             -0.9912     0.0045
VBALLOS     Loss                     -0.0083    -0.0045
*
ECHGKHT     Feed Charge               1.0       -1
ESULKHT     Sulfur                    1.0        0.5       -999
*
The other place to free this delta vector is by adding a row name which starts with the submodel name plus the 3-char delta vector in Table BOUNDS. In this example it is SKHTSUL.
* TABLE     BOUNDS
*           User Defined Bounds
*           TEXT                          MIN     MAX     FIX     FREE
*
* CONTROL VECTORS:
SKHTSUL     delta vector SUL in SKHT                              1
Keywords: base-delta
delta vector
FREE
References: None |
Problem Statement: Sometimes it appears that changes to capacity MIN or MAX values made in Table CASE are ignored. For example, below a MAX capacity for CAT1 is entered as 30 in Table CASE. However when the model is run, there is no MAX for CAT1 reported.
Excerpt from Table CASE:
CASE    8       Change CAPS empty entry in MAX column
TABLE   CAPS
        TEXT                   MIN     MAX
CAT1    Crude Unit #1 BPD              30
CAT2    Crude Unit #2 BPD              60
Excerpt from FullSolution.html:
Process Capacity Used
                            Minimum     Units/DAY     Maximum
AT1     Crude Unit #1 BPD               40,000
VT1     Vac Unit #1 BPD                 20,000        20,000
Solution
This happens when there is no existing MAX entry in the base table CAPS. (This scenario applies to MIN entries also.)
For example, in the following Table CAPS, there is no entry for CAT1 MAX. As shown above, then the entry of CAT1 MAX as 30 in Table CASE gets ignored.
* TABLE     CAPS
*           Process Capacities ('000)
*           TEXT                   MIN     MAX
*
CR01        Crude Processing
CR02        ====================
CAT1        Crude Unit #1 BPD
CAT2        Crude Unit #2 BPD              60
There are two possible solutions:
1. Always enter a number in the base CAPS table if you want to change its value in Table CASE.
2. Use the Case Stack Bound Updates option under MODEL SETTINGS | GENERAL | Miscellaneous, and select Replace both MIN and MAX bounds if either provided. When this is selected, only one (MIN or MAX) has to be defined in the CAPS table and both can be changed via Table CASE.
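For example, under the first option the base table would carry an explicit MAX entry for CAT1 that Table CASE can later override (the 999 below is an arbitrary placeholder, not a recommended value):

* TABLE     CAPS
*           TEXT                   MIN     MAX
CAT1        Crude Unit #1 BPD              999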
Keywords: CASE
CAPS
replace
bounds
References: None |
Problem Statement: When using the Trend/Aggregate function of the Add-In, if a horizontal Fill direction is selected, no data is displayed. | Solution: This has been fixed as ER S001215A.NTI and will be included in the EIS v3.1 SP2 release.
Keywords: horizontal
References: None |
Problem Statement: The Graphic Console System, or GCS, is a graphics package that has been available for years, typically used by customers who purchased a Setcim database. GCS graphics are created in a proprietary format that cannot be converted into a Process Explorer format. Many customers may have created hundreds of these GCS graphics, and want to continue to use them.
There are two options. GCS by itself will connect to InfoPlus.21 databases as well as Setcim databases, so this product can continue to be used for as long as it remains supported. The longer-term option, of course, is to use Process Explorer.
For Process Explorer versions older than 2006.5, the following solution makes it possible to open a GCS session from a Process Explorer hotlink. | Solution:
1. From the Process Explorer menu bar, select File > New > GCS. This opens a GCS window with the base display.
2. Save this display window. From the Process Explorer menu bar, select File > Save As.
3. Open the Process Graphics Editor. From the PE Graphics Editor menu bar, select Draw > Hotlink.
4. Add the hotlink to the graphic.
5. Double-click on this hotlink.
A. On the General tab of the hotlink properties box, use Search to select the file created in step 2 (the .atgcs file). Under Type, choose 'Plot'. Click OK.
B. From the PE Graphics Editor menu bar, select File > Save As.
6. Verify it works correctly from PE.
A. From the Process Explorer menu bar, select File > Open. Open the graphic you saved in step 5B.
Note: From the GCS display, pressing CTRL and right-clicking the mouse pops up a GCS menu. From this menu you may open any display.
Keywords: GCS
Process Explorer
hotlink
References: None |
Problem Statement: How to Change the Port Number for Aspen Online | Solution: The machine port number that is used by the Aspen Online service can be found in the system Services file. Navigate to:
C:\Windows\System32\drivers\etc\services
Right-click on the Services file and open it with Notepad. Users that have read/write restrictions might need to copy/paste this file to the desktop in order to open it.
Once the services file is open, navigate to the aspenonlinevx.x row under the <servicename> column. The adjacent column shows the port number. See Figure 1 below.
Figure 1 - Windows Services File
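For illustration, the entry follows the standard services-file layout of name, port number/protocol, and an optional comment. The version suffix and port number below are placeholders, not actual defaults:

aspenonlinev8.8    51000/tcp    # Aspen Online service port (example values only)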
Before changing the port number, ensure that the Aspen Online Vx.x service is stopped, and then start it again after the change. See Figure 2 below.
Figure 2-Component Services
Keywords: aspen online, port number, services, component services, services file
References: None |
Problem Statement: A problem occurs when running an Excel spreadsheet with Aspen Add-Ins on multiple PCs. If the Excel *.xls file is created on one PC and stored on a shared drive, other PCs may not be able to view the report.
It has also been found that a spreadsheet created on one PC and then copied to a network drive cannot be opened by the same PC that created it (an error says it can't find the atdata.xla). However, if the spreadsheet is written to the network drive with Save As, then it can be opened successfully. | Solution: First be sure that Process Explorer is installed. Then check that the Atdata.xla file is installed in the same directory for all computers. If not, you will need to move the file to the appropriate directory. Then in Excel, reload (remove and replace) the Aspen Add-Ins (Excel -> Tools -> Add-Ins).
Keywords: Excel
Add-Ins
share
network drive
References: None |
Problem Statement: This Knowledge Base article provides steps to resolve the following error message:
No such interface supported.
which may be received in the Aspen Batch.21 Administrator tool.
The above message is sometimes accompanied by the following error message found in the TSK_BCU_START.out file: failed to connect to BCU , 80004002.
Here is how to troubleshoot this message. | Solution: Solution 1:
Re-register the AtBatch21AtlServer.dll file found in the following directory:
..\Program Files\AspenTech\Batch.21\Server
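For example, from a command prompt (a sketch assuming the default installation drive; adjust the path to match your system):

regsvr32 "C:\Program Files\AspenTech\Batch.21\Server\AtBatch21AtlServer.dll"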
If this doesn't help, then try Solution 2 below.
Solution 2:
It is possible that the password of the account that is being used to start InfoPlus.21 and Batch.21 has been changed, which means that the passwords on all the services associated with both applications must also be changed. Use Control Panel/Services to make the password change for these services.
Additionally, the account/password for the Batch.21 Server Components must also be changed.
Here are the steps:
1. Click Start | Programs | AspenTech | Aspen Manufacturing Suite | Aspen Batch.21 | Server Tools | BCU Server Manager.
2. Click Configure...
3. Browse to the appropriate account and enter the new password.
Click OK and Close to exit.
Keywords: None
References: None |
Problem Statement: What is the syntax for defining a BCU API datasource? | Solution: Example ASP page using VBS:
<HTML>
<HEAD><TITLE>hallo.asp</TITLE></HEAD> <BODY>
<%
set a = server.CreateObject("AspenTech.Batch21.bcu.bcuDataSources")
b = a.count
%>
Number of Batch datasources: <%=b%>
<BR>
<BR>
<%
datasource = 1
b = a.item(datasource).areas.count
%>
Number of BCU configured areas for the first Datasource: <%=b%>
<BR>
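<!-- A further sketch, using only the count, item, and areas properties
     shown above: list the number of configured areas for every datasource.
     This is illustrative only, not a definitive API reference. -->
<%
For i = 1 To a.count
    Response.Write "Datasource " & i & " has " & a.item(i).areas.count & " areas<BR>"
Next
%>
</BODY>
</HTML>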
Keywords:
References: None |
Problem Statement: Is it possible to get aggregate data for more than one tag at a time into the Excel Process Data Add-ins? | Solution: Enter valid InfoPlus.21 tag names in cells A1, A2, A3 (vertically) or cells A1, B1, C1 (horizontally)
From the drop-down menu, choose Aspen | Process Data | Get Data | Trend/Aggregate Data
Enter the range from the spreadsheet where the tag names are located into the field 'Tags'. The entry should look something like Sheet1!$A$1:$A$3
Select an output range; do not select the cells B1:C3 (if the tag names are in A1 to A3) because they are reserved for additional information such as server and map. Do not select cells A2:C3 if the tag names are in cells A1 to C1.
Select 'Settings' and the 'Data Source' tab
If the tag names are listed in cells A1, A2, A3, then make sure 'read information for each tag in rows, across' is selected. If the tag names are listed in cells A1, B1, C1, then make sure 'read information for each tag in columns, down' is selected.
Press OK from the Settings Dialog
Press OK from the Trend/Aggregate Data dialog box
The data should fill into the spreadsheet
Keywords: values
cell
tags
multiple
References: None |
Problem Statement: If you do a query with the Batch Query Tool, and the query is attempting to return thousands of batches, it may take a long time (minutes to hours, depending on how much data is requested). During that time you will see high CPU usage of mtx.exe (Microsoft's Transaction Server) and possibly MS SQL Server or Oracle (depending on which relational database you are using). If you select the cancel button from the Batch Query Tool, it stops the Query Tool from waiting for the results; however, the query continues to run against the relational database from the transaction server. The product was designed to work this way, but if the processing of MTX causes a performance problem, the user has the option to restart it, therefore ending the query. | Solution: If you want to reset the transaction server, you don't have to reboot the computer, but can do the following:
On the server computer, start up the Transaction Server Explorer: Start | Programs | Windows NT 4.0 Option Pack | Microsoft Transaction Server | Transaction Server Explorer
In the left pane: Open: Microsoft Transaction Server
In the left pane: Open: Computers
In the left pane: Right Mouse Click on My Computer
In the context menu, select: Shut down Server Processes
This will shut down the transaction server, which includes the batch server. The next time a batch application accesses the batch server, it will automatically start back up.
Keywords: BQT
Performance
hangs
References: None |
Problem Statement: I can import a sample case such as Atmospheric Crude Tower.hsc from the Aspen HYSYS samples folder. However, why am I unable to import solid components into Aspen Flare System Analyzer from Aspen HYSYS? | Solution: An Aspen HYSYS case file may contain components which are not available in Aspen Flare System Analyzer; one example is solid components. Aspen Flare System Analyzer does not support solid components. In order for the import to work, you will have to delete the solid components in Aspen HYSYS.
Keywords: Solid, Import sources, Aspen HYSYS
References: None |
Problem Statement: How do I better size my relief valve? How can I get a better representation of my rated flow? | Solution: For each relief valve, the inputs for correctly sizing a valve can be introduced on the Relief Valve Editor under the Methods tab.
The given value for the back pressure will now be used with the appropriate sizing method selected.
If this value is not known, or not defined, the program will use the Allowable Back Pressure instead to size the valve and to calculate the rated flows.
It is recommended to input this value under the sizing section to have a better calculation of the rated flow.
Keywords: Rated flow, backpressure, PSV
References: None |
Problem Statement: What are the upstream diameter ratio and downstream diameter ratio for an orifice plate? | Solution: The Upstream Diameter Ratio is the ratio of the throat diameter to the upstream pipe diameter.
The Downstream Diameter Ratio is the ratio of the throat diameter to the downstream pipe diameter.
For example, a 3 inch throat in a 4 inch upstream pipe gives an upstream diameter ratio of 3/4 = 0.75.
Keywords: None
References: None |
Problem Statement: Is it possible to close an AspenTech VBA window programatically? | Solution: To close a window in VBA, the class ThisDocument can be used with the method Close. Documentation from the VBA help files is contained below.
The Close method closes the document.
Syntax
object.Close(SaveChanges, FileName)
Arguments:
SaveChanges -- TRUE saves, FALSE does not save changes to the document.
FileName (optional variant) -- Saves changes under this file name. If FileName is omitted, the user must supply a file name.
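For instance, a minimal VBA sketch (the file name shown is purely hypothetical):

' Close the current document, saving changes under the given file name
ThisDocument.Close True, "C:\Temp\MyDisplay.atgraphic"

If SaveChanges is TRUE and FileName is omitted, the user is prompted for a file name, as noted above.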
Keywords: VB
References: None |
Problem Statement: Why do I get the same pressure profile in my network regardless of whether or not the Include Kinetic Energy option is active? | Solution: 2006.5 and earlier versions: The simulation results will be the same irrespective of whether the Include Kinetic Energy option is selected whenever the Enable Heat Transfer option is active. When heat transfer calculations are enabled, the impact of kinetic energy changes will always be included.
Note that both the Include Kinetic Energy and Enable Heat Transfer check boxes are accessed from the General page tab via the Calculations | Options menu item.
V7 and later: Even if heat transfer calculations are enabled, kinetic energy changes will be included if and only if the Include Kinetic Energy option is selected.
Keywords: kinetic, energy, heat transfer
References: None |
Problem Statement: What inputs are required to update batch characteristics using the Aspen Batch.21 XML interface? | Solution: The following code demonstrates the necessary inputs to update a batch characteristic in a database. In the example, the characteristic being updated is BATCH VALUE.
<Datasource xmlns="Aspentech.Batch21" name="bedrock">
  <Area name="demo">
    <InsertBatchData>
      <BatchList>
        <Batch>
          <Designator name="BATCH NO">151</Designator>
          <Characteristic name="BATCH VALUE" instance="last" forceOverwrite="1">
            <Value>1000</Value>
            <Timestamp isUTC="1">2005-04-20T09:50:54</Timestamp>
          </Characteristic>
        </Batch>
      </BatchList>
    </InsertBatchData>
  </Area>
</Datasource>
The Datasource xmlns specification should not be changed. Leave it as 'Aspentech.Batch21'.
The name specification corresponds to the name in ADSA for the Batch.21 data source.
Keywords: batch
XML
sample
code
References: None |
Problem Statement: When running the Aspen Database Wizard to create the Aspen Production Record Manager database, there are three options
1. Create the database/tablespace, users and database objects
2. Create database objects in an existing database
3. Update the existing database and database objects
The following solution details the Oracle permissions needed to run each of these three options. | Solution:
Create the database\tablespace, users and database objects - For this option, DBA level privileges are required. This is necessary because this first option will generate the tablespace in Oracle, a task that requires DBA level permissions in Oracle.
Create database objects in an existing database - This option is used when the Oracle Administrator does not want to grant DBA permissions to the person running the Aspen Database Wizard. In this case, the Oracle Administrator should run the create_DBandUser.sql scripts in the \AspenTech\Batch.21\Server\Setup\Oracle directory. This will create the TableSpace and User accounts in the database, but will not create the database objects such as the stored procedures and tables. After the Oracle Administrator creates the TableSpace and User using the scripts, the Batch Administrator should run the Database Wizard and select the second option, Create database objects in an existing database. This will create the tables, stored procedures, etc. When the Database Wizard is run with this option, the following permissions are required:
Connect
Resource
Create Public Synonym
Create Sequence
Create any Procedure
Create any Trigger
Create Table
Execute any Procedure
Create any Synonym
Select any Table
Select any Sequence
Insert any Table
Update any Table
Delete any Table
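For illustration, the Oracle Administrator could grant this privilege set with statements along the following lines (a sketch only; B21USER is a placeholder for the actual account name):

GRANT CONNECT, RESOURCE TO B21USER;
GRANT CREATE PUBLIC SYNONYM, CREATE SEQUENCE, CREATE TABLE TO B21USER;
GRANT CREATE ANY PROCEDURE, CREATE ANY TRIGGER, CREATE ANY SYNONYM TO B21USER;
GRANT SELECT ANY TABLE, SELECT ANY SEQUENCE, EXECUTE ANY PROCEDURE TO B21USER;
GRANT INSERT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE TO B21USER;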
Update the existing database and database objects - This option should be used when upgrading an existing Batch from one version to the next. In this case, the required permissions are:
Connect
Resource
Create any Table
Create any Index
Create any Sequence
Create Public Synonym
Create any Procedure
Create any Trigger
Drop any Table
Drop any Index
Drop any Sequence
Drop Public Synonym
Drop any Procedure
Alter any Table
Alter any Procedure
Alter any Index
Alter any Sequence
Alter any Trigger
Select any Table
Select any Sequence
Insert any Table
Update any Table
Delete any Table
Execute any Procedure
Keywords: None
References: None |
Problem Statement: Aspen Process Data does not appear on the toolbar within Excel after Aspen Process Explorer is installed. | Solution: During installation of Aspen Process Explorer, an add-in is placed in the Library folder of the Microsoft Office application Excel. As a one-time configuration step, the add-in can be added to Excel's list of add-ins to use by selecting Excel menu item Tools -> Add-ins, and checking the box beside Aspen Process Data, which is the name of the add-in for InfoPlus.21.
Usually Aspen Process Data is added to the list of Add-Ins automatically, but in some rare cases it isn't. Here are some steps to add Aspen Process Data to the list of Add-Ins.
Search for the ATDATA.XLA file (most likely this file is located in C:\Program Files\Microsoft Office\Office\Library)
Open Microsoft Excel
Go under Tools to Add-Ins and click on Browse
Go through the Browse dialog to get to the Atdata.xla file within C:\Program Files\Microsoft Office\Office\Library
Double-click the ATDATA.XLA file; it will then be added to the Add-Ins list within Excel.
check the box next to Aspen Process Data and click OK
Now, Aspen should appear next to Tools and Data on the toolbar.
Keywords: atdata.xla
add-ins
aspen process data
References: None |
Problem Statement: What criterion is used to determine the status of aggregate data requested from Aspen InfoPlus.21 via Aspen Process Explorer, Aspen Process Data Add-In, or Aspen SQLplus?
The source data has quality levels of GOOD, BAD, and SUSPECT. | Solution: Aspen Process Explorer obtains aggregate data from an Aspen InfoPlus.21 API routine. This routine sets the status level of the aggregate to SUSPECT if more than one percent of the source data in the aggregate period is BAD, and changes the status level to BAD if more than 50 percent of the source values are BAD. For example, if an aggregate period contains 200 source values, three or more BAD values (more than one percent) make the aggregate SUSPECT, and 101 or more make it BAD.
Aspen Process Explorer uses the aggregate's status level to decide whether to display the value or not. Process Explorer will not display data on a trend plot which has a quality level of BAD.
Keywords: aggregate
References: None |
Problem Statement: Is it possible to query Batch.21 to find batches meeting particular criteria? | Solution: SQLplus can be used to query for batches that meet particular criteria. The following query can be used to return data for batches that fall in a given time span (between Myquery.TimeRange.Start and Myquery.TimeRange.End) and include a characteristic that starts with the string 'HE'.
When using the sample query, replace DATA_SOURCE_NAME with the Batch.21 server and AREA_NAME with the appropriate Batch area name (e.g., DEMO). CHARACTERISTIC_NAME should be replaced with the appropriate characteristic. TEXT_STRING can be replaced with the desired string (wild cards are allowed).
The single quotes should remain around server name, area name, characteristic name, and text string.
=======================================================================
--Declare variables
local data_sources, batch, myarea, myquery, batchlist, i int;
--Define Batch.21 data sources
data_sources = createobject('AspenTech.Batch21.BatchDataSources');
--Define Batch.21 area
myarea = data_sources('DATA_SOURCE_NAME').areas('AREA_NAME');
MyQuery = MyArea.BatchQuery;
MyQuery.Clear;
--Search for instances of CHARACTERISTIC_NAME that contain the desired text string
Myquery.CharacteristicConditions.Add('CHARACTERISTIC_NAME', 1, 6, 'TEXT_STRING');
--Define time range for query
Myquery.TimeRange.Start = '06-SEP-05 06:00:00';
Myquery.TimeRange.End = '09-SEP-05 10:00:00';
--Process and write the results
batchlist = MyQuery.get;
for I=1 to batchlist.count do
begin
write batchlist(i).characteristics.item('CHARACTERISTIC_NAME');
exception
write 'No characteristic name';
end
end
=======================================================================
The following information comes from the BCU Application Interface Help File located at:
(\Program Files\AspenTech\Batch.21\Help\en\AtBatch21BcuApplicationInterface.hlp).
OR
(\Program Files\Common Files\AspenTech Shared\Help\En\atbatch21applicationinterface.chm)
=======================================================================
The Add method allows you to add a new condition to the collection.
It is also possible to specify the numeric parameter for AtLike (6).
Myquery.CharacteristicConditions.Add('CHAR', 1, 6, 'Desired_String%');
=======================================================================
The Add method allows you to add a new condition to the collection.
Syntax
object.Add (CharacteristicName as String, [Instance as Long = 1], Op as atOperatorEnum, Value
=======================================================================
The atOperatorEnum enumeration allows you to define the operator used in a statistic condition.
Enumerator (Number)        Description
atLessThan (1)             Condition operator <
atGreaterThan (2)          Condition operator >
atLessThanOrEqual (3)      Condition operator <=
atGreaterThanOrEqual (4)   Condition operator >=
atEqual (5)                Condition operator =
atLike (6)                 Condition operator LIKE, where LIKE is used in conjunction with a wildcard character to perform query functions.
                           Wildcards:
                             % (percent) -- represents one or more characters in a query
                             _ (underscore) -- represents one character in a query
                           Examples:
                             Record_Name LIKE A% -- will find all record names starting with A
                             Record_Name LIKE %TANK% -- will find all record names containing the letters tank
                             Record_Name LIKE _ANK -- will find all record names that are four characters long and end with the letters ank
atNotEqual (7)             Condition operator <>
Keywords: batches
query
SQL
References: None |
Problem Statement: Sometimes a user may get the message data server not responding when trying to insert or view a tag in Process Explorer. The fundamental problem is that Process Explorer cannot communicate with the TSK_API_SERVER running on the IP.21 server. But there may be many reasons why it cannot make this communication. Below is a list of troubleshooting tips that you can use to try to resolve this problem. | Solution: Check in the IP.21 Manager to make sure the database is up and running. Check especially the TSK_API_SERVER to make sure it is running.
Check the Configure Servers option within Process Explorer to make sure the connection is defined correctly.
Try to bring up the Tag Browser or Advanced Tag Browser within Process Explorer to see if you are able to search for tags. If you are using Advanced Tag Browser and it is not connecting, then check to make sure TSK_SQL_SERVER task is running in IP.21 Manager (Advanced Tag Browser does not use TSK_API_SERVER, which is why it may connect, even though Process Explorer cannot).
If Process Explorer and the IP.21 database are not on the same machine, then go to a DOS Prompt and see if you can ping the two computers to make sure they can see each other.
Check other client machines where Process Explorer is installed to see if those are working.
If Process Explorer is installed on the IP.21 server, then check to see if that one is working.
Check NT Task Manager to make sure h21archive.exe and h21task.exe processes are running.
Check to see if the Administrator tool works, on both the IP.21 server and on remote machines. It also uses TSK_API_SERVER to make its connection to the database.
If everything is working but you still get data server not responding, then restart the database from the IP.21 Manager
This behavior has also been linked to ELRON internet monitoring software. If the application is installed on the Process Explorer machine, the data server not responding message may be eliminated by removing ELRON software.
This behavior has also been seen when the NETBEUI protocol is installed on the IP.21 server machine. Try removing this protocol.
If TSK_API_SERVER appears to be running in the list of running tasks, it should be registered with NobleNet Portmapper. To prove that it is, go to Aspentech\InfoPlus.21\shared\bin, and run rpcinfo.exe. (In version 3.x, this is found in C:\Program Files\Common Files\AspenTech Shared\Portmapper). From the menu, choose Info\Portmap Dump. There should be a line item entry that looks like this: 300363 1 tcp 1644
If this line is not present, then the TSK_API_SERVER has not properly registered with Portmapper. Try stopping TSK_API_SERVER from the IP.21 Manager, and stopping the portmapper from Control Panel\Services. Restart them, and check for the entry.
We have seen an instance where the TSK_API_SERVER will not register with Portmapper, because it gets hung during startup, due to some leftover files from an incomplete shutdown or system crash. The files are located in the Aspentech\Infoplus.21\c21\n21\files directory. These files are called net_sm, net_sh, net_rt, and net_rtpid. Try deleting these files, and restarting TSK_API_SERVER.
Keywords: data server not responding
TSK_API_SERVER
References: None |
Problem Statement: Batch and the Batch Demo have been configured and the Scheduling Table in the BCU Administrator indicates that units are executing, but no batches are being created | Solution: Initialize the Batch Demo by executing the QueryDef record atcini.
Keywords: AspenChem
Aspen Chemical Demo
References: None |
Problem Statement: I would like to model the utility consumption in a process unit such that utility consumption would only be activated when the feed is more than a certain amount. How should I model this? | Solution: You can use Mixed Integer Programming (MIP) to do this. What you can do is:
1. In that process unit (submodel table), create 2 columns that represent the utility consumption conditions when the feed is below versus above the threshold amount. In the example below, SISO is the process unit where the FUEL utility consumption should only be activated when the feed is more than 1000 Bbl. Create an additional column NEW; this column represents the condition when the feed is less than 1000 Bbl, hence its entry in the UBALFUL row is 0. The NC4 column represents the condition when the feed is more than 1000 Bbl, and it carries a utility consumption coefficient in the UBALFUL row.
2. In T. BOUNDS, specify that the maximum value of variable SISONEW is 1000 Bbl
3. In T. MIP, group variables SISONC4 and SISONEW as one set of variables through SOSSET, and specify them as SOSTYPE 1. This allows only one of these two variables to take non-zero activity, as sketched below.
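For illustration, the Table BOUNDS and Table MIP entries from steps 2 and 3 might look like the sketch below (a sketch only; the SOS set name UTL1 is an arbitrary example, and the exact column layout may differ in your model):

* TABLE     BOUNDS
*           TEXT                        MAX
SISONEW     SISO low-feed operation     1000
*
* TABLE     MIP
*           TEXT                        SOSSET    SOSTYPE
SISONC4     SISO high-feed operation    UTL1      1
SISONEW     SISO low-feed operation     UTL1      1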
I am attaching a sample model where I have added the constraint above. You can run case 1 and case 2 to see the effect of FUL utility consumption when the feed amount of SISO unit changes.
Keywords: MIP, SOSSET, SOSTYPE, BOUNDS, turn on/off utility
References: None |
Problem Statement: How to have Web.21 as the data source for Excel Add-Ins? | Solution: On the menu bar in Excel, choose Aspen | Process Data | Options. Click the data source tab. Select Web.21.
Keywords:
References: None |
Problem Statement: The Aspen Batch.21 installation guide describes which versions of the Oracle client were tested with Aspen Batch.21. However, the version of the Oracle server components is not mentioned. This knowledge base article documents the versions of the Oracle relational database server which were tested with Aspen Batch.21 v2004.x. | Solution: Oracle relational database server version 9.2.0.1 was tested with Aspen Batch.21 v2004.0, v2004.1 and v2004.2. On the Aspen Batch.21 server, the OLE DB driver was the version which was installed with the 9.2.0.1 server components (the one available from the universal installer, the OLE DB driver is no longer a separate Oracle patch). Also, if the Oracle client or server version is on Windows 2003, there is an Oracle patch to support this operating system which brings the Oracle server components to version 9.2.0.3. Oracle 9.2.0.1 will not work without the patch.
Keywords: Test
Support
QA
QC
References: None |
Problem Statement: A particular characteristic may end up being stored against the incorrect batch -- i.e. not for the current (intended) batch, but for the previous or next batch. | Solution: The designator value should be correct at the moment a trigger condition becomes true, and the characteristic gets stored.
When a characteristic gets stored at the exact same moment that the designator value changes, it is not clear to which batch (designator value) this information belongs. It might belong to the end of the previous batch OR to the start of the next batch.
For instance, let's say the designator value has been 52 for a while, and at exactly 10:00, it changed to 53. All triggers firing before 10:00 would store their characteristics against batch 52. All triggers firing after 10:00 would store their characteristics against batch 53. But, what about triggers firing at exactly 10:00? Are these ending characteristics for the previous batch, 52, or beginning characteristics for the new batch 53?
In order to correctly synchronize the characteristics to the correct batch, the trigger in the BCU should acquire its batch number either before or after the time at which the trigger fired. To do so, there are some special settings on the Advanced tab of the trigger definition. Acquire batch number before/after time of conditions should be changed to a couple of seconds before the trigger fires, if the data should have been stored as an ending characteristic, against batch 52. Or the field should be changed to a couple of seconds after the trigger fires, if the data should have been stored as starting characteristics, against batch 53.
For advanced synchronization of complex triggers, see Solution #105630
Keywords: designator
trigger
before after
Acquire batch number before/after time of conditions
References: None |
Problem Statement: When Process Explorer is used for the first time after installation, the error message, Process Explorer failed to initialize VAO components may appear. This knowledge base article explains what this message means and how it is resolved. | Solution: This has been observed to occur on Windows 95 machines more frequently than with NT machines. However, the resolution to this problem is the same for both operating systems. The VAO components are Visual Basic files that should be initialized after the reboot at the end of the install. They most likely will not be initialized if the install process does not complete normally.
With this in mind, there have been problems where the installation fails when Process Explorer is installed over a Novell network. This has been tracked down to a Novell problem with long directory names. Novell has a patch that was released to correct this problem for MS Word; the patch is TID#2953476. If one cannot obtain this patch, installing Process Explorer directly from the original installation CD should provide a clean install.
Additionally, a few problems have also been observed if WinInstall is used. If one believes the installation process to be free of errors, but still receives the, failed to initialize VAO components error when Process Explorer is run, the following will allow the VAO components to be initialized.
Go to the AspenTech\Desktop directory in the command prompt and run processexplorer -regserver
Go to the \ProgramFiles\Common Files\Microsoft Shared\vba directory and run regsvr32 vao50.dll.
Also in the \ProgramFiles\Common Files\Microsoft Shared\vba directory, run regsvr32 dhc50.dll
Moreover, if one searches these directories but finds that vao50.dll and dhc50.dll are not present, then the installation was not successful and should be redone. This error should not occur with Process Explorer 3.x as VBA has been made a core component and the installation procedure has been changed.
Keywords: VAO
install
VBA
References: None |
Problem Statement: There are multiple reasons for the following error message received when trying to use Excel Add-In to retrieve data from Aspen InfoPlus.21 (IP.21):
Error 50003: No Connect to Server.
1. Sometimes this error is received when selecting more than one tag in the Excel Add-in. This happens whether you reference multiple tags from the tag browser or manually input them into the dialog box.
2. Rocket Review software is also installed on the same computer.
3. Sometimes this error is received after a system re-boot or database re-start.
4. TSK_EXCEL_SERVER is hung
5. History in InfoPlus.21 is taking too long to be retrieved
6. RPC timeout
Following are the solutions to the above problems. | Solution: Solution 1:
1. Bring the Trend/Aggregate Data Dialog box up
2. Select Settings... | Data Source tab
3. Change the selection under Tag Information to the opposite of what is selected.
4. Select OK
5. The error goes away and the data fills into the cells.
Example:
Let's assume that your spreadsheet is configured as follows:
        A        B        C
1       Tag1     Tag2     Tag3
2
3
If you had Read information for each tag in rows, across selected, the Excel Add-Ins would want the Server and Map to be inserted into cells A2 and A3, and you would get the above-mentioned error message. Changing the selection on the Data Source tab to Read information for each tag in columns, down would insert the Server and Map into rows 2 and 3, as expected.
        A        B        C
1       Tag1
2       Tag2
3       Tag3
Solution 2:
See Solution #106856 for details.
Solution 3:
(This solution was provided by Gilbert Santiago of DuPont and it applies to AMS versions 6.0 and up. [email protected])
In the InfoPlus.21 Manager, check the TSK_EXCEL_SERVER's command line parameters and see what port was specified (if any). Then use a tool like aports.exe or procexp.exe to see if the API is actually using that port. If it's not, then find the process that is using the port, see if it can be stopped, and stop it. Stop the TSK_EXCEL_SERVER and restart it. Verify that it is using the correct port. Then, if possible, start the process that originally stole the port. If the correct port cannot be re-assigned, then do a database stop/start or a system re-boot.
Solution 4:
Restart TSK_EXCEL_SERVER in the IP.21 Manager.
Solution 5:
See if anything can be done to speed up the accessing and retrieval of the InfoPlus.21 history. For example, can the drives that hold the history values be defragmented, can the processor load be reduced, or can the drives that hold the history be upgraded to provide greater throughput?
Solution 6:
See Solution #126892 for details.
Solution 7:
On the client side, go to the Portmapper folder (C:\Program Files\Common Files\AspenTech Shared\Portmapper) and double-click RPCINFO.EXE.
Then go to the Info tab and select Portmap Dump.
In the window that follows, replace the Host Name with the correct server name; in this example the correct server is LATAMTDC01.
Then click OK.
On the server side, go to the IP.21 Manager and double-click TSK_EXCEL_SERVER in the list on the left side. Verify that the command line parameters are written in the correct form.
Keywords: None
References: None |
Problem Statement: Occasionally process engineers may have a need to connect their Process Explorer to a remote database server located at a different plant than the one they are in. Depending on their network configuration, they may or may not be successful. In case of an unsuccessful attempt, the following error message will be displayed: Data server not responding. | Solution: Solution #103374 describes the steps necessary to troubleshoot this error.
This solution deals with the situation where Process Explorer can connect to a local database but will not connect to a remote database, which indicates that this is a network-related issue.
Let's look for a moment at the database server logon authentication requirements for Process Explorer users operating in the domain-based environment. One of the cardinal requirements for the logon process to be successful is for the user to be on the same or trusted domain as the database server. If this requirement is not met, the connection cannot be established.
Many large corporations divide their network into several domains. For security reasons, in many cases no trust relationship exists among those domains. Here is how this limitation can be overcome:
Solution
The user will need to log into the domain where the database resides. Create an account for the user on the domain in which the database resides. Logging into the domain using that account will enable the user to connect to the database and view data records in Process Explorer.
Keywords: domain
data server not responding
References: None |
Problem Statement: Prior to version 3.1, Batch.21 used Event.21 to record the start and end time of batches and sub-batches. In 3.1 versions, Event.21 is no longer a required component of Batch.21, and the method used to store start and end times has changed. The following Tech Tip details how start and end time events are recorded in version 3.1 | Solution: In version 3.1, start and end times are stored as characteristics, not as events. For a BCU script to record the start and end times in the BCU Administrator, the characteristics must first be created in the Batch Administrator tool.
In the Batch Administrator, define Characteristic definitions for Start Time and End Time. These characteristics should be defined for the Batch level and for each subsequent subbatch level. If they are only defined for the batch level, there will be no way to record the start and stop times of the intermediate batch levels.
For example, in the Batch.21 Demo, characteristics for the Batch Definition include Start Time and End Time. These characteristics will be used to store the start time and end time of the entire batch. There are also three subbatch phases in the demo; the Dump, Mix, and React phases each have Start Time and End Time defined in their respective Characteristic Definition lists. These characteristics will store the start and stop times for each of the three phases. And below the phase level, there are Step definitions for the MIX and REACT Phases. Start and End time characteristics should be set up for these sub-phases.
BATCH Definition
   - Start Time
   - End Time
PHASE Definitions
   1. DUMP
      - Start Time
      - End Time
   2. MIX
      - Start Time
      - End Time
      a. STEP Definitions
         i. FILL
            - Start Time
            - End Time
         ii. MIX
            - Start Time
            - End Time
   3. REACT
      - Start Time
      - End Time
      a. STEP Definitions
         i. FILL
            - Start Time
            - End Time
         ii. HEAT1
            - Start Time
            - End Time
         iii. VENT
            - Start Time
            - End Time
Once the Characteristics are created in the Batch Administrator, they can be referenced in the BCU script. Each trigger that represents the start or end of a batch, subbatch, phase, step, etc. should record the corresponding start or end time characteristic. Sample BCU scripts included with the Batch.21 Demo illustrate how these characteristics are recorded.
Note: If you are upgrading a 2.5.1 batch system to 3.1, start and end time events from 2.5.1 are automatically changed to characterstics of the type START TIME and END TIME. The data type of these characterstics is a timestamp in 3.1.
Keywords: event table
e21_all_create
References: None |
Problem Statement: Sometimes a tag shows bad quality on the Control Panel Tags tab in the Aspen OnLine GUI (see screen shot below), but when it is checked via the Aspen CIM-IO Test API, the quality is actually good. Why? | Solution: When the online program runs, the tags are fetched in batch mode; that is, the tags are divided into small, easy-to-manage batches which are processed sequentially. If there is an invalid tag in a batch, the fetching process aborts, so the remaining batches, if any are left, will not be processed and the tag qualities in those batches are left as bad.
It is necessary to identify all the invalid tags in order to get the online project to run properly. To do so, use the tag validation features under the Tools menu in the Aspen OnLine GUI; the invalid tags will be listed as the outcome of the feature.
Keywords: Quality, Good, Bad, Tag
References: None |
Problem Statement: Cannot find documentation for the batch plots | Solution: It is in the Help file of the BCU administrator.
A defect has been submitted to include it in the APEx Help file in a future version.
Keywords:
References: None |
Problem Statement: The BCU can be run in test mode which primarily means that batch data is not submitted to the Batch.21 Relational Database. However, there are other differences when running the BCU in test mode versus normal processing mode. | Solution: When choosing to run the BCU in test mode, two aspects of BCU processing are not handled: Commands are not executed and the BCU will not search for batches to match by designator.
Commands are not executed. Commands are often used to fire SQLplus scripts that update tags in the InfoPlus.21 database. It is important to know that these tags are not updated when running the BCU in test mode, particularly if these InfoPlus.21 values are used for subsequent trigger conditions.
The BCU will not search for batches based on designator ID. In normal mode, when the BCU processes a characteristic value, it does a query against the Batch.21 database to see if there is already a batch created containing the designator value that matches during the trigger execution time. If no match exists, the BCU will create a new batch to record the designator and any other characteristic values against. When run in test mode, the BCU does not query to see if an existing batch has been created. Moreover, it does not create the new batch (since data is not being submitted to the batch database).
What does the BCU do while being run in test mode? It requests InfoPlus.21 history and runs through the trigger analysis and logic.
Keywords:
References: None |
Problem Statement: This article describes why the following messages can appear in the Batch21Services.AtlServer.aspen.log.
(2188) 18.02.2005 08:31:24.289 AtAuditTrail::ConnectToAlarmAndEvent - Failed to create an Alarm and Event source component. Stopping all communication with Alarm and Event.: Class not registered
(1064) 18.02.2005 08:46:39.308 AtAuditTrail::ConnectToAlarmAndEvent - Failed to create an Alarm and Event source component. Stopping all communication with Alarm and Event.: Class not registered | Solution: Batch.21 automatically tries to connect to Alarm & Event. If Alarm and Event isn't installed or if the connection fails for another reason, one of these messages will be logged every 15 minutes. If Alarm and Event is not installed then these messages can safely be ignored.
Currently there is no way to disable this checking for older versions of Batch.21. This problem has been fixed in Batch.21 v2004.2.
Keywords:
References: None |
Problem Statement: The following error message occurs when opening the Batch Query Tool from any client, while the tool runs as expected on the Aspen Batch.21 server:
JCK_IM: Unable to retrieve any areas from data source. This may be due to a connection or database problem, invalid Batch Data Source, or the permission on the Batch Data Source.
Check with your system administrator, or select a new Data Source and Area using Tools | Options. | Solution: This error is related to an authentication problem. As there have been some issues with using DCOM on Windows 2003 servers, make sure the user having this problem has the necessary permissions by following the procedure mentioned below:
1. Start Control Panel
2. Select Administrative Tools.
3. Select Component Services.
4. Expand Component Services
5. Expand Computers
6. Right-click My Computer and select Properties
7. Select the COM Security tab
8. Click the Edit Limits button for Access Permissions and verify the group or user has Local Access and Remote access permissions checked. Also verify and set the same for the Anonymous Logon.
9. Verify the limits for Launch and Activation Permissions the same way as #8
10. Click OK
In addition to setting the COM security limits (see procedure above), also set the defaults to include the appropriate group or user and verify the DCOM Aspen Batch21 Services is using default permissions or add the group and user to the custom permissions.
Note: The above procedure needs to be checked, and if necessary the DCOM settings modified accordingly, on the Aspen Batch.21 Server.
Keywords: Unable to retrieve
Batch query tool
References: None |
Problem Statement: If Aspen PIMS users are sharing model files with a user who has Office 2007 installed on their machine, they will not be able to open the Excel files if they are saved in Excel 2007 format. | Solution: Microsoft has issued a file format conversion utility to allow Excel 2003 to open files saved in Excel 2007 format. You can download the utility from Microsoft's website and run it on your machine. It installs an add-in that allows the new 2007 format files to be opened by Excel 2003.
Below is a link to the Microsoft website article about this conversion utility:
http://www.microsoft.com/downloads/details.aspx?FamilyId=941b3470-3ae9-4aee-8f43-c6bb74cd1466&displaylang=en
Keywords: Excel
2003
2007
format
file
References: None |
Problem Statement: Unexpected file format trying to import Web.21 graphics into Process Graphics. | Solution: Currently, it is possible to export an XML file from the Aspen Process Graphics Editor, then import the XML file into the Web.21 Graphic Studio. It is not possible to go in the opposite direction; doing so will result in the error above. The Aspen Process Graphics Editor Help file has a section titled, Converting from XML to .atgraphic Format. This section begins with, XML documents that follow the PEGraphic.dtd specification can be converted to .atgraphic format.... Note that the XML files from the Web.21 Graphic Studio do not meet this specification.
Keywords: import
export
XML
References: None |
Problem Statement: A trigger is to fire based upon a change of state, but it fires with a mysterious delay. The trigger and the designator are both set up with the extrapolate to current time option enabled. | Solution: When using extrapolate to current time, you have to be very careful about the reliability of the data. Make sure that:
1. the interface server clock and the Batch.21 server clock are well synchronized.
2. you always have data available (see Solution #104737)
When the extrapolate to current time option is enabled, the BCU extrapolates the data to current time, which means that it will interpret the previous value with the current timestamp (it creates a new timestamp for the value).
If triggers are set up to fire based upon extrapolate to current time conditions while such a tag is far behind in time, the result is a time-shift equal to the time difference between the (CIM-IO) datasource and the Batch.21 server.
Summarized: synchronize the system clock between the Batch.21 server and the interface server,
OR do not use the extrapolate to current time functionality.
Example:
For instance, a trigger is to fire based upon a change of state.
The trigger and the designator are both setup with the extrapolate to current time enabled.
Then, the COS happens at 15:05:13, as shown in the IP.21 Administrator.
At 15:06:55 the BCU analyses the data (as shown in the Batch.21 output file), and the trigger fires at 15:06:42 (as shown in the Batch.21 output file).
Why does the COS of 15:05:13 get interpreted to happen at 15:06:42?
In this example the system times of the Batch.21 server and the datasource were not synchronized; the datasource was one-and-a-half minutes behind. And the Batch.21 scan frequency was set to 60 seconds.
So, during the 60-second scan cycle Batch.21 will NEVER retrieve current data, and will ONLY retrieve one datapoint: the previous one, which will get a current timestamp (with a shift of 90 seconds).
It's even worse .... all other data (from before the previous value) will NOT be interpreted !!!
Keywords: extrapolate
synchronize
synchronise
delay
before
after
References: None |
Problem Statement: A Batch trigger fires and attempts to execute a Custom Command. No result is found, and the error reads Failed to Launch System Command: System Error 2 | Solution: This error will result if the custom command is something that is not an executable (such as dir, the directory-listing shell built-in). The placeholder used for the custom command was dir. This is not an executable, so the request to execute it failed. You can validate whether a command will work by trying it from Start | Run; dir fails this test.
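As a general Windows note (standard cmd.exe behavior, not a documented BCU feature), a shell built-in such as dir can still be invoked by wrapping it in the command interpreter, so that the launched program is an actual executable:

cmd /c dir C:\Temp

Here cmd.exe is the executable being launched, and the /c switch tells it to run the given command line and then exit.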
Keywords: failed
launch
system
command
error
References: None |
Problem Statement: A new option starting with the v2004.1 cumulative patch is that when configuring a BatchSPC variable to be used with Aspen Batch.21 and Aspen Process Explorer SPC Charts, there is a tab in the SPC Variable Properties dialog box with the name of 'Batch Context'.
This tab provides the user with two pairs of Characteristic / Operator / Value input boxes.
They provide the ability to do a condition check (or filter) before the Characteristic value is pushed to the SPC Variable.
For example, you might be pushing the QUANT-MADE characteristic to the SPC variable but would like to limit it so that the value only gets pushed when the Product is Green M&Ms.
It would be on the 'Data Specification' tab that you would define 'Quant-Made' and it would be on the 'Batch Context' tab that you define the condition of Green M&Ms.
As previously mentioned, there are two condition (or filter) options. What will happen if you configure both of them? Would they be linked with an AND operator or an OR operator ? | Solution: If both conditions are configured on the Batch Context tab, they are linked with an AND operator.
Keywords:
References: None |
Problem Statement: Sometimes, when verifying a unit in the BCU Administrator, the following error message is received:
Operand1 has an invalid tag name or name/map combination .
or the following variation of this message:
Invalid tag/name or name/map combination. | Solution: In a vast majority of cases, the reason for this error is a typo in the tag name corresponding to the Operand identified in the error message. Instead of typing the tag name in, you might want to use the Aspen Tag Browser to locate the tags when creating unit scripts. This will ensure the tag name is correct, and does exist in the database.
The problem may also be caused by a corrupted Aspen Batch.21 Service configuration in the ADSA. One symptom of that would be that you could see all the configured services for the data source but could not edit them. In this case, delete the data source name, recreate it, and add and configure all the required services.
Finally, in Batch.21 v6.x and higher, check to make sure that TSK_BATCH21_SERVER is running.
Keywords: invalid tag name
References: None |
Problem Statement: Going by the Help files, adding a Characteristic using this syntax looks like it should work:
ch.Add ('MY_CHAR', , atGreaterThan, 0);
However an error occurs at execution. | Solution: Correct syntax requires a numeric indicator for the third field, chosen from this table:
Number   Operator   Description
0        <          atLessThan
1        >          atGreaterThan
2        <=         atLessThanOrEqual
3        >=         atGreaterThanOrEqual
4        =          atEqual
5        <>         atNotEqual
6        LIKE       atLike
Therefore correct syntax for the example line would be:
ch.Add ('MY_CHAR', , 1, 0);
CQ00038427 has been created to ask for a clarification in the online Help File. The earliest that clarification may appear is Version 2004.2.
Keywords: format not recognized
References: None |
Problem Statement: When defining an Aspen Batch.21 Automated Report, the user must specify the batches to include in the report. In the Report tab of the Automated Reports Properties window (shown in the screen capture attached to this solution), there are three methods available:
Do not specify a query
Batch Query
Report Query
The option Do not specify a query should be selected when At the end of every batch is selected on the Schedule tab of the Report Properties window. In this case, the report is created for each batch after the batch completes. No additional scheduling -- such as specifying the days and times to run the report -- is required.
The Batch Query option has been deprecated for v2004.x, which is why the radio button is grayed out. It is available only if the Batch server is v6.0.
The Report Query option allows you to import a query from an XML file saved by the Batch.21 Reporting web site. The steps below detail how to save this XML file from the Reporting web site so that it can be used in an Automated Report.
Solution
The XML file that is imported into the Automated Report is an Advanced Report that includes an embedded query. This embedded query is the query criteria that will then be used when the Automated Report is run. Basic Reports and Advanced Reports without embedded queries should not be imported as the report query in Automated Reports.
To create an Advanced Report with an Embedded Query:
Open the Batch.21 Web Reporting home page, http://<your batch.21 web server>/batch21, and select New Advanced Report from the Reporting section.
On the Advanced Report page, select Embedded Query | Modify from the Batches menu. This will launch a Batch Properties window with the Query tab selected.
In the Batch Properties window, click the Configure Query button. This will launch the Find Batches window.
Specify the desired batch query criteria in the Find Batches window. Click the OK button when finished.
Note the Query Summary is now shown in the Batch Properties window. Click OK to save and close the Batch Properties window.
The Batches section of the Advanced Report page will now show that an embedded query has been attached.
Save the Advanced Report by using the Page | Save As menu. It can be saved as either a Public or Private report, but if it is Private only the Advanced Report creator will be able to select it in an Automated Report.
The Report is now available to import into an Automated Report when the Report Query option is selected in the Automated Report Properties page.
Keywords: automated report
criteria
xml query
References: None |
Problem Statement: How can I view the composition for any header / pipe? | Solution: For compositions and physical properties, go to View | Results | Composition / Physical Properties. If the Composition / Physical Properties option is grayed out, go to File | Preferences | General tab and check the box for 'Save phase properties'.
Note: You must run the case once more for the results to become available.
Keywords: Composition, pipe, header
References: None |
Problem Statement: Why is the Aspen Flare System Analyzer file not 'solving' for my scenario? | Solution: Check the following:
1. At least one source must be active. Go to Build | Scenarios, double-click the scenario name and open the Sources tab. Make sure at least one of the sources is not ignored.
2. At least one scenario must be active, and the PFD toolbar must be set to the correct scenario. If you are using scenario selection (under Calculations | Options | Scenarios), make sure the scenario you want to run is selected for calculation.
Keywords: scenario, calculate
References: None |
Problem Statement: Is there any way to use a pipe with a diameter larger than 60'' in Aspen Flare System Analyzer? | Solution: The largest diameter available in Aspen Flare System Analyzer's databank is 60 inch, which is therefore the largest size a user can select directly.
However, if a larger diameter (or any other custom size) is needed for a pipe segment, the manual input option can be used.
Setting the Nominal Diameter to - allows the user to manually enter values for the Internal Diameter and Wall Thickness of the pipe.
Once these values are provided, the pipe calculations use the specified internal diameter.
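For example (illustrative values only, not taken from the databank): to model a pipe with a 72 in outside diameter and a 0.5 in wall, set the Nominal Diameter to -, then enter a Wall Thickness of 0.5 in and an Internal Diameter of 72 - 2 x 0.5 = 71 in.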
Keywords: Pipe, internal diameter, large, nominal, input
References: None |
Problem Statement: What does the warning Liquid with vapor pressure drop method mean, and how should I adjust my model? | Solution: This warning indicates that a vapor-only pressure drop method has been applied to a stream that contains liquid. For example, if the pressure drop method is Isothermal Gas or Adiabatic Gas and you get this message, the selected method is not appropriate: the fluid is two-phase, not single-phase gas. If the amount of liquid is negligible, you may ignore the warning. Otherwise, change the pressure drop calculation method to a two-phase correlation such as Beggs & Brill.
Keywords: Liquid with vapor, two phase pressure correlation, single phase pressure correlation
References: None |
Problem Statement: Why does my rated flow change significantly with a small change in relief pressure when I set the superheat temperature to zero for a pure component system? | Solution: This happens when the superheat temperature is set to exactly zero. At zero superheat a pure component sits exactly on its saturation curve, so Aspen Flare System Analyzer has difficulty identifying whether the fluid is vapor or liquid. To avoid the problem, specify a very small nonzero superheat value (for example, 0.1 C).
Keywords: PRV, Rated Flow, Superheat
References: None |