OFM cookbook
handbook
Version 0.3
This handbook has been written solely as a personal effort to document, in one place,
several aspects of the latest version of OFM: 3.0. I very often find myself writing
handouts and hints for users, and it is so much effort to turn them into independent
documents that I decided to group them together in this file. This handbook is not a
tutorial, so it is not intended for beginners. Although new users will benefit from these
pages, it is mainly aimed at people who have created one or several projects and did
not catch all the details in the process.
This is not official GeoQuest documentation, and I am not responsible for any results
you might get while trying to apply its concepts. I have tried, however, to make this
information as accurate as possible, but hey, it’s only version 0.3!
You can jump from one chapter to another; however, sections within chapters are not
(believe me, it would be a big effort) self-contained. You can’t just jump from one
section to any other one. You need to read them, more or less, in sequence.
All OFM documentation comes on-line, so you might notice that Chapter 1 will look
familiar to you. I personally find it difficult to read help files on screen, and that is why I
decided to include this chapter instead of just referring you to the on-line instructions.
Someone who has never used the program can use this part. It will teach you how to
do a basic OFM installation and some basic tasks with it, such as plots, reports, DCA
analyses, bubble maps, etc.
FLEXlm is one of the darkest aspects of OFM administration and receives great attention
in Chapter 2. I tried to cover as many aspects as possible. Because FLEXlm supports so
many scenarios, you will end up using only five percent of the chapter, the part that
concerns you. I still recommend reading the whole thing, to get a good overview of
the system and eventually improve your setup.
Chapter 3 covers the OFM database model and explains the ideas behind it. There are a
couple of examples there for you to follow and create a project from scratch, but it is not
a complete tutorial. To build a professional project, you should also read Chapters 4, 5, 6
and 7.
Chapter 4 covers project variables. I believe that the explanations covering the different
variables and what you can do with them are quite interesting. This is an important
subject that anyone creating a project should understand clearly.
Chapter 5 deals with Units and Multipliers, another mystery for many users who
work with metric unit systems.
Chapter 8 will show you some tricks and tips to create new OFM projects faster, based
on pre-set templates.
Finally, Chapter 9 covers Back Allocation, one of the new modules of OFM 3.0.
I will appreciate help of any kind to make this handbook better. You can always suggest
sections and changes and, of course, mail me your “ready for the press” chapters.
Milci
Introduction
This chapter will get you started with the software. I would prefer that you get your
hands on a PC and install OFM yourself, so you can start with a fresh copy of the
software and the examples provided with it, and work through the book exercises.
After a brief description of the licensing options, you will be guided through a typical
installation to get your hands on the software as soon as possible.
The rest of the chapter explains the very basic things that can be done with OFM. This
handbook is not intended to train you in the use of the software (which deserves a full
book, and there is plenty of documentation about it) but to explain the guts of the
database engine and what you can do with it. To show you this, it is imperative
that you get a feeling for what the program does, so you will be ready to figure out
how (and whether) you can implement your own needs.
Requirements
OFM is a full 32-bit application that uses the new Microsoft Foundation Classes libraries.
This is a huge improvement over the previous version (2.2). However, old 16-bit
Windows is not supported anymore. OFM 3.0 will run exclusively on the following
operating systems:
Notice that some Windows versions require patches (Microsoft calls them “Service
Packs”; they can be obtained freely, and you can download them from Microsoft’s web
site and install them to correct bugs or other issues). Some of these service packs are
provided with this book. However, you should check Microsoft’s web site for the latest
versions that apply to your OS.
Although OFM runs fine on any of them, my personal vote goes to Windows NT 4.0
Workstation. It will also run on NT 4.0 Server, but that version of the operating
system is oriented to server duties.
As a rule of thumb (and not very surprisingly), get the best hardware you can. Processor
speed is the driving force. Increasing the memory size helps, but not as much. Think of
the PC you will be using as a real workstation that will be processing vital data. Don’t
see it as a word-processing machine that also happens to run OFM.
You could also consider hardware with more than one processor, but remember: no
Windows 9x version will ever use the extra processors. Windows NT Workstation scales
up to two processors (NT Server up to 16), but OFM is not designed to use this kind of
hardware. NT itself will perform better (another processor will take on part of the
system’s tasks), but you should not expect an increase in OFM performance: OFM will
use only one processor.
You can find below some tests performed on different hardware. It is interesting to
notice that, generally speaking, the performance increase (compared to OFM 2.2 and
Production Analyst) gets higher with bigger data sets, so if you are an OFM 2.2 user
with a “50 wells x 5 years” database, you should not expect to be surprised by the
speed increase.
Installation options
OFM 3.0 is distributed on a CD. This CD comes with the software itself and some other
commercial software from Microsoft®, FLEXlm® and Oracle®.
The following picture shows you the contents of the OFM 3.0 distribution CD. A quick
description of the included files follows:
• FLEXlm folder: This folder includes software for FLEXlm, the licensing software
used by OFM. There are different configuration options for licensing, and some of
them require extra software. Inside this folder you have:
• Client folder: Under some circumstances, you might need some DLLs
that are not present in a standard Windows distribution. These are
normally needed for network floating licenses that are served by a Novell
server.
• WinDLLs folder: Finally, you might also be missing some standard Windows DLLs,
and you have them there. Unfortunately, the list of supplied files is not complete.
You can also find the missing files with this book. The OFM 3.1 CD fixes this issue.
To round up this section, let me just say that you can run OFM by having only a
supported operating system and ignoring all these folders with sub-installations. You
don’t have to install them unless you need them. I will explain these cases as we
move along.
Licensing Schemes
OFM has a protection mechanism to avoid running illegal copies of the software. Like the
rest of the GeoQuest software, it uses FLEXlm. FLEXlm is third-party software developed
by Globetrotter Software Inc. that has become the de facto licensing scheme used by most
software companies. We will explain this system in detail in Chapter 2.
When you order OFM from GeoQuest, you have three basic licensing schemes to choose
from:
• Stand Alone license with Hardware key and license file
• Stand Alone license with license file
• Network floating license with license file
You must choose whichever is most convenient for your conditions. A stand-alone
license suits you when:
• You have a mobile PC that could be running OFM on its own, without a connection to the
office network.
• You don’t have a network.
• You have decided that OFM will be installed and run on one particular PC only.
The other option for a stand-alone license is to tie it to a number that the software can
access and that is unique to the chosen machine. Once the license is issued (i.e.,
prepared and sent to you by GeoQuest), it will allow you to run OFM only on that PC.
You can’t share this license with other co-workers, unless, of course, they come and sit
at your machine to do their job.
Two numbers can be used for this purpose:
• DiskID: the serial number assigned to the hard disk when it was formatted.
• HostID: the Ethernet number of your network card.
Their big advantage is that there is no extra hardware (dongle) involved. You
don’t have to worry about someone taking your license away, leaving you unable to
work until you get it back. These are real stand-alone licenses.
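The HostID is simply the network card’s hardware (MAC) address; FLEXlm ships a utility, lmutil lmhostid, that prints it. Just to see what such a number looks like, here is a small sketch using Python’s standard library (note that uuid.getnode can fall back to a random number when no network card is found, so treat the result as illustrative):

```python
import uuid

# uuid.getnode() returns the machine's 48-bit hardware (MAC) address as an
# integer; if no MAC can be found, it returns a random number instead.
node = uuid.getnode()

# FLEXlm prints the Ethernet hostid as 12 lowercase hex digits.
hostid = format(node, "012x")
print(hostid)
```

For a real license request, always use the number reported by lmutil lmhostid, since that is what the license server will check against.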
A network floating license suits you when:
• You want to share a license among several potential users (imagine having one or
more electronic dongles shared between several PCs over a network).
• You don’t use OFM all the time. For instance, you could buy three licenses to be
shared by five engineers. Any of them could use the program on his/her PC, but only
three of them will be able to use it simultaneously.
This scheme is very flexible and can be much cheaper (following our example, three
licenses could be enough for five engineers); however, you must meet these
requirements:
• You must have a network available to all PCs that will run OFM.
• You must have a server on your network (a machine that is on at all times you want to
use OFM).
After that, you get an introduction screen (same as any other Windows program). Click
Next to continue.
Then you are presented with the Setup Type window (see figure). This is where you
select what you want to do.
After making the proper selections (again, if you have the disk space, just install
everything), the setup program starts copying the files from the CD to your PC and
registering the application with Windows.
At the end of the process, another window comes up with a warning for users who will
be using ODBC. As mentioned later, this warning explains that you might have to install
extra software.2
2 There are exceptions. When you use a direct connection with ODBC, your database does some
of the processing for OFM.
Make sure you open this file (license.dat) with Notepad. Notepad is the safest editor you
could use for license files. Don’t use a word processor for this task.
The demo file supplied with the CD has three lines of text (wrapped here just for
display):
These are for the OFM itself (OFM32 feature) and the two optional modules: Material
Balance (OFMMBAL feature) and Back Allocation (OFMBA feature).
If you have been provided with a stand-alone license, then just replace the contents of
the license.dat file. The following are some examples3 of typical license files you
could have received from GeoQuest:
• A Dongle license:
FEATURE OFM32 lmgrd.slb 3.2 1-jul-2001 uncounted 2BAE10616BB77751D19\
HOSTID=FLEXID=7-b28440a2 ck=254
When you replace the old text with the new one, make sure that the new lines are all
there is in the license file. There could be extra characters that you don’t see, and
these will confuse the software. Make sure that:
• You don’t have any extra lines at the beginning of the file.
• You don’t have any spaces at the beginning of the lines.
3 These example licenses contain only the OFM32 feature, needed to run the program. They
don’t include the extra modules. If yours does, then it will contain additional lines.
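These whitespace problems are easy to miss by eye. As an illustration only (not a GeoQuest tool), a short script can flag the usual suspects in a license file’s text:

```python
def check_license_text(text):
    """Flag whitespace problems that commonly break a license.dat file."""
    problems = []
    lines = text.splitlines()
    # Extra blank line(s) at the very beginning of the file
    if lines and not lines[0].strip():
        problems.append("blank line at the beginning of the file")
    for i, line in enumerate(lines, start=1):
        # Leading spaces or tabs on a non-empty line
        if line and line != line.lstrip():
            problems.append(f"line {i} starts with whitespace")
        # Invisible or non-ASCII characters pasted in from a word processor
        if any(ord(ch) > 126 for ch in line):
            problems.append(f"line {i} contains non-ASCII characters")
    return problems

print(check_license_text("FEATURE OFM32 lmgrd.slb 3.2 1-jul-2001 uncounted ck=254"))  # → []
```

Real license files can contain continuation lines (ending in a backslash) whose following line is indented on purpose, so treat the output as hints rather than verdicts.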
Once you are done, save the file and start the program.
As a last comment, if you have a FlexID license (dongle), then remember that you will
have to install the Windows drivers for it. You can find a description of the installation of
these files in Chapter 2, in the Installing the FlexID Drivers section.
That’s all for a standard setup; it is all we need to get you started. Later on, we will
explain the details of all the other installation possibilities. For now, let’s just start a
small tour of OFM.
Once the installation has been successfully done, you are ready to use OFM. Start it as
any other Windows application. The icon is called OFM 3.0 and is under the OFM group.
Once you start it, you will get the following screen:4
4 You might have a Tip of the Day window. If you do, just close it and proceed.
The next figure shows the contents of the HeaderID table. The first column contains the
names of the completions. Notice that there are 225 lines. If you want to review this
data, do an Edit/Project/Data/HeaderID.
After you inspect these values, just close this data window issuing a File/Close
command.
Monthly tables
The static master table contains the basic information used to build the base map we
have seen before. You have the X-Y coordinates, the names, etc. The information for
the symbol on the map can go there as well. However, notice that there can be only
“one of anything” per entity (one X coordinate each, one name each, etc.), so you can’t
store historical data in this table. Data that changes with time (such as the water
production volume or pipe pressure) is stored in another kind of table. The most
common one is a monthly table.
A monthly table is a separate table that stores one value per month. The classic
example is the production volumes measured once a month for every completion.
In the demodb there is a monthly table called MonthlyProd. Let’s see its contents.
The spreadsheet is empty because there is no data to be displayed yet! Click on the
‘Next’ arrow a few times until you reach the Blue_12:Ad_4 completion. This
completion has data in the MonthlyProd table, so you should see a figure like the next
one.
As you can see, this table stores values (once and only once per month) for a particular
completion (or whatever you have in a line of the static master table).
In March 1967, the Blue_12:Ad_4 completion produced 1984 bbl of oil, 245 Mcf of gas
and 4061 bbl of water. The number of effective producing days for that month has not
been recorded; neither was the pressure.
Click again next a few times to see data belonging to other entities (completions). When
you are done with it, do a File/Close to return to the base map.
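Conceptually, a monthly table is keyed by entity and month, holding at most one record per pair; values that were never recorded (like the producing days and pressure above) are simply null. A sketch of that idea in Python (the field names are illustrative, not OFM’s internal format):

```python
# One record per (completion, month); None marks values never recorded.
monthly_prod = {
    ("Blue_12:Ad_4", "1967-03"): {
        "oil_bbl": 1984,      # monthly oil production
        "gas_mcf": 245,       # monthly gas production
        "water_bbl": 4061,    # monthly water production
        "days": None,         # effective producing days: not recorded
        "pressure": None,     # not recorded either
    },
}

rec = monthly_prod[("Blue_12:Ad_4", "1967-03")]
print(rec["oil_bbl"], rec["days"])  # → 1984 None
```

The "once and only once per month" rule is what the key enforces: a second record for the same completion and month would simply overwrite the first.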
The act of getting the desired data ready is called grouping. This is a general term,
because you don’t normally analyze individual completions. You want to selectively
gather them together to perform more complex analyses. You group different data
together to process it as a set.
It is also important to notice that when you group data for analysis, OFM displays it in
its status bar (at the very bottom of the main OFM window). The next figure shows OFM
displaying that the completion Blue_1:He has been grouped.
In the previous paragraph, notice the word selectively. When you put data together, you
must select exactly what you want to group. All completions that produce from a
particular reservoir, all intervals completed on a particular well, etc. The act of selecting
is called filtering. You normally filter what you want and then group it together.
Finally, remember that in OFM, filtering and grouping data are separate functions. A
Filter is a sub-set of the total available information. The filtered data is grouped into
memory and then accessed by other OFM functions and applications. The tools available
to filter the desired items will be described in the following sections.
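The filter-then-group workflow can be pictured as two passes over the entity list: first select a sub-set, then gather it into one in-memory set for analysis. A sketch (completion names and fields invented for illustration):

```python
completions = [
    {"name": "Blue_1:He",  "reservoir": "He", "oil_cum": 120.0},
    {"name": "Blue_12:Ad", "reservoir": "Ad", "oil_cum": 39.1},
    {"name": "Blue_14:Ad", "reservoir": "Ad", "oil_cum": 55.5},
]

# Filter: select exactly what you want, e.g. all completions in reservoir "Ad".
filtered = [c for c in completions if c["reservoir"] == "Ad"]

# Group: process the filtered entities together as one set.
group_oil = round(sum(c["oil_cum"] for c in filtered), 1)
print([c["name"] for c in filtered], group_oil)  # → ['Blue_12:Ad', 'Blue_14:Ad'] 94.6
```

The two steps stay independent, just as in OFM: you can reuse the same filter with different groupings, or regroup without refiltering.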
5 The contents of this list depend on the user’s choice. For now, we see the default list of the
entities of the static master table, i.e., the completions.
Filter by Completion
Use this method when you want to manually select, by their names, the list of
completions you want to filter.
1. Select Filter/Filter By/Completion from the OFM menu bar. The Selection
dialog box displays.
2. Select a well completion (for instance, select BLUE_10:Ad_1A, BLUE_12:Ad_4,
BLUE_12:Li_1C and BLUE_13:Ge_4E).
3. You may select up to the total number of completions displayed.
4. After clicking OK, OFM leaves the selected completions on the base map. If you
want to perform a study on all of them together, you can do a Filter/Group to have
them all available as one set. If you do that, notice the status bar displaying a
description of the group (see next figure).
Notes:
• Re-select a highlighted well completion
to de-select.
• Click Select All to include all items in
the scroll list.
• Select Exclude to reverse the selection logic.
Filter by Category
You can (and should) define Filter Categories for your projects. This is one of the
most popular ways of filtering data. The demo has five filter categories defined and
loaded with data.
Notes
• Select a highlighted category to de-select it.
• Click Clear to erase all selections without exiting the dialog box.
• A filter can consist of multiple categories.
Filtering by Query
A query is a call to the project database that defines a sub-set of data. An example of a
query is a call for wells with an oil production rate greater than 100 barrels per day. As a
result of this query, OFM would display those wells that ever produced more than 100
barrels of oil per day.
A query can be fairly simple, as in the previous example, or it can be quite complex,
accessing project variables, user and system functions, and calculated variables.
1. Select Filter/Filter By/Query from the OFM menu bar. The Table Query dialog
box displays.
2. Click Edit Query. The Create Query dialog box displays.
3. Click on the following buttons.
• Project Variables
• System Functions
• User Functions
Notice that the list changes its contents, depending on what button you select.
These three lists display the available variables you use to build your query. The
rest of the query is assembled with operators and constants.
4. Select a variable or function from the scroll list.
For instance, select the Project Variables button and click on the item
DailyProd.Oil.
5. Click Add to move the selection to the editing window.
DailyProd.Oil moves to the editing window.
6. Use the keypad to add operators or modify the equation.
Complete the query by selecting the greater sign “>”, then “1” and “0”.
The final query will be: DailyProd.Oil > 10.
9. After querying the database, OFM leaves the selected completions on the base map.
If you want to perform a study on all of them together, you can do a Filter/Group
to have them all available as one set. If you do that, notice the status bar displaying
a description of the group (see next figure).
There are more ways of filtering data. We are not covering them all here, because that is
not the point, but you should understand by now that they all produce more or less the
same result: extracting a group of entities that interest you for a particular reason.
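One detail worth keeping from the query example: a condition such as DailyProd.Oil > 10 selects a completion if it held at any point in its history, not just at the last date. That "ever matched" logic, sketched with invented rates:

```python
# Daily oil rates per completion (illustrative numbers, not the demo data).
daily_oil = {
    "BLUE_10:Ad_1A": [4.0, 8.5, 9.9],
    "BLUE_12:Ad_4":  [3.0, 15.2, 7.1],
}

def ever(rates, predicate):
    # True if the predicate holds for at least one record in the history.
    return any(predicate(r) for r in rates)

# Rough equivalent of the query DailyProd.Oil > 10
selected = [name for name, rates in daily_oil.items()
            if ever(rates, lambda oil: oil > 10)]
print(selected)  # → ['BLUE_12:Ad_4']
```

BLUE_10:Ad_1A never exceeds 10, so it is dropped even though its last value is close; BLUE_12:Ad_4 passes thanks to one single month.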
Creating a plot
Plots are created from the information stored or calculated from the project data. The
following steps create a plot displaying monthly produced oil and gas data vs. date from
all completions of the OFM demo database.
1. From the base map window, select Filter/Clear to cancel any active filters.
2. Select Filter/Group Data to gather all project data into memory for the plot.
3. Select Analysis/Plot or click the Plot icon located on the OFM toolbar. The plot
window displays. If the Edit Plot window pops up, dismiss it by clicking its Cancel
button.
4. Select File/New to create a new plot. The Edit Plot dialog box displays with the
Plot Data tab active.
5. Do the following:
• Set the Number of Graphs to 1.
• Set the X-Axis Variable to Date.
• Double-click Monthlyprod.Oil and Monthlyprod.Gas to select the Y-Axis
Curves.
6. Click the Curve Attributes tab and select the Post Annotations checkbox.
7. Click the Axis Control tab and change the Y-Axis Scale Type to Logarithmic.
8. Click the Grid & Tics tab and do the following:
• Select Show Grid for both the X- and Y-axes.
• Select Show Minor Tic for both the X- and Y-axes.
9. Click the Font tab and select the Auto Font Color checkbox.
10. Click the Legend tab and do the following:
• Select Legend Show.
• Select Draw Box.
11. Click OK to apply all the selections. The plot appears on the screen as shown on
next figure. At this step, the plot is finished. The next steps will improve the
appearance by adding a live header.
Note
If there are header lines on this window, highlight each one and click Delete. Repeat
until the dialog box is blank. We want to create new headers.
Note
You may also create the text lines by double-clicking variables from the variable
list and using the keypad provided on the dialog box. For our example, you will
have to type in the “Well: ” section but the rest could be done with the mouse by
selecting the “+” button and then by double clicking the Loadname system
function from the list. The finished text should appear exactly as shown on the
next figure.
17. Click OK. The Edit Headers dialog box redisplays with two header lines as shown in
the next figure.
18. Click OK. The two headers display on the plot.
19. Position the cursor on the top header line, click the left mouse button, then click the
right mouse button to display a pop-up menu.
20. Click Tag All to select both lines and use the mouse to move them around the plot.
21. When the headers are in the desired position, click the right mouse button again to
display the pop-up menu and click Done. The added headers display in the desired
position.
22. Let’s use the same plot format to display individual completion data. Scroll through
the completions in the database by clicking the Next and Previous icons located on
the toolbar. This will discard the group of all the completions we initially had and
refresh the plot with data from individual completions, one at a time. Notice that the
header “Well: ” + Loadname() updates itself automatically, showing the name of
the completion being displayed.
23. Double click on the headers to choose a font and color for them.
Date-based report
1. From the base map window, select Filter/Clear to cancel the active filter.
2. Select Filter/Group Data to group all project data for the report.
3. Select Analysis/Report. The Edit Report dialog box displays. Click Cancel.
Notes
If the previous report format displays in the Select entry field of the Edit Report
window, highlight it and click Delete. We want to start a new report.
Notes
You could type in the previous line or just build it by double-clicking the desired
parts from the available lists. OFM adds the colon in between. Date, Oil.Cum,
Gas.Cum and Water.Cum are all in the Project Variables list.
Notes
• This report shows the selected values for all the completions of the project grouped
together. For instance, by February 1961, all completions together had an accumulated
oil production of 49.4 Mbbl. Notice that OFM generates the complete history of these
values along the report. That is why it is called a date-based report.
• If you are interested in saving the report template, you can follow the last two steps.
6. Select File/Save to save the new report format (not the data) to a *.rpt file.
7. Type a report name and select a directory location; then click Save.
8. Close the report module with File/Close. You will be returned to the basemap.
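The mechanics of a date-based report (one line per date for the whole group, with cumulatives carried forward) can be sketched as a running sum. The monthly volumes below are invented, chosen so that February’s Oil.Cum matches the 49.4 Mbbl mentioned earlier:

```python
# Grouped monthly oil volumes for the whole project, in Mbbl, in date order.
monthly_oil = [("1961-01", 20.1), ("1961-02", 29.3), ("1961-03", 31.0)]

report, running = [], 0.0
for date, oil in monthly_oil:
    running += oil                        # Oil.Cum up to and including this date
    report.append((date, round(running, 1)))

print(report)  # → [('1961-01', 20.1), ('1961-02', 49.4), ('1961-03', 80.4)]
```

Each report line depends on every earlier month, which is why OFM recomputes the whole history when the grouped set changes.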
Summary report
The following steps will show you how to create a summary report from the OFM demo
database. This type of report summarizes data from different items, generally with one
report line per item.
1. From the base map window, select Filter/Clear to cancel the active filter.
2. Select Analysis/Report. The Edit Report dialog box displays. Click Cancel.
Notes
If the previous report format displays in the Select entry field of the Edit Report
window, highlight it and click Delete. We want to start a new report.
4. Click OK. Then click Next until you get some data into the report.
5. Select Edit/Date Range. The Set Report Date dialog box displays.
6. Select At Last Date to make sure only the last date displays in the report, then click
OK.
7. Select View/Summary/By Item so all the report details display.
8. Select Edit/Breaks to set up some totals. The Report Break dialog box displays.
9. Select At End of Report and click OK.
Cumulative Cumulative
HEADERID Oil Gas
@Loadname( ) UNIQUEID Production Production <BOE>
Mm3 MMscm
---------- -------------------- ---------- ---------- ----------
BLUE_10:Ad BLUE_10:Ad_1A 0.0 0.0 0.00
BLUE_11:Li BLUE_11:Li_1C * * *
BLUE_12:Ad BLUE_12:Ad_4 39.1 3.4 39627.08
BLUE_14:Ad BLUE_14:Ad_3BU * * *
BLUE_14:Ad BLUE_14:Ad_6A * * *
BLUE_14:Li BLUE_14:Li_1C 141.2 15.2 143689.49
…………
10. Select Edit/Column Headers. The Edit Column Headers dialog box displays.
The Loadname() variable displays at the top of the dialog box.
11. Type 20 in the Width field to change the width of the UniqueID column.
12. On the same dialog box, select <BOE> from the drop-down list located at the top
of the dialog box and do the following (see previous figure):
• Select bbl/d from the Units drop-down list.
• Select M from the Multiplier drop-down list.
• Select Sum & Average from the Sub-total drop-down list.
13. On the same dialog box, select Gas.Cum from the Current Column drop-down
list.
14. Select Sum & Average from the Sub-total drop-down list.
15. On the same dialog box, select Oil.Cum from the Current Column drop-down list.
16. Select Sum & Average from the Sub-total drop-down list.
17. Click OK to apply the selections.
You should get a report whose last section looks like the following one:
Cumulative Cumulative
HEADERID Oil Gas
@Name( ) UNIQUEID Production Production <BOE>
Mbbl MMcf Mbbl
-------------------- -------------------- ---------- ---------- ----------
RED_5:Os_1 RED_5:Os_1 * * *
RED_5:Os_1A RED_5:Os_1A * * *
RED_6:Ad_3BU RED_6:Ad_3BU * * *
RED_7:Cl_3 RED_7:Cl_3 * * *
RED_7:Os_1 RED_7:Os_1 * * *
RED_8:Li_1 RED_8:Li_1 * * *
RED_9:Cl_2 RED_9:Cl_2 6.9 17.0 9.73
RED_9:Os_1 RED_9:Os_1 33.7 35.3 39.55
RED_9:Os_4 RED_9:Os_4 0.3 102.7 17.37
---------- ---------- ----------
16609.3 42829.4 23747.50 Sum
144.4 372.4 206.50 Average
20. Select Descending; then click OK.
21. OFM calculates and displays the new report.
If you want to use these settings in other reports (with a different set of completions,
etc.) you could save a report template.
22. Select File/Save to save the new report format (not the data) to a *.rpt file.
23. Type a report name and select a directory location; then click Save.
Cumulative Cumulative
HEADERID Oil Gas
@Name( ) UNIQUEID Production Production <BOE>
Mbbl MMcf Mbbl
-------------------- -------------------- ---------- ---------- ----------
BLUE_9:Li_1C BLUE_9:Li_1C 1471.7 555.9 1564.38
ORANGE_6:Li_1C ORANGE_6:Li_1C 1322.4 602.9 1422.92
ORANGE_1:Li_1 ORANGE_1:Li_1 999.3 598.0 1098.94
GREEN_7:Li_1C GREEN_7:Li_1C 928.5 496.6 1011.29
BLUE_1:Li_1C BLUE_1:Li_1C 905.4 475.1 984.59
BLUE_14:Li_1C BLUE_14:Li_1C 887.8 537.8 977.44
PURPLE_1:Cl_3 PURPLE_1:Cl_3 657.6 787.4 788.78
GREEN_10:Li_1C GREEN_10:Li_1C 641.5 513.9 727.16
BLUE_5:Li_1C BLUE_5:Li_1C 644.4 407.5 712.36
ORANGE_23:Ge_2A ORANGE_23:Ge_2A 548.9 770.3 677.31
BLUE_11:Li_1C BLUE_11:Li_1C * * *
ORANGE_16:Hu_2A ORANGE_16:Hu_2A * * *
---------- ---------- ----------
16609.3 42829.4 23747.50 Sum
144.4 372.4 206.50 Average
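About the <BOE> column: the figures in the tables are consistent with a barrels-of-oil-equivalent conversion of roughly 6 Mcf of gas per barrel, and the Sum / Average sub-totals are plain column arithmetic. A sketch (the 6:1 factor is my assumption; check your project’s calculated-variable definition before relying on it):

```python
MCF_PER_BOE = 6.0   # assumed gas-to-oil-equivalent factor; verify in your project

def boe_mbbl(oil_mbbl, gas_mmcf):
    # Oil is in Mbbl and gas in MMcf, so the "thousands" cancel out:
    # (oil_bbl + gas_mcf / 6) / 1000  ==  oil_mbbl + gas_mmcf / 6
    return oil_mbbl + gas_mmcf / MCF_PER_BOE

rows = [("RED_9:Cl_2", 6.9, 17.0), ("RED_9:Os_1", 33.7, 35.3)]
boes = [round(boe_mbbl(oil, gas), 2) for _, oil, gas in rows]
print(boes)  # → [9.73, 39.58]
```

This reproduces the 9.73 shown for RED_9:Cl_2; the small difference on other rows suggests the project uses a slightly different factor or unrounded inputs.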
Notes
• Date is automatically placed first in the Select dialog box. Make sure you keep it
there. The first variable of a binary map file has a special meaning for OFM: a map
can be animated through time, and OFM assumes that the time is the first variable in
the file. If you put another variable there, the map will work, but the animation will
not do what you expect.
• The System Functions and User Functions can also be viewed and selected for
the map by clicking the applicable button on this dialog box.
For our example, leave DATE as the first variable and add the following ones:
Oil.Cum, Water.Cum, Gas.Cum, Liq.Cum. Your Edit Report dialog box should
look like the following one:
5. Click OK. The Binary Map Generation dialog box (see next figure) appears with
the default options.
Notes
• Data Type contains the Static, Monthly, Daily, or Sporadic options. Group By
defaults to the primary key of the database, which is named “UniqueID” in this
database. Group By also contains the filter categories, which were defined in this
database.
10. Click OK. The Creating Mapper File dialog box displays and indicates the progress of
the *.bmf file creation.
Notes
• You can abort the process by clicking Cancel.
Caution
Do not select the Date variable.
21. To animate the bubble map through time, select Tools/Animate. The Animation
Control dialog box displays.
22. Do the following:
• Set the start and end dates by clicking the drop-down lists and selecting a date.
• Increment time in steps or pause in increments of 1/10 second by clicking the
drop-down list and selecting a value.
Notes
• To pause animation, press the space bar. To stop animation, press Escape. To
restart animation, press the space bar.
1. From the base map, select Analysis/Grid Map. The Mapper File dialog box
displays.
2. This window shows a list of binary map files. You have several options to pick the
desired map file:
• If you see the file you want to use in this list, just click on it and then select OK.
• If you want to use another file that is not on the list, then click Open and select
the desired file.
• You could also launch the Binary Map Generation window to build a new file.
3. Select …\demodb\sample2.bmf and click OK (you could also select the
cumvalues.bmf file you created in the previous bubble map section). The Select
Variable dialog box displays.
4. For this example, select Cumulative Oil Production (Mbbl) and click OK.
5. The grid map displays.
6. Select Step/Select Date.
7. Choose the date 1989/07 and click OK.
8. Advance the Grid Map through time by selecting Step/Previous Date and
Step/Next Date.
9. To animate the Grid Map through time, select Tools/Animate. The Animation
Control dialog box displays.
11. When you have finished selecting your option, click OK. The grid map animation
proceeds.
Notes
• To pause animation, press the Space bar. To stop animation, press the Escape key.
To restart animation, press the Space bar.
15. Compare the surface map to the base map by selecting Window/Tile Vertically.
16. Close the surface map when you are finished.
17. Close the grid map by selecting File/Close.
3. Select a well with log data available. Do a Step/Select and pick up the
ORANGE_34:Li_1C from the list.
4. Double-click the first Trace Name at the top of Track 1.
5. The Log Trace Attributes dialog box displays with the Trace Attributes tab
active.
6. Do the following:
• From the Log drop-down list located in the Select area of the tab view, choose
SP.
• From the Color drop-down list, choose Blue.
• Click to the right of the SP in the Trace Name field and type the units mV.
7. Choose –150 to 0 for the scale. Verify your settings against the ones in the
following figure.
8. Click OK.
Creating Intervals
In this procedure, you will create intervals. Intervals can be created in both Cross
Section and Log Display and are used in both tools.
1. From the cross section, select Edit/Interval. The Edit Interval dialog box
displays.
2. Do the following:
• In the Name field, type Layer_1.
• From the Top Marker drop-down list, choose Winter.
• From the Base Marker drop-down list, choose Screed_1.
• From the Lithology drop-down list, select Shale.
3. Click Add and do the following:
• In the Name field, type Layer_2.
• From the Top Marker drop-down list, choose Screed_1.
Introduction
This chapter will show you the different procedures for successfully installing OFM.
Because the software can be used in different environments, there are many
installation options that you need to know about before you decide which one is the
most adequate for you. Before we start, I would like to explain some concepts. Make
sure you understand them before proceeding. The two key terms of this chapter are:
• Files Installation
• License Installation
The Files Installation is the task that consists of copying the OFM files to a disk that can
be accessed from the client PC (the workstation that will actually run OFM) and
registering the application. This disk could be a local hard disk6 or a network disk7. For
Windows (and OFM), there is no difference between them, except that the latter is
accessed through the network and generally8 has slower performance.
Apart from the main files copied to the directory where OFM is installed, there are some
local settings (in the client PC registry, system files) and shortcuts (icons) that are also
made by the installation process.
The License Installation is a different story. Assuming that OFM file-copying and settings
are properly done, there is still the need for correctly setting up the license before you
can run the program.
Don’t get confused by them. These two processes could both be seen as “the installation”. However, keep the difference in mind because it will help you troubleshoot eventual problems.
6 Could also be a ZIP/JAZ disk or even a CD-ROM; however, a hard disk is recommended.
7 By network disk we mean disk space on another computer that is made available to qualified users through the network. From the client PC, this space is seen as an extra disk. The machine sharing this space could be almost anything (Windows NT/9x, Novell, UNIX, Linux, etc.).
8 A fast server on a fast network can perform better than standard IDE hard disks.
You need to perform both parts to successfully install OFM on a PC. If either one is missing, then OFM or Windows will fail when an attempt is made to run the program.
Now that we know the two parts of an installation process, we can split the available
installations in two different flavors:
• A stand-alone installation
• A client workstation installation
Stand-Alone Installation
On a stand-alone installation, the setup program performs both parts (1+2) together. It will ask you for a folder where to copy the OFM files and will also register the OFM application on the PC where you are installing it. In the process, you can decide which type of installation (Full or Custom) to perform.
If you decide to install OFM on the local hard disk of the machine that will run OFM, then you will have to pick one of the first two options and follow the instructions. Full Install is the recommended one.
If you are planning to install OFM on a network disk, you will also have to select one of these two the first time. At one point, you will need to copy the main files to the network disk, and the only way to do that is by selecting a Full or Custom Install.
Notes
• In the cases where you will install to a network disk, make sure you have this disk
mapped to your PC (this means visible through a letter, such as Y:) and that you
have write privileges there. Because these setups will perform steps 1 + 2, at the
end, the installation program will register OFM in your PC and create the necessary
icons on the Start menu.
• On Windows NT/9x file servers, you could run the installation program from the
server’s console and install the files to a folder that will be accessed later by
workstations. Notice, however, that this will also register OFM on the server, as
another application that can be run from the server’s console. Normally, you keep
Installation Cases
The following pages describe four different installation cases where you will see practical
examples of the most common options.
Case 1
The next figure shows a standard plain-vanilla installation, like the one performed on
Chapter 1.
Case 1
This is all done on one PC. The setup program from the OFM CD-ROM is executed and a Full (or Custom) installation is performed. During the process, the user selects C:\OFM30 (or any other local folder) as the destination. All OFM files go to that folder (yellow arrow) and, at the end, the installation program registers OFM on that PC (green arrows). OFM becomes another available application, accessible through the Start menu.
Case 2
In this case, you decide to do a local install but choose as destination a directory on a
network disk.
Notice that the process is totally equivalent, with the exception that the selected
destination folder was P:\OFM30. P is not a local disk. It is a network disk, a resource
offered by a server on the network and mapped on the PC as P. The only difference
Case 2
Case 3
Case 2 gives you the ability to run OFM on a workstation that reads the OFM files off a
server. If you have multiple workstations, you could use OFM on all of them and make
them all read the program files from a single network disk (the OFM files are installed on only one, probably read-only, disk and shared among all workstations that run the program). The following figure illustrates this.
Assume that a previous installation left the OFM files on the network disk P, under the
OFM30 directory.
We need to execute the installation program from the CD and select a Workstation Install. During the process, the setup will ask for the location of the OFM files. We must enter (or browse to) P:\OFM30. The program will do the rest of the setup but will not copy anything to the P disk. All needed files are already there. However, it registers OFM on the workstation and creates the needed icons.
Case 4
Finally, the case 3 could have been done even simpler, without the original CD-ROM. A
special setup
program (called
setupws.exe –
Setup
Workstation) is
always copied
together with
the rest of the
installation files,
so just
executing this
file is enough to
install OFM from
a network disk
and perform a
Workstation
Installation.
Case 4
The setupws.exe will perform a workstation setup. As said before, it will ask you where the installed copies of the OFM 3.0 files are. The default location that appears is not correct; make sure you Browse to the proper location. The following figure shows that particular step of the process.
As mentioned before, installing the software files is one thing; installing the license is a different task. The installation of the files is somewhat straightforward and almost everyone has done it once, so it could be considered a familiar process.
On the contrary, very few people are familiar with FLEXlm and its configuration options. This section will explain how FLEXlm works and the client-side installations. The next ones cover the server-side installations.
Depending on which one is found first, OFM will act differently. We will discuss each of
the possibilities in the following sections.
1- Registry
As seen in the previous figure, OFM first looks in the Windows registry for a string-type
key named license located in:
HKEY_CURRENT_USER\Software\Schlumberger\GeoQuest\Ofm\3.0
This key must contain a value like “M:\OFM30\license.dat”, which should be the
complete path, from the workstation’s point of view, to the license file.
An OFM installation does not create this key at all, and you will never have a license key in your workstation’s registry unless you manually create it. Nobody recommends modifying your computer’s registry. Neither do I. This is an extra resource you have (with the highest priority) to tell OFM where the license information is.
Finally, if this registry key exists and points to a non-existent file, OFM will ignore it and continue looking for license information. If the file exists, then OFM will use it.
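If you do decide to create the key manually, a .reg file can do it for you. This is only a sketch: the path value M:\OFM30\license.dat is the example used above, and you should replace it with the real location of your license file.

```
REGEDIT4

[HKEY_CURRENT_USER\Software\Schlumberger\GeoQuest\Ofm\3.0]
"license"="M:\\OFM30\\license.dat"
```

Merging this file with regedit creates the string value; note the doubled backslashes required by the .reg syntax.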
SET LM_LICENSE_FILE=C:\Lics\OFMlic.dat

C:\WINDOWS>echo %LM_LICENSE_FILE%
C:\OFM30\license.dat
C:\WINDOWS>

If the variable is set, then echo prints the value of the variable. If it is not set, then it just replies with a trivial message:

C:\WINDOWS>echo %LM_LICENSE_FILE%
ECHO is on
C:\WINDOWS>
9 Don’t insert spaces around the equal sign.
Once OFM finds the license file, it will read it and discover that it is a DiskID license. It will then compare the machine’s DiskID with the one in the license. If they match, OFM will start. There is no need to “check out” a license because there is no accounting process. Stand-alone licenses are always single-user licenses.
Notes:
• This license is called stand-alone because it does not depend on anything outside
the machine itself. When a stand-alone license is requested using the DiskID
number, only that number is required by GeoQuest to issue the license.
• A license of this type is always (and can't be otherwise) a single user license.
• Because there is no need for counting the number of licenses being used, there is no
license counting method needed.
• It does not matter whether the machine has a network card or not.
• When a disk is formatted, it gets a unique serial number that users can’t change. OFM will check this number to verify that the machine is the one specified in the license file. If the hard disk is reformatted, this serial number will change and the license will stop working.
The recommended setup for this license is just to place a copy of it in the OFM directory
and name it license.dat.
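For example, assuming the license was delivered on a floppy as a file named ofm.lic (the file name and the C:\OFM30 folder are assumptions), the copy could be done from a DOS prompt:

```
copy A:\ofm.lic C:\OFM30\license.dat
```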
Once OFM gets to the license file, it will read it and discover that this is a HostID license. It will then compare the machine’s HostID with the one in the license. If they match, OFM will start. Notice that there are no network communications in the process. You don’t need to have the network card connected to a network at all. Also, there is no need to “check out” a license because there is no accounting process. Stand-alone licenses are always single-user licenses.
Notes:
• This license is called stand-alone because it does not depend on anything outside
the machine itself.
• When a stand-alone license is issued using the HostID number, only that number is
required by GeoQuest to design the license.
• A license of this type is always (and can't be otherwise) a single user license.
• Because there is no need for counting the number of licenses being used, there is no
license counting method needed.
• The machine has to have a network card installed. It is also important to state that it is not enough to just plug in the card. You have to have its drivers installed, so when OFM queries the card for its number, it can do so through the standard Windows libraries.
• It does not matter whether the machine is connected to a network or not.
• Sometimes, OFM fails to read the HostID of a card, even if the drivers are properly
installed. In those cases, you may have to install the Client for Microsoft Networks
software to get it to work. There are some files in this package needed by OFM that
are not apparently present in other network software you could have installed.
The recommended setup for this license is just to place a copy of it in the OFM directory
and name it license.dat.
Once OFM finds the license file, it will read it and discover that this is a FlexID license. It
will then verify that there is a hardware key connected to the machine and that its
number matches the one in the license file. There is no need to “check out” a license
because there is no accounting process. Stand-alone licenses are always single user
licenses.
Notes:
• This license is also called stand-alone because it does not depend on anything outside the machine itself, except for the hardware key and its drivers.
• When a stand-alone license is issued using the FlexID number, GeoQuest requires no
information from the machine. The license file and the hardware key will be
delivered together and will obviously match.
• A license of this type is always (and can't be otherwise) a single user license.
• Because there is no need for counting the number of licenses being used, there is no
license counting method needed.
• The machine has to have the key plugged-in. It is also important to state that it is
not enough just to plug it in. You have to have its drivers installed, so when OFM
queries for the key, it can access it correctly. There is a section ahead explaining
how to set up these drivers.
• It does not matter whether the machine is connected to a network or not.
The recommended setup for this license is just to place a copy of it in the OFM directory
and name it license.dat.
When OFM is started, it first tries to find the licensing information. At one point, it will
find a license file11 similar to the following one:
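A license file of this kind would look roughly like the following sketch; the server name, port and daemon name match the description below, while the HostID, expiry date, user count and license key are made-up placeholders:

```
SERVER license_server 00a0c9123456 1701
DAEMON lmgrd.slb c:\flexlm\lmgrdslb.exe
FEATURE OFM32 lmgrd.slb 3.000 1-jan-2002 5 ABCDEF1234567890 ck=123
```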
Notice that this license has a different format. There is a SERVER line with very important
information. OFM learns from this line that it should contact the server named
license_server on the TCP port 1701. It also knows that there is a special process
(daemon) serving OFM32, the license needed. The name of this “private” servant is
lmgrd.slb. OFM knows that just knocking on the “door” number 1701 of the
license_server “house” is not enough. It will also have to address the request to “Mr.
lmgrd.slb” in person, to ask for a license.
Notice that this license contains other information (such as the DAEMON line), but this
is relevant to the license server (the program running on the license_server machine).
OFM, as a client, does not need anything else.
If you understood the process, then you will agree with me that:
• The client machine (PC with OFM) should be able to communicate to the
license server. This needs a working TCP/IP network connection.
• The license server process should be up and running in the server machine.
If not, OFM will not receive an answer and will refuse to start.
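A quick way to verify both conditions from the workstation is to use standard tools; the server name and port are taken from the example, and telnet is used here only as a crude port probe:

```
C:\WINDOWS>ping license_server
C:\WINDOWS>telnet license_server 1701
```

If ping gets no reply, fix the network or name resolution first; if telnet cannot connect, the license server process is probably not running (or is listening on a different port).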
Notes:
• There is communication between OFM and the license server every few
minutes. This requires minimum network resources (a few bytes per minute)
but also a constant connection between the two machines. If the network
10 We will cover the server installation later in this chapter.
11 It will find a physical file if it is told to do that. When OFM is directly pointed to the server, it just checks out a license from the server with no need for the actual license file.
The recommended setup for this license is just to set the environment variable LM_LICENSE_FILE to the port@server value as required. See the corresponding section on page 52.
When OFM is started, it first tries to find the licensing information. At one point, it will
find a license file similar to the next one:
Notice that this license looks a lot like the NT format. There is a SERVER line with very
important information. OFM learns from this line that it should contact the server named
license_server on the port 1701. It also knows that there is a special process
(daemon) serving OFM32 (the license name for OFM 3.0). The name of this “private”
servant is lmgrd.slb. OFM knows that just knocking on the “door” number 1701 of the
license_server “house” is not enough. It will also have to address the request to “Mr.
lmgrd.slb” in person, to request a license.
12 Supported UNIX versions are Solaris, Irix and AIX. The AIX server software is included only with the OFM 3.1 distribution CD.
The recommended setup for this license is just to set the environment variable LM_LICENSE_FILE to the port@server value as required. See the corresponding section on page 52.
When you use SPX/IPX, you have to set an environment variable on the OFM workstations:
FLEXLM_COMM_TRANSPORT=SPX
Remember that you set it in the autoexec.bat (for Windows 9x) or in the System control panel (for Windows NT), as explained on page 52. If you don’t set it, your workstation will fail when trying to communicate with the server. Notice also that there are no spaces around the equal sign. If you add spaces, the check could fail.
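For example, on a Windows 9x workstation the line added to autoexec.bat would simply be:

```
SET FLEXLM_COMM_TRANSPORT=SPX
```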
Let’s take a closer look at the file. The first line ends with the IPX network number of the server (0001002), so OFM can find it on the network. Notice that the FEATURE line is the same as always. The same process (named lmgrd.slb) will be dispatching the OFM32 licenses. The only difference is that this process is installed on a Novell server (but OFM does not care).
Notice that this license contains more information (such as the DAEMON line), but this
is relevant to the license server (the program running on the novell1 machine). OFM,
as a client, does not need anything else.
The equivalent of the TCP port in Novell is an SPX socket. The number is also specified in the license after the SPX keyword. In the example given (and most of the time), this number is 1234. Leave this default socket number. Although you can change it, it is very rare that you have to.
The recommended setup for this license is just to place a copy of it in the OFM directory
and name it license.dat.
These drivers are the ones needed by the hardware key (see figure), also
called dongle or Sentinel. If your OFM was licensed using this device,
you should install these drivers before you can successfully run OFM. A
third-party company named Rainbow Technologies, Inc. (www.rnbo.com)
maintains this software. The installation files are present on the OFM CD under the
\Flexlm\Flexid7 directory.
Depending on your operating system (the one of the machine that will run OFM), you
have to select the correct folder: Win_nt (for Windows NT) or Win_95 (for Windows
95 and 98).
Windows NT installation
Because NT runs on different hardware platforms, there are different installation
programs:
Don’t get confused by them. There is a batch file that will recognize what hardware you
have and start the appropriate setup program. The file is the install.bat and that is the
one you should run to start the installation process. I just wanted to mention them
separately, in case install.bat fails the recognition and you have to manually run the setup.
You can customize your setup by passing arguments to the install.bat program. You
do that only if you know what you are doing. If this is the first time you do a FlexID
installation, don’t give any arguments. However, if you are interested, these are the
options:
• A window with the title bar Sentinel Driver Setup Program is displayed.
• Select Functions/Install Sentinel Driver from the menu bar.
• A dialog box with the default path for the NT driver is displayed. Change the drive
letter if necessary and click OK.
• The Sentinel Driver and associated files are copied to the hard disk.
• If the driver installation is successful, a dialog box with the message "Sentinel Driver Files Copied Successfully" is displayed.
• When the installation is complete, a dialog box with the message “Driver Installed
Restart your system" is displayed.
• Click OK to continue.
• Restart your computer.
You can customize your setup by passing arguments to the SENTW95.EXE program.
You do that only if you know what you are doing. The first time, you normally don’t
give any arguments. However, if you are interested, these are the options:
• A window with the title bar Sentinel Driver Setup Program is displayed.
• Select Functions/Install Sentinel Driver from the menu.
• Click OK when the "Driver installed! Restart your system." message appears.
Restart Windows.
FLEXlm
FLEXlm is third-party software developed by Globetrotter Software Inc.13 that became the de-facto licensing scheme used by most software companies. This software is in charge of managing OFM licenses. The previous sections described all supported licensing modes. This one will explain the FLEXlm parts and how they relate to each other, and the following ones will cover the server-side installation procedures.
13 Globetrotter’s web site can be checked at www.globetrotter.com.
The parts that get involved in the process vary, depending on the type of license you
have. The program (OFM) has only one version that supports all licensing modes. The code is the same, regardless of the licensing method chosen by the end user.
The simplest schemes are the stand-alone licenses (DiskID, HostID and FlexID). On
those, there is no license server or network involved. They are also single user licenses,
so there is no need to count how many copies of the software are being used.
The other model is the network license that can be shared by users connected to a network. This is a more complex system (requiring a license server machine able to account for the seats in use) but, by far, more flexible in network environments.
1. OFM starts and by one of the four previously described methods (Registry,
LM_LICENSE_FILE, etc.) it finds the license file.
2. OFM scans the license file and discovers that:
• It is a stand-alone license.
• The needed OFM32 feature is present on the license file.
• Depending on the license, OFM checks that the numbers in the license match the
hardware number of the PC where OFM is starting. The example shows a license
tied to the HostID of the machine. OFM checks that number on the PC’s
hardware (the same procedure is carried out for a DiskID or a FlexID license).
3. If the number matches the one in the license, OFM starts normally; if not, it aborts with an error message in the Troubleshooting FLEXlm window.
4. OFM verifies regularly that the program is legally running (i.e., re-checking the
license numbers). For DiskID licenses, this does not make much sense (you can’t
reformat your hard drive while OFM is running!). However, when using HostID (the
HostID could come from a PCMCIA portable network card) or FlexID numbers, OFM
verifies that the hardware remains attached to the PC.
Notes
1. OFM starts and by one of the four methods described (Registry, LM_LICENSE_FILE,
etc.) it finds the license file.
14 This is true in almost all setups. However, it is possible to run OFM and the server software on one single machine. In this particular case, the process is similar, with the exception that the external parts of the network (patch cables, hubs, etc.) are not needed; only the network card and software. OFM and FLEXlm do not see any difference (just that they both run on the same PC). All software behaves the same, with the exception that OFM will contact the same machine for the license.
Notes:
• If the daemons (server software) are shut down while users are using the software, after a few minutes they will be invited to close their programs. However, if the daemons are re-started before this happens, the license count will be reset to zero: some users will be using the program but the daemon knows nothing about them. This situation fixes itself because when OFM connects to the server to confirm the use of its license, the daemon discovers the user and checks out a license. In a few minutes, all users will be discovered and counted by the daemon.
• Notice that this license file includes the DAEMON line that is only relevant to
the server.
When lmgrd (the one that the administrator loads from the console) receives a request for lmgrd.slb, it will check if that daemon is running. If it is not, then
15 OUT means a user has taken a license. IN means that a user has returned a license.
• On Windows and UNIX, lmgrd and lmgrd.slb are two different executable files. In the Novell implementation, both are contained in a single NetWare Loadable Module (NLM) file: LMGRDSLB.NLM.
The administrator will start17 the lmgrd daemon with this license file. The daemon will
hook up to port 1701 and wait there for requests. So far, the only license it can take
care of is OFM32.
16 The only exception is when the file is designed for a server cluster, to implement redundancy.
17 How lmgrd is started (or stopped) depends on the server platform. For Novell and UNIX, it is started with a command from a terminal. In Windows, there is a control panel to do that.
[Figure: lmgrd listening on port 1701, dispatching requests to the lmgrd.slb vendor daemon]
*********************
** SERVER Information
*********************
SERVER criollo-arg 00605301f7e6 1700
*********************
** lmgrd.slb Info
*********************
DAEMON lmgrd.slb c:\flexlm\lmgrdslb.exe
*********************
** lmgrd.grt Info
*********************
DAEMON lmgrd.grt c:\flexlm\lmgrd.grt.exe
** Eclipse Licenses
FEATURE eclipse100 lmgrd.grt 98.000 1-jul-2001 1 6CA90018AD1650 ck=201
FEATURE unencodedhmd lmgrd.grt 98.000 1-jul-2001 1 F03E5138AC05 ck=247
FEATURE e300 lmgrd.grt 98.000 1-jul-2001 1 FCF3F0914D5A1F42A3B ck=61
FEATURE grid lmgrd.grt 98.000 1-jul-2001 1 8C03B0F11BE1999AD6F ck=55
FEATURE pvt lmgrd.grt 98.000 1-jul-2001 1 ACC39091939F1CEB7B7 ck=17
FEATURE weltest lmgrd.grt 98.000 1-jul-2001 1 1C33B0BB94F7CD2 ck=251
18 Anything starting with * is considered a comment. This is a nice trick to make your license files more readable.
First of all, we have the SERVER line, which basically describes how the main daemon
(lmgrd) will start and in what machine. The machine name (criollo-arg) and TCP/IP
port (1700) are useful for license clients, but also extremely important for the server.
When you start lmgrd with this license file, lmgrd reads this SERVER line to verify that it is being started on the correct machine (the line includes the hostname, criollo-arg, and its HostID, 00605301f7e6) and also the port number it should hook up to (1700).
Lmgrd does not administer licenses. It is like the receptionist for the real workers:
the vendor daemons.
Then comes the vendor daemons’ information. The keyword to describe them is either
DAEMON or VENDOR19. The first one describes our familiar lmgrd.slb, which is the
one that manages OFM. Notice that it also manages QLA® (another GeoQuest
program). You can see who will be asking for this daemon by analyzing the FEATURE
lines. Verify the OFM, QLA and OFM32 FEATURE lines. They are all managed by
lmgrd.slb.
What is interesting is that the same lmgrd on port 1700 can receive requests for many vendor daemons. Notice a new one named lmgrd.grt. This is the vendor daemon that administers all Eclipse® licenses. Take a close look at the Eclipse FEATURE lines. They all specify this daemon as their “accountant”.
So, in summary, when you start the lmgrd daemon with this license file, it will start receiving requests on port 1700 for two different daemons: lmgrd.slb and lmgrd.grt. All this will happen on one machine, criollo-arg, whose HostID is 00605301f7e6. You can’t start this license on another physical machine.
After starting lmgrd with this license file, the “licensing accounting department” will look
like:
[Figure: one lmgrd on port 1700, dispatching requests to both lmgrd.slb and lmgrd.grt]
19 The VENDOR keyword is new to FLEXlm 6.x. Previous versions used DAEMON exclusively.
So, the previous license file could have been split into two. The first lmgrd could be
started with this license file:
*********************
** SERVER Information
*********************
SERVER criollo-arg 00605301f7e6 1700
*********************
** lmgrd.slb Info
*********************
DAEMON lmgrd.slb c:\flexlm\lmgrdslb.exe
The second instance of lmgrd can then be started with this one:
*********************
** SERVER Information
*********************
SERVER criollo-arg 00605301f7e6 1701
*********************
** lmgrd.grt Info
*********************
DAEMON lmgrd.grt c:\flexlm\lmgrd.grt.exe
** Eclipse Licenses
FEATURE eclipse100 lmgrd.grt 98.000 1-jul-2001 1 6CA90018AD1650 ck=201
FEATURE unencodedhmd lmgrd.grt 98.000 1-jul-2001 1 F03E5138AC05 ck=247
FEATURE e300 lmgrd.grt 98.000 1-jul-2001 1 FCF3F0914D5A1F42A3B ck=61
FEATURE grid lmgrd.grt 98.000 1-jul-2001 1 8C03B0F11BE1999AD6F ck=55
FEATURE pvt lmgrd.grt 98.000 1-jul-2001 1 ACC39091939F1CEB7B7 ck=17
FEATURE weltest lmgrd.grt 98.000 1-jul-2001 1 1C33B0BB94F7CD2 ck=251
FEATURE wsim lmgrd.grt 98.000 1-jul-2001 1 8C73908104E118AA476 ck=214
FEATURE graf lmgrd.grt 98.000 1-jul-2001 1 7CA3A0712161999A769 ck=7
FEATURE fill lmgrd.grt 98.000 1-jul-2001 1 7C63C071120298FA277 ck=162
FEATURE edit lmgrd.grt 98.000 1-jul-2001 1 7C23705109CA9C19D7F ck=223
FEATURE vfp lmgrd.grt 98.000 1-jul-2001 1 3CE3A0D105AFDC8BBB7 ck=32
Now pay close attention to this scenario. The first copy of lmgrd will be receiving requests on port 1700 and can only serve lmgrd.slb requests. If an Eclipse program attempts to connect to this daemon, it will not get any license.
The second lmgrd is started on the same machine, but on port 1701. This instance can serve only requests for the lmgrd.grt daemon. If OFM tries to get a license from this port, it will not receive a valid license.
[Figure: two lmgrd instances on the same machine – one on port 1700 serving lmgrd.slb, another on port 1701 serving lmgrd.grt]
The clients (OFM) normally scan through this variable and can recognize the several possible values if you separate them with a semicolon (“;”). It is very important that there are no spaces around the separator. Also, the semicolon works for a Windows client21. There is no limit on the number of options you can have. However,
20 Eclipse finds its license information only through the LM_LICENSE_FILE, usually set by the eclrc macro. Please refer to the Eclipse documentation for further information regarding Eclipse software.
21 OFM runs only on Windows. For programs that use FLEXlm and run on other operating systems, there are different ways of combining values in the variable. In UNIX, you need a colon (“:”); in VMS, a space (“ ”).
LM_LICENSE_FILE = D:\GeoDir\License\ggxlic.dat
If that is the case, when OFM starts, it will attempt to get the license information. If it gets to this variable (because none of the previous methods worked – registry or license.dat file), then it will try to get the OFM license from the GeoGraphix file. That won’t work. To complicate things a bit more, the syntax of this file is different from ours, so you can’t add the OFM features there and have one license file for everyone. You have to modify this variable so it will satisfy both programs. You could modify it to:
Or
That should make them work. Remember that OFM checks for a license.dat file in the OFM directory. If you place a license there, then OFM will use it and will never get to the point of the LM_LICENSE_FILE variable. You have many options. Pick the most adequate for you and your client.
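As a sketch of such a modification (assuming your OFM license lives in C:\OFM30\license.dat), the combined variable could be either of the following, depending on which file you want scanned first:

```
SET LM_LICENSE_FILE=C:\OFM30\license.dat;D:\GeoDir\License\ggxlic.dat
SET LM_LICENSE_FILE=D:\GeoDir\License\ggxlic.dat;C:\OFM30\license.dat
```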
Simple redundancy
Why not distribute OFM licenses over several servers and use the LM_LICENSE_FILE to implement some kind of redundancy?
Say you have a server (criollo-arg) managing five OFM licenses and also another
server (gaucho-arg) managing another three. Then you can set the variable to
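Using the port@server form, and assuming both servers listen on port 1700 (the port numbers are an assumption), the setting could look like:

```
SET LM_LICENSE_FILE=1700@criollo-arg;1700@gaucho-arg
```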
Your OFM clients will try to get a license from criollo-arg. If there aren’t any licenses
available there (or the server does not respond), then they will check on gaucho-arg.
Beware that if criollo-arg goes down, then only three licenses will be available. In other
words, if one server goes down, all its licenses go with it.
Remember that this is not redundancy, strictly speaking. It is just a way of using up all
your licenses.
22 You can access the latest FLEXlm user’s manual at http://www.globetrotter.com/manual.htm.
Notice that the server part should always be newer than the client part. This brings up a very important issue with OFM, because version 3.0 was linked to newer client libraries, so you have to upgrade the two daemons (lmgrd.slb and lmgrd). OFM comes on the CD-ROM with the new versions needed, and you must install them to successfully run the program. Once you upgrade the server software, OFM will be able to work. However, if you use other vendor daemons that share the lmgrd daemon, ensure they will work with the new version. If you are in doubt or scared to replace a current licensing scheme, you could start another instance of lmgrd to manage the lmgrd.slb licenses.
For instance, say that you have this current server configuration:
[Figure: current configuration – one lmgrd (old version) on port 1700, serving lmgrd.slb, lmgrd.grt and other vendor daemons]
If you can’t be sure that lmgrd.grt and otherd will work with the new lmgrd needed
by OFM, then you could split the licenses in two files. One with all the features licensed
by lmgrd.slb and the other with the rest. Then you can end up with this configuration:
[Figure: split configuration – the old lmgrd stays on port 1700 with the old lmgrd.grt and the other vendor daemons (serving Eclipse programs and other applications); a new lmgrd on port 1701, started with its own license.dat file, runs the new lmgrd.slb (serving OFM 2.2, 3.x and other GQ applications)]
Your programs that use licenses managed by lmgrd.slb (OFM and other applications) will now need new license information. If you had an LM_LICENSE_FILE set, you will have to create a combined one that includes both possible license server processes.
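For the split example, and assuming the old lmgrd stays on port 1700 and the new one runs on port 1701, the combined variable could look like:

```
SET LM_LICENSE_FILE=1700@criollo-arg;1701@criollo-arg
```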
The options file will not be covered in this document. Refer to the FLEXlm manual (http://www.globetrotter.com/manual.htm) and check with GeoQuest whether this option is supported.
This section describes the installation process of the server software on a Novell server.
The Novell implementation of FLEXlm is slightly different from the next two (Windows and UNIX). Both the FLEXlm and GeoQuest daemons are combined into one single file named LMGRDSLB.NLM. This is the only software you need to start the daemon.
Of course, you will also need the license file and you have to gather some information
about the server to order it, so first we will explain how to order a license file for a
Novell server.
Then execute the CONFIG command. This should give you the rest of the needed
information. Among other data, you should write down these two values:
CONSOLE: CONFIG
Send this information to GeoQuest specifying that you are requesting a Novell server
license. This is an example of the information that you need to send to request the
license:
Once you submit this information, GeoQuest will design the license file and send it back
to you. For our example, the license could look like:
These are the defaults expected by the LMGRDSLB.NLM program. You could change
these defaults by giving command line arguments. For example:
It is strongly recommended to use the default settings unless you have to troubleshoot.
The bottom line is: keep it simple.
If you are having problems and don’t get anything working, you need to:
• Verify that your license file is correct. Check the syntax and look for strange
characters. Make sure you edit it with Notepad. Verify that there is nothing to the far
right of each line, such as trailing spaces.
As a final comment, remember that this section described only the server setup. All we
wanted to do here was to start the daemons. Once you get them running, you need to
supply OFM with the “license information” and, eventually, a copy of the license file
before you can run it.
• See the figure on page 50 to understand what OFM will do before contacting your
Novell server. Select how you want your OFM to find the license file. The
recommended setup will be to place a copy of the file in the OFM’s directory.
• Review the “Network license served by a Novell license server” section on
page 58. That section explains other settings needed by the OFM client using a
Novell server, particularly the FLEXLM_COMM_TRANSPORT variable.
• SERVER: line that describes the server running the FLEXlm software
• novell1: hostname of the Novell server running the FLEXlm software.
• 00605201f7e6: HostID of the Novell server running the FLEXlm software.
• SPX: transport protocol used for the licensing networking
• 1234: the SPX socket number that the software will use for licensing. It needs to be
a free port. Change it only in the rare case when the listed port is being used by
another program.
• @000000000001: the server's virtual address. For Novell 4.x a server usually gets
000000000001. Novell 3.x does not use this. Just use the same number.
• #91720001: The IPX Internal Network number of the server.
• DAEMON: line describing a vendor daemon. Since FLEXlm 6, DAEMON and VENDOR
are equivalent keywords.
• lmgrd.slb: name of the license administrator program (daemon) that should be
loaded on the server to manage the licenses.
Notes:
• Some of the parts of this license could be edited or modified. They are the ones
listed in bold.
• Connections between the clients (OFM PCs) and the Novell license server are made
only over the SPX/IPX network protocol, so you have to have this protocol installed
on the OFM machines. You don’t need TCP/IP, although having it does not affect the
operation.
This section describes the installation process of the server software on one of the
supported UNIX servers. FLEXlm recommends performing the entire installation as
a normal user, not as root. This is because starting the license server as root will
start the process with root privileges, and that could create a security breach on the
server machine. So claim the security experts…
Remember that the supported UNIX platforms are Solaris, AIX and Irix (Sun, IBM and
Silicon Graphics). The OFM 3.1 CD is the one that contains the files for AIX (IBM). No
other UNIX OS is supported, but check with GeoQuest before giving up on a certain
architecture.
The UNIX implementation is based on three files: the license file and the two daemons
(lmgrd and lmgrd.slb). The CD also supplies an extra binary, lmutil. This file is not
needed for normal operation; it is only an information/troubleshooting tool.
Because you will also need the license file to perform the installation, you have to gather
some information about the server to order it. In the next section, we will explain how
to order a license file for a UNIX server.
Create on the server a /usr/local/flexlm directory. You can use any other directory as
long as you correct all the needed paths to point to it. Copy to this folder the files from
the corresponding CD subdirectory.
• For Irix (Silicon Graphics), copy the three files under
\Flexlm\Server\UNIX\SGI.
• For Solaris (Sun), either get the proper files from GeoQuest support or copy the
three files provided in the 3.1 CD under \Flexlm\Server\UNIX\Solaris 2.x.
• For AIX (IBM), copy the three files under the \Flexlm\Server\UNIX\Rs6000
folder. They are present in the OFM 3.1 CD only. If you only have 3.0, get them
from GeoQuest support.
From the /usr/local/flexlm directory, issue the following command (with the leading dot
and slash):
>./lmutil lmhostid
lmutil - Copyright(C) 1989-1994 Globetrotter Software, Inc.
The FLEXlm host ID of this machine is "015A5F3E"
This command should give you the necessary HostID number for issuing the license. If
you can’t have the FLEXlm software installed, there is another way of doing it. The
process varies, depending on the operating system version. The following table shows
you one of the possibilities. This method is NOT recommended and should be used only
as a last resort, if you can’t use lmutil.
To get the hostname of the server, just issue this command:
marrochi> uname -a
SunOS gaucho 5.5.1 Generic_10364-12 sun4u sparc SUNW,Ultra-2
marrochi>
23
For AIX, you need to remove the last two digits and use the lowest eight digits, ignoring
any leading zeros.
Once you submit this information, GeoQuest will design the license file and send it back
to you. For our example, the license could look like:
Sometimes, when UNIX files are copied from the CD, they lose their attributes and the
operating system does not see them as executable files. When you attempt to run them,
nothing happens. In these cases, you can manually set the executable attribute with the
chmod command. The next window shows a directory listing where you can verify
that the two daemon files have the executable attribute on (notice the * at the end of
the name), as well as an example of how to set this attribute for the lmgrd file.
-----------------------------------------------------
License file: /usr/local/flexlm/license.dat
Notice that the scroll reveals an available OFM32 license. If you have extra licenses in
the same file, they will be output one by one, every time you press CR.
If you successfully got the license working, you can log out of the server and attempt to
start OFM. The lmgrd process you started will live as long as the server keeps
running, even if you log out. If the server is restarted, however, the license manager will
not start again by itself. There is a section ahead explaining how to set up an automatic
start of the lmgrd daemon.
• SERVER: line that describes the server running the FLEXlm software (license
administrator program)
• gaucho: hostname of the UNIX server running the FLEXlm software.
Notes:
• Some of the parts of this license could be edited or modified. They are the ones
listed in bold. If any of these parameters is modified, all license files should
be modified as well. Normally there are two copies of this file: the one that
FLEXlm uses and the one that OFM uses. Both should be the same.
UNIX mini-help
This is by no means a UNIX tutorial. It is just a list of the commands I find myself
using while finding my way around Solaris. I don’t have experience with Irix or AIX
(the other UNIXes supported by OFM license servers), so no help will be given for them.
Solaris Commands:
VI commands:
VI is the last text editor you would want to choose. However, it is present in all UNIX
distributions and one day you will have to use it. This is not a tutorial, just a list of the
most common commands.
File Utilities:
• z files:
Compressed files with z extension.
• Z files
Compressed files with Z extension
myfile -> myfile.Z issue command: compress myfile
myfile.Z -> myfile issue command: uncompress myfile
• TAR files:
Create a tar file
tar -cvf destination source(s)
c: create a new archive; v: verbose output; f: the archive file name follows
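As a quick rehearsal of the creation step, done in a throwaway directory so nothing real
is touched (the file names are just stand-ins, and the final tar -tf listing is an extra
step, not covered above, to verify the result):

```shell
# Work in a scratch directory with a couple of stand-in files.
cd "$(mktemp -d)"
mkdir flexlm
touch flexlm/license.dat flexlm/lmgrd
# c: create, v: verbose, f: archive file name follows.
tar -cvf flexlm.tar flexlm
# t: list the archive contents back to verify it.
tar -tf flexlm.tar
```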
• Local mounting (the CD-ROM drive is installed in the host machine. We assume that
there is a directory /cdrom already created)
Solaris 2.x
mount -F hsfs -o ro /dev/sr0 /cdrom
Irix
mount -t iso9660 -o setx /dev/scsi/sc0dn10 /cdrom
AIX
mount -v cdrfs -r /dev/cd0 /cdrom
If the host machine does not have a CD-ROM drive and you want to access one in
another machine, then the task is more complicated. First, log on to the machine that
has the drive and mount it locally. Next, share it so you can access it from other
machines. Finally, from the machine where you want to access the remote CD, mount it
as a remote drive.
Solaris 2.x
Execute share -F nfs -o ro -d /cdrom /cdrom
Irix
Add the line /cdrom -ro to the /etc/exports file and run the
command exportfs -av
AIX
Same as Irix.
Solaris 2.x
Execute mount -F nfs -r cdrom_host_machine_name:/cdrom /cdrom
Irix
Execute mount -r cdrom_host_machine_name:/cdrom /cdrom
AIX
Same as Irix.
Finally, remember that to eject a CD-ROM, you can issue the eject command. Eject will
eject a floppy first (if there is one) and then the CD-ROM. The operation will un-mount
the CD-ROM and then eject it. Notice that this will not work if the CD-ROM is in use.
Beware that a CD-ROM will be considered in use even if you have a terminal opened and
the working directory is one that belongs to the CD-ROM (inside the /cdrom directory).
For this task, you need to be a super user (i.e., root). The procedure is different for
different UNIX platforms, so they will be explained one at a time. If any of the
mentioned files do not exist, simply create it. They are all ASCII files and any text editor
will work.
Solaris 2.x
When Solaris boots, it goes to a standard directory and runs all the programs there that
start with a capital S. The convention specifies not only the first letter of the file
name, but also a number that determines in what order these programs are run. Solaris
runs them one by one and passes them a “start” command line argument. This directory
is similar to your Windows Start Up menu: anything placed there will be started at boot
time. However, you not only have to place the file there, but its name also has to start
with S and a number, for instance S90flexlm.
A similar task is done when the system is shut down. A different directory is scanned
and every program starting with a capital K is run with a “stop” command line
argument. The number needed to specify the sequence is also needed, so you could
have a file named K87flexlm to shut down the flexlm daemon.
The programs here are normally shell scripts (like PC batch files) that start and stop all
the needed software. You need to add your FLEXlm script to those directories, so it is
started and shut down properly.
Because Solaris will pass the script a “start” or “stop” command line argument, you
can use the same script for both tasks. You can place this script with the rest of the
FLEXlm software and make links to it. This keeps all your work in one directory, which is
easier to maintain.
#!/bin/sh
#
# Startup for Flexlm Licensing Daemon
#
LIC_DIR="/usr/local/flexlm"
LIC_BIN="/usr/local/flexlm"
#
if [ ! -f $LIC_BIN/lmgrd -o ! -d $LIC_DIR -o ! -f $LIC_DIR/license.dat ]
then
echo "lmgrd startup: cannot start"
exit
fi
case "$1" in
'start')
# Start the license manager:
nohup $LIC_BIN/lmgrd -c $LIC_DIR/license.dat > /tmp/license.log 2>&1&
;;
'stop')
# Stop the license manager:
$LIC_BIN/lmutil lmdown
;;
*)
echo "Usage: /etc/init.d/flexlm { start | stop }"
;;
esac
exit 0
The previous was the full-featured file, with some error control built in. If you want to
get it to work with minimum typing, you could also try this script.
#!/bin/sh
LIC_DIR="/usr/local/flexlm"
LIC_BIN="/usr/local/flexlm"
case "$1" in
'start')
nohup $LIC_BIN/lmgrd -c $LIC_DIR/license.dat > /tmp/license.log 2>&1&
;;
'stop')
$LIC_BIN/lmutil lmdown
;;
esac
exit 0
Notice that there are two lines that reflect the directories where you have the binaries
and the license file. In our case, they are both the same. If you decide to relocate
them, you will have to adjust these lines accordingly.
There is a section that processes the “start” argument and another one that does the
“stop” shutdown part.
On PCs, to create a batch file you need to give it a .bat extension. In UNIX, any name
works, but you have to turn the text file into an executable one with the chmod
command. Once the flexlm file is executable, you can test that your script works by
running it with the start and stop arguments.
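You can rehearse both steps on a harmless stand-in first. This throwaway script only
echoes which branch ran, but the chmod and the start/stop invocations are exactly what
you would run on the real /usr/local/flexlm/flexlm:

```shell
# Work in a throwaway directory so nothing real is touched.
cd "$(mktemp -d)"
# A stand-in for the flexlm script: it just reports which branch ran.
cat > flexlm <<'EOF'
#!/bin/sh
case "$1" in
'start') echo "starting lmgrd" ;;
'stop')  echo "stopping lmgrd" ;;
esac
EOF
chmod +x flexlm        # turn the text file into an executable one
./flexlm start         # exercises the "start" branch
./flexlm stop          # exercises the "stop" branch
```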
In order to tell Solaris to execute it at boot time, you need to create a link in the proper
directories.
Go to the /etc/rc2.d directory and create a link to our script with a name that starts
with a capital S (Start). Remember that after the S, you must also define a number. Pick
one that is not being used. The rest of the name is up to you; make sure you use
something that lets you identify this script with the FLEXlm daemons startup. For
example, assuming that the flexlm script was saved in the /usr/local/flexlm directory,
you should type:
ar0s03:marrochi> cd /etc/rc2.d
/etc/rc2.d
ar0s03:marrochi> ln -s /usr/local/flexlm/flexlm S90flexlm
ar0s03:marrochi> ls -Fa
./ S47asppp* S75cron* S91leoconfig*
../ S69inet* S76nscd* S92rtvc-config*
K20lp* S70uucp* S80PRESERVE* S92volmgt*
K60nfs.server* S71rpc* S80lp* S93cacheos.finish*
README S71sysid.sys* S88sendmail* S99audit*
S01MOUNTFSYS* S72autoinstall* S88utmpd* S99dbora@
S05RMTMPFILES* S72inetsvc* S89bdconfig@ S99dtlogin*
S20sysetup* S73nfs.client* S90Charisma@ S99gxtdaemon_init@
S21perf* S74autofs* S90flexlm@ S99zeh_queues@
S30sysid.net* S74syslog* S90hpnpd@
ar0s03:marrochi>
Notice that we created a link. This link is /etc/rc2.d/S90flexlm and points to our
flexlm script file. If you issue an ls -Fa directory listing, links appear with an @ at the
end. If you want to verify your link, you can also issue an ls -lag. This will show you
the created link and also the original file it points to.
Remember to use a name that does not already exist and that starts with a capital S.
This procedure will ensure that your flexlm script will be executed with “start” option
every time you boot the server. It is not a requirement to shut down the daemon, so
instead of adding a script to kill it, you can just let it die when the server is shut down.
However, if you want to be completely neat, you can add a FLEXlm shutdown script.
Create a link to the same script file in the /etc/rc0.d directory with a name that starts
with a capital K. The same applies to the number: choose one that is not being used.
ar0s03:marrochi> cd /etc/rc0.d
ar0s03:marrochi> ln -s /usr/local/flexlm/flexlm K11flexlm
Now, every time the server is shut down, Solaris will run each of the scripts that are in
this directory that start with capital K. Each of them will be run with a “stop” command
line argument. When K11flexlm is run, the lmgrd daemon will be terminated properly.
The Windows server runs on either Windows 95/98 or Windows NT. The differences are
few. However, when the installation is done on an NT machine, the license server
can (and should) be installed as an NT service.
All you need are a few programs and the license file. As in the other installations, we will
start with gathering information needed to request the license.
Copy them to any folder on the target server and run the lmtools program. It will open a
window like the following one:
Host ID's------------------------------------------------------------
HOSTNAME=ntserv
USER=someuser
DISPLAY=someuser
INTERNET=163.186.32.15
0000214db4d9
DISK_SERIAL_NUM=123010ea
As per the results of our example, the hostname of the Windows machine is ntserv, the
HostID is 0000214db4d9 and the DiskID is 123010ea.
Notice that there is other information, but it is not relevant for us. After saving the file
with this information, you can delete the two files (lmgr326a.dll and lmtools.exe) from
the server.
This is all the information needed to order the license. The following table shows you
what you would send to GeoQuest to generate the license:
With this information, GeoQuest will mail you back the license, which should look like:
This panel IS NOT the server software. The software is in the lmgrd.exe and
lmgrdslb.exe files. This control panel is just a nice tool to install the server software
and control the parameters that will be used to start the server, instead of issuing a
command line start.
Click on the Setup tab and using the Browse buttons, find the correct files for each of
the options. The debug.log file does not initially exist on the flexlm directory. Just point
to the directory and the program will create it.
The two check buttons on the lower part (Use NT Services will be disabled in Windows
9x setups) are the ones needed to set Windows to automatically run the license server.
For NT, it is recommended that you install the server as an NT service. You can
remove it from the Services list later, with the Remove button.
Click on the Licenses tab and check that the server is set up to use the same license
file you set in the previous step. Click on the button to take a final look at the license file
before you try to start the server. Remember that the DAEMON line of the license has
to point correctly to your vendor daemon's executable file, so remember to check this
part. If you are happy with the license file, then go to the Control tab and start the
server with the Start button.
The first check you have to do is to click on the Status button. In less than a second,
you should see a message like the one shown in the figure (your-server: license
server UP (MASTER))
If you don’t get this message, then something went wrong. Go to the Diagnostics tab.
In the case of an NT server, when you select to Use NT Services, the control panel
installs the lmgrd.exe as a service. You can view the results of this action if you open
the Services Control Panel and find the FLEXlm License Manager service. This is
another place to start and stop it. The next figure shows you the service registered.
• SERVER: line that describes the server running the FLEXlm software (license
administrator program)
• ntserv: hostname of the Windows server running the FLEXlm software.
• 0000214db4d9: HostID of the Windows server running the FLEXlm software.
• 1700: TCP port number that the FLEXlm software will use for licensing. It needs to
be a free port. Ports below 1024 are normally reserved and can't be used. Change it
only in the rare case when the listed port is being used by another program.
Notes:
• Some of the parts of this license could be edited or modified. They are the ones
listed in bold.
Environment variables:
LM_LICENSE_FILE=1700@criollo-arg
FLEXLM_COMM_TRANSPORT=(null)
LMGRDSLB_COMM_TRANSPORT=(null)
Feature requested:OFM32
Version requested:3.0
Hostids found:
Ethernet address: 00-00-21-4d-b4-d9
Volume serial number: 123010ea
Internet IP Address: 163.186.32.15
Username: marrochi
Display name: milci
Node name: milci
Feature: OFM32
License path: Y:\OFM30\license.dat
FLEXlm error: -57,17. System Error: 10047 "(null)"
For further information, refer to the FLEXlm End User Manual,
Environment variables:
LM_LICENSE_FILE=(null)
FLEXLM_COMM_TRANSPORT=(null)
LMGRDSLB_COMM_TRANSPORT=(null)
Feature requested:OFM32
Version requested:3.0
Hostids found:
Ethernet address: 00-a0-24-8f-4d-af
Volume serial number: 374911f0
Username: dpetkovs
Display name: exec
Node name: exec
Notice that both examples show interesting information. The second one reveals an
error, probably because the user forgot to set the FLEXLM_COMM_TRANSPORT
variable to SPX, as mentioned. At the very least, there is a problem with it, because the
program can’t see it set to anything at all24.
They also provide all the hardware numbers as detected by the program. The Ethernet
address is the HostID and the Volume Serial Number is the DiskID. This is very useful
for double-checking these numbers against the ones in the license file.
As a final comment, this tool helps troubleshooting the OFM workstation. The server
troubleshooting tools are not that complete and in some platforms, they don’t exist at
all. We will cover them later, as we move forward.
Lmtools
Lmtools can be run on the clients to get information from the client or from the server.
Also, if you have a Windows server, you can run it on the server as well, to get similar
information.
From the client side, you can use Lmtools to query the server as if you were a client
application. This check is quite complete because it exercises almost all of the system
(your server setup and your network connection to it). The following figures show the
kind of information you can get.
24
What actually happened here is that the variable was set, but with spaces around the equal
sign. This confused OFM, which refused to start.
Lmtools is very easy to use, as long as you understand how it works. Lmtools can
perform the most common tasks you normally do with a remote server (such as re-read
a license file, diagnose individual vendor daemons, get a complete list of all licenses
available, shutdown the server, etc.)
The only catch you could run into is that Lmtools finds out what server it needs to
contact by either inspecting the LM_LICENSE_FILE variable or by directly accessing
the license file.
LM_LICENSE_FILE: This variable has to be set properly (to a port@server value or the
path to the license file). If you don’t want to modify your current variable settings and
just want to use this tool occasionally, you can open a command prompt window, set the
variable and then run the program from the prompt. The LM_LICENSE_FILE variable will
keep its value only inside that command window and will not affect the rest of the
applications. For instance, to successfully run Lmtools, I could do this:
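In a Windows command prompt that would be set LM_LICENSE_FILE=1700@criollo-arg
followed by running lmtools from that same window (1700@criollo-arg is just the
example value from the earlier diagnostics scroll). The UNIX-shell equivalent of the
same idea, showing that the value stays local to the session:

```shell
# "1700@criollo-arg" is only the example value seen in the earlier
# diagnostics scroll; substitute your own port@server.
LM_LICENSE_FILE="1700@criollo-arg"
export LM_LICENSE_FILE
echo "$LM_LICENSE_FILE"   # set only for this shell and its children
```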
License file: If you have access to the license file, you just have to point Lmtools to
it. You do it using the Browse button and selecting the proper license file. The following
image shows a session where LM_LICENSE_FILE had no value, but Lmtools was
pointed to the license file. Notice that the Current License File box displays the
license being used in the troubleshooting.
Lmtools buttons
Checksum: Use this button to verify the integrity of the checksum value of feature
lines. If you doubt whether you have extra or missing characters, just locate this file
Notice that this check is very powerful because it ‘certifies’ your server, network and
communications between the machine where you execute Lmtools and the license
server.
Shutdown: Beware of this button. You can remotely shut down a server, but there is
no way to start it back remotely. You will have to restart it from the server’s console.
HostID: This reports the HostID, DiskID, FlexID, hostname, IP address, etc. from the
machine where you run Lmtools, as seen by FLEXlm software.
Reread: You can force the server to re-process the license file with this button. If you
modify the license file, you can force FLEXlm to re-read it and activate the changes.
Status: The status is a quick summary of the licenses available on a server. Again, once
you define which server to check, hit Status to get some quick information. Lmtools
asks you for some details on what you want. You can specify a single feature, a server
or a vendor daemon. If you don’t specify any of them, you get everything. The next
figure shows you an example.
Switchr: This button does not do anything for us. It works only with FLEXlm servers
running on VMS to switch the output of a feature to a new file. GeoQuest does not
support VMS servers.
Time: You can inspect your client’s local time with this button. This will not give you the
server’s time.
Version: This button lets you specify an executable file to verify its version and other
information. The next scroll shows the results of asking for lmgrd.exe, lmgrdslb.exe
and lmgrd.grt.exe version details. Remember that you specify the filename of the
executable file containing the FLEXlm software. I could only test this for Windows
servers.
Version------------------------------------------------------------------------
Flexlm v6.1a
FLEXlm Copyright 1988-1998, Globetrotter Software, Inc.
FLEXlm 6.1 (libmgr_s.a), Copyright (C) 1988, 1997 Globetrotter Software, Inc.
FLEXlm 6.1 (liblmgr.a), Copyright (C) 1988-1997 Globetrotter Software, Inc.
Version------------------------------------------------------------------------
Flexlm v6.1a
FLEXlm 6.0i (libmgr_as.a), Copyright (C) 1988-1998 Globetrotter Software, Inc.
FLEXlm 6.0i (liblmgr.a), Copyright (C) 1988-1998 Globetrotter Software, Inc.
FLEXlm 6.0i (libmgr_s.a), Copyright (C) 1988, 1998 Globetrotter Software, Inc.
Version------------------------------------------------------------------------
Flexlm v6.1a
FLEXlm 5.12a (libmgr_as.a), Copyright (C) 1988-1997 Globetrotter Software, Inc.
FLEXlm 5.12a (libmgr_s.a), Copyright (C) 1988, 1997 Globetrotter Software, Inc.
Lmutil
Lmutil is a command line utility. It is available for Windows and UNIX. There is no such
thing for Novell servers. Lmutil is similar to Lmtools, without the Windows interface.
The following scroll is lmutil’s help, given when it is run without any arguments:
C:\OFM30\Flexlm\LMUTIL.EXE
lmutil - Copyright (C) 1989-1998 Globetrotter Software, Inc.
usage: lmutil lmcksum [-k] [-pre_v6]
lmutil lmdiag [-n]
lmutil lmdown [-q] [-vendor name]
lmutil lmhostid [-ether|-vsn|-flexid]
lmutil lmhostid
lmutil lminstall [-i infile] [-o outfile] [-overfmt {2, 3, 4, 5, 5.1,
or 6}] [-odecimal] [-maxlen n]
lmutil lmremove feature user host display
lmutil lmremove -h feature host port handle
lmutil lmreread [-vendor name]
lmutil lmswitchr vendor new-file, or
lmutil lmswitchr feature new-file
lmutil lmstat [lmstat-args]
lmutil lmver flexlm_binary
lmutil -h[elp] (prints this message)
C:\OFM30\Flexlm>
Notice that this is more or less the same as Lmtools but from a command line. The next
scroll shows you the equivalent results of lmutil in a UNIX terminal:
ar0s03:marrochi> lmutil
lmutil - Copyright (C) 1989-1994 Globetrotter Software, Inc.
usage: lmutil lmcksum [-k]
lmutil lmdiag [-n]
lmutil lmdown
lmutil lmhostid
lmutil lmremove feature user host display
lmutil lmremove -h feature host port handle
lmutil lmreread [daemon]
lmutil lmswitchr feature new-file
lmutil lmstat [lmstat-args]
lmutil lmver [binary-file]
OR
link the following file names to lmutil:
"lmcksum", "lmdiag", "lmdown", "lmhostid", "lmremove"
"lmreread", "lmstat", "lmswitchr", or "lmver"
ar0s03:marrochi>
The following scroll shows some example results of using lmutil under UNIX.
ar0s03:marrochi>
-----------------------------------------------------
License file: /usr/local/flexlm/licenses/license.dat
-----------------------------------------------------
"ApplicationManager" v5.000, vendor: lmgrd.slb
License server: ar0s03
floating license starts: 1-jan-95, expires: 1-jul-2001
ar0s03:marrochi>
Ping
Ping is a utility included with standard TCP/IP network software. It is a basic command
but very useful for troubleshooting FLEXlm. You can use ping only for Windows and
UNIX servers. Novell uses SPX/IPX, and I don’t know any equivalent.
Ping is a command that usually takes one argument: the machine you want to ping. If
the TCP/IP communications are working between both machines, then ping returns the
echo of the remote computer25.
Because a network link can be seen as a chain, you can troubleshoot it in
sections. Be optimistic and try the full link first. If it does not work, then test the
different parts.
For example, suppose you are sitting on an OFM PC and you can’t get the license from
the server gaucho. One of the things you must check is that your OFM machine can
actually contact the gaucho server through the network, so you open a command
prompt window and type:
C:\WINDOWS>ping gaucho
C:\WINDOWS>
If ping succeeds, the packets reach the machine and you get its reply. This ensures
that you have a valid link to the server; your problem has to be somewhere else. If it
fails, then your license process will fail as well.
25
Ping does not use the complete TCP/IP stack. In fact, it does not even use TCP (just ICMP),
but it is usually enough for testing purposes.
In the PC world, there are two common systems to find the corresponding IP address of
a given hostname: WINS26 (Windows Internet Name Service) and DNS (Domain Name
System). WINS is only for Windows. DNS is the standard used for the Internet, and any
platform running TCP/IP supports it.27
It is up to your client’s IT department to decide which system is implemented on
their network, and you should discuss the subject with them if you encounter
problems resolving names to IP addresses.
If the ping command does not work (ping gaucho fails), then you have to decide
whether it failed due to a real connection problem or simply because your ping could not
figure out the corresponding IP address.
The easiest way to do this is by directly pinging to the IP address of the target machine.
The following scroll shows you an example:
C:\WINDOWS>ping 163.186.32.25
C:\WINDOWS>
When you ping directly to the desired IP address, you skip the name-to-IP
process and aim directly at the machine. If pinging the IP fails, then you have a network
problem. If pinging the IP works but pinging the name fails, then your name
resolving mechanism is failing.
When only your name resolution is failing but you can successfully ping the
server’s IP, you have two possible ways to go:
26
This is the most popular name-to-IP dynamic system. You can also use a static local file with
the equivalents between names and IPs. This file is named hosts (for the DNS system) and
lmhosts (for the WINS system).
27
When you type www.slb.com on your Internet browser, it is resolved to 192.23.80.10 by the
DNS server you are set to use.
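One of those ways, hinted at by footnote 26, is a static entry in the local hosts file,
which bypasses the dynamic WINS/DNS lookup entirely. A sketch using the example
name and address from this chapter (the exact file location depends on the Windows
version):

```
# Static name-to-IP entry added to the local hosts file
# (the address and name are the chapter's examples):
163.186.32.25   gaucho
```

After this, ping gaucho should resolve even when the WINS/DNS servers can't be
reached.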
If I can’t ping gaucho but I can ping gaucho’s IP, then I could edit the license file to
use the server’s IP address instead of its name.
Using IP numbers is not the best decision you can make and should be left as a last
resort. In some networks, IP addresses are dynamic28, which means that the same
machine could boot every time with a different number. If that happens and you are
using IP numbers, your licensing system will fail. Try to fix the resolving problems and
use computer names. That is my best advice.
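If you do take that last resort, the edit is on the SERVER line of the license file. A
sketch using the chapter’s example values (the hostid is the one reported by lmhostid
earlier; the port number is assumed here):

```
# Before: server referenced by hostname
SERVER gaucho 015A5F3E 1700
# After: hostname replaced by the IP address
SERVER 163.186.32.25 015A5F3E 1700
```

Remember from the earlier notes that if you edit this line, every copy of the license file
(the one FLEXlm uses and the one OFM uses) must be changed the same way.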
Another problem is that sometimes you can ping the IP address of your server
successfully, but when you attempt to ping its name, you get an error about a bad IP
address. This means that your computer could somehow resolve the name into an IP
address, but the number is not the actual address of the machine. In these cases, you
can’t even use IP numbers. You need to fix your name resolving problems.
C:\WINDOWS>ping localhost
28
It is very popular to have a network where IP addresses are given on demand to clients by a
server. The most popular implementation is known as DHCP (Dynamic Host Configuration
Protocol).
C:\WINDOWS>
This is a ping to your own machine (a very short trip). Localhost is a generic and
universal name for your machine, as seen from its console. The equivalent IP (also
universal) for localhost is 127.0.0.1 (see the previous scroll). This is not the real IP of
your machine. It is a universal convention. Localhost is always an equivalent for the
name of the machine. 127.0.0.1 is always an equivalent to your own IP address.
If this works, then find out the real IP of your machine and ping to it29. Doing this will
ensure that your TCP/IP software works. Notice that you don’t need the network cable
to perform this test, so no external hardware is actually tested.
If the previous step worked, then the next one is to ping the closest device you
can. There are now two possibilities:
When your PC and license server are in the same sub-network, then all there is in
between is simple hardware (cables, hubs). You need to check your network cable and
connections. Try to ping the server from another machine, to make sure that the
problem is not the server’s connection. If all this is OK, then check with your client’s network administrator and explain your tests to him.
The figure shows the results of winipcfg in Windows 9x. Notice the values of the PC
and router’s IP addresses. Make sure you have properly selected the PC’s network card
(Novell 2000 Adapter, in the example).
The scroll shows how to get the same information on Windows NT.
C:\WINNT>ipconfig -all
0 Ethernet adapter :
C:\WINNT>
The progressive checks I have previously described are in the following scroll. Notice
that I start checking the client PC and then progress to the router and finally the license
server.
C:\WINDOWS>
Introduction
This chapter is intended to illustrate the OFM database model and engine, from a DBA’s
point of view. I assume that you are familiar with basic terms and feel more or
less comfortable designing a relational database (with Access ®, for instance). If you
have been using OFM and understand a bit of its database jargon, you should be fine. At
the end of this chapter, you will understand the database model and the basics of project creation. Further information to build a professional database will be
given in other chapters.
Databases
Generally speaking, there are two basic schools for storing data: Flat File and Relational
databases.
This is just fine and I am sure that everyone has built a file like this before. However,
things get very complicated when you start storing history of values that change in time
(or depth). For example, suppose we want to add WHP history. We could start loading a
spreadsheet like this one:
WellName X loc. Y loc. TD Feb-98 Mar-98 Apr-98 May-98 Jun-98 Jul-98 Aug-98 …
BLUE_1 1243 5433 1600 2200 2200 2160 2100 2100 2060 2020 …
RED_2 2354 2343 1520 2190 2190 2140 2070 2090 2044 2010 …
GREEN_5 4232 1232 1500 2195 2195 2150 2085 2095 2052 2015 …
The data is split among different files. The computer resources needed to manage these
individual files are much less, however, there is a big drawback: the maintenance and
use of this data is much more complicated. Imagine you want to put in one report all
BLUE_1 data: You will have to access all three files and synchronize them. If by mistake,
you call this well BLUE_1 in one table and BLUE-1 in another, then automating a report
will fail. This seems quite simple to solve, but when you have a few thousand wells, with
tens of years of history, it is not that simple.
Imagine also the case where you want to report, for a particular geographic region, the
cumulative oil produced: you need to access the first table and based on the
coordinates, select the list of wells. Then you need to open the oil file and based on that
list, add all oil produced by them. Although working with independent tables is feasible,
it gets hairy when the amount of data increases.
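The synchronization burden described above can be sketched in a few lines of Python (hypothetical well data, not OFM files). Note how a single naming typo between two independent "files" silently breaks the report instead of raising an error:

```python
# Two independent "files" for the same wells (hypothetical data).
# A report for one well forces us to look it up in every file, and a
# typo such as RED-2 vs RED_2 silently breaks the join.
header = {"BLUE_1": (1243, 5433), "RED_2": (2354, 2343)}
oil    = {"BLUE_1": [80, 81, 78], "RED-2": [120, 118, 140]}  # note the typo!

for well, (x, y) in header.items():
    cum_oil = sum(oil.get(well, []))  # typo -> RED_2 reports 0, no error raised
    print(well, x, y, cum_oil)
```

This is exactly why a database engine that maintains the relations in one place is worth having.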
The system that unifies the different files is known as the database engine. The main
functions of a database engine are:
Notice the extra column in the main table (ID). This is very common for relational databases. The ID is a number that is unique and identifies each row of the table (record). Notice also that, as said before, the well names are now present in only one table, so changing a name there changes it for all data related to it.
The previous model is not a good relational database design. Imagine that if you add a
well, you will have to modify the structure of the tables and add a column to store the
new well’s data. Changing the database structure is not something you want to do very
often, so you have to reorganize the division scheme in a more efficient way. The next
figure shows another possible scheme. Pay attention to this one because this is almost
the way OFM does it. Notice that adding a well is now simple as adding rows to existing
tables.
Table keys
One of the tasks of the database engine is to retrieve the data stored in the database.
To do this, it needs to identify every record without problems and this is implemented
around primary keys. A primary key is a column of the table (or combination, as we
will see) that has a unique value. If you look at the Main Table of the previous figure,
the column ID is used for that. No two different wells (records) can have the same ID
value. If that happened, then it would be impossible for the database to identify the
other data (for example, OIL production) related to it.
Of course, primary keys are very useful for relating tables as well. Notice that you can
inspect the OIL records and tell to which well they belong by inspecting the ID field.
When a primary key (such as the ID of the Main Table) is used to relate two tables, it
receives the name of foreign key in the related table.
Remember:
• A primary key is mainly used by the database for identifying records in a table. As
a general rule, you must have a primary key defined per table.
• Primary keys are also perfect for relating tables. Primary keys that are used for a
relation between tables are known as primary keys on their native table and as
foreign key on the other side of the link.
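The primary/foreign key mechanics above can be sketched in Python (hypothetical data; dictionaries stand in for tables). The point to notice is that renaming a well in the main table is enough, because the OIL records refer to it only through the ID:

```python
# Main table: ID is the primary key -- unique per record.
main = {1: "BLUE_1", 2: "RED_2", 3: "GREEN_5"}

# OIL table: ID here is a *foreign* key pointing back to the main table.
oil = [
    {"ID": 1, "date": "1998-02", "oil": 2200},
    {"ID": 2, "date": "1998-02", "oil": 2190},
    {"ID": 1, "date": "1998-03", "oil": 2200},
]

# Renaming a well in ONE place is enough: the relation does the rest.
main[1] = "BLUE_1A"
for rec in oil:
    print(main[rec["ID"]], rec["date"], rec["oil"])
```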
In general, when building a relational database, you can define any number of tables
and establish the relationships between them as you please. When you use OFM, you
don’t have so much freedom but neither have so much work. OFM is a customized
relational database that automatically defines and maintains all needed relations for you.
The advantage is that you don’t have to worry about them. OFM’s engine does this
database engineering for you. The disadvantage is that you can’t do anything you want.
You have to adapt your data to the OFM model. Don’t worry, the model is quite complete and, generally speaking, you will never face the limitations.
As a customized relational database, OFM has predefined table-types: you have a list
of available table models, each one optimized to store a particular type of data, and you
have to assemble your database combining them. An OFM table-type has predefined
how this table will be related to the other ones. You still have the control over the rest
of the table parameters (such as number, names and data type of the columns). The
OFM predefined table types are:
• Static
• Static Master
• Monthly
• Daily
• Sporadic
• Filter (Sort)
• Xref
• Lookup
• Trace
• WBD
• DEV
Monthly, Daily: These tables store values that belong to the entities stored on the
master static table and that change with time. Monthly tables allow you to store one and
only one value per month. Daily are equivalent to monthly but they allow you to store
up to one value per day. The primary key of these tables is composed of two columns:
the static master’s primary key (a foreign key) and the Date column. Because this
relation is pre-defined for the OFM database, when you define one of these tables, OFM
automatically adds the primary key and Date columns and establishes the relation to the
static master table. You can have many monthly or daily tables.
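A minimal Python sketch of the composite primary key (entity ID + Date) behavior described above (hypothetical data; this only illustrates the uniqueness rule, not OFM internals):

```python
monthly = {}

def load(entity, date, values):
    key = (entity, date)  # composite primary key: ID + Date
    if key in monthly:
        raise ValueError("duplicate record for %r" % (key,))
    monthly[key] = values

load("P1:R1", "1999-07", {"OIL": 80})
load("P1:R1", "1999-08", {"OIL": 81})      # same entity, new date: OK
try:
    load("P1:R1", "1999-07", {"OIL": 99})  # same entity AND date: rejected
except ValueError as e:
    print(e)
```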
Sporadic: A sporadic table is designed to store values sporadically. These values can be
sporadic in time or depth, so this is the type of table to use, for example, when you
want to load tests data (sporadic in time) or core data (sporadic in depth). Because it
can be used for these two purposes, OFM does not completely define the primary key of the table. OFM just adds the master table’s primary key. The first column you need to define is the one that will be merged with the ID to generate the primary key (ID + DATE or ID + DEPTH). You can have many sporadic tables.
A valid question arises now: If I can store values with any time frequency, why should I
choose monthly or daily, when I have much more flexibility with sporadic tables? The
answer is not very clear now, but I will try to justify it: sporadic data can’t be
commingled. If you put your production data in a sporadic table, you will be able to
report it only one entity at a time. You can’t report a group of wells together. Not even a
group of layers that belong to a same well. Just think: in order to add values, they have
to belong to exactly the same date, and this is by no means guaranteed with sporadic
tables.
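The commingling argument can be seen in a short Python sketch (hypothetical figures). Monthly dates line up by construction, so values can be added record by record; sporadic dates share no common key:

```python
# Monthly tables guarantee one value per calendar month, so two
# entities can be added date by date (commingled):
p1 = {"1999-07": 80, "1999-08": 81}
p2 = {"1999-07": 120, "1999-08": 118}
group = {d: p1[d] + p2[d] for d in p1}  # dates line up by construction
print(group)

# Sporadic data carries arbitrary dates, so there is nothing to add up:
t1 = {"1999-07-03": 2200}
t2 = {"1999-07-19": 2190}  # same month, different day: no common key
print(set(t1) & set(t2))   # empty -> no shared dates to commingle on
```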
Filter (Sort): A filter table (also known as a sort table) is not exactly a table but can be seen as one. You can have only one filter table per database and it can have up to fifty columns. This is a special table that you don’t create (although you can change its contents).
30 An entity is the general name for whatever you put as a row in the static master table. They could be perforated intervals (if you can measure production per interval), wells (if you measure total well production), a pump, a tank, any combination of them, etc.
Xref and Lookup: These tables contain reference data that is not related to anything.
These tables are normally used as a dictionary or a translation table. For example:
Trace: These tables are designed to store log curves (i.e., data that belong to a well and change with depth). Beware that OFM defines just the foreign key on this table-type. It does not create the first -mandatory- column (DEPTH). You have to manually define this one and you should use that name. You can have many trace tables.
WBD: You can have only one of these tables and they are designed to store well
equipment information, such as casing, packers, etc. The Well Bore Diagram module
uses this data. The structure is predefined and you can’t modify it. In other words, you
can’t choose the number or names of the columns of this table. If you need a WBD
table, it will be automatically created for you by OFM.
DEV: These tables store well deviation data. OFM uses deviation data to plot well
trajectories on the base map and also to convert depth-dependent data to TVD. This table is also automatically created by OFM when you need it. You can have only one per database and you have no control over its columns.
The following figure summarizes most of the mentioned table types and the way they
relate to the static master table. Notice that the Xref and Lookup tables are not related
to anything, so you have total freedom for loading any data there.
We will start considering simple production data measured once a month and then move
forward to more rich data sets. In order to load simple production data, you must have
(at least) two related tables (see next figure):
The static master table contains a list of the possible owners of the production data to
load. First, you populate the static master table with entities and then load the
production table with the production data that belong to these entities. You can’t load
data in the production table that belongs to an entity that is not listed in the static
master table.
In order to clarify this, let’s analyze the following figure. Notice the two mentioned
tables, with some names:
Entities
Primary Key
Notice also the arrows showing you the relations kept by OFM between the two tables
(joining data with same colors). As mentioned before, in the relational database jargon,
the column (or combination of columns) that contains the data that uniquely identifies a
row is known as primary key. This key is what can’t be repeated in a table because it
identifies the record and allows the database engine to find it.
In the static master table, OFM automatically creates the primary key using the first
column of the table (you must create this column as string type). You can see the
primary key (named by the user as UID) of the static master table in the previous
figure.
For monthly tables, the primary key is a combination of two columns. The owner of the
record (the primary key of the static master table) and the date. The OFM database
engine automatically creates both columns when you define the table as MONTHLY.
This combination is unique for every row of the monthly table. You can’t have different
data (more than one record) for the same entity on the same date, but you can have as
many dates as wanted for the same entity.
Just to refresh previous statements, I would like to repeat that when a primary key is
being used in other table for a relation, in this related table it is known as a foreign
key, so:
The minimum contents of this file needed to implement our example would be:
//Contents of *.def
*TABLENAME HEADERID Static Master
UID STRING 10
X FLOAT
Y FLOAT
SYMBOL STRING 10
The previous file will just create the tables. To populate these tables, you need to
prepare the data files. They are also ASCII files and you should also try to stick to the
standard file name extensions. For the data that goes to the master table, the
recommended extension is “xy”. For the monthly data, try to use “prd”, “dat” or “inj”.
Remember that you first load the master table and then the monthly table. The first file
(*.xy) would then be:
//Contents of *.xy
*TABLENAME HEADERID
*UID *X *Y *SYMBOL
P1:R1 1232 3212 OIL
P1:R2 1232 3212 OIL
P3:R3 1235 3210 GAS
P3:R1 1235 3210 OIL
Once you load this file, OFM will have these four entities loaded. It will also know the
coordinates and what kind of symbol it should use to plot them on the base map.
Then, the second file you need is the *.prd file, which could have two different formats.
They are equivalent and produce the same results. You can pick the one you like.
The other possible format for the “*.prd” file could have been:
Notice that:
• Because this is a monthly table, the format of the date did not include the day. If
you do, OFM will simply ignore it. Possible date formats could have been 199601,
19960123, 960101, 9601 and some other ones.
• Columns don’t need to be nicely aligned. All you need to separate one column from
another one is a space, a tab or any combination of both. For this reason, if you
have string names with spaces in between, you need to enclose them between “”.
For example, instead of using P1:R1 you could have used P1 R1, but then it would have to be enclosed in “” in the ASCII file, as “P1 R1”.
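The column-splitting rule above (any run of spaces or tabs separates columns, quotes protect embedded spaces) is exactly what Python’s standard shlex module implements, so a one-line sketch can show it:

```python
import shlex

# Columns are separated by any run of spaces/tabs; a quoted string
# keeps its embedded space and counts as ONE column.
line = '"P1 R1"  1232\t3212  OIL'
print(shlex.split(line))  # ['P1 R1', '1232', '3212', 'OIL']
```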
Divide to Group
You choose what to store as an entity, depending on what kind of data you receive from the field. If you receive the production figures for every perforated interval of every well in your project, then you want to have this data available and store production per interval (your entities will be intervals). If all you get is the total well production and you can’t separate this figure among the different intervals of the well, then you will have to store the production per well (your entities will be wells). If you
receive both, nothing stops you from mixing them, as long as you understand how to do
that.
Once your basic data is stored in the database, OFM gives you plenty of tools to group31 this data as you please. The concept you need to remember is:
Store the data with as much detail as you can and then use OFM features to group it and analyze it as needed.
Say that you could store production per perforated intervals. Then you could:
• Group this data per well (group all intervals belonging to a well)
• Group this data per reservoir (group all intervals perforated on the same reservoir)
• Group this data per field (group all intervals perforated on the same field)
• Group all this data (group all intervals of all wells)
• Group only the intervals that produce oil
• Group only the intervals that have been producing during last quarter
• Etc.
Again, once you have the details, you use OFM grouping tools to analyze data together,
as needed for your study. For instance, you don’t normally report cumulative production
per interval but a summary report of cum figures per reservoir, field, etc.
How does OFM know which intervals belong to what reservoir, or well, or field, or…?
Well, you have to tell OFM this when you load the data, so you can use it later.
31 There is a short section ahead explaining the meaning of grouping in OFM. Be patient.
32 Easily means by a few mouse clicks. There are other important ways of grouping data, for example by querying the database. You could query the database to find all wells that have been producing with a water cut greater than 30% during their last six months of production and then group them together.
OFM lets you create one special type of table known as filter (or sort) table where each
column is known as a category. This table is related to the static master table and has
one (and only one) row per every row of the master table (see figure above). You could
think of it as an extension (extra columns) of the static master table. The big difference
is that OFM has many menus and operations associated with the filter table columns
that are not available for static master columns (and vice-versa).
Once you define the filter table and load it with data, you can quickly locate groups of
entities that interest you. For instance:
• Grouping by selecting the Reservoir 1 of the Reservoir filter category will give you
one set of data of all the intervals producing from this reservoir (P1:R1 and
P3:R1).
• Grouping by selecting the B1 of the Battery filter category will give you one set of
data of all the intervals.
• Grouping by selecting the Field A of the Field filter category will give you one set of
data of all the intervals producing from this field (P1:R1 and P1:R2).
• Grouping by selecting the Field A of the Field filter category and Reservoir 1 from
the Reservoir category will group only one interval (P1:R1).
All this has been explained to stress how important it is to store data with as much
detail as possible and how OFM lets you group it easily, once you define and load data
to the filter categories.
Filter categories are flexible and accessible through many features of OFM. You can
create up to fifty (50) filter categories and they must contain alphanumeric data.
Wellbore
When you load many intervals per well (your entities are perforated zones), you could
look at a well as a group of different zones or as a whole (the total of all the zones
perforated on that particular well). OFM has a special switch for that, under
Tools/Settings/Preferences (in OFM 2.2 it was Options/Wellbore). See the next figure.
By selecting Use individual completion data, OFM maintains visibility over the
different intervals of the wells (for instance, you could select just one zone of a well for
your plot). You probably noticed this if you clicked on a well with more than one
interval: OFM pops up a window with the list of zones asking you which one you want to
use.
33 In the name P1:R1, P1 means nothing to OFM.
34 Coordinates are used to plot the symbol on the map when you choose to draw the symbol at surface coordinates.
Wellbore information is a very important point when building the OFM database. If you have interval data, then you load all intervals and tell OFM to which well they belong by associating the wellbore column as shown before.
You can have any number of intervals per well. However, you still need to point OFM to a valid wellbore column.
Patterns
Patterns are another way of easily grouping entities. The main difference is that when
pattern data is grouped together, the figures will be affected by a Factor and Loss
coefficients individually (before grouping) using the following equation:
Data grouped = Data stored * Factor * (1 – Loss)
Because you assign different values to different entities (and possibly during different
time frames), each entity gets affected by its own set of coefficients before being
grouped for your study. Patterns deserve a full document and will not be treated here,
just mentioned. Any volunteers out there to write the Patterns Bible?
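A quick Python sketch of the grouping equation above, with hypothetical factor and loss coefficients (the equation is from the text; the numbers are made up):

```python
# Data grouped = Data stored * Factor * (1 - Loss), applied per entity
# BEFORE grouping (hypothetical coefficients):
stored = {"P1:A": 100.0, "P2:A": 150.0}
factor = {"P1:A": 0.8,   "P2:A": 1.0}
loss   = {"P1:A": 0.05,  "P2:A": 0.10}

grouped = sum(stored[e] * factor[e] * (1 - loss[e]) for e in stored)
print(grouped)  # 100*0.8*0.95 + 150*1.0*0.90 = 76 + 135 = 211
```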
For instance, you could use an arithmetic average for production pressure, so when you
group all zones that produce from a reservoir, you will get as a group pressure a value
that is the average of all the individual pressures!
Review 1
Let’s review what has been described so far. If you understood the previous sections,
you can simply skip this one. However, if you read ahead and agree with me, you can
make sure you understood them. Off we go…
OFM has a customized relational database model. By customized we mean that although
you can imitate OFM’s database with other software, you can’t do the opposite: You
can’t build any database structure with OFM. In practice, this is rarely a limitation.
Most of the OFM database is centered on a very important table: the static master
table: almost all tables hang from this one, like an inverted tree. However, there are
only two levels: the static master above and all other tables one level below. There are
tables that do not relate to the static master (Xref and Lookup). They are not connected
to this tree.
The first column of the static master table acts as the primary key, used to access the
static master table records and also to establish the relations between this table and the
other ones. The following figure summarizes this concept.
Finally, the filter table looks like an expansion of the static master table. If we had
another static table on the drawing, it would be linked as the filter table. The difference
is that the filter categories can store only alphanumeric values, not numbers. Also, there
are many OFM tools related to Filter categories that are not available unless the data is
in that table. Notice that the relationship is one-to-one, so you can’t have two (or more)
filter records related to a single static master record.
Although the Filter table looks as if it was an extension of the static master, it is still
related to it and it is not at the same master level in the tree mentioned before. Don’t
get confused by the drawing. You could still drag this table below and re-arrange it as
an inverted tree with only one top node: the static master table.
Xref and Lookup tables have no relationship to the static master table and are not
needed in the figure.
Exercise 1
Using the next figure as example, design an OFM database to store the available
information.
Notice the meters on surface. There is data available per layer (we will assume that the values recorded are total production per month). There are also many reservoirs.
Notice that production goes to more than one tank and also that there are many fields
present.
Static Data
The first thing we will do is decide what we will store as entities in our static master
table. Because we do have details of production per interval, we will put layers (or
perforated intervals, completions, etc.) on the static master.
Next, we need to decide what will be the notation to use for these entities. The
convention I will use is a short well name, followed by reservoir information. This format
is quite common. For example, the layer from well P1 producing from reservoir A will be
known as P1A or more clearly as P1:A. The “:” means nothing to OFM; it just helps us distinguish the different parts of the name. We could also append the type of
production to the name, such as P1:A:O, for Oil. However, experience says that this
does not help the end user much and is more work.
So now, we have the names. They are: P1:A, P1:B, P2:A, P2:C, P3:A and P3:C
(check them against the figure and make sure you understand the notation). We need
the static information of these completions, such as the coordinates, date it started
production, date it was shot, well reservoir and field to which it belongs, the tank to
which its production goes, the hydrocarbon being produced, depth of the completion,
etc. Remember that these are values that are unique and will not change with time. You
decide the information you want to store, depending on how helpful it will be for your
work. OFM is quite flexible here. However, a recommendation will be made:
All other info should be placed in other tables, such as the filter table or spare
static tables. The static master table is accessed by OFM very often, so keeping it
small speeds up the project.
The name and coordinates are the values that must go in the static master table.
Because our field has several layers per well, we must include also a wellbore
information column. All other information could go to the filter table, as long as it is not
numeric data.
Remember that the split between the static master, filter and spare static tables is a crucial point. You can review in Chapter 1, page 16 (Filter by Category), how an end user will use the Filter information. Anything that can be categorized with an alphanumeric value should go to the Filter table, if possible. If it is a numeric value (such as the porosity of the formation), it should go to a spare static table. If the project is
small, you could place this numeric data in the master table. However, if you expect the
project to grow, then a spare static table should be seriously considered.
We can now design the static master and filter tables. Assign them some arbitrary
names and load the data. Because the project is small, extra static numeric information
(WellTD and DateOn) will be loaded to the master table. The tables should then look
like:
The ASCII files needed to build and populate the static master tables are:
Notice that the well names include a space, so they had to be enclosed in “”. If not, OFM will interpret them as two different columns.
Production Data
Once we have the master table, we can load the production data. As mentioned before,
we are assuming that the data is being measured once a month, as the total amount
produced by the completion. It is also quite normal to get the actual number of days the
completion has been produced per month (to calculate an effective rate), so our field
report could look like the following spreadsheet:
Notice some characteristics of the data. Some completions producing from A reservoir
have been considered of type OIL in the filter table, although they also produce gas and
water, as shown in the table. The exception is well P3, which has been drilled on a sector
where only gas is present (P3:A produces only dry gas).
Completions producing from reservoir C and B produce no gas. Just oil and some water.
To simplify our design (and this is quite common), we will define just one monthly table
for production data. This table will include space for all possible data recorded monthly.
There is no need to create one table for gas production, one for oil, etc. In fact, this will
not help when implementing other features. Normally, if it is monthly production, it goes in one table.
The definition file needed to build a monthly table will be similar to:
Notice that we used INT1 for the days column. Because it is a number between 0 and 31 with no decimals, a 1-byte integer is enough for it.
Notice that you don’t need one definition file per table. You could have combined both
tables in one single definition file:
Now we need to prepare the production data file. Again, we have two possible formats (both were shown on page 120), but we show only one of them.
*TABLENAME MONTHLYPROD
*DATE *DAYS *GAS *OIL *WATER
*KEYNAME P1:A
9907 20 120 80 20
9908 25 130 81 21
9909 22 150 78 20
9910 30 140 78 19
9911 30 100 75 22
9912 24 110 74 23
*KEYNAME P1:B
9908 21 0 60 16
9909 27 0 59 15
9910 28 0 59 13
9911 22 0 60 16
9912 23 0 60 17
*KEYNAME P2:A
9908 30 150 120 31
9909 30 130 118 33
9910 30 145 140 30
9911 30 184 130 30
9912 29 123 128 45
*KEYNAME P2:C
9908 29 0 480 120
9909 29 0 500 130
9910 30 0 500 128
9911 3 0 60 15
9912 29 0 520 160
*KEYNAME P3:A
9907 28 600 0 0
This file is quite similar to our field spreadsheet. The main differences are the dates,
which instead of Aug-98 are input as 9908. Don’t worry yet. This file could have also
been prepared with different date formats and loaded producing equivalent results.
Some of the different date formats that can be loaded to OFM are displayed in the next example:
*TABLENAME MONTHLYPROD
*DATE *DAYS *GAS *OIL *WATER
*KEYNAME P1:A
199907 20 120 80 20
199908 25 130 81 21
19990902 22 150 78 20
991011 30 140 78 19
Nov-99 30 100 75 22
Dec-1999 24 110 74 23
*KEYNAME P1:B
9908 21 0 60 16
----------------
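To show why these spellings load to equivalent results, here is a Python sketch that reduces each one to a (year, month) pair, ignoring the day just as OFM does for monthly tables. The two-digit-year pivot (99 meaning 1999) is an assumption for this 1990s-era example, not documented OFM behavior:

```python
import re
from datetime import datetime

def parse_month(tok):
    """Reduce several of the accepted date spellings to (year, month)."""
    if re.fullmatch(r"\d{4}", tok):                  # 9907 -> YYMM
        return 1900 + int(tok[:2]), int(tok[2:])
    if re.fullmatch(r"\d{6}", tok):
        if tok[:2] in ("19", "20"):                  # 199907 -> YYYYMM
            return int(tok[:4]), int(tok[4:])
        return 1900 + int(tok[:2]), int(tok[2:4])    # 991011 -> YYMMDD
    if re.fullmatch(r"\d{8}", tok):                  # 19990902 -> YYYYMMDD
        return int(tok[:4]), int(tok[4:6])
    d = datetime.strptime(tok, "%b-%y" if len(tok) == 6 else "%b-%Y")
    return d.year, d.month                           # Nov-99, Dec-1999

for tok in ("9907", "199907", "19990902", "991011", "Nov-99", "Dec-1999"):
    print(tok, parse_month(tok))
```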
Up to now, we have never mentioned how to load data to a table and relate it to the
well name. All explanations always considered data loaded to entities (the first column
of the static master table). You can see the *.prd file, which specifies that the data will
be loaded to the different completions P1:A, P1:B, P2:A, etc. This data is known as
loaded to the completion level.
There is some data, such as the data in this section, that has to be loaded to the wellbore level. This is automated by the program and how data is loaded is specified
by selecting the correct table type. The next section explains this subject.
Up to now, I described how data is loaded and linked to the primary keys of the master
table or, in other words, “any data loaded is assumed to belong to one entity”. When
data belongs to an entity it is known as data loaded to the completion (or entity)
level.
Once more, OFM is a customized database. You have some available table types with
pre-defined settings. These settings include the information needed to link the table to
the existent data. The figure on page 127 shows some tables related to the master
using the primary key.
Notice that these are not all table types (listed on page 115) and some have been
omitted deliberately from the drawings until now. We will complete the picture to
include three new table types: TRACE, DEV and WBD. The way OFM links them to the
rest of the database is totally equivalent, so we will explain just the TRACE table type.
Trace Tables
The next figure displays what we have on a typical database. Notice that the colors now
are chosen to clarify how log data is linked to the master table. When you define a
TRACE table type, OFM creates the first column of it and relates it NOT to the primary
key of the static master table (as in the previous monthly and filter examples) but to the
column that contains wellbore information (blue arrows). So the owner of log data is a
wellbore and not whatever we put on the static master table lines.
This database engineering is completely handled by OFM’s database engine. Lucky us!
All we need to do is specify the new table type as TRACE.
The files needed to implement this log table are two: one definition file to build the table
structure and one with the log data. Here they are:
Deviation tables
• DEV is a table type to store well deviation data. You don’t define this table with a
definition file. OFM will automatically create it when you load deviation data via an
ASCII file with extension dev. The DEV table structure is fixed and you can’t fiddle with it.
//Style 1
//Contents of a *.dev
//Specifying the Deltas in X and Y directions
*DEPTH *XDELT *YDELT *TVD
*KEYNAME “Well P1”
0 0 0 0
600 -20 -12.8 550
1053 -83 5.8 930
1122 -221.5 74.4 1002
1600 -250 120 1200
*KEYNAME “Well P2”
0 0 0 0
300 -20 -12.8 286
1053 -85 25.8 920
1120 -123.5 74.4 1002
1558 -140.4 699 1202
//Style 2
//Contents of a *.dev
//Specifying the Absolute coordinates in X and Y directions
// This is assuming that the static master table columns that contain
// coordinates are named X and Y.
*DEPTH *X *Y *TVD
*KEYNAME “Well P1”
0 12321 82394 0
600 12301 82381.2 550
1053 12238 82399.8 930
1122 12099.5 82468.4 1002
1600 12071 82514 1200
*KEYNAME “Well P2”
0 12843 81029 0
300 12823 81016.2 286
1053 12758 81054.8 920
1120 12719.5 81103.4 1002
1558 12702.6 81728 1202
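The two styles carry the same information: given the wellhead coordinates from the static master table, the deltas of Style 1 convert directly into the absolute coordinates of Style 2. A Python sketch using the Well P1 numbers from the listings above (surface location 12321, 82394 is implied by subtracting the Style 1 deltas from the Style 2 coordinates):

```python
# Style 1 stores offsets from the surface location; Style 2 stores
# absolute coordinates. X_abs = X_surface + XDELT, same for Y.
surface = {"Well P1": (12321.0, 82394.0)}  # X, Y from the static master

style1 = [  # DEPTH, XDELT, YDELT, TVD for Well P1 (from the *.dev above)
    (0, 0, 0, 0),
    (600, -20, -12.8, 550),
    (1053, -83, 5.8, 930),
    (1122, -221.5, 74.4, 1002),
    (1600, -250, 120, 1200),
]

sx, sy = surface["Well P1"]
style2 = [(d, sx + dx, sy + dy, tvd) for d, dx, dy, tvd in style1]
for row in style2:
    print(row)  # matches the Style 2 listing, e.g. (600, 12301.0, 82381.2, 550)
```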
*end_header
*info
*end_info
*units DEPTH '
*units DIAM in
*units WEIGHT #/ft
*casing
*top 0.00
*bottom 185.00
*od 18 5/8
*id 16
*jts 6
*cement
*top 0.00
*bottom 185.00
*od
*openhole
*segment
*top 185.00
*bottom 2650.00
*od 12.50
*segment
*top 2630.00
*bottom 10350.00
*od 8.50
*window 1
*top 0.00
*bottom 4500.00
*fraction 20.00
*window 2
*top 4500.00
*bottom 6400.00
*fraction 60.00
*window 3
*top 8100.00
*bottom 10400.00
*fraction 20.00
• Notice that this file has quite a complex syntax. You normally don’t load WBD via
ASCII files, so don’t worry. There are two other basic ways of inputting WBD
information to OFM. One is using the interactive interface of the Well Bore Diagram
module of the program, where you fill in all the information using forms. The other one
is using the WBDBuilder.xls file, which has a spreadsheet where you can input this
data and a macro to generate the ASCII file. This utility spreadsheet is part of the
OFM Plus package.
• You can have only one WBD table type and its name will always be WBD.
Review 2
Here comes our second review. This time it will be a little shorter because we (I hope it’s
not only me) are getting up to speed. Take another look at the figure on page 128 and try to
think as I do. There is a variety of data to be loaded (production, logs, equipment, etc).
Before loading any data, you need to ask yourself: Who is the owner of the data? Does
it belong to an interval or to a well? Does it belong to a tank, a pipe, a field, etc? So far,
we’ve seen that either completions or wellbores own the data. We also gave two
data loading scenarios:
• Data loaded to the completion level (names present on the first column of the
master table)
• Data loaded to the wellbore level (names present on the wellbore column of the
master table)
(Two table types, Xref and Lookup, have no relation: they are not linked to any column
of the static master table.)
Notice that once you choose the table type (depending on your data), OFM knows how
to link the table. This is the same as saying: OFM knows where the names that will
own the data for the table are located. For example,
• If it is a monthly table, the data there will be loaded to one of the names present in
the first column of the static master table. You can see this exemplified on the
figure of page 127.
• If it is a trace table, the data there will be loaded to one of the names present in
the wellbore column of the static master table. You can see this exemplified on the
figure of page 135.
Both category levels can be displayed in a single figure, such as the following one.
Notice that the table legend specifies also the possible number of tables (one or many).
Notice also that this is still an inverted tree, with the static master table as the main
node. However, the place (column) where the other tables hang depends on the table
type. Some are connected to the primary key column (red arrows) and some are
connected to the wellbore column (green arrows).
A group table is a table that contains data being owned by a sort category value. For
instance, in our example, we could load production data to a MONTHLY table that will
be owned by the tank A!
Although you could conceivably (and you had better have a good reason) load this data to the
same production table (MonthlyProd in our previous example, page 135), I would not
recommend this practice. Just think:
“When you group a reservoir, are you displaying the addition of the pieces or just the
data loaded to the reservoir?”
“When you group a tank, are you displaying the addition of the perforated intervals
filling up the tank or just the values you measured on the tank itself?”
In order to access this data (you can’t click on a reservoir!), you have to activate a
special setting in OFM that instructs the software to load ALSO the group tables35. If you
don’t, OFM will ignore the values loaded there. This setting is under
Tools/Settings/Advanced (see next figure). In OFM 2.2, this setting is under
Sort/Load Control Setup.
All you need now is a proper ASCII file with the data that you want to load to the
different batteries. This file will look like:
*LOADBY TANK
*TABLENAME TANKPROD
*DATE *OIL
*KEYNAME A
199907 79
199908 345
199909 340
199910 340
199911 340
*KEYNAME B
199708 nnn
…
35 Don’t get confused by the notation. You don’t have to load this data to separate tables,
although I recommend doing so. If you decide to store this data in an existing table, then Load
group tables means “also load the data that is loaded to the filter categories”.
Notice in this file:
• The *LOADBY TANK command (*LOADBY is a reserved word, TANK is the name
of a filter category previously defined).
• *KEYNAME A (A is one of the names that appear in the TANK sort category, not a
wellbore, not a completion)
Now let me demonstrate it:
• Make sure that the Load group tables option is on.
• Group the tank A data.
The following graph shows the results. Notice that we have the addition of the pieces
(MonthlyProd.Oil for all the intervals producing to tank A) and the data loaded
(TankProd.Oil). Notice also that they are different.
We can also see the same results on a report.
TANK: A
         TANKPROD   MONTHLYPRO
<DATE>   OIL        OIL
Review 3
We have introduced three ways of loading information to OFM: completion, wellbore and
category levels. These modes are available depending on the type of table that will
receive the data. Some types accept only one of them (MASTER, FILTER, TRACE, WBD
and DEV), some will accept two (MONTHLY, DAILY, SPORADIC and STATIC). Notice that
this is not a limitation. It simply does not make sense to try to load log data to a reservoir
or deviation data to a completion!
Symbols
The completion level is the case when you load data to the lines of the static master
table. Every line of the static master table gets a symbol on the base map, so each
data owner gets its own symbol.
When you load data to the wellbore level, there is a symbol on the map indirectly
related to the wellbores. This symbol is the combination of all completions that belong to
the same well, so at first sight, you might think that each symbol on the base map is
related to the well, but what is really happening is that several completions (with equal
coordinates) are plotted on the same spot. It looks like a well symbol but it is not.
The third method does not plot any symbol at all. A filter category (such as the TANK,
used in our examples), does not get any symbol on the map. It is something that lives
inside the filter table.
Formats
There is almost no difference in the format of data ASCII files used to load the different
data. The keyword *KEYNAME is used for all of them and the argument it takes is the
name of the owner of the data. If it is a completion, the completion name (P1:B). If it is
a wellbore, the wellbore name (Well P1). If it is a filter category, the filter category
name (A).
• OFM knows that the data in the ASCII file goes to a completion once it recognizes
the destination table-type. So, when you load to a completion, OFM knows that the
name must be listed on the first column of the static master table.
• OFM knows that the data in the ASCII file goes to a wellbore once it recognizes the
destination table-type. So, when you load to a wellbore, OFM knows that the name
must be listed in the column of the static master table associated with Wellbore.
• When you load to a filter category, OFM can’t tell what you are trying to do unless
you use the *LOADBY command (notice it goes at the very beginning of the ASCII
file). This command specifies that data goes to a filter category and to which one as
well. The argument is the name of the desired filter category (*LOADBY TANK).
Remember that to use category data, you need to specifically tell OFM to load the group
tables, as explained on page 143.
Someone was surely pleased to be able to load measured tank volumes to a tank using
the filter category level. But no, no… that wasn’t enough: “I want to see the tank on my
map, and click on it and get its data!!!”. To satisfy this user, OFM programmers came
up with yet another data level: Objects!
Objects are defined by adding another column to the static master table that OFM uses
to tell the difference between them and completions. This column must be of string type.
Notice the last column of the static master table. I decided to name it Object and it is
of type string with room for only one char. As mentioned before, a c tells OFM that
the line is a completion. Anything else (such as the T) is considered an object.
Data Associations
This section explains a very important aspect of a project: the Data Association settings.
You reach these settings from Edit/Map/Association. You can see the Data
Association window used in the previous figure.
Data Association is a place you have to visit once, after creating a project, and return to
only if you change the table structure of your database. OFM needs to know certain data
to perform normal actions, and unless you specifically tell it where this data is, the
results you get will be erroneous. For instance, the base map module needs to know
where the X and Y coordinate data are. If you don’t tell it, it will not know how to draw
the map! Remember that OFM is very flexible and lets you choose any name for columns, so
your X coordinate could be in a column named X, Xcoor, Long, Xposition, etc. You
can use ANY name you like, but then you need to tell OFM about it. There are no default
names, so you have to go through this, at least once.
We will explain each of the settings of this window. Notice that this will give us more
reasons to justify what should go on the static master table and what should not.
Well Type – Sort/Table/Exp: When OFM plots a symbol on the map, it can use
different shapes or colors, depending on your preferences. Because each line of the
static master table will be plotted on the base map, you need to specify which symbol
you want for each of them.
OFM is very flexible with symbols and gives you three possible places to store the
acronyms. The information could be in one of the columns of the static master table
(Well Type – Table), in one of the columns of the filter table (Well Type – Sort), or it
could be the result of some algorithm (Well Type – Exp).
It is very important that you use only ONE of these three options. You should not
activate more than one at a time. That will confuse OFM. Beware of this because,
unfortunately, you CAN select more than one, as shown on the next figure.
Wellbore: Use this button to tell OFM which column of the static master table contains
the wellbore information. Wellbore information is used to group completions that belong
to a well. It is also used to establish links to data loaded to the wellbore level, such as
logs, deviation or wellbore diagram information.
Alias Name: Use this button to tell OFM which column of the static master table
contains the alias information. The alias name is used for plotting names on the base
map.
Object Type: Use this button to tell OFM which column of the static master table
contains the object information. A c (or C) in this column will tell OFM to treat that
entity as a completion. Any other letter will instruct OFM to consider it an object.
X Coordinate, Y Coordinate: Use these buttons to tell OFM which columns of the static
master table contain the X and Y coordinate values.
Reference Depth: If you ask OFM to convert depth information to TVD, it will use the
reference depth as an offset. If needed, this value must be stored on the static master
table and this button is to tell OFM the name of this column.
Completion Depth: If you work with deviated wells and load deviation information,
OFM could eventually plot the completion at surface (using the X and Y coordinates) or
in the correct spot of the well trajectory. In order to do that, OFM needs to know the
trajectory and the measured depth of the completion: the completion depth. In those
cases, you need to store this information in the static master table and this button is to
tell OFM the name of this column.
Bottom Depth: If you work with deviated wells and load deviation information, OFM
could eventually plot the completion at the bottom of the well. In order to do that, OFM
needs to know the well trajectory and the TD value. In those cases, you need to store
this information in the static master table and this button is to tell OFM the name of this
column.
Project: You could nest several projects. This is basically a project that acts as a main
menu, where each symbol on the map is actually a pointer to another project. The main
project contains no data and is just an entry point to other projects. If you decide to do
that, you must tell OFM where the project that the symbol references is located, by
specifying the full path to it (for instance, C:\OFM\Projects\Proj1.ofm). This info goes in
a column of the static master table and you need to tell OFM about it with this button.
Exercise 2
This exercise will guide you through the needed steps to build a project and associate its
data. The project will be similar to the one we have been describing in this chapter,
although we will also include other stuff, such as static, daily and some sporadic data.
I hope that at the end of it, you feel comfortable building a simple project on your own.
Before starting, I need to warn you that we are not covering some important details of
any real-life project (such as units, multipliers, imputed variables, etc). They will be
covered in other chapters.
First of all, we need to prepare a definition file with all the tables we want to put in the
project. Remember that this information could be split across different files, but if we
know in advance what tables we will need, we can just create one big file, like the
following one:
//Contents of exercise2.def
*TABLENAME XY Static Master
LAYER STRING 15
X FLOAT
Y FLOAT
WELL STRING 15
WELLTD FLOAT
ALIAS STRING 5
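The full definition file is shown as a figure. Based on the notes that follow, the remaining tables could be defined along these lines (table types follow the table name, as in the XY line above; all column names not mentioned in the text are hypothetical):

```text
*TABLENAME MONTHLYPROD Monthly
DAYS FLOAT
OIL FLOAT
GAS FLOAT
WATER FLOAT

*TABLENAME MONTHLYINJ Monthly
DAYS FLOAT
WINJ FLOAT

*TABLENAME TANKPROD Monthly
OIL FLOAT

*TABLENAME DAILYPROD Daily
OIL FLOAT

*TABLENAME TESTS Sporadic
DATE UINT4
HOURS FLOAT
OIL FLOAT
GAS FLOAT
WATER FLOAT

*TABLENAME PROPERTIES Static
STARTDATE UINT4

*TABLENAME LOGTRACES Trace
DEPTH FLOAT
GR FLOAT
SP FLOAT
```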
Notes:
• Prepare ASCII files with a reliable PURE ASCII editor. For small files, Notepad is the
best. For the ones that are too big for it, you could use WordPad, but make sure
you save as Text Only.
• The first table defined is the static master table and its first column will be used as
the primary key of the project. This column is named LAYER and is of STRING type,
so LAYER is the primary key of the static master table.
• We do not define the LAYER or the DATE for Monthly or Daily tables.
• For the TESTS table (sporadic) we define DATE as the first column.
• Information that is not strictly needed in the static master table (such as the date it
started production) was moved to another static table named Properties.
• Notice that every time we need to define a column that will contain a date, we use
a UINT4, i.e., an unsigned 4-byte integer. That’s the recommended type
for a date variable. Don’t confuse a DATE variable (like 2-Feb-98) with a DAYS
variable (the number of days per month).
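The xy data file is shown only as a figure. A hypothetical sketch (well and completion names beyond P1 and P2, and all values, are invented for illustration; the P1 and P2 coordinates reuse those from the deviation example):

```text
//Contents of exercise2.xy (hypothetical values)
*TABLENAME XY
*LAYER *X *Y *WELL *WELLTD *ALIAS
P1:A 12321 82394 "Well P1" 1600 P1A
P1:B 12321 82394 "Well P1" 1600 P1B
P2:A 12843 81029 "Well P2" 1558 P2A
P3:A 12500 81500 "Well P3" 1450 P3A
P3:B 12500 81500 "Well P3" 1450 P3B
P4:A 12650 82100 "Well P4" 1500 P4A
P4:B 12650 82100 "Well P4" 1500 P4B
P5:A 12900 82300 "Well P5" 1700 P5A
```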
This file should populate the XY table with eight completions that belong to five wells.
Notice that only names that contain spaces are enclosed in double quotes.
The next step is to decide which Filter categories we need and prepare the srt file to
define and load them with data. The next figure shows the contents of our file.
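The srt file itself appears only as a figure in the original. As a purely hypothetical sketch (I am assuming the same generic `*TABLENAME`/`*KEYNAME` style used by the other load files in this chapter; check the exact srt keywords against the OFM documentation), it could look like:

```text
//Contents of exercise2.srt (layout and keywords hypothetical)
//Each completion gets a value for the TANK and TYPE filter categories
*TABLENAME SORT
*TANK *TYPE
*KEYNAME P1:A
A PROD
*KEYNAME P1:B
B PROD
*KEYNAME P5:A
A WINJ
```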
Notice that there is a new TYPE: WINJ. I will use this string to identify water injectors. All I
need to do is assign them a unique TYPE, because we will use this column for map symbol
information. Having a unique string for the injectors allows me to assign them a unique
symbol on the map. The water injected will be loaded to a separate table: the
MonthlyInj.
The window consists basically of two different sections. The section above (Look in) is
a file navigator that allows you to find the files you want to load (they don’t need to be
all in the same folder). Once you find the desired file, you highlight it and click the Add
button. This moves the file to the second section of the window, the lower box with the
list of the files OFM will load (Files to Load).
36 Although you are creating a NEW project, the window title is OPEN. It is not clear, but that’s
how it is in OFM 3.0. OFM 3.1 fixed this detail.
Verify that your three files are being recognized as well. If you have given them
different extensions and OFM did not guess them correctly, select the Data Type
manually. After verifying the three files, click LOAD.
Notice that five spots are already on the map. You may or may not see them. This
depends on the names you have chosen for the coordinate columns and how OFM
interpreted them. That is not important yet. Check the tabs of the status window,
looking for errors.
My Status tab displays a log of what OFM has done while loading and processing the
files. I will reproduce it here because it is very important that you get familiar with these
messages. I will add my comments in bold.
Make sure you follow and understand the previous OFM information outputs. This is the
first place to check after a data load and generally where you discover any errors in the
process. If you had the same results, go ahead. If not, review your ASCII files and
procedures against our examples and try again.
5) The next thing you should do is to associate the data of the project. This will make
sure that OFM knows where to find important data such as coordinates, wellbore
names, alias names, object information, etc. Select Edit/Map/Association and
associate the data as shown in the Data Association window figure. You associate
them by clicking on each button and selecting the proper column. If you make a
mistake, there is a Clear option to de-associate.
Right click on the map and select Legend/Draw. A legend with the symbols’
descriptions appears on the map.
Select Edit/Map/Well Names and select to show Alias names. Click OK. Your map
should look more or less like this:
Your final base map could look like the one shown in the following figure:
6) Now it’s time to verify your data. To verify that all the desired tables are actually
present in the project, do an Edit/Project/Definition. This opens a window named
Edit Tables Definition.
This window is very important. First, it lists the tables available in the project and their
type (notice in the next figure that the TANKPROD table is of MONTHLY type). When the
table type is grayed out, OFM indicates that there is data already loaded to the table.
Try the XY table to see what I mean.
8) Now check your filter data. Do an Edit/Project/Sort. The filter table information
will be displayed. Make sure you check the values.
9) You could verify that OFM associates completions to their respective wells. You
should remember that OFM does that using the master table column associated with
Wellbore (in our case, the WELL column).
10) At this point, the project is almost finished, except that it contains no useful data to
analyze, just some names and coordinates. It is time to load some production data.
When you select a file with extension prd, the Data Loader assigns it a Data Type of
DATA (see the next figure).
To add the data in this file to the project, load the file with File/Get External
Data/Data Loader. This is the same window we used before. If you still have the old
files left in the lower section, clear them with the Clear All button and just add the new
file with the production data.
11) The production data of the completions has been loaded. It is time to verify it.
Select Edit/Project/Data. Then select the table MonthlyProd and click OK.
A grid appears with the MonthlyProd columns. The data shown in the grid belongs to
whatever is being grouped by OFM. Group the first completion (P1:A) to populate the
grid with its data. Review the others by selecting them one by one. The grid displays
only one completion at a time. If you want to quickly cycle them, you could use the
Next and Previous arrows.
After you are happy with your monthly production data, do a File/Close and return to
the base map.
12) The next step will be to prepare the file with the static data for the PROPERTIES
table. The file contents will look like the following one
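The file appears as a figure in the original. A hypothetical sketch (the STARTDATE column name and all values are invented; the notes earlier mentioned that Properties stores the date production started):

```text
//Contents of exercise2.dat (hypothetical)
*TABLENAME PROPERTIES
*STARTDATE
*KEYNAME P1:A
19990701
*KEYNAME P1:B
19990801
```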
After you prepare the file, load it with the data loader. With a dat extension, OFM will
assign it a Data Type of DATA. Load it and inspect the Status tab. After loading this file,
my Status tab displays:
If I want to view this data, I can do Edit/Project/Data, then select the table
Properties and click OK. The next figure shows the results.
13) Let’s load some log data to the table LOGTRACES. The ASCII file with log data
looks like:
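The file appears as a figure; a sketch of its likely shape (curve names taken from the log report below, values hypothetical):

```text
*TABLENAME LOGTRACES
*DEPTH *GR *SP
*KEYNAME "Well P1"
1000.0 85.2 -40.1
1000.5 88.7 -41.3
*KEYNAME "Well P2"
1000.0 62.4 -35.0
```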
Notice that this log data has been loaded to the wellbores and not to the completions.
To check this data, you can’t do an Edit/Project/Data/LogTraces because you can’t see
this kind of data in the OFM spreadsheets. The only way to verify it is to run a
log report.
Select Analysis\Log Report. When the Edit Report window comes up, add the
desired data to the report. First, add the depth and then the other log curves, as
shown in the figure.
14) In this step, you will load data to the TANK filter category. The reason we have
to do this is that when we add the production from all the intervals producing to a tank,
it never matches the values that our field technicians measure at the tanks. So it
became a standard procedure in the company to report both: the total measured and
the total of the completions that fill the tanks.
             MONTHLYPRO  TANKPROD
             OIL         OIL
DATE         (Addition)  (Measured)
----------   ----------  ----------
19990801 480.0 475.0
19990901 500.0 479.0
19991001 500.0 492.0
19991101 60.0 37.0
19991201 520.0 501.0
The other way of achieving our boss’ requirements is to load the tank measured-values
to the tank filter category. This was described as loading data to the category level. The
file we need to do this is:
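The file has the same shape as the *LOADBY example shown earlier in this chapter (tank B values elided):

```text
*LOADBY TANK
*TABLENAME TANKPROD
*DATE *OIL
*KEYNAME A
199907 79
199908 345
199909 340
199910 340
199911 340
*KEYNAME B
…
```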
Notice that the file starts with the LOADBY command. Also, notice that the production
goes to a separate monthly table named TANKPROD, which has been created with our
definition file. This is a must if we want to report this data simultaneously with
completion production data.
After loading this data (Data Type is DATA), I get the following log on the Status tab:
Again, you can’t check this data from the OFM spreadsheets. You have to report it out.
The steps needed to do this are:
TANK: A
             MONTHLYPRO  TANKPROD
             OIL         OIL
DATE
----------   ----------  ----------
19990701 80.0 79.0
19990801 341.0 345.0
19990901 345.0 340.0
19991001 362.0 340.0
19991101 353.0 340.0
19991201 347.0 0.0
e- Group the other tank’s data by doing Step/Select and selecting tank B. Inspect the
results for the other tank.
f- Remember to reset the Step/Category to LAYER after you finish. To leave the
report module and go back to the base map, do a File/Close.
15) Finally, the last data we want to add to our project is test data. In our case, tests
are performed periodically to different layers to estimate their total monthly production.
Because these tests are not done every day (or every month) but a few times a month,
we decided to store this data in a sporadic table called Tests.
The ASCII file format with the data needed to load our test information is the following
one:
*TABLENAME TESTS
*DATE *HOURS *OIL *GAS *WATER
*KEYNAME P1:A
19990707 11 1.16 1.9 0.26
19990712 14 1.3 2 0.35
19990722 24 3 4 0.80
19990807 11 2 0.26
19990812 14 1.3 2.2 0.35
19990822 3 4.4 0.9
19990907 12 1.2 2.1 0.36
19990912 15 1.3 2.75 0.35
Notice that there is only data for two layers (to keep our example short) and that the
data is not formatted in neat columns. Remember that as long as there is a space or tab
(or any combination of them) between numbers, OFM will process the file correctly.
Notice as well that the dates include the day of the month (year, month and day).
Sporadic (in time) tables store the complete date.
After loading this data (Data Type is DATA), I get the following log on the Status tab:
This concludes our second exercise. Hopefully you now have an idea of how to create a
basic project and will decide to get your hands dirty experimenting with your own data.
OFM gets its data from various sources (ASCII files, ODBC connections, Production
Analyst files, PI/Dwights files, etc.) and stores it in a proprietary database. This means
that the OFM database files can be opened (and understood) only by OFM.
A project consists of several files that you can recognize by their extension. The main
file is the *.ofm one. In our previous exercise, the main file was named
“Exercise 2.ofm”.
When you load data (from an ASCII file or any other possible source), the data goes to
the respective OFM binary file and, once it is there, the source is not needed anymore. This
means that, for our previous exercise, you can safely delete all the ASCII files you
prepared for the build (*.def, *.xy, *.srt, *.prd, *.dat, etc.). Once you loaded the data,
you don’t need them anymore.
As a piece of good advice, always keep your data in its original format. It could help you
rebuild a project from zero, in case the OFM binary files get corrupted.
The following table describes the meaning of the OFM binary files, according to their
extensions. Binary means that these files can’t be opened with any other program but
OFM.
Project is the name you have given to your project. In the previous section, we created
one named “Exercise 2”, so our configuration file for that particular project will be
named “Exercise 2.o3”.
Exercise 2.ofm Main file. This is the file you open with OFM to load the project. It
contains, among other things, your tables’ definitions.
Exercise 2.o3 Configuration file. It keeps values such as the grid settings, the
option of displaying the names and legend on the map, your data
associations, etc. You can safely delete this file and OFM will create
a new one with the default settings.
Exercise 2.o11 These files hold the information of the static master table (XY) as
Exercise 2.i21 well as any other static table of the project (Properties). If you
Exercise 2.d21 delete these files, your project becomes useless.
Exercise 2.i23 These files keep daily data. In our project, just the DailyProd
Exercise 2.d23 table. Notice that because we have not added any data to this table
yet, these files have not been created.
Exercise 2.i29 These files hold data loaded to log tables (LogTraces). If you
Exercise 2.d29 delete these files, your log data will be lost. However, the rest of
the data will work fine.
Exercise 2.i24 These files store sporadic data (our Tests table). Again, you could
Exercise 2.d24 delete them and lose only that data.
Exercise 2.i22 These files have the monthly data (our MonthlyProd and
Exercise 2.d22 TankProd tables). If you delete these files you will lose the data
of both tables.
After this quick revision, you should have a better understanding of an OFM database
file structure. The data is split in different files, according to its nature. This gives you, in
some cases, the freedom of deleting the files to get rid of the data and re-load it.
Sometimes, this is the only way of recovering a partially corrupted project. However,
there are some files that, even in extreme situations, you can’t delete without corrupting
the whole project. We will mention them as we move along.
As a final remark, notice that there is no relation between files and tables. The relation
is file to data type: if you delete the monthly data files, you delete all tables that store
monthly data.
I can’t close this section without repeating once more:
Although after loading it to a project you don’t need it, always keep your data in its
original format. It could help you to rebuild a project from zero, in case the OFM binary
files get corrupted.
If you don’t have your ASCII files, you can generate most of them by exporting the data
from an OFM project to ASCII files. This is a highly recommended practice if you don’t
have your own source data files. We will cover this subject in a separate section.
If you have gone through the whole chapter, I expect you to have the minimum
knowledge to start experimenting with your data before the real project. You could start
preparing your ASCII data files and even attempt to build your first database. However,
I have skipped important points that you will also need to know to build your final,
professional project. The next chapters will cover them, one at a time: OFM Project
Variables, Units and Multipliers, Basemap Symbols and Project Optimization.
If you need to build more than one project, you should also visit the Reusing Projects
chapter. It will basically cover how to use a nicely designed project as a template for
others.
Introduction
In the previous chapter, I described the OFM database structure and I deliberately used
general terms, such as tables, relations, columns, fields, rows, etc. Most people normally
understand these. However, I completely ignored a very important word in the OFM
jargon: a variable. Now it is time to introduce this keyword, and you had better get used
to it quickly.
A variable in OFM is, as you would expect, something that has a name and stores a
value (a number, a string, etc.). OFM doesn’t use the term column to refer to a value.
For instance, you won’t find anybody in the OFM world talking about the OIL column of
the MONTHLYPROD table. They will refer to the MONTHLYPROD.OIL input variable,
instead.
OFM variables come in three flavors: Input, Calculated and Imputed. The next
sections will explain them carefully, so be patient.
Suppose that you have a table loaded with the oil production of a completion and you
want to produce a report out of it. This table is named MONTHLYPROD and the column is
OIL. If you were using a low-level programming language, you would have to write some
code like:
With this function, you could then report oil production for well P1 with code like:
FirstDate = 199001;
LastDate = Today;
PRINT “Oil Production Report for Well P1”
As you can see, you have to manually sweep the needed dates; and retrieve and print
the OIL value that corresponds to that particular Date and Well.
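To make the contrast concrete, here is a minimal Python sketch of that manual bookkeeping (the in-memory table, the get_oil helper and all values are hypothetical; OFM’s internals look nothing like this):

```python
# Hypothetical stand-in for the MONTHLYPROD table: (well, date, oil) rows.
MONTHLYPROD = [
    ("P1", 199001, 480.0),
    ("P1", 199002, 500.0),
    ("P2", 199001, 341.0),
]

def get_oil(well, date):
    """Sweep the table by hand for the row matching well and date."""
    for w, d, oil in MONTHLYPROD:
        if w == well and d == date:
            return oil
    return None  # nothing recorded for that month

def oil_report(well, first_date, last_date):
    """Build the report lines that OFM would otherwise produce for you."""
    lines = [f"Oil Production Report for Well {well}"]
    for w, d, oil in MONTHLYPROD:
        if w == well and first_date <= d <= last_date:
            lines.append(f"{d} {oil}")
    return lines
```

In OFM, all of this collapses to asking for the DATE and MONTHLYPROD.OIL variables.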
Because these database operations in OFM are quite common, they are well known in
advance. OFM was created to do many of these things for you. For example, it does the
walking through the table and gets the OIL value that corresponds to the date and well
being processed. All you have to do to generate a report like the previous one is to
specify the DATE and MONTHLYPROD.OIL variables. OFM does the rest!
This is a very obscure subject for OFM beginners: Most variables don’t have one unique
value, but are indexed. They don’t have a single value because their value changes with,
at least, one external index. In our previous example, the OIL changes with the date
and the well, however the name of the variable is just one: MONTHLYPROD.OIL.
Another commonly used index is depth. If you report log curves, you specify the names
of the variables (LOGTRACES.DEPTH, LOGTRACES.GR, LOGTRACES.SP, etc). The GR, SP
and any other log curve will have the proper value, depending on the well selected and
the depth being reported. Very cute.
OFM has three types of variables (yeah, I’ve said that already). The use of each type
should be clear after you go through this chapter. Usually, once you know what you
want to do with the variable, there is only one type that would do the work.
Input Variables
These variables have already been introduced to you in previous chapters. They are
basically the table columns. The names they get are completely determined when you
create them. If you create a table named MonthlyProd and in that table you define a column
named Days, then you have an input variable named MonthlyProd.Days.
Another example will be the variable that holds the well names of our last exercise. It
would be XY.Well.
These are also the only variables you can load with data. Anything that you load to a
project goes to an input variable. Input variables hold the raw data. Whatever you load
to them is what their value will be.
Imputed Variables
So, suppose that I have a table named MProd like the following one:
The OIL value is the total volume produced by the completion during the month. If I
want to have the effective producing rate, I have to divide this value by the number of
days that the zone produced during that month. For instance,
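With hypothetical numbers (the original figure is not reproduced): say the zone produced OIL = 480 volume units over DAYS = 24 producing days; then

```text
OILRATE = OIL / DAYS = 480 / 24 = 20 volume units per producing day
```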
This figure is a value that can be calculated from the raw data and you don’t have to
load. OFM calculates it. One way of doing these calculations is using Imputed Variables.
For this particular example, you could create an Imputed variable named OILRATE in the
MProd table, whose equation would be:
This variable will be accessed as any other Input variable and its name will be
MProd.Oilrate. Your final MProd table could then look like:
Notice that the highlighted values have not been loaded. OFM calculates these figures
based on the existing data and the specified equation.
Notice also that when you specify the equation OIL/DAYS, OFM knows that all
calculations refer to the month selected. It is not possible to divide February’s OIL value
by December’s DAYS. It is done ONE RECORD AT A TIME. This keeps the equations simple.
Calculated Variables
This is where OFM power really shines. Calculated variables are variables that you define
completely: the name and the equation. You can virtually use them to calculate anything
you can imagine. To build your calculated variables you can use constants, simple
operators, input and imputed (and even other calculated) variables, and more than 250
system functions. You can also use user functions, which are functions that you code in
a special scripting language.
For instance, imagine that you want to report, once a month, the total oil accumulated
from beginning of production. You don’t externally calculate this value and load it. You
ask OFM to calculate it for you! For example, say that you have a MonthlyProd table like:
The left-hand side is your input table. The highlighted values on the right hand side are
calculated by OFM. If a value changes for the OIL input variable, OFM will show the
changes in the CumOil variable.
CumOil = @CumInput(MonthlyProd.OIL)
Notice the @ sign. This identifies one of the available system functions37 as part of the
definition. This function is CumInput and is designed to accumulate an input variable,
such as MonthlyProd.OIL.
OFM does not have any pre-defined calculated variables. They have to be defined by
you for every project you build. Don’t worry now about this. There are some tricks and
tools to quickly create a set with the most popular variables.
37
System Functions is a world in itself and will be covered in later chapters. For now, just
believe that is a pool of available functions that you can use to build your own calculated
variables. There are system functions for mathematics, statistics, database access, file
manipulation, etc. More than 250 functions are available in this pool and the number gets bigger
in every new OFM release.
• Input Variables
• Imputed Variables
• Calculated Variables
The Input type is easy to understand. They are the columns of the tables that you need
to fill in (load) with your raw data.
Imputed are variables that you don’t load with data, you just define the equation and
OFM calculates the values. The equation you can specify is very limited.
Calculated variables are extremely powerful and the equation can calculate virtually
anything.
“Ok”, you’ll say. “I got them but… why is it that we have an Imputed type, very limited
compared to the Calculated ones? Why can’t I just ignore the Imputed ones and build all
my equations using Calculated Variables?”.
This question is intimately related to the Group procedure of OFM38 and the main
difference between them is when the equation is actually calculated. The answer is NO,
you can’t ignore them. I’ll explain:
If you Group these two completions together, OFM will load their data and consider it as
only ONE set (i.e., a group). The result will be a unique set of values, like the following
one:
38
Grouping is the bread-and-butter of OFM operations. You can’t do anything without grouping
the desired data, so you better understand very clearly what’s coming now.
This is a group. The input variables ProdDays, OIL, WATER and GAS have been defined
in a way that a group of values is the addition of values, so, the OIL of the group for
Nov-99 is 135 which is the addition of 75 (for P1:A) and 60 (for P1:B). The same
happened to the other figures.
Let’s go back to our OILRATE equation. To simplify the point, I will just consider one
month: Nov-99.
At this point you should detect a mistake here. If P1:A produced at 2.50 bbl/day and
P1:B at 2.72 bbl/day, how can you get the group of both to produce only 2.6 bbl/day?
Both together, they should report a total rate of 5.22 bbl/day!
The difference is because some calculations (particularly the rates) can’t be carried out
from total figures. They must be performed on all the individual components and then
added to get the group value. This is the same as pointing out that the sum of the
individual rates is not the same as the rate of the sums.
Calculated Variables are calculated using the group values, i.e., after the data has been
grouped. Imputed variables are calculated before the data is grouped. Although this
seems to be a big limitation, it is not. The rates example is about the only type of
equation that suffers this difference so it is more or less the only type of variables that
you have to define as Imputed. All the other operations can be normally done over the
group figures with Calculated values with no difference in the results.
Loosen up your eyebrows! There is an exercise to prove this to you. Imagine the case of a
CumOil. Adding the individual accumulated values or accumulating the monthly value of
the group leads to exactly the same result, as shown in the next tables:
The grouping of the accumulated values (Imputed variable style) leads to:
Remember that most of the time, a calculated value works the same if calculated over
the group value (Calculated variable style), so almost all your calculations can use the
full OFM power of calculated variables. The only (well known) exception is rates, which
you will have to calculate using Imputed variables, and there a simple division is enough.
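The whole argument can be checked in a few lines of plain Python. This is only an illustration, not OFM code, and the 30 and 22 producing-day figures are assumptions of mine, chosen so that they reproduce the 2.50 and 2.72 bbl/day rates quoted above:

```python
# Nov-99 data for the two completions (producing days are assumed values).
completions = [{"oil": 75.0, "days": 30.0},   # P1:A -> 2.50 bbl/day
               {"oil": 60.0, "days": 22.0}]   # P1:B -> ~2.72 bbl/day

# Imputed style: calculate the rate per completion FIRST, then group (add).
imputed_rate = sum(c["oil"] / c["days"] for c in completions)

# Calculated style: group (add) FIRST, then apply the equation to the totals.
calculated_rate = (sum(c["oil"] for c in completions) /
                   sum(c["days"] for c in completions))

print(round(imputed_rate, 2))     # 5.23 -> the correct group rate
print(round(calculated_rate, 2))  # 2.6  -> wrong for rates

# Addition-based figures like CumOil, however, commute with grouping:
def cum(xs):                       # running total, record by record
    out, total = [], 0.0
    for x in xs:
        total += x
        out.append(total)
    return out

a, b = [10.0, 20.0, 30.0], [5.0, 15.0, 25.0]   # monthly oil, two completions
sum_of_cums = [x + y for x, y in zip(cum(a), cum(b))]
cum_of_sums = cum([x + y for x, y in zip(a, b)])
assert sum_of_cums == cum_of_sums              # identical either way
```

The two rate figures disagree, while the two cumulative figures agree: that is exactly why rates go in Imputed variables and almost everything else can be Calculated.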
As mentioned before, these are the table columns. Chapter 3 covered database
construction and plenty of input variables were created. The examples just showed how
to create the variables but mentioned nothing about other attributes you normally set to
complete the definition. This section will tell you all there is to know about these
variables.
Remember that in Chapter 3, the table-structure of the projects was created from ASCII
definition files. The next figure repeats one of the files used.
//Contents of exercise2.def
*TABLENAME XY Static Master
LAYER STRING 15
X FLOAT
Y FLOAT
WELL STRING 15
WELLTD FLOAT
ALIAS STRING 5
Notice that we create the table specifying the name and table type (Static, Monthly,
etc.) and then the input variables, specifying the name and also variable type. There are
many variable types in OFM: FLOAT, STRING, UINT4, etc. The next section is dedicated
to them.
Notes:
In the FLOAT and DOUBLE types, the decimal point counts as a digit. The sign does not.
1234.67 7 digits: OK
-1234.67 7 digits plus sign: OK
• The difference between INT and UINT is the possibility of using a sign. UINT means
Unsigned INT, an integer that uses all bits for the number, with no sign involved.
They go from zero to 2^n−1, where n=8 for 1 byte, 16 for 2 bytes and 32 for
4 bytes. An unsigned integer of 4 bytes (UINT4) is the preferred type for
variables that will store dates.
• Choosing the correct type can change your project performance dramatically. For
instance, if you store the number of days of production per month, this number goes
from 0 to 31. If you don’t need decimal precision, then INT1 (or UINT1) is enough.
If you choose a DOUBLE for this variable, the size will increase eight times.
• For variables that will store alphanumeric string values, you have to specifically
choose the size. If you need to store a 3 or 4 chars alias name, then defining the
variable as STRING 20 is a total waste. Think about your future needs but don’t over
dimension your variables.
• If your numbers are big but without much definition (like 25,000,000), then you
don’t need precision. Later on you will see that you can load just 25 (an UINT1 is
enough) and then tell OFM that the data is in millions! This is another good reason
to sit down and think before choosing your variable types. Most of the time, FLOAT
is enough for values that have some kind of decimal precision. You need to have a
good reason to choose DOUBLE. Remember that a poorly designed database can
easily be two, three or even more times bigger than another one that does the same
thing but was designed properly. Because OFM uses index technology to access the
files, you will not notice a huge performance increase with smaller files that are
located on your local hard disk (the indexes stay the same size). However, when
your files are on a network drive, moving 30MB is definitely not the same as moving
100MB (at a rate typically much less than 10 Mbps).
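The integer ranges mentioned above are just the 2^n − 1 arithmetic, easy to verify (plain Python, purely for illustration):

```python
# Value ranges for the OFM integer types described above.
def uint_range(nbytes):
    """Unsigned: all bits carry the number."""
    return (0, 2 ** (8 * nbytes) - 1)

def int_range(nbytes):
    """Signed: one bit is spent on the sign."""
    return (-(2 ** (8 * nbytes - 1)), 2 ** (8 * nbytes - 1) - 1)

print(uint_range(1))  # (0, 255)        UINT1: plenty for 0-31 days per month
print(uint_range(4))  # (0, 4294967295) UINT4: comfortable for dates like 19990701
print(int_range(1))   # (-128, 127)     INT1
```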
Below you can see a Carry Forward option and a parameter. This comes only with the
Monthly and Daily table types and it basically creates a staircase effect on the curve.
Suppose that you have a monthly table with an OIL input variable (OFM expects one
value per month). Now suppose that you received the following data from the field:
Date OilVolume
199801 50
199803 48
199810 51
Date OilVolume
199801 50
199802 0
199803 48
199804 0
199805 0
199806 0
199807 0
199808 0
199809 0
199810 51
A month with no loaded value appears with a zero value. This is by no means the real
thing, because we know that the field reports only when there is a change in the
production. If production stays the same as the previous month, they don’t send any
information.
In other words, the well maintains the production until you are told a new value. You
could decide to synthetically create a load file with the last month value repeated (a lot
of manual work every month) or simply use the Carry Forward feature.
If you activate this for the OIL variable, OFM will maintain the value of the variable up to
a maximum of records that you specify. Suppose that you activate this setting with a
parameter of 3. Then OFM will repeat the value up to three times. If it’s a monthly table,
three months. If it’s a daily table, up to three days. In our monthly table, the result will
be:
Date OilVolume
199801 50
199802 50
199803 48
199804 48
199805 48
199806 48
199807 0
199808 0
199809 0
199810 51
The bolded values are generated by OFM. This feature can save you a lot of work,
depending on what you want to do. The maximum number of times that OFM can
repeat the value is 255 (255 months or 255 days, depending on the table type).
Although these values are generated by OFM, they behave exactly the same way as if
you loaded them.
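The rule can be sketched as follows (plain Python, with months numbered 1 to 10 standing in for 199801 to 199810; this illustrates the behaviour described above, not OFM internals):

```python
def carry_forward(records, max_repeat):
    """Fill missing periods by repeating the last loaded value
    at most max_repeat times; after that, fall back to zero."""
    filled, last, repeats = {}, 0, 0
    for period in range(min(records), max(records) + 1):
        if period in records:
            last, repeats = records[period], 0
            filled[period] = last
        elif repeats < max_repeat:
            repeats += 1
            filled[period] = last      # carried forward
        else:
            filled[period] = 0         # repeat budget exhausted
    return filled

loaded = {1: 50, 3: 48, 10: 51}        # the three records from the field
print(carry_forward(loaded, max_repeat=3))
# {1: 50, 2: 50, 3: 48, 4: 48, 5: 48, 6: 48, 7: 0, 8: 0, 9: 0, 10: 51}
```

With a parameter of 3, months 4 to 6 repeat the 48, months 7 to 9 fall back to zero, exactly as in the table above.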
Finally, notice that the name of the variable appears in the Variable Name combo box
but also in the Name text box. If you want to rename a variable, just select it, correct
its name and click on any other part of the window, out of the Name box.
Beware that variables may be referenced by their name from other places in the
database. Changing the name will make all those references invalid.
MIXED: Well P1
LOWER: well p1
UPPER: WELL P1
The second part of the window is the Math Options box. This is where you define
what OFM will do with the variable when you group several pieces together as one set.
In the Average Type, you can select NONE (simply addition, the most common option
and the default), ARITHMETIC (arithmetic average), GEOMETRIC (geometric
average) and HARMONIC (harmonic average). Remember, most of the time, you want
to add all data together so NONE is the proper choice. However, if you have a variable
that does not add together to get the group value (such as the pressure), then you need
to select the proper averaging method.
At the bottom of the window, you have the last section, related to data checking: the
Data Range box. You can specify minimum and maximum values for a variable, so
anything outside the range will be rejected. For instance, if your variable stores the days
of production per month, you could set the valid range from 0 to 31.
Finally, the Default Value could be confusing. This is not the value that OFM will report
if no value is ever loaded. It is the value that OFM will load if an attempt is made to load
a value that is out of range. Obviously, this value works in combination with the Data
Range mentioned before.
Suppose that you define a data range from 0 to 31 and a default value of 30 for the
DAYS variable. If you try to load the value 45, it does not fall in the valid range, so the
default value of 30 is assigned. You need to be very careful about this default value. I
would still use a range, but with a default value of –99999. Why? Well, if anything goes
wrong during the load, I am sure that I will know about it. Try to plot a variable that has
values between 0 and 31 with a –99999 spike and you’ll see what I mean.
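A sketch of this load-time check (plain Python; load_value is a hypothetical helper of mine, not an OFM function):

```python
def load_value(value, vmin, vmax, default):
    """Values outside [vmin, vmax] are replaced by the default at load time."""
    return value if vmin <= value <= vmax else default

# DAYS defined with range 0-31 and default 30, as in the example:
print(load_value(28, 0, 31, default=30))      # 28     (in range, kept)
print(load_value(45, 0, 31, default=30))      # 30     (silently "fixed")
# My preference: a default that screams at you on any plot.
print(load_value(45, 0, 31, default=-99999))  # -99999
```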
Reusing attributes
To close the Input Variables section, I would like to come back to the subject of reusing
the attributes. If you set your input variables attributes from within the program, you
can export them to an ASCII definition file that includes all these settings. If you do
File/Export/Table Definitions, you will generate a detailed definition file. The
following example shows you a detailed definition of the DAYS and OIL variables. Notice
that there is a command to define every attribute. I will not cover the commands in this
book but you should know that you can load a definition file with all these attributes and
they will be properly interpreted by OFM.
Imputed Variables
These are variables whose value is not loaded but calculated by OFM. They have to be
defined in the tables and the procedure is very similar to the one used for input variables.
Remember that these variables are calculated before grouping the data and they are
normally used for rate calculations, such as oil rate, gas rate, etc. Although you don’t
load these variables, they are actually created on the tables and occupy space. The
difference is that OFM calculates/refreshes the values automatically for you.
An Imputed variable can be defined from within the program. To do this, select the
desired table under Edit/Project/Definition and click Fields. The Variable
Definition window comes up. Click New and type the name of the new variable. For
example, we will create the OILRATE variable, so type in this name. So far, it looks like
any Input variable.
In the Type combo box, select one of the available Imputed Variable types. In the
example, we are using CALCULATED*4. Notice that when you select one of the
Imputed types, an Equation box appears for typing in the equation.
OIL/DAYS
As mentioned before, equations must be simple. All you can use are the basic four
operators and very few system functions. The master rule is that equations must have
all they need in the variables of the table and in that particular record. You can’t use a
system function that needs values that are present in other tables or records. Complete
the rest of the definition and click OK. OFM will add the column to the table and fill it
with the correct calculated values. This might take some time. If it does, OFM places a
message in the status bar asking you to wait until the operation is finished.
As I said before, you can use the operators and also some simple system functions.
Here is a nice application: suppose that your rate is defined as above. If in one month,
there are no days loaded, OFM assumes a zero value and the equation can’t be solved.
In those cases, you could use a conditional solution, depending on your data. These are
two typical examples:
Both use a system function: @IF. This function takes a condition and the two possible
results. If the condition is satisfied, then the first result is returned. If not, the second
one is used.
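The two example equations did not survive into this copy, but following the @IF description just given (the condition first, then the two possible results), they would look something like this (a sketch of the idea, not copied from the manual):

```
@If(Days > 0, Oil/Days, 0)       report a zero rate when no days are loaded
@If(Days > 0, Oil/Days, -99999)  or report a loud, easy-to-spot spike instead
```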
The following figure shows the previous trick applied to the variable GASRATE.
Reusing Attributes
Imputed variables are part of tables and as such, they can go in the definition file.
Because of that, you can completely define an Imputed variable with all its attributes in
a def file. This is an example:
It is extremely important to notice that when the equation has spaces inside, you need
to enclose it in double quotes (“”). If not, OFM will read just up to the space and, if that
part is invalid, it will not load it properly.
Calculated Variables
Calculated variables probably deserve a whole book. It is not my intention to cover them
completely here; really, I don’t have that much energy. Ideally, you learn them as you
need them, and basically you start by giving a good read to all the available system
functions: 99% of the calculated variables use system functions.
Remember that OFM does not create any calculated variable. That is your job. The idea
of this section is to show you what they are, how to create them and give them default
attributes, and finally to show you some tricks and tools that can help you quickly create a
common set. What do I mean by a common set, when I have not even explained them?
All right, here is a list of some of the most common things you will do with calculated
variables:
This list is quite trivial and happens to be a subset of what is already included in the
Demo database that installs with OFM. By no means is this list all you can do.
There are virtually no limits on the calculations you can perform with these variables.
Calculated variables are pure equations. They take no more space than what the
definition does (a few bytes) and they are calculated on demand, when they are
needed. This takes some computation time, but I have not noticed any delay ever. Feel
free to use them because there is no real performance penalty.
Because they are calculated, they are not related to tables and you can assign them any
name. You need to decide a naming convention, because it is very easy to have tens
(even hundreds) of calculated variables and start getting confused with what they do.
During normal work, the end user sees all the variables (input, imputed and calculated)
in a sorted list, so it is a very smart decision to split calculated variable names in two
parts: Subject.Function. For instance, for the variables listed above, I would choose
these names:
Oil.Cum
Oil.CalDayRate
Oil.Cut
Oil.CutCum
Gas.Cum
Gas.CalDayRate
Gas.Cut
Water.Cum
Water.CalDayRate
Water.Cut
Water.CutCum
Etc.
Or, alternatively, with the order reversed (Function.Subject):
Cum.Oil
Cum.Gas
Cum.Water
Cut.Oil
Cut.Water
CalDayRate.Oil
CalDayRate.Gas
CalDayRate.Water
Etc.
What is the purpose of splitting the name? Remember that the list will be presented
sorted to the user, and having a prefix splits it into sections (the Oil.xxx section, the
Gas.xxx section, etc.). This helps the end user a lot.
As we will see, OFM suggests a CV prefix. Using this default will ruin all your efforts to
make your users’ lives easier. If all of them start with CV, then there is no way of quickly
finding anything on the list.
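You can see the effect of the convention by sorting a few names (plain Python, just to illustrate the alphabetical ordering the user will see):

```python
# A handful of names using the Subject.Function convention.
names = ["Water.Cut", "Oil.Cum", "Gas.Cum", "Oil.CalDayRate", "Gas.CalDayRate"]
print(sorted(names))
# ['Gas.CalDayRate', 'Gas.Cum', 'Oil.CalDayRate', 'Oil.Cum', 'Water.Cut']
# Subject prefixes cluster together; a uniform "CV" prefix would cluster nothing.
```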
Notice in the following figure that OFM is already displaying the name of the variable we
are creating (Ratio.WOR) at the top of the window. The list in the middle shows all the
project variables. So far, because there are no calculated variables yet, the list only
includes Input and Imputed variables. From that list, you can directly select with the
mouse (double clicking or click/Add) the desired pieces to assemble your equation. The
same list is used to display three different sets: Project Variables (Input, Imputed and
Calculated), System Functions and User Functions. Notice that there is one button
for each. Try clicking on them and see how the content of the list changes accordingly.
After you are done, return to the Project Variables list.
Notice also that you have a numeric keypad and some logical operators. These can be
used to build the equation with the mouse.
From the list, double click on MonthlyProd.Water. This should move it to the big white
box above. This box is where you will define the equation. Then click on the division
button of the numeric keypad. Finally, double click on MonthlyProd.Oil to complete the
equation. Your window will look like the following figure.
Click OK to finish your equation and go back to the Calculated Variable window.
This is the window where you set all that there is to set for calculated variables. Mainly
the units, output multiplier and report format. There are two other buttons. One is the
Equation Definition, that sends you back to the Edit Calculated Variable window,
to modify the equation. The other one is the Plot Attributes, where you set the
cosmetics of how the variable will appear in plots.
P1:A
Monthly Monthly Water
Water Vol Oil Volume Oil
Produced Produced Ratio
DATE bbl bbl %
---------- ---------- ---------- ----------
19990701 20.0 80.0 25.00
19990801 21.0 81.0 25.93
19990901 20.0 78.0 25.64
19991001 19.0 78.0 24.36
19991101 22.0 75.0 29.33
19991201 23.0 74.0 31.08
Notice the multiplier (%), report column header and plot attributes (legend, color, etc.).
This is about all there is to set about a calculated variable. Report and plot attributes
and the unit/multiplier couple. The most important part is the equation.
c [Oil.Cum=@Cuminput(MonthlyProd.Oil)]
*pn "Cumulative Oil Production"
*pa green solid none 0
*u bbl
*mu M
*rh "Cumulative" "Oil" "Production"
*rf 10 1 Right
c [Oil.CalDay=MonthlyProd.Oil/@dom(date)]
*pn "Oil Rate (Cal. Day)"
*pa green solid none 0
*u bbl/d
*mu 1
*rh "Oil" "Rate" "(Cal. Day)"
*rf 10 0 Right
c [Oil.WellCount=@countinput(MonthlyProd.Oil)]
*pn "Wells Producing Oil"
*pa green solid none 0
*u wells
*mu 1
*rh "Wells" "Producing" "Oil"
*rf 10 0 Right
c [Oil.ProDay=MonthlyProd.Oil/MonthlyProd.Days]
*pn "Oil Rate (Pro. Day)"
*pa green solid none 0
*u bbl/d
*mu 1
*rh "Oil" "Rate" "(Pro. Day)"
*rf 10 0 Right
c [Gas.Monthly= MonthlyProd.Gas]
*pn "Monthly Gas Production"
*pa red solid none 0
*u cf
*mu M
*rh "Monthly" "Gas" "Production"
*rf 10 0 Right
c [Gas.Cum=@Cuminput(MonthlyProd.Gas)]
*pn "Cumulative Gas Production"
*pa red solid none 0
*u cf
*mu MM
*rh "Cumulative" "Gas" "Production"
Notice that every definition starts with a c (calculated). Then you have the given name
and the equation. Below this line come the attributes, which look pretty much like the
ones we’ve seen for input/imputed variables.
What is very interesting is that if you have one of these files with all your variables, all
you have to check is that your table names and variable names match those you have in
the target project. If they do, you can load it directly. If not, you can easily do a Search
& Replace with any text editor and adjust the file to your needs.
Suppose that your production table is not named MonthlyProd but Mprod instead. If you
attempt to load this file, OFM will complain because it makes reference to a table that
does not exist. If you replace MonthlyProd with Mprod in the file and load it again, it will
probably work. This is a harmless procedure. If the variable was already there and you
load it again, it will be overwritten, so you can repeat the load process as many times as
needed, without destroying what is already loaded.
In my personal exercise, there were some errors in the process. They were listed in the
Errors tab. Let’s try to analyze what happened.
Many calculated variables are quite common (such as accumulated oil) and are present
in almost any project. Some other variables are the result of your own analysis methods
and you can’t find them in any parser file. You will have to create those special ones. No
escape.
If you work with only one project, then maybe you should consider modifying one of
the parser files provided with the demo data sets. This could be a good starting point.
Just edit it, modify any needed table and column names and attempt to load it. The rest
of your variables will probably be created by hand, one by one using the provided
interface.
Exporting
If you work with many projects, then the goal will be to reuse the variables as much as
possible. You can not only load a parser file, but also export one. If you have a project
with all your calculated variables, then select File/Export/Parser Data,
choose the destination and filename and OFM will create a parser file with all the
calculated variables. This file could be inspected, modified (if necessary) and then
loaded to all your other projects. You can even split your calculated variables in several
parser files and load only the ones that apply to a particular project. For instance, you
could have parser files for standard projects, for log interpretation, for decline analysis
data access, etc.
Notice that the OFM variable names chosen do not include a prefix and suffix. However,
the names are chosen to locate them easily on a long list.
If you are not using any of the standard templates (most of the cases), you need to get
some information before proceeding: mainly, the names of your input and imputed
variables, which you will have to tell CV Builder. Once you have them, you will be ready
to proceed.
The Alt Input sheet has four sections, all equivalent, where you can store up to four of
your custom database structures. This is nice because you can maintain up to four
different databases and keep all your info in the file. If you maintain more than four,
then just copy the spreadsheet CVBuilder.xls to a second file and you can store another
four.
Each of the four custom sections has three different parts:
• Name
• Constants
• Table and variable names
Give it a name, fill in the appropriate values for the constants and then input your
table information. This utility generates up to 323 variables, depending on the data you
have available. The following table is all that CV Builder takes as input. While
producing the file, if the data is not available (you left the corresponding cell blank),
then the related variables are not created. So, if you leave the Monthly Gas Production
information blank, there will be no variables associated with monthly gas (no accumulated
gas, no gas/oil ratio, etc.)
Most projects will have just a few of the required input data, so you might end up filling
just 10 or 20 lines.
The next table is the full list that can be input to CV Builder. If you have additional
information, then CV Builder will not be able to create any variable using it. That will
be your manual work.
Finally, after you fill up the three sections of one of the four Alt Input areas, you can go
back to the Output sheet, select your customized
model from the menu and click Apply to generate the
parser file based on your template.
CV Builder Example
The next section is based on the project described on Chapter 3, under the exercise 2.
Because we have not used any standard template, we have to fill up one of the four
areas of the Alt Input sheet.
I select one of them, give it a name (“Exercise 2”) and fill in the constants section.
Finally, on the variables section, I fill as much as I can, based on my structure. This is a
summary of what I had and input there.
This is a nice set to start with. I then go to the Output sheet, select “Exercise 2” from
the menu and click Apply to generate the file. CV Builder thinks a bit and writes the
file. I finally load this file to my project and get the entire standard set of CV Builder
calculated variables, as displayed in the next figure.
Introduction
OFM’s primary task is to process and display numbers. These numbers are usually based
on quantities we measure and, as such, they have units.
OFM can also work with multipliers, which are simply a different way of displaying the
variables. For instance, 10,000,000 barrels could be expressed as 10,000,000 barrels
(multiplier 1), 10,000 M barrels (multiplier M), 10E+6 barrels (multiplier 10E+6) etc.
Regarding units, you can configure OFM to work with two different units systems. For
multipliers, you have three different sets. Any combination of these (six in total) is
possible and the user can select the one that best fits his/her needs. The interesting
part of all this is that OFM allows the user to modify both systems and allows simple
creation of new units and multipliers.
This chapter will explain all there is to know about units and multipliers. Units can be a
real troublemaker unless you understand how the system works. Please follow the
explanations in detail.
Units
OFM has two different unit systems. They are called English (sometimes Field) and
Metric. The user selects the desired system and OFM will convert the variable figures
accordingly. The next listing is an example of a report in English units.
BLUE_1:He
Cumulative Cumulative Cumulative
Oil Gas Water
Production Production Production
DATE Mbbl MMcf Mbbl
---------- ---------- ---------- ----------
19950701 0.6 18.3 0.0
19950801 3.6 26.7 0.1
19950901 5.8 32.2 1.9
19951001 8.0 37.9 6.1
19951101 10.7 41.2 11.8
19951201 14.5 41.8 22.1
Notice that the values are expressed in bbl (barrels) and cf (cubic feet). Also, the final
figures are expressed using multipliers: Mbbl (thousands of barrels) and MMcf (millions
of cubic feet). The next listing is the same report with the Metric unit system selected.
BLUE_1:He
Cumulative Cumulative Cumulative
Oil Gas Water
Production Production Production
DATE Mm3 MMscm Mm3
---------- ---------- ---------- ----------
19950701 0.1 0.5 0.0
19950801 0.6 0.8 0.0
19950901 0.9 0.9 0.3
19951001 1.3 1.1 1.0
19951101 1.7 1.2 1.9
19951201 2.3 1.2 3.5
19960101 3.0 1.3 6.1
19960201 3.7 1.3 8.6
19960301 4.3 1.3 11.5
Notice that the multipliers have not changed. They are independent of the unit
system, but all figures are now expressed in m3 (cubic meters) and scm (standard cubic
meters). Notice also that all the numbers have been affected by the proper conversion
factor (18.3 MMcf -> 0.5 MMscm).
OFM is flexible
OFM knows absolutely nothing about units (Huh?). The units system is all based on
user-defined rules. However, it comes with a default set of rules to let you start up with
something. The whole thing is based on a table like this one:
Notice that there are Metric, English, Operator and Factor columns. They define the
labels for a particular unit in both systems and how to convert between them.
For instance,
m3 bbl * 1.59E-01
If you are thinking “Ah…, so OFM does know about units”, I can tell you again: not really.
Nothing stops you from defining something like [bbl] = [miles] * 3 (a volumetric unit
equivalent to a length unit!). Don’t be scared. It is not that bad. This is the price of
flexibility, because you can define your own units and their equivalents in both systems.
The only limitation you have is that the units of both systems must be related by a simple
equation, such as E=M*k or E=M/k. The classic exception is the temperature equation,
from Fahrenheit to Centigrade; because it can’t be implemented with a simple factor, it
is the only equation pre-defined in the OFM code (Ok, Ok, it knows about units… but
just this one!)
When you define a variable, you assign it a unit. OFM lets you pick one from a list that
displays all the available ones. These are either the first column (if you are set to work
in metric units) or the second one (if you are set to the english system).
When you assign a unit to a variable, you are giving OFM the power to convert it, as per
the unit system selected by the user. After a unit has been assigned, OFM knows the
label to use in each system and the factor to convert between them.
Notice the following two figures. They display the list with units available for the same
variable on the same table. However, the first one shows the units available when you
are set to metric (Tools/Settings/Units/Use Metric Units checked) and the other is
the available list when the english unit system is active. Notice that the list of available
units matches the active unit system and that they are what is listed in the above table.
If the user selects the English Units system, the program just reads the number from
the binary file and appends the corresponding English unit label. Notice that it
gets this information from the definition of the variable. In this case, it will report
100 bbl.
If the user selects the Metric Units system, the OFM must perform a conversion. It
will read the number from the database (100), check the definition of the
variable to get the unit (bbl), then get the conversion factor and the metric unit
label (1.59E-01 and m3). As a result, OFM performs the calculation and correctly
reports 15.9 m3.
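This report-time behavior can be sketched in a few lines of Python. This is only an illustration: the function name, the tiny units table and its layout are made up for the example (they are not OFM internals), and the factor is the rounded one used above.

```python
# Illustrative sketch of OFM's report-time unit handling (not actual OFM code).
# OFM always stores numbers in english units; metric output is computed on the fly.

# unit -> (metric label, english label, factor), where metric = english * factor
UNITS = {
    "bbl": ("m3", "bbl", 1.59e-01),  # rounded factor from the handbook's example
}

def report(stored_value, unit, use_metric):
    """Return (number, label) as OFM would print it."""
    metric_label, english_label, factor = UNITS[unit]
    if use_metric:
        # metric system active: convert english -> metric with the table factor
        return stored_value * factor, metric_label
    # english system active: just append the english label, no conversion
    return stored_value, english_label

# 100 bbl stored in the binary file:
print(report(100, "bbl", use_metric=False))  # (100, 'bbl')
print(report(100, "bbl", use_metric=True))   # ~15.9, labeled m3
```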
Loading metric data (a file with the *METRIC command) is the other case where OFM
must perform a possible conversion. Before loading any data, it will check how the target
variable is defined. In our example, it is defined in bbl (or m3, depending on the system). Because
OFM already knows that the numbers are metric, it will consider the numbers to be m3.
The 100 of the file now means 100 m3 to OFM. However, because the program must
store this value in english, it goes to the units table, gets the conversion factor and
applies it to the number before storing it. As a result, OFM stores 100/1.59E-01, i.e.,
628.93.
Notice the big difference between this case and the previous one. The number that goes
to the database is completely different from the one you loaded. You loaded 100 and
OFM stored 628.93! This is not a problem, because OFM will report this value correctly:
100m3 or 628.93bbl, depending on the system selected by the user.
If the user selects the English Units system, the program just reads the number from
the binary file and appends the corresponding english unit label. Notice that it
gets this information from the definition of the variable. In this case, it will report
628.93bbl.
Summary
When you load data, OFM knows the units of the input file based only on the
*METRIC command. It does not care about the Tools/Settings/Units user selection,
which is used only for displaying the data. This means that you could be set to english units
and load a file with the *METRIC command; OFM will correctly load it in metric units.
Input units and output units are independent.
For using the data, select the desired unit system via Tools/Settings/Units
For loading data in metric units, you must specify the *METRIC command in the first
line of the file. No *METRIC means that data comes in english units.
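The load-time rule above can be sketched the same way. Again, this is only an illustration of the rule (the function name and the factor constant are invented for the example):

```python
# Illustrative sketch of OFM's load-time rule (not actual OFM code):
# the *METRIC command alone decides the input units; storage is always english.

FACTOR_BBL = 1.59e-01  # m3 = bbl * factor, rounded as in the handbook's example

def store(file_value, file_is_metric):
    """Return the number OFM would write to the binary file for a bbl/m3 variable."""
    if file_is_metric:
        # metric input: divide by the factor to get the english (bbl) number
        return file_value / FACTOR_BBL
    # english input: stored unchanged
    return file_value

print(store(100, file_is_metric=False))  # 100 bbl stays 100
print(store(100, file_is_metric=True))   # 100 m3 is stored as ~628.93 bbl
```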
Problems
Well, hopefully the previous explanation was enough to show you how the unit system
works in OFM. It is very simple. However, it is very easy to run into problems,
particularly when you work with metric units.
Problem 1
Suppose that you are loading data in metric units (your file includes a *METRIC
command) but you have not assigned a proper unit to the target variable. The next
figure displays this scenario.
Notice that the load file includes the metric command but the target variable (PRD.OIL)
has no units defined. Because OFM does not know anything about how to convert
PRD.OIL, it does not convert! The final number that goes to the binary file is the same
as the one in the load file (100), but this 100 is 100 m3! You ended up with a metric
number in your binary file!
If you use this number, the results vary and can produce serious errors.
For example, suppose that you have not defined any units for PRD.OIL yet. If you use it
in english units, OFM does not calculate any conversion. It just displays the internal
number: 100. Although it will not append any unit label (it is still undefined), it displays
the expected number.
If you switch OFM to metric units, OFM will attempt to convert the internal number but
because there is no unit defined yet, it does not do any conversion and it displays again
the 100 (still with no unit label).
A week later, you discover that you forgot to assign units to your PRD.OIL, so you go
into Edit/Project/Definition and assign a bbl (and its equivalent m3) unit to it: Big
mistake... The data loaded does not change and the number 100 is still in the database.
The story ends up with a reflection: You can’t assign a unit after loading metric data.
If you discover that you are missing a unit after you loaded metric data, you can fix it by
defining the unit and reloading the data file. OFM will be then able to perform the
conversion and store the numbers correctly.
If you load english data, the number that is stored is the same as what you had in your
load file. If you define a unit later, you should not have major problems.
Problem 2
Take a look at the default units table on page 218. On the english side, you normally
have one line for each unit. However, some of them have the same metric equivalent
(with a different factor, of course).
Here is a classic example:
bbl, gal and acre-ft all have the same metric equivalent: m3. If you look at the figure on
page 221, the list of available units in the metric system lists three m3. Although their
labels are the same, their factors are not! These are the three m3 labels present on the
Metric side of the table.
Imagine that you initially defined PRD.OIL in bbl. Then you load 100m3 to it and this
number gets properly stored as 628.93. The conversion factor used for the load was the
one for bbl (i.e., 1.59E-01).
Now suppose that you, by mistake, change the PRD.OIL units from the initial m3 unit to
one of the other two available m3 (for instance, the one for gal). This is not difficult. In
the available metric units list, there are three m3 and they all look the same. You can’t
tell the difference from the list!
When you report again in metric, OFM will use a new conversion factor! It will use
3.79E-03 instead of the original 1.59E-01 and it will innocently report 2.38m3
(2.38 = 628.93 * 3.79E-03).
You loaded 100m3, you used it and checked that you had 100m3 and one day, OFM
reports 2.38m3. Simply because you selected a different m3 from the available metric
units list.
Even more confusing: after seeing this funny behavior, you decide to switch to
english units and re-generate the report. Your eyes will open wide when your oil now
reports as 100 gal. A really confusing situation if you don’t understand the system.
In most cases, this is only relevant for people working with metric units because
there is a clear conflict with m3, a very popular unit in the field. This is easy to fix by
assigning units in the english unit system. However, notice that people working in
the english unit system also have some conflicts (there are two psia on the english unit
list). Fortunately, because their numbers are loaded and stored in english units, there is
no conversion involved and any psia can do the job. They will never face a problem
unless they start reporting in metric units and the pressure comes out in a different metric
unit than the one desired.
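The m3 conflict is easier to see with numbers. The sketch below is illustrative only: the dictionary stands in for the three same-labeled metric entries (the acre-ft factor is my own assumed value; the handbook's table does not list it), not for any OFM structure.

```python
# Sketch of Problem 2 (illustrative, not OFM internals): three metric entries share
# the label "m3" but carry different factors, so picking the wrong one silently
# changes every metric report.
M3_ENTRIES = {
    "bbl":     1.589874e-01,  # m3 = bbl * factor (from the units table)
    "gal":     3.785412e-03,  # from the units table
    "acre-ft": 1.233489e+03,  # assumed value; the handbook's table omits acre-ft
}

# 100 m3 loaded while PRD.OIL is defined as bbl -> stored as ~628.93
stored = 100 / M3_ENTRIES["bbl"]

print(stored * M3_ENTRIES["bbl"])  # correct m3 entry: reports ~100 m3 again
print(stored * M3_ENTRIES["gal"])  # wrong (gal) m3 entry: reports ~2.38 "m3"
```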
m3 bbl * 1.589874E-01
m3 gal * 3.785412E-03
m3/d bbl/d * 1.589874E-01
m3/m3 bbl/bbl * 1
m/sec ft/sec * 3.048000E-01
scm cf * 2.831685E-02
scm/d cf/d * 2.831685E-02
kpa psia * 6.894757E+00
kpa psig * 6.894757E+00
kpa psi * 6.894757E+00
cm inch * 2.540000E+00
m ft * 3.048000E-01
kg lb * 4.535924E-01
scm/m3 cf/bbl * 1.781080E-01
m3/scm bbl/cf * 5.614586E+00
m2 acre * 4.046873E+03
bar psia * 6.894757E-02
bar psig * 6.894757E-02
kPa/m psi/ft * 2.262059E+01
kg/kg wt% * 100
When you create a project, OFM builds a binary file that belongs to the project based on
this ASCII file. The binary file that contains the units has an o12 extension.
In our example of Chapter 3, we built a project named exercise 2, so its binary file
containing the units defined for the project was exercise 2.o12.
Once you build a project, the ASCII Units.def has no further use, until you build another
one. If you need to change something in the units table of a particular project, you need
to modify the binary file. This is done from Edit/Project/Units. The following window
displays the results of this command. The list displays all the units available and the
equivalent labels on both systems. You can Delete or Edit an existing unit or Add a new
one.
BLUE_12:Ad_4
Cumulative
Oil
Production
DATE Mbarrel
---------- ----------
19670101 2.0
19670201 3.9
19670301 5.9
If you find yourself creating projects and modifying the default units from this section,
you should consider editing your Units.def file to include your new units settings in all
the future projects you want to create. Remember, Units.def is a template file that
OFM uses only to create the o12 file of new projects. Changes in this file will not be
propagated to existing projects.
Multipliers
OFM can also express numeric variables using multipliers. This consists in scaling the
value by a factor and modifying the label to include the multiplier symbol used. For
instance, a cumulative of 2,000 bbl can be reported as 2.0 Kbbl.
Multipliers are independent from the unit system and any unit/multiplier combination is
valid. There are three different multiplier sets that a user can select via
Tools/Settings/Units/Units Multiplier Style. The next figure displays the three
available options:
Notice that there are four columns. The first three are the labels and the fourth is the
factor column, the number that will be used to format the outputs. Notice that the
Scientific column lists values like E+3, etc. These are just the labels to be used; the only
numeric information is stored in the factor column.
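The label/factor split can be sketched as follows. This is an illustration only: the style table and function are invented for the example, and the factors shown assume the thousand-scale multipliers used in the report examples below.

```python
# Sketch of how a multiplier affects output (illustrative, not OFM internals):
# the stored number is divided by the factor and the multiplier label is
# prepended to the unit label.
MULTIPLIER_STYLES = {
    "Metric":     ("K", 1000),    # e.g. Kbbl
    "Scientific": ("E+3", 1000),  # e.g. E+3bbl
    "Field":      ("M", 1000),    # e.g. Mbbl (M means 1,000 in the Field style)
}

def format_value(value, unit, style):
    label, factor = MULTIPLIER_STYLES[style]
    return f"{value / factor:.1f} {label}{unit}"

print(format_value(2000, "bbl", "Metric"))      # "2.0 Kbbl"
print(format_value(2000, "bbl", "Scientific"))  # "2.0 E+3bbl"
```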
As we mentioned before, once the project is created, this information goes to the o13
file. If you want to modify the multipliers of a project, then do Edit/Project/Multipliers.
A window comes up with the list of multipliers that are available in the style you have
active. Click on any of them to Delete or Edit it. You can also Add a new one to the list.
Notice that there is some inconsistency between the names, which is why I will repeat
the equivalencies between the two windows:
Metric 1 – Metric
Metric 2 – Scientific
English – Field
Scientific Multipliers Style:

            Cumulative              Cumulative
            Oil                     Oil
            Production              Production
DATE        E+3bbl       DATE       E+3m3
----------  ----------   ---------- ----------
19670101    2.0          19670101   0.3
19670201    3.9          19670201   0.6
19670301    5.9          19670301   0.9
19670401    7.8          19670401   1.2
19670501    9.7          19670501   1.5
19670601    11.9         19670601   1.9

Metric Multipliers Style:

            Cumulative              Cumulative
            Oil                     Oil
            Production              Production
DATE        Kbbl         DATE       Km3
----------  ----------   ---------- ----------
19670101    2.0          19670101   0.3
19670201    3.9          19670201   0.6
19670301    5.9          19670301   0.9
19670401    7.8          19670401   1.2
19670501    9.7          19670501   1.5
19670601    11.9         19670601   1.9
Problems
Generally speaking, multipliers never bring problems. You could eventually face some
data-related trouble if you lose control over your multipliers (a user changes the
setting). Notice that M means 1,000 in the Field style but 1,000,000 in the Metric
one. You should verify your multiplier style if you notice these kinds of problems with
your database. Normally, once a multiplier style is set, users rarely change it. However,
keep an eye on it and be ready to solve problems with it.
Introduction
When you open an OFM project, you have available a set of modules, each one
implementing a different set of specialized tools. Modules talk to each other (when they
have to) and share the results. These modules are available through the Analysis menu
and you open them as you need. However, there is one module that is always opened
and is the starting point of almost anything you can do with your data: The basemap
module.
The base map is the only module that must be opened. Although you can de-activate
the map to save OFM some work, you can’t completely get rid of it because if you close
it, you close the project.
The main function of the basemap is to give a graphic representation of your field. Every
entity in the static master table is plotted on the map, so you generally see a symbol
representing each of them. You can also superimpose a grid, names and annotations to
represent geographic details, such as rivers, borders, etc. Everything but the symbols is
optional.
Symbol information comes from the combination of two different sources of data: The
well type info and the symbol mapping info.
This information specifies, for every entry in the static master table (every entity), what
symbol should be used. This information is given as an acronym, usually a three- to six-
character string. You should not include spaces in these acronyms. OILPRO is a valid
one; however, OIL PRO could create problems.
OFM allows you to provide this information through three different methods:
You must decide which method you want to use and tell OFM about it. This is done with
the Data Association window. Do Edit/Map/Association to reach this part of the
program. The figure shows an example of this window. Notice the three top buttons.
They allow you to select one of the three options mentioned above. In this particular
example, the sort (filter) table was chosen, particularly the WELLTYPE column of this
table.
The next figure shows an example of a filter table that keeps the well type information
on a (highlighted) column named WELLTYPE.
This option is the most popular one. Normally you already have a filter category that can
be used to quickly recover wells (or completions) based on the symbol information (for
instance, filter by category to get all oil producers).
The next figure shows an example of a static master table that keeps the well type
information on a (highlighted) column named TYPE.
This is the option I would use if my database is so simple that I don’t have any filter
categories defined (no filter table at all). Also, you want to store the well type
information on the static master table only if your project is not only simple but also
small. Remember that the static master table is used very frequently by OFM and you
must keep it to a minimum size. If you can store this information in a different place,
do it.
For instance, you could create a calculated variable that inspects the accumulated GOR
value to date and decides, based on it, if it should return a GAS or an OIL acronym.
Another application could be to retrieve the acronym from a particular table. The next
paragraphs will give you a couple of examples implementing some ideas.
This variable inspects the last value of the cum GOR and compares it to the constant
200000. If it is greater, it returns a GAS acronym, else an OIL one.
Notice that these two are the only acronyms (symbols) you will get from this function. In
the cases where you don’t have a GOR (an injector well, for instance), the function will
not return a proper value. You should be careful when designing this kind of variable.
We could create a new improved variable, such as the following one
This variable first tests if there are Oil and Gas values. If both are greater than zero,
then it will return whatever our first function (Map.Symbol) decides, based on the GOR.
If one (or both) is zero, then it will ignore the GOR and return an “UNDEF” acronym.
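The logic of the improved variable can be sketched in Python. This is an illustration of the decision logic only; the actual OFM calculated variable would be written with @if and cumulative functions, and the function name and inputs here are invented for the example.

```python
# Python sketch of the improved symbol function described above
# (illustrative; not the OFM calculated-variable syntax).

def map_symbol(cum_oil, cum_gas, gor_limit=200000):
    """Return the acronym used to pick the well symbol."""
    if cum_oil <= 0 or cum_gas <= 0:
        return "UNDEF"            # e.g. an injector: no meaningful GOR to inspect
    gor = cum_gas / cum_oil       # cumulative gas-oil ratio
    return "GAS" if gor > gor_limit else "OIL"

print(map_symbol(1000, 5e8))  # GOR = 500000 -> "GAS"
print(map_symbol(1000, 1e6))  # GOR = 1000   -> "OIL"
print(map_symbol(0, 0))       # no volumes   -> "UNDEF"
```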
Symbols do not automatically update. Refer to the Symbols are not Dynamic section ahead.
The following map displays the results of this variable and the symbols representing
these three possible values.
The following section displays the definition of a possible sporadic table to hold map
symbol acronyms.
*TABLENAME WELLTYPE
*DATE *WELLTYPE *REMARKS
*KEYNAME "BLUE_10:Ad_1A"
19590101 "OIL" ""
*KEYNAME "BLUE_11:Li_1C"
19590101 "OIL" ""
*KEYNAME "BLUE_12:Ad_4"
19670101 "OIL" ""
*KEYNAME "BLUE_12:Li_1C"
19590101 "OIL" ""
*KEYNAME "ORANGE_14:Li_1C"
19670101 "OIL" ""
*KEYNAME "ORANGE_15:Ge_1"
19291401 "" ""
19590101 "DRY" "Original State of the well (Shot Off depth by Western)"
19841201 "OIL" "Re-perforated on depth (By Schlumberger)"
19841203 "OIL" ""
*KEYNAME "ORANGE_27:Ge_1"
19590101 "DRY" "Original State of the well (Shot Off depth by Western)"
19701201 "OIL" "Re-perforated on depth (By Schlumberger)"
19951201 "DRY" "Production finished"
*KEYNAME "ORANGE_28:SWD"
19590101 "SWDIS" ""
. . .
Once the data is loaded, you should verify that it was accepted properly. The next figure
shows the results for one of the completions.
The next part would be to build a calculated function that returns the last of the
acronyms (for the example above, the ORANGE_27:Ge_1 completion, it would return
DRY for Dec-95) and associate it in the Data Association window.
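The "last acronym" lookup can be sketched as follows. This is only an illustration of the idea; in OFM it would be a calculated variable over the sporadic table, and the function name and record layout here are assumptions for the example.

```python
# Sketch of the "last acronym" lookup against a sporadic WELLTYPE table
# (illustrative; not OFM syntax).

def last_welltype(records):
    """records: list of (yyyymmdd, acronym); return the most recent acronym."""
    dated = [(d, a) for d, a in records if a]  # skip records with empty acronyms
    if not dated:
        return "UNDEF"
    return max(dated)[1]  # yyyymmdd integers sort chronologically

# ORANGE_27:Ge_1 from the listing above:
history = [(19590101, "DRY"), (19701201, "OIL"), (19951201, "DRY")]
print(last_welltype(history))  # "DRY" (the Dec-95 record wins)
```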
This is the second source of information that OFM needs to draw the symbols on the
basemap. This information can come from two sources:
An ASCII file
The acronyms themselves
ASCII file
For the first option, you must have an ASCII file like the following one:
OIL Oil_Producer 3 3
GAS Gas_Producer 5 2
GINJ Gas_Injector 73 2
WINJ Water_Injector 11 4
CINj Carbon_Dioxide_Injector 18 1
DRY Dry_Hole 2 1
SPROD Steam_Production 21 1
DISCV Discovery_Well 32 1
MON Monitor_Well 60 1
WACO2 WACO2 50 5
ITA TA'd_Injector 51 5
PTA TA'd_Producer 52 5
0 White
1 Black
2 Red
3 Green
4 Blue
5 Yellow
6 Magenta
7 Cyan
8 Gray
9 Brown
10 Dark Red
11 Dark Green
12 Dark Blue
13 Dark Gray
14 Dark Brown
15 Purple
Acronyms
The second method is purely controlled by OFM. It scans all available acronyms and
randomly assigns each of them a symbol and a color. The description is the acronym
itself. There is no ASCII file involved in this method.
When OFM draws the symbols, it scans the database finding the acronyms, as specified
in the Data Association window. Then, depending on the user decision, it will match
each acronym with a symbol based on a reference ASCII file or randomly.
Let’s see all the possible options.
The default mapping information comes from a file that is shipped with the program.
This file is the OFM\Symbols\Welltype.def. The content of this file is listed next.
This file is standard and included with the OFM program. Notice that the first column
lists all the acronyms that will be recognized by OFM, so if you want to create your own
ones, they will have to be listed there as well. If you use one like OILPRO, OFM will not
know how to draw its symbol because it is not listed in the default file.
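A quick way to catch an unlisted acronym like OILPRO is to check your acronyms against the file. The sketch below is illustrative: it assumes the four-column layout shown in the listing above (acronym, legend, symbol number, color number) and is not an OFM utility.

```python
# Sketch of a validator for a welltype.def-style file (illustrative): it builds a
# mapping from acronym to (legend, symbol number, color number) so you can check
# that every acronym you plan to use is actually listed.

def load_symbol_map(text):
    mapping = {}
    for line in text.splitlines():
        parts = line.split()
        # keep only 4-column data lines; directives like *clear / *eof and blank
        # lines fall through this check
        if len(parts) != 4 or parts[0].startswith("*"):
            continue
        acronym, legend, symbol, color = parts
        mapping[acronym] = (legend, int(symbol), int(color))
    return mapping

sample = """OIL Oil_Producer 3 3
GAS Gas_Producer 5 2
DRY Dry_Hole 2 1"""

symbols = load_symbol_map(sample)
print("OILPRO" in symbols)  # False: OFM would not know how to draw this acronym
print(symbols["OIL"])       # ('Oil_Producer', 3, 3)
```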
Finally, once you click OK on both windows, OFM calculates the new symbols and colors
and updates the map.
Notice that this file must have a sym extension and the syntax is similar to the standard
OFM\Symbols\Welltype.def file. Select the desired symbol file, click OK on all
windows, and wait while OFM calculates the new symbols and colors and updates the
map.
*clear
DRY Dry_Well 9 1
GINJ Gas_Injector_Well 73 2
GAS Gas_Producer_Well 46 2
HORIZ Horizontal_Well 47 1
OIL Oil_Producer_Well 3 4
P&A Plugged_&_Abandoned_Well 4 1
SWDIS Salt_Water_Disposal_Well 60 6
WINJ Water_Injector_Well 11 4
WSUPP Water_Supply_Well 48 4
*eof
The following map displays the results of using the previous file. Notice that the symbols
and colors are now different. Notice also that the legend descriptions have changed as
well, reflecting the contents of the user defined symbol file.
Symbol-Shape Files
Symbol shapes come from ASCII files as well. The syntax of these files is similar to the
annotation files and is basically a set of commands that instruct OFM how to draw the
symbol. OFM comes with almost 100 pre-defined shapes, each one in its own file. This
information is located in the OFM\Symbols directory. The file name is the number of
the symbol and the extension is ano (an annotation file). The following figure displays
the contents of this directory, as installed with OFM. Notice that you have 94 valid
symbols (1.ano to 94.ano) and the welltype.def file mentioned before, in the Using
Default mapping information section.
LI 1
ARC 5.000000 5.000000 1.000000 0 360
Notice that the gas symbol commands are more complex because each part of the
symbol must be described (the main circle and the eight spikes).
You can do further customization to map symbols. All options are selected using the
Well Symbols window, which you can reach with Edit/Map/Symbols. This window has
four important sections.
Control frame: This is the section where you can overwrite the current shape
settings. You can pick one of the legends (from the Name combo box) and
set the values accordingly. You can change the legend itself (editing the name
on the combo box), change the shape, the color and the acronym (Abbreviation)
associated. Your changes will be reflected on the map immediately after clicking
OK.
Action buttons: These are the buttons on the right hand side of the window. OK
will apply the changes, Cancel will close the window without applying the
changes, Undo will undo the last change.
Save File and Get File will allow you to work with symbol files (the ones with sym
extension). This is the best place to generate custom *.sym files because you can
input all the information (acronym, shape, color, and legend) and save it to a symbol
file. Because this is an interesting task, it will be the subject of the following section.
This section will give you an example of how to easily create a symbol file. This is the
ASCII file with sym extension that can be used to match acronyms to shapes/colors
/legends.
*clear
DESC1 Description_1 89 4
DESC2 Description_2 61 6
DESC3 Description_3 9 15
DESC4 Description_4 40 4
DRY Dry_Hole 2 1
GINJ Gas_Injector 73 2
GAS Gas_Producer_Well 5 2
OIL Oil_Producer 3 3
P&A Plugged_&_Abandoned 4 1
PROSP Prospect 1 1
SWD SWD 53 5
ITA TA'd_Injector 51 5
PTA TA'd_Producer 52 5
WACO2 WACO2 50 5
WINJ Water_Injector 11 4
WSUPP Water_Supply 48 4
WATER WATER 60 6
GAS GAS 61 7
*eof
The main problem is finding out the shape/color numbers you want. This can be easily
done from within OFM. Open any OFM project and go to the Well Symbols window
(Edit/Map/Symbols).
This will create the file with the correct information (acronym, shape number, color
number and legend description) and a sym extension. It will include your current
symbols (the ones you modified and the remaining ones). If you want, you can edit out
the undesired parts with any standard editor.
As the title of this section states, basemap symbols are not dynamic. This means that
OFM does not update the basemap symbols every time it re-draws the map, but only at
specific times. This is a very important point and I could have added it as a note in
previous sections, but I decided to use a chapter section to highlight this subject.
OFM scans the acronyms, finds the matching symbols/shapes/colors and updates the
map only when you ask it to do so. Beware of this, because if you change one acronym,
you might not see the results on the map immediately.
The same applies to acronyms generated by functions. If your function returns a new
acronym, the map will not reflect the changes.
The only way of forcing OFM to re-scan your data and calculate all your symbols again is
to cycle it through the entire symbol process. This task takes time and justifies why OFM
does not do it automatically.
The map symbol refresh cycle can be done following the next steps:
Select Edit/Map/Association
Re-select the well type option (table, sort or
expression) with the corresponding button.
Re-select the same name as before and click OK.
Re-select the same Well Symbol File information
Click OK as needed (select the sym file if you select
this option) to return to the base map.
OFM spends some time recalculating your symbols. The basemap should now reflect
the changes.
The last section of this chapter will hopefully bring up new ideas of what you can do
with the flexibility that OFM has to handle map symbols.
Filter Categories
If you have defined filter categories, you could use any of them to quickly change the
kind of information that the basemap symbols display. For instance, suppose that you
have a RESERVOIR filter category. If you select it as the source of acronyms (first
figure) and then ask OFM to create the symbols from this data (second figure), your
map symbols immediately reflect the geographic location of your reservoirs!
The next base map shows the results for the demo
database.
You could also define a calculated variable and use it for the symbols. If you create the
previous variable and set OFM to use it (Well Type – Exp = Map.Symbol and Create
from Data), you get the following map.
Remember that OFM's basemap symbol model is customizable and can allow you to
represent different views of your data on the same base map. Make sure you explore
the possibilities. I’ve seen users doing bubble maps for applications that could have
been done with the base map.
Introduction
We all know that when we work with a project of respectable size, the performance of
the program suffers. In these cases, there are some tips and tricks that you could apply
to optimize the use of OFM resources and improve the overall performance. This brief
chapter will list some well-known examples. Some are just an adjustment, some require
a redesign of your database.
Every time you do a zoom or a filter that leaves fewer than 100 symbols on the map,
OFM automatically switches into the detailed mode.
Table Manager
When you select a group of wells and group them together, OFM scans all available
information and builds the group’s set of data. The set includes data from all the tables
of the project. For instance, if you are grouping for a production plot, OFM will also get
all the information for logs, wellbore diagrams, tests, etc. If your project is big, this
takes considerable time. By default, OFM uses all tables of the project. However, you
can suspend any one that you consider not useful for your work. For instance, if all you
do is declines, you can safely ignore the log data. To do this, you use the Table Manager
window, which you open via the Edit/Project/Table Manager menu. De-selecting a
table does not affect the information loaded into it. Data is left untouched, just ignored
as if it were not there.
The option of Decline Cases (in the lower part of the window) is for the DCA
information that is stored in the cases database. This information does not go to a table,
so it is not listed with the rest of them. However, every time you group data, if there is
any DCA data saved to any case of the database, OFM will retrieve it (unless, of course,
you uncheck this option).
Consider this optimization trick if your OFM database includes information that is not
intended for all users. If you work with only a section of the database, disallow the rest.
Well Lists and Filter Files offer equivalent functionality, but there are some differences between them:
The advantage of Well Lists is that you can read or prepare them with any text
editor.
The advantage of Filter Files is performance. Filter Files are much, much faster
than Well Lists.
If you are given a list of wells that will be analyzed together (for instance, you receive
an email with a list of wells to perform some kind of analysis), you could easily create a
Well List file from it. However, try to use it only the first time. After you open the well
list and have the desired wells on the base map, save it as a Filter File. The following
time that you need to filter the same group, use the binary filter srt file. This will save
you some time.
Auto Grouping
Grouping data takes time and you should do it only when you have to. In the
Tools/Settings/Auto Group window you have some options to automate the
grouping of wells. Be careful with them, because with these options you give OFM the
power to automatically load groups of wells, and that is not always what you need.
The second option (Auto group wells that meet your query criteria) will do a
similar thing, although it will auto group all wells information after a filter using the
query tool.
The third option (Auto group wells after map zoom in) groups all data of all the
wells left on the map after a zoom.
The fourth option (Sum individual well forecasts when grouping wells) relates to
how OFM will act regarding the data for DCA groups’ forecasts. If you leave it
unchecked, it will retrieve the forecast of the group. If you select it, it will retrieve the
individual forecasts of the wells that belong to the group and add them together. This
second option takes longer and should be activated only when you need it.
Tables design
Finally, this part relates to the proper design of the project and not an optimized use of
it. These tips have been mentioned in the previous chapters of this handbook, however,
I would like to put them all together in one section.
When you design tables, you need to make sure that the data type you choose is
enough for what you need to store, but not too big. Increasing the size of strings or the
precision of numbers without need increases the database size. A bigger project works
slower, so you should carefully choose your data types.
Pay attention to the size they take on the file. For strings, every character you want to
store takes a byte. For numbers, it depends on the precision. Precision affects not only
the disk space, but also the computational time. It does not take the same time to add
two integers (INT1) as it does two floating-point numbers (DOUBLE). The same applies
to any other numeric calculation.
For imputed variables, the list is smaller, but the same ideas apply.
For any of these variables, also remember that multipliers can also help you to reduce
your requirements. If you need to store a quantity that is a big number but with lots of
zeroes at the end (like 25,000,000), then you could efficiently use output and input
multipliers to cover the “thousands” or “millions” part and reserve space to store only
25.
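The input/output multiplier trick can be sketched as a pair of scaling steps. This is an illustration of the idea only; the function names and the fixed "millions" factor are invented for the example, not OFM behavior.

```python
# Sketch of the input/output multiplier trick (illustrative): a large round
# number is stored as a small one, so a smaller data type suffices on disk.

MULT = 1_000_000  # the "millions" multiplier

def to_storage(loaded_value):
    return loaded_value // MULT  # 25,000,000 is stored as just 25

def to_report(stored_value):
    return stored_value * MULT   # reported back as 25,000,000

stored = to_storage(25_000_000)
print(stored)             # 25 -> fits in a small integer type
print(to_report(stored))  # 25000000
```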
The last design advice I can give you is: keep your static master table as small as
possible. Don’t add columns there that are not strictly needed. OFM accesses the static
master table for almost any operation and you can speed up this task if you reduce the
size to a minimum. The next table displays the mandatory information that should go to
the static master table.
All other info should be placed in separate tables, such as the filter table or spare static
tables.
Introduction
OFM provides several tools to simplify the task of creating new projects. Some will let
you create a project from zero with a few mouse clicks producing a database based on
pre-defined templates. You can also define your own templates and build databases
based upon them. This chapter discusses the different possibilities that OFM provides.
A project consists of several parts: A database structure (input and imputed variables), a
set of calculated variables, the data associations for the base map and decline curve
analysis, units, multipliers,
the data that goes in it, etc.
When you build a new project using a template, most of these parts (except the actual
data and some others) can easily be pre-set. The following sections will describe how
you can create a new OFM project based on a template.
If you select the Production Analyst® template, OFM will not ask you for any of the
information needed to build your new
project. It will create it based on the
Production Analyst database model.
This model is well known by the PA
users and it is a quite simple database
structure.
When you click OK, OFM builds a PA-like project. The next listing displays the structure
of this template. The figure displays just the names and types of the input/imputed
variables. Of course, it also defines all other attributes as well, such as report formats,
curves colors, units, etc.
The following figure displays the list of calculated variables that are also created when
you select this template.
CV.LIQUID=prd.oil+prd.water
CV.CUMOIL=@cuminput(prd.oil)
CV.CUMGAS=@cuminput(prd.gas)
CV.CUMWAT=@cuminput(prd.water)
CV.CUMWINJ=@cuminput(winj.winj)
CV.CUMLIQ=@tsum(cv.liquid)
CV.CDOIL=prd.oil/@dom(date)
CV.CDGAS=prd.gas/@dom(date)
CV.CDWAT=prd.water/@dom(date)
CV.CDWINJ=WINJ.WINJ/@dom(date)
CV.CDLIQ=cv.liquid/@dom(date)
CV.CNTOILW=@countinput(prd.oil)
CV.CNTGASW=@countinput(prd.gas)
CV.CNTWATW=@countinput(prd.water)
CV.CNTWINJ=@countinput(winj.winj)
CV.CDOILW=cv.cdoil/cv.cntoilw
CV.CDGASW=cv.cdgas/cv.cntgasw
CV.CDWATW=cv.cdwat/cv.cntwatw
CV.CDLIQW=cv.cdoilw+cv.cdwatw
CV.PDOIL=@if(prd.days=0,cv.cdoil,prd.pdoil)
CV.PDGAS=@if(prd.days=0,cv.cdgas,prd.pdgas)
CV.PDWAT=@if(prd.days=0,cv.cdwat,prd.pdwater)
CV.PDLIQ=cv.pdoil+cv.pdwat
CV.PDOILW=prd.oil/prd.days
CV.PDGASW=prd.gas/prd.days
CV.PDWATW=prd.water/prd.days
CV.PDWINJW=winj.winj/winj.widay
CV.PDLIQW=cv.pdoilw+cv.pdwatw
CV.GOR=prd.gas/prd.oil
CV.OGR=prd.oil/(prd.gas)
CV.WOR=prd.water/prd.oil
CV.GLR=prd.gas/cv.liquid
CV.LGR=cv.liquid/prd.gas
CV.WGR=prd.water/(prd.gas)
CV.WCUT=100*prd.water/cv.liquid
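To make the formulas concrete, here are Python equivalents of three of the template's calculated variables. This is an illustration only: it assumes @dom(date) returns the number of days in the record's month, and the function names and arguments are invented for the example.

```python
# Python equivalents of a few of the PA-template calculated variables
# (illustrative; OFM uses its own calculated-variable syntax).
import calendar

def liquid(oil, water):        # CV.LIQUID = prd.oil + prd.water
    return oil + water

def cd_oil(oil, year, month):  # CV.CDOIL = prd.oil / @dom(date), a calendar-day rate
    days_in_month = calendar.monthrange(year, month)[1]
    return oil / days_in_month

def wcut(oil, water):          # CV.WCUT = 100 * prd.water / cv.liquid
    return 100 * water / (oil + water)

print(liquid(300, 100))      # 400
print(cd_oil(310, 1967, 1))  # 10.0 (January has 31 days)
print(wcut(300, 100))        # 25.0
```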
Beware that OFM does not complete the Map/DCA associations for this project template.
Map associations are not completely defined and you should check them against the
next figure (Edit/Map/Associations).
DCA associations are usually defined in the template as per the next figure (From
the DCA module, Edit/Scenario). However, it is a good idea to double-check
them.
Map Associations
Well Type - Sort None
Well Type - Table TYPE
Well Type - Expression None
Wellbore WELLBORE
Alias Name ALIAS
Object Type None
X Coordinate XCOORD
Y Coordinate YCOORD
Reference Depth KBELEV
Completion Depth CDEPTH
Bottom Depth TDEPTH
Project None
DCA Associations
Date Date
Oil Rate CV.CDOIL
Gas Rate CV.CDGAS
Water Rate CV.CDWAT
Cumulative Oil CV.CUMOIL
Cumulative Gas CV.CUMGAS
Cumulative Water CV.CUMWAT
Gas Oil Ratio CV.GOR
Water Oil Ratio CV.WOR
Water Cut CV.WCUT
Oil Cut CV.OCUT
If you select the GeoQuest Standard template, OFM will not ask you for any of the
information needed to build your project. It will all be based on a template suggested by
GeoQuest. This template creates a quite extensive database structure, with many
tables and variables. The menu on the right-hand side allows you to select a
language. For now, the only available one is English.
The GeoQuest template is very interesting and could be considered a case study.
It is prepared to work with monthly and/or daily data. It also has room for PVT,
financial calculations, etc.
Once you select this template, you also have to tell OFM where your data will be
coming from. The window displays the Ascii Flat Files option, which means
that all your data will come from ASCII files.
This combination will build a project with the GeoQuest-type structure and expect you to
supply the data in ASCII files (*.xy, *.prd, etc.).
When you click OK, OFM builds the project. The next listing displays the structure of this
template. The figure displays just the names and types of the input/imputed variables.
Of course, it also defines all other attributes as well.
As you can see, the structure is quite big. So is the list of calculated variables that are
created with this template:
/* Calculated Variables
frequency.Monthly = @change(@month(date))
frequency.Yearly = @change(@year(date))
frequency.Quarterly = @change(date)
cumDlyProd.Oil = @CumInput(DlyProd.Oil)
cumDlyProd.Gas = @CumInput(DlyProd.Gas)
cumDlyProd.Water = @CumInput(DlyProd.Water)
cumMthProd.Days = @CumInput(MthProd.Days )
cumMthProd.Oil = @cuminput(MthProd.oil)
cumMthProd.Gas = @cuminput(MthProd.gas)
cumMthProd.Water = @cuminput(MthProd.water)
cumMthProd.Co2 = @cuminput(MthProd.Co2)
cumMthProd.Condensate = @cuminput(MthProd.condensate)
cumMthProd.GasLift = @cuminput(MthProd.GasLift)
cumMthWaterInj.Days = @cuminput(MthWaterInj.Days)
cumMthWaterInj.Volumen = @cuminput(MthWaterInj.Volumen)
cumMthGasInj.Days = @cuminput(MthGasInj.Days)
cumMthGasInj.Volumen = @cuminput(MthGasInj.Volumen)
cumMthCo2Inj.Days = @cuminput(MthCo2Inj.Days)
cumMthCo2Inj.Volumen = @cuminput(MthCo2Inj.Volumen)
cumSteamInj.Days = @cuminput(MthSteamInj.Days)
cumSteamInj.Volumen = @cuminput(MthSteamInj.Volumen)
cdMthProd.OilRate = MthProd.Oil/@dom(date)
cdMthProd.GasRate = MthProd.Gas/@dom(date)
cdMthProd.WaterRate = MthProd.Water/@dom(date)
cdMthProd.GasLiftRate = MthProd.GasLift/@dom(date)
cdMthProd.CondRate = MthProd.Condensate/@dom(date)
cdMthProd.Co2Rate = MthProd.Co2/@dom(date)
cdMthWaterInj.Rate = MthWaterInj.Volumen/@dom(date)
cdMthGasInj.Rate = MthGasInj.Volumen/@dom(date)
cdMthCo2Inj.Rate = MthCo2Inj.Volumen/@dom(date)
cdSteamInj.Rate = MthSteamInj.Volumen/@dom(date)
pwMthProd.OilRate = cdMthProd.OilRate/Mthprod.Active
pwMthProd.GasRate = cdMthProd.GasRate/Mthprod.Active
pwMthProd.WaterRate = cdMthProd.WaterRate/MthProd.Active
pwMthProd.GasLiftRate = cdMthProd.GasLiftRate/MthProd.Active
pwMthProd.CondRate = cdMthProd.CondRate/MthProd.Active
pwMthProd.Co2Rate = cdMthProd.Co2Rate/MthProd.Active
Pay close attention to the previous list. It includes not only more than 180
calculated variables but also some user functions. This model is quite complete and it is
good advice to experiment with it.
As with the PA template, OFM does not complete the data/DCA associations for this
project template. Make sure you review them as per the next tables.
Map associations (Edit/Map/Associations)
DCA associations are defined in the template as per the next table (from the DCA
module, Edit/Scenario).
Map Associations
Well Type - Sort None (Your Filter category)
Well Type - Table None
Well Type - Expression None
Wellbore None (WELLBORE)
Alias Name None (ALIAS)
Object Type None
X Coordinate None (XCOORD)
Y Coordinate None (YCOORD)
Reference Depth None (KB)
Completion Depth None (INTEREST)
Bottom Depth None (TOTALDEPTH)
Project None
DCA Associations
Date Date
Oil Rate cdMthProd.OilRate
Gas Rate cdMthProd.GasRate
Water Rate cdMthProd.WaterRate
Cumulative Oil cumMthProd.Oil
The Finder project template is based on production views offered by Finder. The OFM-
Finder link is being aggressively improved with each version, which is why you have to
specify which Finder version you want to connect to (7.3, 8.0 or 8.5). The table
structure that OFM builds to receive all this possible data is quite extensive and varies
depending on the Finder version you are connecting to. We will not cover this link now,
but this short description is included to complete the templates section.
This is a user-defined template. It is based on any existing project that you want to use
as a model. If you have spent a lot of work on a project and you want to re-use it as a
model for your next databases, then you can “steal” the information from it and turn it
into your Company Standard template.
The files you need to get from the model project are the ones that contain the database
structure (*.ofm), the calculated variables (*.04) and DCA associations (*.14).
Input variables
Imputed variables
Calculated variables
DCA associations
In the figure, you can see the Company Standard template option enabled (the three
needed files have been copied to the OFM directory). The figure also has the Data
Source option set to Ascii Flat Files. OFM will then create a project using your custom
template and expect the data that populates it to be supplied in ASCII files (*.xy, *.prd,
etc.).
The Company ODBC Standard works like the Company Standard template but is
intended to be used with projects that connect to data sources using ODBC technology.
Your new project will be based on an existing ODBC project selected as a model. In
order to activate this option in the New OFM Project window, you need to copy four
files from your desired ODBC project to the OFM directory and rename them as
indicated in the next table.
Source Project file Copied to OFM directory as
Model.ofm Ivdef.cdb
Model.o4 Cvdef.cdb
When you select User Defined template, the Data Source can be set to different options:
Ascii Flat Files: Project structure (*.def & .par) and project data (*.xy, *.prd, etc.)
information will be supplied via ASCII files.
ODBC options: This creates a project that will connect to an external database via
ODBC. In general, you have two ways to go with these options:
You can select an ODBC project template (*.dbt file), build the structure based on it
and populate it with data through the ODBC link.
You can manually build your project structure based on the data offered by your
ODBC data source and then populate it with data also coming from the ODBC
source.
Data Sources
Although you can pick from several templates and several Data Sources, you can’t
select any combination you want. For instance, you can’t build a project based on the
PA template and an ODBC data source. The next table shows the valid combinations.
Introduction
Back Allocation (BA) is an OFM optional module. Although the module is always installed
with the program, it has its own license and you have to order it as a separate product.
BA’s main function is to calculate and distribute some total measured production among
the entities (wells, completions, etc.) that contributed to it. A common example would
be a total production measured at a tank that comes from several wells. If you can’t
measure directly how much has been actually produced by each well, you can use back
allocation to calculate it.
The result will be the individual well production and it can be:
Suppose you have a situation like the one in the following picture. All you have is the
periodically measured volume (shown in the graph) that goes into the tank. You don’t
know what percentage of this total comes from each of the three wells.
The previous paragraphs explained the problem that back allocation needs to solve. The
OFM proposed solution is based on three legs:
The production uptime must be measured (or assumed) for every well. You need this
as input data for the algorithm. For instance, if well A was shut in one day, obviously
the production that went into the tank that day came from wells B and C only. The
algorithm should allocate no production to A on that day because it was closed. The
total measured that day should be that of B + C.
Finally, the distribution policy is the key to the whole trick and it is where you have to
put all your effort and knowledge of your field. The whole algorithm converges to what
percentage of the total production goes to each well, and how OFM does this must be
properly analyzed and defined. OFM offers four different methods to implement a
distribution policy. You should pick the one that best fits your particular case. Before
we go into the details of each of the individual techniques, let’s explain the simplest of
the back allocation algorithms.
The purpose of this section is not to explain how OFM does allocation, but to guide you
through a numerical example to give you some idea of the job. Try to follow not only
the explanations but the numbers as well. The example is quite simple, with a small
number of wells.
The easiest distribution policy could be as simple as a numeric constant per well. For
example, suppose that somehow you came to a set of numbers that rate your well
production, for instance KH (layer thickness times layer permeability). These numbers
Note: you can set up the back allocation to consider a custom difference between the total
measured and the sum of the allocated values. This is set with a shrinkage factor.
Then, you can easily calculate that each well contributes to the total with a percentage
of:
Allocation Percentages
A=47% B=12% C=41%.
Then you can easily allocate the production into each of the wells by calculating the
respective percentage from that total. The results will be something like:
Allocated volumes
Total A B C
17.0 7.99 2.04 6.97
19.0 8.93 2.28 7.79
18.5 8.70 2.21 7.59
18.5 8.70 2.21 7.59
16.0 7.52 1.92 6.56
16.0 7.52 1.92 6.56
16.0 7.52 1.92 6.56
16.0 7.52 1.92 6.56
18.5 8.70 2.21 7.59
19.5 9.17 2.33 8.00
For example, the first A volume was calculated as the 47% of 17.
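This fixed-percentage scheme can be sketched in a few lines of Python. The KH values used here (82, 21 and 72) are hypothetical; they were chosen only because they reproduce the 47%/12%/41% split used in the text.

```python
# Simplest back allocation: one fixed weighting constant per well.
# KH values are hypothetical, chosen to reproduce the 47/12/41 split.
kh = {"A": 82.0, "B": 21.0, "C": 72.0}
total_kh = sum(kh.values())
shares = {well: value / total_kh for well, value in kh.items()}

# First five measured tank totals from the example.
measured = [17.0, 19.0, 18.5, 18.5, 16.0]

for day, total in enumerate(measured, start=1):
    allocated = {well: round(total * share, 2) for well, share in shares.items()}
    print(day, allocated)
```

Note that only the ratios of the constants matter, not their scale: the shares always sum to one, so the allocated volumes always add back to the measured total.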
This is the most basic algorithm you can implement, but not the most accurate. As
mentioned before, you need to include the uptime to get closer to reality. Suppose that
on the last day, well A was closed. The total measured production (11.0) should be
distributed only among B and C. In our primitive solution, A got 5.17!
If you notice the results graph, you will see that the total production was actually
distributed following the calculated individual percentages. All wells have the shape of
the measured production because they are just related by constants.
[Figure: allocated daily volumes for wells A, B and C over the 16-day period]
As mentioned before, you should use the individual well uptimes to get closer to real
life. Uptime is input data and you need to supply it. If you don’t have it available, OFM
can assume default values.
For our example, let’s assume that the uptime table is:
Production Uptime
A B C
24.0 19.2 16.8
24.0 19.2 21.6
24.0 14.4 21.6
24.0 14.4 21.6
21.0 0.0 21.6
21.0 0.0 21.6
Look at the numbers closely. They represent the hours per day that each of the wells
was producing. If our algorithm honors this data, well B should not get any production
during the days it was open 0 hours. The same goes for well A during the last two days
of the period.
We can build a complex allocation table based on the fixed percentages and the uptime.
This is what OFM calls theoretical values and it is just a new table, like the following
one:
Theoretical Values
A B C
11.3 2.3 6.89
11.3 2.3 8.86
11.3 1.73 8.86
11.3 1.73 8.86
9.87 0 8.86
9.87 0 8.86
9.87 0 8.86
9.87 0 8.86
9.87 2.88 8.86
11.3 2.88 8.86
11.3 2.88 8.86
11.3 2.65 9.84
11.3 2.65 8.36
11.3 2.65 8.36
0 2.88 8.66
0 2.3 8.86
This table contains a coefficient that not only honors the percentages but also the
uptimes. For instance, the first value for A was calculated as 47% of its 24 uptime
hours: 0.47 × 24 ≈ 11.3.
The same was done for each well, each day. As expected, for the dates where the
uptime was zero, the theoretical value was also zero.
If you do this for the complete allocating period, you get a table like:
Total A B C
17.0 55.14% 11.24% 33.61%
19.0 50.27% 10.27% 39.47%
18.5 51.59% 7.90% 40.50%
18.5 51.59% 7.90% 40.50%
16.0 52.71% 0.00% 47.29%
16.0 52.71% 0.00% 47.29%
16.0 52.71% 0.00% 47.29%
16.0 52.71% 0.00% 47.29%
18.5 45.68% 13.33% 40.99%
19.5 49.01% 12.51% 38.48%
19.5 49.01% 12.51% 38.48%
20.3 47.46% 11.15% 41.40%
18.8 50.60% 11.89% 37.52%
18.8 50.60% 11.89% 37.52%
11.3 0.00% 24.96% 75.04%
11.0 0.00% 20.65% 79.35%
The first column lists the measured (input data) production and the three other ones
show what percentage of this total each well gets for every day of the period. Notice
that the resulting percentage is zero for wells with zero hours of production.
The final calculation is now quite simple and is shown on the next table. The graph also
displays the result of the algorithm.
A B C
9.374 1.911 5.714
9.551 1.951 7.498
9.544 1.462 7.493
9.544 1.462 7.493
8.433 0.000 7.567
8.433 0.000 7.567
8.433 0.000 7.567
8.433 0.000 7.567
8.451 2.466 7.583
[Figure: allocated daily volumes for wells A, B and C, now honoring the uptimes]
Notice that now, the curves are not similar to the total because they are not only
proportional to the allocation constant but also to the uptime. When the uptime is zero,
they get no production.
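The three-step recipe above (theoretical values, daily percentages, allocation) can be sketched as follows. The 47/12/41 percentages and the uptime hours come from the tables in the text; only the first two days are shown.

```python
# Uptime-weighted back allocation, following the worked example above.
fixed_pct = {"A": 0.47, "B": 0.12, "C": 0.41}
uptime = [  # hours per day each well produced (first two table rows)
    {"A": 24.0, "B": 19.2, "C": 16.8},
    {"A": 24.0, "B": 19.2, "C": 21.6},
]
measured = [17.0, 19.0]  # tank totals for the same two days

for hours, total in zip(uptime, measured):
    # 1. theoretical value = fixed percentage x uptime hours
    theoretical = {w: fixed_pct[w] * hours[w] for w in fixed_pct}
    # 2. normalize the theoretical values into daily percentages
    day_total = sum(theoretical.values())
    share = {w: t / day_total for w, t in theoretical.items()}
    # 3. allocate the measured total using those daily percentages
    allocated = {w: total * share[w] for w in share}
    print({w: round(v, 2) for w, v in allocated.items()})
```

A well with zero uptime gets a zero theoretical value, hence a zero share, which is exactly the behavior the uptime table is meant to enforce.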
I use the word complex just to differentiate the next technique from the previous one.
It is not difficult to understand and is the natural expansion of the simple back
allocation introduced before.
The technique explained before assumes that the allocation constants are exactly that:
constants. The only factor that was allowed to change was the uptime. OFM allows you
not only to specify a varying uptime, but also varying allocation constants.
The four different allocation options are related to different techniques of specifying how
these constants vary in time.
After the eighth day, B’s coefficient goes from 21 to 40. Using this coefficient change,
we can obtain that, for the first week, the percentages are the same as in the previous
example.
Notice that this variation is independent from the uptime. A coefficient change could be
due, for instance, to some successful workover performed on B, doubling its
production.
The uptime has the same effect as before. If B is closed, B’s production will be zero.
However, when it produces, the volume that it gets should be double what it was
before the workover.
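The shift in B's share is easy to verify numerically. B's coefficient change (21 to 40) is taken from the text; the A and C coefficients (82 and 72) are hypothetical values chosen to be consistent with the 47/12/41 split of the first example.

```python
# Effect of a coefficient change on the allocation percentages.
# B's 21 -> 40 change is from the text; the A and C values are assumptions.
before = {"A": 82.0, "B": 21.0, "C": 72.0}
after = dict(before, B=40.0)

def percentages(coeffs):
    total = sum(coeffs.values())
    return {well: round(100 * value / total) for well, value in coeffs.items()}

print(percentages(before))  # B gets about 12% before the workover
print(percentages(after))   # and about 21% after it
```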
If you use this change in the allocation constants for our previous example, the resulting
numbers will be (different periods shown with different colors):
Theoretical Values
A B C
11.3 2.3 6.9
11.3 2.3 8.9
11.3 1.7 8.9
11.3 1.7 8.9
9.9 0.0 8.9
9.9 0.0 8.9
9.9 0.0 8.9
8.8 0.0 8.0
8.8 5.0 8.0
10.1 5.0 8.0
10.1 5.0 8.0
10.1 4.6 8.9
10.1 4.6 7.5
10.1 4.6 7.5
0.0 5.0 7.8
Notice that the first part uses the original allocation constants and the corresponding
uptimes. The numbers are the same as in our initial example. The other section (with
colored cells) uses the new coefficients and the theoretical values vary.
However, for the second part of the table, the B percentage changes from 12% to
21%. The first non-zero value of B in this new period was calculated as 21% of its 24
uptime hours: 0.21 × 24 ≈ 5.0.
Using these new theoretical volumes, we can calculate the new final allocation
percentages and calculate what part of the total goes to each well. The result is shown
on the next table:
Allocated Values
A B C
9.37 1.91 5.72
9.55 1.95 7.50
9.54 1.46 7.49
9.54 1.46 7.49
8.43 0.00 7.57
8.43 0.00 7.57
8.43 0.00 7.57
8.39 0.00 7.61
7.47 4.27 6.77
8.50 4.25 6.74
8.50 4.25 6.74
8.67 3.99 7.64
8.51 3.92 6.37
8.51 3.92 6.37
0.00 4.43 6.87
0.00 3.69 7.31
Notice that B actually doubled its allocated production and that A and C decreased
slightly to keep the total consistent with the measured values. The first section of
the table matches the previous example, but the second uses the new set of
coefficients. The allocated values are displayed in the next figure.
[Figure: allocated daily volumes for wells A, B and C with the coefficient change after day 8]
The next section is based on the batest database supplied with the OFM installation. It
is under the OFM\Samples\BA folder. The idea is to work through some examples, so
you can practice and run a back allocation job.
First of all, you need to understand how to load data at a Filter Category level. This was
explained in Chapter 3, under the Data levels extension - Group tables section. This
is important because the measured data that will be allocated back must be loaded to a
filter category. If you are not familiar with this, you should read the mentioned chapter,
especially that section, before continuing.
The batest database has 9 wells that allocate their production to two tanks. These
tanks are called POM (Point Of Measurement), so there are POM1 and POM2. These are
the points where production is measured daily.
Wells are named POM1:01,…POM1:04, POM2:01,… POM2:05. The next figure displays a
general overview of the project’s basemap.
For the rest of this section, we will concentrate only on POM1. Because POMs are
independent, you can study them one at a time without problems. Remember that
anything that is explained for POM1 will have its equivalent for POM2.
The next figure displays the filter table of the project. Notice that it is here where you
define to which FlowStation (POM1 or POM2) each well is connected. So, when OFM
back allocates the daily measured volume at POM1, it will be allocated to the first four
wells of the project (the ones with FS=POM1). The same applies to the other FS.
The demonstration will be done using the daily values for one month. Generally
speaking, OFM allocates one month at a time. The daily values measured in one month
can then be allocated to a daily table (one value per day, per well) or to a monthly
table (the result of the month, per well). The next tables show the values loaded to this
database for the important tables.
Flowstation daily table: These are the measured values that will be allocated among
the wells connected to POM1.
Welltest sporadic table: This table contains different factors per well that change
sporadically. Only the wells from POM1 are listed.
ProdTime daily table: this table contains the uptime per well, in hours during the
month that will be allocated.
Having introduced the available data, we can summarize what our back allocation run
will do: the FLOWSTATION group daily table data for POM1 will be allocated to the
four wells that are connected to it. The job will be done using uptime
information from the daily table PRODTIME. The allocation policy will be based on
extra data contained in the sporadic tables, either EQUATIONS or WELLTEST, depending on
the technique chosen. Finally, the results can be just reported, dumped into a ready-
to-load OFM ASCII file, or written directly back to the OFM project tables.
Having explained the data in the batest project, we will run some back allocation jobs
and check the results.
This is the first of the methods and closely resembles the examples we’ve been
talking about. The allocation constant comes from a sporadic table. In this example, the
variable that holds the constants is EQUATIONS.KH. Notice that whether this constant
is actually the KH of the well or any other rating number is irrelevant to OFM. This
constant will be used just as a weighting factor. Notice that because this constant comes
from a sporadic table, OFM will step its value along the month. In other words, the value
of the constant remains constant through the days until a new value appears in the
sporadic table.
For instance, looking at the equations table of the previous pages, we can say that
POM1:02 will start the month we will allocate (July) with a value of 10, which stays
constant during the month. On August 10th, it will go to 30. So July will have a constant
value of 10.
For POM1:04, the first value (20) is given on July 5th. OFM will consider a zero value
until then, then keep it at 20 from July 5th to July 24th, and fix it at 30 from July 25th
onwards.
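The stepping behavior described for POM1:04 can be sketched as a "last value on or before the date" lookup. The dates and values follow the description above; the function name is just for illustration.

```python
from datetime import date

# Step a sporadic constant across the month: every day takes the most
# recent value on or before it, and days before the first entry get 0
# (the assumption OFM makes for POM1:04 above).
sporadic_kh = [(date(1998, 7, 5), 20.0), (date(1998, 7, 25), 30.0)]

def stepped_value(day):
    value = 0.0  # no earlier entry -> zero
    for entry_date, entry_value in sporadic_kh:
        if entry_date <= day:
            value = entry_value
    return value

print(stepped_value(date(1998, 7, 4)))   # 0.0  (before the first entry)
print(stepped_value(date(1998, 7, 10)))  # 20.0
print(stepped_value(date(1998, 7, 31)))  # 30.0
```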
• On the first tab (Control), set up the correct date interval to allocate (the complete
month of July 1998).
• Select Daily Allocation, to generate daily data that can be later loaded to the
individual wells.
• Select Create Report and choose a file name. This file will be created with all the
information of the run. I have experienced some problems using long filenames with
spaces; if you don’t get your report file generated, choose a simple name.
• Select Create BA Input File and choose a file name. This file has a special syntax
and can be opened by OFM to recalculate an OFM BA run. All BA settings go there,
so it could be convenient to have one made. This is like a template of the run. If you
change some data (for instance, add a new KH value to tables), then you can open
this file to quickly recalculate the run. OFM automatically runs the case when you
open the file. There is nothing else to do. Just open and close it.
• Fill up the Report Header lines with some descriptive information. This will be used
to describe the run in the report file.
• The Status option lets you specify an “on” or “off” status from a sporadic table.
OFM will consider only the days within an “on” status and allocate zero production to
the well during the “off” periods.
• You can also specify a default value for the uptime. This can be used in combination
with the Status options, so you can build a production schedule with the status and
default uptime hours value.
• Once you set this tab as shown, proceed to the next one.
• The Welltest tab is used when you want to allocate production based on well test
data. We are not using that data now, so make sure you clear all settings with the
Clear All button. Your tab should look like in the next figure.
• Go to the Equations tab and also clear all settings using the Clear All button.
Because our allocation policy will be just based on the coefficients (KH or Value) of
the EQUATIONS.KH variable, set that one as displayed in the figure. You need to
also set the variable that contains the date when these constants need to be applied,
so OFM can create a step function properly. Make sure you set the Date properly, as
shown in the figure (EQUATIONS.DATE).
• You can navigate the results and perform a quick check. This window opens once,
after the job. If you close it, the only way to recover it is to re-run the
job. However, because we selected to create a report file, all this information is also
in this file, which can be opened with any text editor.
First of all, we will display all input data in tables, so you can reference them while
following the explanations.
The first table displays the input uptime and sporadic KH values. Notice that the KH
constant has been stepped by OFM as stated before: the KH keeps its value until a new
one is specified. Notice also the zero values for POM1:04; there was no previous value,
so zero was assumed until a new value was loaded (July 5th).
With this data, OFM calculates the theoretical values, which are just the multiplication
of KH and the percentage of uptime. The next table displays the results of these
calculations.
OFM Theoretical Values
Notice that these values are calculated by normalizing against a 24-hour day: each KH
constant is multiplied by the fraction of the day the well was open (uptime hours
divided by 24).
Using these theoretical values, you can calculate a percentage table, where each cell will
have the percentage of the total that each of these theoretical KH values represents. For
instance, for the first day, we can calculate:
Allocated Percentages
Date POM1:01 POM1:02 POM1:03 POM1:04
19980701 25% 25% 50% 0%
If you do that for the complete month period, the table you get is like the following one:
Allocated Percentages
POM1
Date POM1:01 POM1:02 POM1:03 POM1:04 OIL
19980701 25% 25% 50% 0% 10000
19980702 25% 25% 50% 0% 10000
19980703 25% 25% 50% 0% 10000
19980704 25% 25% 50% 0% 10000
19980705 17% 17% 33% 33% 10000
19980706 17% 17% 33% 33% 10000
19980707 17% 17% 33% 33% 10000
19980708 17% 17% 33% 33% 10000
19980709 17% 17% 33% 33% 10000
19980710 11% 33% 33% 22% 10000
19980711 11% 33% 33% 22% 10000
19980712 11% 33% 33% 22% 10000
19980713 11% 33% 33% 22% 10000
Now, simply allocating the calculated percentage of the measured value (displayed on
the last column) to each well, each date, you get the final manual results:
Rounding off numbers while keeping their sum under control is not that trivial. You can’t
just apply a simple rounding algorithm, because all numbers will be rounded off the
same way and the sum will not match. For instance, say you have these three exact
values:
values:
16.6666…
16.6666…
66.6666…
Added together, they equal 100. If you decide to round to one decimal, then you
might end up with:
16.7
16.7
66.7
But now, added together, they equal 100.1, not 100. This would introduce undesired errors!
OFM has a particular way of rounding the BA result values. It takes the smaller numbers
and truncates their decimals. Finally, it adjusts the biggest value to produce a proper final
sum. In our previous three-number example, OFM will adjust the values to:
16.6
16.6
66.8
Now the addition gives the correct number: 100. That’s how OFM rounds off the back
allocation result numbers.
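A minimal sketch of this truncate-and-adjust rounding: every value except the largest is truncated, and the largest one absorbs the difference so the rounded values still add up to the total.

```python
import math

def ba_round(values, total, decimals=0):
    """Truncate all but the largest value; adjust the largest so the
    rounded values add up to the given total (the scheme described above)."""
    scale = 10 ** decimals
    biggest = max(range(len(values)), key=lambda i: values[i])
    out = [math.floor(v * scale) / scale for v in values]
    out[biggest] = round((total - sum(out) + out[biggest]) * scale) / scale
    return out

print(ba_round([16.6666, 16.6666, 66.6666], 100.0, decimals=1))
# -> [16.6, 16.6, 66.8]
```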
By default, OFM produces allocated values with no decimals. The following table displays
the manual and OFM calculations. Notice that OFM rounded off the decimals following
the explained rounding procedure.
As mentioned before, the explanation followed the oil phase. For gas or water, OFM
applies the same ideas.
The back allocation file (*.ba) is an ASCII file with all the commands needed by OFM
to perform a back allocation run. The purpose of this file is to quickly run (or repeat) an
unattended job. You create it by running a BA job; then, by just opening it again,
OFM repeats the run.
Normally, you should not touch this file because corrupting it can easily produce a
program crash. However, there is a very interesting parameter there called precision. By
default, the *.ba file says:
*Precision 0
• Run your back allocation job as normal and ask OFM to create a BA file. OFM runs
the job, produces the results and the *.ba file.
• Open this file with a good text editor and change the precision from 0 to 1.
• Open this modified file with OFM (Analysis/Back Allocation/Open…). OFM will
re-run the complete back allocation job, but using the new precision.
You can’t set any number of decimals: it is zero by default and you can change it to 1.
This is the second method and closely resembles the previous one. However, when
you use KH or a Value, the same allocation constant applies to all produced or injected
phases (oil, gas, steam or water). When you use well test data, you can specify different
values for different phases, so it requires more data but gives you more control over the
final allocated values.
The allocation test data usually comes from a sporadic table. In this example, the
variables are different for each phase: for oil it is WELLTEST.OIL, for gas
WELLTEST.GAS and for water WELLTEST.WATER. These values are sporadic and
handled the same way the KH constant was: the values are used to build a step
function that changes when a new value is found. To specify the dates of these sporadic
test values, you also need to set the corresponding date variable; for our case, it is
WELLTEST.DATE.
With this method there is also another control tool: the possibility of discarding
test data based on its quality. If you suspect that some values are wrong, you
can exclude them from the run and ignore them as if they were not there. In order to do
this, you need an extra column in the well test table to “certify” the quality of each test.
In our example, this column is WELLTEST.STATUS. This column cannot contain
arbitrary values: OFM expects some pre-defined values to consider a test as valid. Some
of the values that OFM will interpret as a good test are:
• “good”
• “g”
• “ok”
• “1”
Notice that whether these values are actually the test values of the well or any other
rating system is irrelevant to OFM. These constants will be used just as weighting
factors, but this time, one per phase. Notice that because they come from a sporadic
table, OFM will step their values along the month. In other words, the value of the
constant remains constant through the days until a new value (of good quality) appears
in the sporadic table.
Consider the oil phase, for instance. POM1:01 will start the month we will allocate
(July) with a value of 2700, and on July 6th it will go to 2600. There is another test in
July (2550 on July 27th), but it is considered bad and OFM will ignore it. The value of
2600 will be maintained until the end of the month.
For POM1:03, the first value (2455) is given on January 1st. However, because the test is
bad (“*”), OFM will consider a zero value until the next valid one, on July 5th.
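The quality filter combines naturally with the stepping logic: a bad test is skipped as if it were not there, so the previous good value carries forward. The sketch below uses day-of-month numbers instead of full dates and takes the POM1:01 oil values from the description above.

```python
# Step sporadic well-test values, ignoring tests with a bad status.
GOOD_STATUS = {"good", "g", "ok", "1"}

def stepped_test(tests, day):
    """tests: (day_of_month, value, status) tuples; returns the value
    in effect on the given day."""
    value = 0.0  # no earlier valid test -> zero
    for test_day, test_value, status in tests:
        if test_day <= day and status.lower() in GOOD_STATUS:
            value = test_value
    return value

pom1_01_oil = [(1, 2700.0, "good"), (6, 2600.0, "ok"), (27, 2550.0, "*")]
print(stepped_test(pom1_01_oil, 5))   # 2700.0
print(stepped_test(pom1_01_oil, 31))  # 2600.0 (the bad July 27 test is ignored)
```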
• On the first tab (Control), set up the correct date interval to allocate (the complete
month of July 1998).
• Select Daily Allocation, to generate daily data that can be later loaded to the
individual wells.
• Select Create Report and choose a file name. This file will be created with all the
information of the run.
• Select Create BA Input File and choose a file name. This file has a special syntax
and can be opened by OFM to recalculate an OFM BA run. All BA settings go there,
so it could be convenient to have one made. This is like a template of the run. If you
change some data (for instance, change some well test value), then you can open
this file to quickly recalculate the run. OFM automatically runs the case when you
open the file. There is nothing else to do. Just open and close it.
• Fill up the Report Header lines with some descriptive information. This will be used
to describe the run in the report file.
• The Status option lets you specify an “on” or “off” status from a sporadic table.
OFM will consider only the days within an “on” status and allocate zero production to
the well during the “off” periods.
• You can also specify a default value for the uptime. This can be used in combination
with the Status options, so you can build a production schedule with the status and
default uptime hours value.
• Once you set this tab as shown, proceed to the WellTest one. This is the tab
where you specify where your well test data is. Notice the settings I have made in
the example window. The welltest.date and welltest.status columns provide the
date and quality of the test. The rates are provided by the welltest.oil,
welltest.gor and welltest.bsw variables.
• Notice that you can specify the gas either as a rate or as a ratio (in this case, we
used a ratio so GOR must be checked).
• The same applies to the water, where you can specify it as a rate or bsw/fraction. In
this case, the water is given as bsw and fraction, so check both.
• Finally, select the last tab (Write back to OFM) and select Clear All. This tab is
used in the cases where the BA needs to know the tables that will receive the data.
This is needed in two cases, both set in the Control tab:
1. When you select Write back to OFM, the algorithm needs to know what tables
will be receiving the data.
2. When you select Create OFM Load File, the file contains commands (such as
*TABLENAME xxx), and OFM needs to know what tables will be receiving the
data.
• You can navigate the results and perform a quick check. This window opens once,
after the job. If you close it, the only way to recover it is to re-run the job. However,
because we selected to create a report file, all this information is also in this file,
which can be opened with any text editor.
First of all, we will display all input data in tables. Notice that the WELLTEST.OIL
value has been stepped by OFM as stated before. The WELLTEST.OIL keeps its value
until a new (and valid) one is specified.
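The stepping behavior described above can be sketched in a few lines. This is an illustrative reconstruction, not OFM code: each daily slot reuses the last valid test value, and tests flagged as bad are skipped entirely. The function and data names are hypothetical.

```python
# Hypothetical sketch of how OFM "steps" sporadic well-test values:
# a value holds until a new, valid one appears; bad tests (status == 0
# here) never update the held value.
def step_values(tests, days):
    """tests: {day: (value, status)}; returns {day: value} forward-filled."""
    stepped = {}
    last = None
    for day in days:
        if day in tests:
            value, status = tests[day]
            if status:  # only good-quality tests update the held value
                last = value
        stepped[day] = last
    return stepped

tests = {1: (2700.0, 1), 6: (2600.0, 1), 8: (9999.0, 0)}  # day 8 is a bad test
print(step_values(tests, range(1, 11)))
```

With this input the value 2700 holds through day 5, 2600 takes over on day 6, and the bad test on day 8 is ignored.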
Notice also in this table, that bad test data (such as the one for POM1:01 of July 27th)
has been ignored.
With this data, OFM calculates the theoretical values, which are just the multiplication
of the test oil value and the percentage of uptime. The next table displays the results of
these calculations.
Theoretical Values
Date POM1:01 POM1:02 POM1:03 POM1:04
Oil Oil Oil Oil
19980701 2700.00 2350.00 0.00 0.00
19980702 2700.00 2350.00 0.00 0.00
19980703 2700.00 2350.00 0.00 0.00
19980704 2700.00 2350.00 0.00 0.00
19980705 2700.00 2350.00 2455.00 0.00
19980706 2600.00 2350.00 2455.00 0.00
19980707 2600.00 2350.00 2455.00 0.00
19980708 2600.00 2350.00 2455.00 0.00
19980709 2600.00 2350.00 2455.00 0.00
19980710 2600.00 2350.00 2455.00 0.00
Using these theoretical values, you can calculate a percentage table, where each cell will
have the percentage of the total that each of these theoretical oil values represent. For
instance, in the first day, we can calculate:
Theoretical Values
Date POM1:01 POM1:02 POM1:03 POM1:04
19980701 2700.00 2350.00 0.00 0.00
Allocated Percentages
Date POM1:01 POM1:02 POM1:03 POM1:04
19980701 53% 47% 0% 0%
If you do that for the complete period, the table you get is like the following one:
Allocated percentages
Date POM1:01 POM1:02 POM1:03 POM1:04 POM1
Oil Oil Oil Oil Oil
19980701 53% 47% 0% 0% 10000
19980702 53% 47% 0% 0% 10000
19980703 53% 47% 0% 0% 10000
19980704 53% 47% 0% 0% 10000
19980705 36% 31% 33% 0% 10000
Now, simply allocating the calculated percentage of the measured value (shown on the
last column) to each well, each date, you get the final manual results:
Notice the rounding done by OFM that corresponds to what was explained for the KH
allocating method. You can also edit the *.ba file, change the precision to 1 and re-run
the case. However, remember the axe. It might not be worth the effort to add just one
decimal. After all, these results are all approximations.
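The allocation arithmetic walked through above can be condensed into one small function. This is a minimal sketch with illustrative names: the theoretical rates only set the split, and the measured total (10000 in this example) is what actually gets distributed.

```python
# Allocate a measured total across wells in proportion to their
# theoretical rates (the percentages from the tables above).
def allocate(theoretical, measured_total):
    total = sum(theoretical.values())
    return {well: measured_total * rate / total
            for well, rate in theoretical.items()}

theoretical = {"POM1:01": 2700.0, "POM1:02": 2350.0,
               "POM1:03": 0.0, "POM1:04": 0.0}
alloc = allocate(theoretical, 10000.0)
# POM1:01 gets 2700/5050 (about 53%) of the 10000, POM1:02 the rest
print({well: round(volume) for well, volume in alloc.items()})
```

Note that, by construction, the allocated volumes always add back up to the measured total, which is the whole point of the method.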
This is the third method and it is quite different from the others seen so far. The method
is based on the well-characteristics plot. This plot is the Pressure-Rate plot shown in the
next figure.
[Figure: the well-characteristics (Pressure-Rate) plot, with Pressure on the vertical
axis and Rate on the horizontal axis.]
If pressure, slope and Yo are given, you can easily calculate the rate as:
Rate = (Pressure − Yo) / Slope
This is how OFM calculates the theoretical values for this method. Remember that these
values are used to calculate the allocation percentages, so at the end, your allocated
values added together match the measured one.
The needed inputs for this method are pressure (from the well tests), and the slope/Yo
pair (from the well characteristic).
This method calculates an allocation percentage that is used for all phases, so you don’t
have as much control as with the previous well test method.
The input data usually comes from sporadic tables. In this example, the pressure
variable comes from the test table (WELLTEST.THP). The rest of the data will come
from the EQUATIONS sporadic table. In our case, EQUATIONS.SLOPE and
EQUATIONS.Y0.
All data is allowed to change in time, so each set will be associated with its respective
date column (WELLTEST.DATE and EQUATIONS.DATE).
As usual, the algorithm needs the uptime, so the PRODTIME.UPTIME variable will also
be used.
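The well-characteristic rate calculation just described fits in one line of code. The sample inputs below are assumptions for illustration (pressure as read from WELLTEST.THP, slope and Yo from the EQUATIONS table); they are not the demo project's actual values.

```python
# Well-characteristic method: rate from pressure, slope and Yo,
# scaled by the fraction of the day the well was up (uptime / 24).
def wellchar_rate(uptime_hours, pressure, y0, slope):
    return uptime_hours * (pressure - y0) / (slope * 24.0)

# e.g. a full day (24 h) at a pressure of 100 with Yo = 20 and slope = 0.06:
print(round(wellchar_rate(24.0, 100.0, 20.0, 0.06), 2))  # -> 1333.33
```

With a full 24-hour uptime the formula reduces to (Pressure − Yo) / Slope, the bare well-characteristic equation.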
Unfortunately, there is no complete data loaded to our bademo project, so we will start
the explanation by loading some data to the mentioned variables.
The next table displays the well characteristic equations. Notice that they will be added
to the EQUATIONS table and complete the present records.
Also, we need to load some pressure data to the WELLTEST table. The values loaded
are displayed next.
Once you get this data loaded to the project, we can go on with the explanation.
• On the first tab (Control), set up the correct date interval to allocate (the complete
month of July 1998).
• Select Daily Allocation, to generate daily data that can be later loaded to the
individual wells.
• Select Create Report and choose a file name. This file will be created with all the
information of the run.
• Select Create BA Input File and choose a file name. This file has a special syntax
and can be opened by OFM to recalculate an OFM BA run. All BA settings go there,
so it could be convenient to have one made. This is like a template of the run. If you
change some data (for instance, change some well test value), then you can open
this file to quickly recalculate the run. OFM automatically runs the case when you
open the file. There is nothing else to do. Just open and close it.
• Fill in the Report Header lines with some descriptive information. This will be used
to describe the run in the report file.
• Notice also that there is a Shrinkage factor. If you leave it at 0, the addition of the
allocated data will match the data loaded. If you specify a non-zero value, the
measured volume will be multiplied by that factor before the allocation is done.
• The Status option lets you specify an “on” or “off” status from a sporadic table.
OFM will consider only the days with an “on” status and allocate zero production to
the well during the “off” periods.
• You can also specify a default value for the uptime. This can be used in combination
with the Status options, so you can build a production schedule with the status and
default uptime hours value.
• Once you set this tab as shown, proceed to the WellTest one. This is the tab
where you specify where your pressure data is. Notice the settings I have made in
the example window. The WELLTEST.DATE and WELLTEST.THP columns provide
the date and test pressure needed by the algorithm. The rest of the test data is not
needed.
Notice that to keep the exercise simple, we have not associated the quality variable
for the well test data (the WELLTEST.STATUS). Because OFM now has no way to
differentiate good tests from bad ones, all tests will be used (all THP values will be
used, even the ones the status variable flags as bad). If you want to apply quality
control to the test data, then associate the variable.
• Finally, select the last tab (Write back to OFM) and select Clear All. This tab is
used in the cases where the BA needs to know the tables that will receive the data.
This is needed in two cases and both are set in the Control tab:
1. When you select Write back to OFM, the algorithm needs to know what tables
will be receiving the data.
• Once you click OK, OFM runs the back allocation job using the specified data. When
it finishes, it opens a window with the results. The window is split into two panels.
The left-hand side is to navigate through the data and the right-hand side panel
displays the selection. The next figure displays the results of the data allocated to
POM1:01.
• You can navigate the results and perform a quick check. This window opens once,
after the job. If you close it, the only way to recover it is to re-run the job. However,
because we selected to create a report file, all this information is also in this file,
which can be opened with any text editor.
First of all, we will display all input data in tables. Notice that now we have two sporadic
tables (WELLTEST and EQUATIONS) and each one has its respective date. OFM will
step the values before starting the calculations.
With this data, OFM calculates the theoretical values, using the already introduced
rate equation, now affected by the uptime value:
Rate = Uptime * (Pressure − Yo) / (Slope * 24)
Applying this equation to each day and each well, we can calculate the OFM theoretical
rates, listed on the next table:
Using these theoretical values, you can calculate a percentage table, where each cell will
have the percentage of the total that each of these theoretical oil values represent. For
instance, in the first day, we can calculate:
Theoretical Values
Date POM1:01 POM1:02 POM1:03 POM1:04
19980701 1333.33 1148.57 557.00 0.00
Allocated Percentages
Date POM1:01 POM1:02 POM1:03 POM1:04
19980701 44% 38% 18% 0%
If you do that for the complete period, the table you get is like the following one:
Allocated percentages
POM1-01 POM1-02 POM1-03 POM1-04 POM1
44% 38% 18% 0% 10000
44% 38% 18% 0% 10000
44% 38% 18% 0% 10000
44% 38% 18% 0% 10000
36% 31% 15% 19% 10000
35% 31% 15% 19% 10000
35% 31% 15% 19% 10000
35% 31% 15% 19% 10000
Now, simply allocating the calculated percentage of the measured value (shown on the
last column) to each well, each date, you get the final manual results:
Notice the rounding done by OFM that corresponds to what was explained for the
previous allocating methods. You can also edit the *.ba file, change the precision to 1
and re-run the case.
Finally, this is the fourth method available (at this time) to calculate allocation volumes.
The method is based on two possible equations:
Rate = Qi * e^(-Di * t / 30.4)
Where:
Rate = Theoretical rate calculated by OFM
Or:
Rate = Qi / (1 + Di * N * t / 30.4)^(1/N)
Where:
Rate = Theoretical rate calculated by OFM
Qi = Initial Rate
Di = Decline rate per month. If decline rate is nominal, you must tell OFM.
t = Time in days since a reference date
N = Hyperbolic/Harmonic exponent
The following table lists the coefficients that will be used for this method. Again, the
batest project does not contain adequate data for this technique, so you will have to
load these values before proceeding.
Notice that we will not be using N (all null values), so the equation used will be the first
one.
• On the first tab (Control), set up the correct date interval to allocate (the complete
month of July 1998).
Notice that although there is no data loaded for N, it is still associated. This is not a
requirement. Because all our N data is 0, OFM will use the N-less (first) equation. If
you clear the N field from this window, OFM will also select the same one.
• Finally, select the last tab (Write back to OFM) and select Clear All. This tab is
used in the cases where the BA needs to know the tables that will receive the data.
This is needed in two cases and both are set in the Control tab:
• You can navigate the results and perform a quick check. This window opens once,
after the job. If you close it, the only way to recover it is to re-run the job. However,
because we selected to create a report file, all this information is also in this file,
which can be opened with any text editor.
The next figure displays the results of the stepped values that will be used by OFM.
Notice that we added a column (t) with the elapsed days between the equation
reference date and the date of the allocation.
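The elapsed-days column (t) is simple date arithmetic: the number of days between the equation reference date and each allocation date. The dates below are illustrative, not the project's actual reference dates.

```python
from datetime import date

# Days elapsed between an assumed equation reference date and the
# first allocation date of the run (July 1st, 1998).
ref = date(1998, 6, 1)
t = (date(1998, 7, 1) - ref).days
print(t)  # -> 30
```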
With this data, OFM calculates the theoretical values, using the already introduced
(first) rate equation, now affected by the uptime value:
Rate = Uptime * Qi * e^(-Di * t / 30.4) / 24
Applying this equation to each day and each well, we can calculate the OFM theoretical
rates, listed on the next table:
Theoretical Rates
POM1-01 POM1-02 POM1-03 POM1-04
598.76 298.90 299.26 0.00
598.70 298.83 299.21 0.00
598.64 298.76 299.16 0.00
598.58 298.69 299.11 0.00
598.52 298.62 299.06 596.46
598.46 298.55 299.01 596.34
598.40 298.48 298.97 596.22
598.34 298.42 298.92 596.10
598.29 298.35 298.87 595.99
598.23 194.45 298.82 595.87
598.17 194.42 194.42 595.75
598.11 194.40 194.40 595.63
598.05 194.37 194.37 595.52
597.99 194.34 194.34 595.40
597.93 194.31 194.31 595.28
597.87 194.28 194.28 198.39
597.81 194.25 194.25 198.35
597.75 145.67 194.22 198.31
597.70 145.65 194.19 198.27
Using these theoretical values, you can calculate a percentage table, where each cell will
have the percentage of the total that each of these theoretical oil values represent. For
instance, in the first day, we can calculate:
Theoretical Values
POM1-01 POM1-02 POM1-03 POM1-04
598.76 298.90 299.26 0.00
Percentages of Total
POM1-01 POM1-02 POM1-03 POM1-04
50% 25% 25% 0%
If you do that for the complete period, the table you get is like the following one:
Allocated percentages
POM1-01 POM1-02 POM1-03 POM1-04
50% 25% 25% 0%
50% 25% 25% 0%
50% 25% 25% 0%
50% 25% 25% 0%
33% 17% 17% 33%
33% 17% 17% 33%
33% 17% 17% 33%
33% 17% 17% 33%
33% 17% 17% 33%
35% 12% 18% 35%
38% 12% 12% 38%
38% 12% 12% 38%
38% 12% 12% 38%
38% 12% 12% 38%
38% 12% 12% 38%
50% 16% 16% 17%
50% 16% 16% 17%
53% 13% 17% 17%
53% 13% 17% 17%
53% 13% 17% 17%
62% 16% 22% 0%
61% 17% 23% 0%
64% 16% 21% 0%
39% 10% 13% 39%
42% 10% 14% 35%
42% 10% 14% 35%
35% 11% 15% 39%
19% 14% 19% 48%
14% 19% 19% 48%
0% 22% 22% 56%
Now, simply allocating the calculated percentage of the measured value to each well,
each date, you get the final manual results:
Read me Last
This section should have gone in the first part of the chapter because it contains very
important issues you need to know before attempting to do any back allocation job.
After reviewing the four methods, you should now have an idea of how it works: every
method calculates a theoretical value, which is then used to calculate a final allocation
percentage.
You can do a back allocation job supplying different sources of data for different wells.
For example, you can back allocate a group of wells using well test data, another one using DCA
parameters, etc.
If you set OFM to combine methods (by associating all the input variables in the BA
setup), the program will attempt to use them in sequence (well per well, date per date)
until it finds one that can be applied. This is an extremely important point because the
theoretical values that are calculated by the different methods are numerically different.
Very different.
For example, for POM01:01, July 1st, the four methods calculated these theoretical
values:
When you run a job, you need to be very careful and supply it with consistent data. If
you need to mix up the methods (because you don’t have equivalent data for all wells),
then you have to verify thoroughly data quality and consistency.
Because the allocation percentages are those of a total measured in one flow station
(POM), then you can easily apply different methods to different flow stations. For
instance, it would be easy to allocate all POM1:xx wells using KH and all POM2:xx wells
using DCA numbers. Because the parameters are consistent within the flow station, then
the allocation percentages will be also consistent.
It is best to try and allocate all of the volumes of a particular flow station using the same
method by making sure the same type of data is available for each contributing well.
When you decide to mix methods, then you have to associate all parameters in the BA
setup. Take as an example the following figures:
Setting up all parameters allows OFM to use the first possible method. The rules OFM
follows are:
• For the oil phase, the default sequence is: Well Char, KH, Decline and Welltest.
• For the gas phase, the default method is Welltest only.
• For the Water phase, the default method is Welltest only.
• For the Water and Gas Injection, the default method is Welltest only.
You need to be aware that, reportedly, OFM will not mix the KH method with any other
one. If it uses KH for a particular flow station, then it will not attempt to use coefficients
from any other method. The sequence in which OFM will apply the methods can be
changed by editing the ba file (see later).
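The fallback behavior described above can be sketched as follows. This is an illustrative reconstruction of the rules, not OFM's actual code; the dictionary mirrors the default per-phase sequences listed in the bullets.

```python
# Default method priority per phase, as described above. For each well
# and date, OFM tries each method in order until one has usable data.
METHOD_ORDER = {
    "oil":      ["wellchar", "kh", "decline", "welltest"],
    "gas":      ["welltest"],
    "water":    ["welltest"],
    "waterinj": ["welltest"],
    "gasinj":   ["welltest"],
}

def pick_method(phase, available):
    """available: set of methods with usable data for this well/date."""
    for method in METHOD_ORDER[phase]:
        if method in available:
            return method
    return None

print(pick_method("oil", {"decline", "welltest"}))  # -> decline
print(pick_method("gas", {"decline"}))              # -> None
```

This sketch does not model the reported KH exclusivity rule (KH not mixing with other methods within a flow station); it only shows the priority ordering.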
In previous sections, we only explained part of the available options of BA jobs. This
section will explain the rest. We will concentrate on the Back Allocation Setup window,
one tab at a time.
Writeback Results to OFM: This will load the results directly to OFM, saving you the
work of loading the ASCII file with the data loader. I would not recommend loading the
results directly without prior inspection, so don’t get too excited about this option;
use it at your own judgement.
41. If you don’t specify the destination variables, OFM will still generate the load file, but you will
have to edit it and add the names manually.
• If you do supply the measured OIL value and the Gas and Water as ratios to OIL, (in
the Well test tab) then OFM will allocate the real OIL (from the supplied measured
value) and the calculated Water and Gas volumes based on the real oil and the
provided ratios.
A Facility Change variable can be used to signal a network change. For instance,
suppose that POM1:01 well’s production is switched from POM1 to POM2 at July 20th.
You can use a variable (that should contain the name of the flow station to which the
well is connected -such as POM1, POM2, etc.-) to indicate possible changes during
the allocation period. If you don’t have such needs, just leave it blank.
The *.ba file is an ASCII file containing the surface network, methods, calculation
control, and production volumes for the back allocation session. To start up you do not
need a *.ba file. The BA module can generate this file when you use the Control tab
option Create a BA input file. You may want to do specialized BA configurations by
editing this file outside of OFM using an ASCII text editor. We mentioned one of the
possibilities when we talked about the numerical precision of the runs.
Because all the data needed for the run is in this file, you can do a Tools/Back
Allocation/Open from any project, not necessarily the one used to create it.
However, you can’t modify an existing ba file from within OFM. All you can do is build a
new one from scratch and for that, you need to have the project with the data opened.
42. This section is based on a handout sent to me by Doug Woodruff.
When you open a *.ba file, this simply replaces the BA section of the Windows registry
with the contents of the *.ba file. In this way you can regenerate the results of the BA
session without being connected to the original OFM project that the BA session was
originally built around.
The only interaction that BA has with the OFM database is when the BA setup is
configured to read the OFM tables for BA data input and when any allocated results are
written back to an OFM production table. Once the BA data is read in, the program
registry is updated with this information.
After a session is opened, the BA output window will appear showing the results of the
BA session. If you want to make changes to the BA session, you should first open the
original OFM project that contains the tables your BA session used. You can then make
changes to the session by selecting the Tools/Back Allocation/Setup… command,
make your changes, and then click OK to run the calculation and view the output
results.
Network section
A #network command in the *.ba file marks the start of the network section, which
describes the network along with its defined variables. Here is an example of the
network section with two nodes called POM1 and POM2.
#network
*header "Testing"
*header "Testing"
*header "testing"
*define FS #WELL
*methods *Oil *wellchar *kh *decline *welltest
*methods *Gas *welltest
*methods *Water *welltest
*methods *WaterInj *welltest
*methods *GasInj *welltest
*FS "POM1" *shrinkage 0.000000
WELL "POM1:01"
WELL "POM1:02"
WELL "POM1:03"
Three lines starting with *header indicate some header information for your report.
Next, the *define FS #WELL line causes the variables FS and WELL to be defined. FS is
the flow station and WELL is the level under the FS level. A *methods line then gives the
priority of each allocation method for each phase. The first method listed will have the
highest priority. The four methods are *welltest, *kh, *decline and *wellchar.
Each occurrence of *FS then denotes a new flow station. In this example there are
two. Then attached to each FS is the listing of the WELLs that are attached to that FS.
For POM1 there are four wells and for POM2 there are five wells. Volumes are
measured at the FS level and back allocated to the WELL level for each FS.
The *shrinkage factor can be multiplied by the measured volume before the
allocation is done if it is non-zero. Inside OFM BA you can only designate one shrinkage
value shared by all of the flow stations. You can designate a separate shrinkage factor
for each flow station by modifying the *.ba file and entering the shrinkage factor
accordingly for each *FS line. Using either 0.000 or 1.000 as a shrinkage factor is the
same thing since zero is interpreted as no shrinkage.
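The shrinkage rule just described can be expressed in a couple of lines. This is a sketch of the stated behavior, with illustrative names: a non-zero factor scales the measured volume before allocation, and 0.0 behaves exactly like 1.0 (no shrinkage).

```python
# Apply the *shrinkage factor as described above: zero means
# "no shrinkage", any other value multiplies the measured volume.
def apply_shrinkage(measured, shrinkage):
    return measured if shrinkage == 0.0 else measured * shrinkage

print(apply_shrinkage(10000.0, 0.0))   # -> 10000.0 (no shrinkage)
print(apply_shrinkage(10000.0, 0.95))  # -> 9500.0
```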
As you can see, there are quite a few commands in this file, but you don’t need to learn
them all. The BA module itself initially generates the file, which you can then edit just
to fine-tune the job before you run it again.
Together with the job control section (#network) there are also data sections with all
the numbers extracted from the project that are needed for the job. That is the reason
you don’t need the original project to run a job. All the information is contained in the
*.ba file.
#network
*header "Allocating using"
*header "Decline Data"
*define FS #WELL
*methods *oil *wellchar *kh *decline *welltest
*methods *Gas *welltest
*methods *Water *welltest
*methods *WaterInj *welltest
*methods *GasInj *welltest
*FS "POM1" *shrinkage 0.000000
WELL "POM1:01"
WELL "POM1:02"
WELL "POM1:03"
WELL "POM1:04"
#end
#control
*Daily
Conclusions
We can’t leave this chapter without some final conclusions. First of all, most
companies are already doing back allocation with different tools and then passing the
results on to reservoir engineers. Having this module built into the program makes the
process a lot more convenient because it is very easy to access the needed data [43] and
control the quality of the results [44]. Also, the generated results can be loaded directly
without intermediate steps. It could all be done in one place.
The methods available are quite a few and they all calculate theoretical values before
the final back allocation. This is very important, because you can supply different data
and get experimental results. For instance, you could use any rating number (not an oil
rate) in the well test method and still use the algorithm. The theoretical values will not
mean much because they do not come from a real rate, but they will be used to allocate
the measured production correctly, using your magic rating numbers.
Finally, the back allocation module is flexible (it allows you to use different types of data
and with different formats) and simple. However, if you need to customize your run
further, you can always dig into the *.ba file for fine-tuning.
43. The input data is usually already loaded to the OFM projects.
44. There are several ways of performing quality control of the results, from a simple plot to
compare the new value with the previous ones, to the possibility of writing user functions to look
for data anomalies.