Tuesday, 12 April 2011

Oracle Apps R12 Schema Analysis

Continuing my series on analysis, here is some of the initial analysis I did after installing Oracle Applications R12.  Much of the analysis involves queries against the Oracle data dictionary under the covers.  This analysis is on the sample “database”, VIS, which is actually composed of many schemas.
I also did a fair bit of analysis from a DBA standpoint: tablespaces, logs, processes, DBMS_JOBS, etc., as well as analysis and tuning on the Unix and middle-tier side.  I’ll try to keep it all organized in different posts.
When I start a new position, these are the types of queries I might run on the schemas to get a feel for the system.  How was it built?  Does it use primary keys, triggers, or external code to enforce data integrity?  What are the most important tables?  What are the potential performance issues?  And so on.
Many of these queries you won’t find in GUIs like SQL Developer or Toad.  You have to write them yourself.  It pays big dividends to know the data dictionary!
(Sorry for the odd font. WordPress has not made changing the font for select sets of text easy or obvious. If anyone knows the secret of how to change fonts in WordPress à la WordPad, please let me know.)

------------------------------------------------
------------------------------------------------

How many tables are there?

select count(*)
from dba_tables

COUNT(*)
----------
35637

------------------------------------------------
------------------------------------------------

What are the schemas, and how many OBJECTS in each schema?

select  OWNER
, count(*)
from dba_objects
group by owner
order by owner

OWNER                            COUNT(*)
------------------------------ ----------
ADS10GEUL_US                          266
AHL                                   632
AK                                    191
AMS                                  1540
AMV                                   188
AMW                                   500
AOLDEMO                                12
AP                                   1132
APPLSYS                              4122
APPLSYSPUB                             25
APPS                               193095
AR                                   3501
ASF                                    10
ASG                                   181
ASL                                    59
ASN                                     4
ASO                                   297
ASP                                    14
AST                                    38
AX                                    145
AZ                                    107
B2B                                  3812
BAM                                    59
BEN                                  2796
…

TSMSYS                                  3
UDDISYS                               189
VEA                                    48
WCRSYS                                 30
WIP                                   356
WIRELESS                             1206
WKSYS                                 370
WK_TEST                                47
WMS                                   413
WPS                                     7
WSH                                   517
WSM                                   133
XDB                                   880
XDO                                    64
XDP                                   295
XLA                                  1214
XLE                                    85
XNB                                    13
XNP                                   265
XTR                                   545
ZFA                                     4
ZPB                                  3636
ZSA                                     4
ZX                                    383

271 rows selected.

------------------------------------------------
------------------------------------------------

What are the schemas, and how many TABLES in each schema?
Order by the count, to see which schemas have the most tables,
as a proxy for importance.

select owner
, count(*)
from dba_tables
group by owner
order by 2

...
FII                                   589
JTF                                   651
AR                                    686
MSC                                   701
BIS                                   707
PA                                    720
APPLSYS                               937
SYS                                   955
BSC                                   986
HR                                   1300
APPS                                 1492
IGS                                  1636
FEM                                  1715
GL                                   2128

229 rows selected.

From the number of tables, the important schemas are:
HR, APPS, IGS, FEM, GL

------------------------------------------------
------------------------------------------------

What are the most common schemas and object types?

select owner, object_type
, count(*)
from dba_objects
group by owner, object_type
order by 3

...

APPS                           TABLE                     1492
BSC                            INDEX                     1513
ORDSYS                         JAVA CLASS                1525
B2B                            TRIGGER                   1528
IGS                            TABLE                     1636
FEM                            TABLE                     1715
BEN                            INDEX                     1819
GL                             TABLE                     2128
APPS                           INDEX                     2157
FEM                            INDEX                     2422
EGO                            INDEX SUBPARTITION        2514
HR                             INDEX                     2799
EGO                            INDEX PARTITION           2882
IGS                            INDEX                     3101
APPS                           TYPE                      3775
SYS                            VIEW                      3776
GL                             INDEX                     4036
APPS                           TRIGGER                   4183
SYS                            JAVA CLASS               20335
APPS                           VIEW                     32991
PUBLIC                         SYNONYM                  35181
APPS                           SYNONYM                  42331
APPS                           PACKAGE BODY             51284
APPS                           PACKAGE                  52328

1656 rows selected.

The schema APPS uses a LOT of packages. Only 1,492 tables, but 51,284 package bodies (and 52,328 package specs)!
Over 34 packages per table!

APPS also has 3775 TYPEs!  I've never seen so many TYPEs before.

------------------------------------------------
------------------------------------------------

How many indexes are unique vs. not?

select UNIQUENESS
, count(*)
from dba_indexes
group by UNIQUENESS

UNIQUENES   COUNT(*)
--------- ----------
UNIQUE         31216
NONUNIQUE      30661

In the entire database, only about half of all the indexes are Unique.

------------------------------------------------
------------------------------------------------

By schema, how many indexes are Unique?

select owner, UNIQUENESS
, count(*)
from dba_indexes
group by owner, UNIQUENESS
order by count(*)

...
BIS                            UNIQUE           513
AMS                            NONUNIQUE        521
JTF                            UNIQUE           555
AR                             UNIQUE           570
OKL                            NONUNIQUE        572
PA                             UNIQUE           582
BIS                            NONUNIQUE        626
JTF                            NONUNIQUE        700
PA                             NONUNIQUE        732
GL                             UNIQUE           743
B2B                            UNIQUE           770
AR                             NONUNIQUE        821
APPS                           UNIQUE           822
SYS                            UNIQUE           855
APPLSYS                        UNIQUE          1018
HR                             UNIQUE          1305
BEN                            NONUNIQUE       1355
BSC                            UNIQUE          1386
IGS                            NONUNIQUE       1464
APPS                           NONUNIQUE       1485
HR                             NONUNIQUE       1559
IGS                            UNIQUE          1651
FEM                            UNIQUE          2218
GL                             NONUNIQUE       3293

437 rows selected.

The important schemas, APPS, HR, and GL, rely heavily on NONUNIQUE indexes.

------------------------------------------------
------------------------------------------------

By schema, what is the count of Unique vs. Non-Unique indexes,
for schemas where the unique index count is greater than the non-unique count?

SELECT  u.owner
, u.the_count as unique_count
, nonunique.the_count  as nonunique_count
from
(
select owner, UNIQUENESS
, count(*) AS the_count
from dba_indexes
where   uniqueness = 'UNIQUE'
group by owner, UNIQUENESS
)  U ,
(
select owner, UNIQUENESS
, count(*) AS the_count
from dba_indexes
where   uniqueness = 'NONUNIQUE'
group by owner, UNIQUENESS
) nonunique
where   u.owner = nonunique.owner
and     u.the_count > nonunique.the_count
order by u.owner

OWNER                          UNIQUE_COUNT   NONUNIQUE_COUNT

....

PRP                                      52              18
QPR                                     100              48
QRM                                      13               5
RE                                       17               5
RG                                       68              19
RRS                                      32              24
SCOTT                                    28              21
SIZZLE_DDR                                2               1
SSP                                       9               6
SYS                                     855             253
SYSTEM                                  179             126
VEA                                      14               7
WCRSYS                                    9               7
WIRELESS                                337             173
WKSYS                                    67              12
WK_TEST                                  14              10
WSM                                      36              31
XDB                                     393              19
XDO                                      23               1
XDP                                     101              52
XLA                                     222             125
XLE                                      26              17
XNP                                      83              65
ZPB                                     236              58
ZX                                      154              38

120 rows selected.

------------------------------------------------
------------------------------------------------

What is the percentage of non unique indexes by schema, and
which schemas have the highest percent of non-unique indexes?

SELECT  u.owner
, u.the_count as unique_count
, nonunique.the_count  as nonunique_count
, nonunique.the_count / (u.the_count + nonunique.the_count) * 100
as  pct_nonunique
from
(
select owner, UNIQUENESS
, count(*) AS the_count
from dba_indexes
where   uniqueness = 'UNIQUE'
group by owner, UNIQUENESS
)  U ,
(
select owner, UNIQUENESS
, count(*) AS the_count
from dba_indexes
where   uniqueness = 'NONUNIQUE'
group by owner, UNIQUENESS
) nonunique
where   u.owner = nonunique.owner
order by u.owner

OWNER                          UNIQUE_COUNT NONUNIQUE_COUNT PCT_NONUNIQUE

...
OZF                                     125             335     72.826087
ISC                                      54             149    73.3990148
OPI                                      71             196    73.4082397
BEN                                     467            1355    74.3688255
POA                                     106             318            75
RLM                                       8              24            75
AST                                       3              10    76.9230769
PJI                                      27              92    77.3109244
GL                                      743            3293    81.5906838 <-
IES                                      19              95    83.3333333
BIV                                       3              17            85
DDD                                       2              18            90
BIX                                       3              82    96.4705882

212 rows selected.

The important schema, GL, has 81% non-unique indexes.

------------------------------------------------
------------------------------------------------

Which tables have no constraints at all???

Select owner || '.' || table_name as no_constraints
From   all_tables
Where  owner <> 'SYS'
And    owner <> 'SYSTEM'
Group by owner || '.' || table_name
MINUS
/*  Set of all tables with any kind of constraint  */
Select owner || '.' || table_name as no_constraints
From   all_constraints
Where  owner <> 'SYS'
And    owner <> 'SYSTEM'
Group by owner || '.' || table_name

...
ZPB.ZPBDATA2_EXCPT_T
ZPB.ZPBDATA42_EXCPT_T
ZPB.ZPBDATA43_EXCPT_T
ZPB.ZPBDATA62_EXCPT_T
ZPB.ZPBDATA82_EXCPT_T
ZPB.ZPBDATA_EXCPT_T
ZPB.ZPB_BUSAREA_VALIDATIONS
ZPB.ZPB_CURRENT_TIME_TEMP
ZPB.ZPB_DC_VAL_RESULTS_GT
ZPB.ZPB_WF_INACTIVE_USERS_GT
ZX.ZX_DATA_UPLOAD_INTERFACE
ZX.ZX_JURISDICTIONS_GT
ZX.ZX_PTNR_NEG_TAX_LINE_GT
ZX.ZX_TAX_DIST_ID_GT
ZX.ZX_TEST_API_GT

6648 rows selected.

6,648 tables have no constraints at all: no PK, FK, NOT NULL, or check constraints.

------------------------------------------------
------------------------------------------------

Which tables have no foreign keys at all???

Select d1.owner || '.' || d1.table_name as table_owner
From   all_tables d1
Where  d1.owner <> 'SYS'
And    d1.owner <> 'SYSTEM'
Group by d1.owner || '.' || d1.table_name
minus
/*  Tables whose keys are referenced by others, i.e. parents  */
Select A1.owner || '.' || A1.table_name as table_owner
From   all_constraints  A1,
       all_constraints  B1,
       all_cons_columns C1
Where  A1.constraint_name = B1.r_constraint_name
And    A1.owner = B1.r_owner
And    A1.constraint_type IN ('P', 'U')
And    C1.table_name = B1.table_name
And    C1.constraint_name = B1.constraint_name
And    C1.owner = B1.owner
And    C1.owner <> 'SYS'
And    C1.owner <> 'SYSTEM'
Group by A1.owner || '.' || A1.table_name
minus
/*  Tables that have foreign keys, i.e. children  */
Select B1.owner || '.' || B1.table_name as table_owner
From   all_constraints  B1,
       all_cons_columns C1
Where  B1.r_constraint_name = C1.constraint_name
And    B1.r_owner = C1.owner
And    C1.owner <> 'SYS'
And    C1.owner <> 'SYSTEM'
And    B1.constraint_type = 'R'
Group by B1.owner || '.' || B1.table_name

...
XTR.XTR_JOURNALS
XTR.XTR_JOURNAL_ENTRY_ACTIONS
XTR.XTR_MARKET_PRICES
XTR.XTR_MASTER_CURRENCIES
XTR.XTR_MIRROR_DDA_LIMIT_ROW
XTR.XTR_MIRROR_DDA_LIM_ROW_TMP
XTR.XTR_MTS_RECORDS
XTR.XTR_PARTY_DEFAULTS
XTR.XTR_PARTY_INFO
XTR.XTR_PORTFOLIOS
XTR.XTR_POSITION_HISTORY
XTR.XTR_PRODUCT_TYPES
XTR.XTR_PRO_PARAM
XTR.XTR_RATE_SETS
XTR.XTR_REVALUATION_DETAILS
XTR.XTR_REVALUATION_RATES
XTR.XTR_ROLLOVER_TRANSACTIONS
XTR.XTR_STANDING_INSTRUCTIONS
XTR.XTR_TAX_BROKERAGE_RATES
XTR.XTR_TAX_BROKERAGE_SETUP

3540 rows selected.
Elapsed: 00:02:56.38

------------------------------------------------
------------------------------------------------

Which tables have no indexes at all???

Select d1.owner || '.' || d1.table_name as table_owner
From   all_tables d1
Where  d1.owner <> 'SYS'
And    d1.owner <> 'SYSTEM'
Group by d1.owner || '.' || d1.table_name
MINUS
Select table_owner || '.' || table_name
From   all_indexes
Where  table_owner <> 'SYS'
And    table_owner <> 'SYSTEM'

…

ZPB.ZPB_UNIV_DIMENSION_GROUPS
ZPB.ZPB_UNIV_HIERARCHIES
ZPB.ZPB_VALIDATION_TEMP_DATA
ZPB.ZPB_WF_INACTIVE_USERS_GT
ZX.ZX_COMPOUND_ERRORS
ZX.ZX_COMPOUND_ERRORS_T
ZX.ZX_DISTCCID_DET_FACTS_GT
ZX.ZX_DISTRIBUTION_LINES_GT
ZX.ZX_MRC_GT
ZX.ZX_PTNR_NEG_LINE_GT
ZX.ZX_PTNR_NEG_TAX_LINE_GT
ZX.ZX_SIM_PROCESS_RESULTS
ZX.ZX_SIM_RULES_B
ZX.ZX_SIM_RULES_TL
ZX.ZX_SIM_RULE_CONDITIONS
ZX.ZX_TAX_DIST_ID_GT
ZX.ZX_TAX_PRIORITIES_T
ZX.ZX_TAX_RELATIONS_T
ZX.ZX_TEST_API_GT
ZX.ZX_UPDATE_CRITERIA_RESULTS
ZX.ZX_VALDN_STATUSES_GT

7225 rows selected.

------------------------------------------------
------------------------------------------------

List tables that have NO PRIMARY KEYS!

Select tables.owner || '.' || tables.table_name as table_owner
From   all_tables tables,
       (Select owner, table_name, constraint_type, constraint_name
        From   all_constraints
        Where  constraint_type = 'P'
       ) constr
Where  tables.owner = constr.owner (+)
And    tables.table_name = constr.table_name (+)
And    tables.owner <> 'SYS'
And    tables.owner <> 'SYSTEM'
And    constr.owner IS NULL
And    constr.table_name IS NULL
Order by 1

...
ZX.ZX_SIM_RULES_TL
ZX.ZX_SIM_RULE_CONDITIONS
ZX.ZX_SIM_TRX_DISTS
ZX.ZX_SRVC_SBSCRPTN_EXCLS
ZX.ZX_SRVC_SUBSCRIPTIONS
ZX.ZX_SRVC_TYP_PARAMS
ZX.ZX_STATUS_B
ZX.ZX_STATUS_TL
ZX.ZX_SUBSCRIPTION_DETAILS
ZX.ZX_SUBSCRIPTION_OPTIONS
ZX.ZX_SUMMARY_TAX_LINES_GT
ZX.ZX_TAXES_B
ZX.ZX_TAXES_TL
ZX.ZX_TAX_DIST_ID_GT
ZX.ZX_TAX_PRIORITIES_T
ZX.ZX_TAX_RELATIONS_T
ZX.ZX_TEST_API_GT
ZX.ZX_TRANSACTION
ZX.ZX_TRANSACTIONS_GT
ZX.ZX_TRANSACTION_LINES
ZX.ZX_TRANSACTION_LINES_GT
ZX.ZX_TRX_HEADERS_GT
ZX.ZX_TRX_LINE_APP_REGIMES
ZX.ZX_TRX_PRE_PROC_OPTIONS_GT
ZX.ZX_TRX_TAX_LINK_GT
ZX.ZX_UPDATE_CRITERIA_RESULTS
ZX.ZX_VALDN_STATUSES_GT
ZX.ZX_VALIDATION_ERRORS_GT

30173 rows selected.

Of 35K tables, over 30K had no primary key!
This totally blew me away!!

------------------------------------------------
------------------------------------------------

Count of tables with no PK, by owner:

Select tables.owner
,      count(*)
From   all_tables tables,
       (Select owner, table_name, constraint_type, constraint_name
        From   all_constraints
        Where  constraint_type = 'P'
       ) constr
Where  tables.owner = constr.owner (+)
And    tables.table_name = constr.table_name (+)
And    tables.owner <> 'SYS'
And    tables.owner <> 'SYSTEM'
And    constr.owner IS NULL
And    constr.table_name IS NULL
Group by tables.owner
Order by 2

OWNER                            COUNT(*)
------------------------------ ----------
TSMSYS                                  1
MWA                                     1
HCC                                     1
DISCOVERER5                             1
WPS                                     1
INTERNET_APPSERVER_REGISTRY             1
WCRSYS                                  1
SSP                                     1
CTXTEST                                 2
ASP                                     2
ASF                                     2
WK_TEST                                 3
EDWEUL_US                               3
OUTLN                                   3
HERMAN                                  3
EUL_US                                  3
ADS10GEUL_US                            3
DEMO                                    4
OCA                                     4
XNB                                     4
SCOTT                                   4
DCM                                     4
…
AR                                    633
BIS                                   703
PA                                    718
APPLSYS                               753
BSC                                   986
APPS                                 1391
IGS                                  1630
FEM                                  1695
GL                                   2124

224 rows selected.

The important schemas, APPS, and GL have thousands of tables with no PK.

----------------------------------------------------
----------------------------------------------------

For each schema, the ratio of tables with no PK

select alltab.owner
,      nopk.nopk_count
,      alltab.the_count
,      (nopk.nopk_count / alltab.the_count) * 100 as nopkratio
from
(
  select owner
  ,      count(*) as the_count
  from   dba_tables
  group by owner
) alltab
,
(
  Select tables.owner
  ,      count(*) as nopk_count
  From   dba_tables tables,
         (Select owner, table_name, constraint_type, constraint_name
          From   all_constraints
          Where  constraint_type = 'P'
         ) constr
  Where  tables.owner = constr.owner (+)
  And    tables.table_name = constr.table_name (+)
  And    tables.owner <> 'SYS'
  And    tables.owner <> 'SYSTEM'
  And    constr.owner IS NULL
  And    constr.table_name IS NULL
  Group by tables.owner
) nopk
where alltab.owner = nopk.owner
order by 1

OWNER                          NOPK_COUNT  THE_COUNT  NOPKRATIO
------------------------------ ---------- ---------- ----------
ADS10GEUL_US                            3         48       6.25
AHL                                   163        164 99.3902439
AK                                     47         59 79.6610169
AMS                                   371        375 98.9333333
AMV                                    32         42 76.1904762
AMW                                   176        176        100
AOLDEMO                                 6          6        100
AP                                    327        327        100
APPLSYS                               753        937 80.3628602
APPS                                 1391       1492  93.230563   <-
AR                                    633        686 92.2740525   <-
ASF                                     2          2        100
ASG                                    49         60 81.6666667
ASL                                    21         24       87.5
ASO                                    63         69 91.3043478
ASP                                     2          2        100
AST                                    11         11        100

…

FV                                    171        171        100
GCS                                    84         84        100
GL                                   2124       2128 99.8120301  <-
GMA                                    45         47 95.7446809
…

HERMAN                                  3          3        100
HR                                    586       1300 45.0769231
HRI                                   326        326        100

…

WK_TEST                                 3         13 23.0769231
WMS                                   132        133 99.2481203
WPS                                     1          1        100
WSH                                   128        128        100
WSM                                    53         53        100
XDB                                    17         22 77.2727273
XDO                                    28         28        100
XDP                                    73         78 93.5897436
XLA                                   213        216 98.6111111
XLE                                    22         22        100
XNB                                     4          4        100
XNP                                    41         50         82
XTR                                   216        216        100
ZPB                                   200        200        100
ZX                                    127        127        100

224 rows selected.

So many schemas have no primary keys in any of their tables at all!!!

Over 90% of the tables in the important schemas, APPS, AR, GL, have no primary keys.

------------------------------------------------------------
------------------------------------------------------------

List all the triggers:

Select table_owner || '.' || table_name, trigger_name
from all_triggers
order by 1, 2

...
XTR.XTR_PARTY_INFO                 XTR_AU_PARTY_INFO_T
XTR.XTR_PORTFOLIOS                 XTR_AU_PORTFOLIOS_T
XTR.XTR_POSITION_HISTORY           XTR_AIUD_POSITION_HIS_T
XTR.XTR_PRODUCT_TYPES              XTR_AU_PRODUCT_TYPES_T
XTR.XTR_PRO_PARAM                  XTR_AU_PRO_PARAM_T
XTR.XTR_RATE_SETS                  XTR_AU_RATE_SETS_T
XTR.XTR_REVALUATION_DETAILS        XTR_AU_REVALUATION_DETAILS_T
XTR.XTR_REVALUATION_RATES          XTR_AU_REVALUATION_RATES_T
XTR.XTR_ROLLOVER_TRANSACTIONS      XTR_AID_ROLLOVER_TRANS_T
XTR.XTR_ROLLOVER_TRANSACTIONS      XTR_AU_ROLLOVER_TRANSACTIONS_T
XTR.XTR_ROLLOVER_TRANSACTIONS      XTR_BI_ROLLOVER_TRANSACTIONS_T
XTR.XTR_ROLLOVER_TRANSACTIONS      XTR_BU_ROLLOVER_TRANSACTIONS_T
XTR.XTR_STANDING_INSTRUCTIONS      XTR_AU_STANDING_INSTRUCTIONS_T
XTR.XTR_TAX_BROKERAGE_RATES        XTR_AU_TAX_BROKERAGE_RATES_T
XTR.XTR_TAX_BROKERAGE_SETUP        XTR_AU_TAX_BROKERAGE_SETUP_T

6778 rows selected.

------------------------------------------------------------
------------------------------------------------------------

List Triggers by Schema/Owner:

Select owner
, count(*)
from dba_triggers
group by owner
order by 2

OWNER                            COUNT(*)
------------------------------ ----------
SCOTT                                   1
FLOWS_FILES                             1
INV                                     1
ODM                                     1
OE                                      1
MGDSYS                                  1
D4OSYS                                  1
WK_TEST                                 1
HERMAN                                  1
RE                                      2
SYSTEM                                  2
HR                                      3
LBACSYS                                 4
PTG                                     4
DEMO                                    6
AR                                      7
SYS                                     9
ORABPEL                                14
JTF                                    14
XDB                                    21
CFD                                    26
MASTER                                 27
WKSYS                                  36
ORASSO                                 39
MDSYS                                  47
OLAPSYS                                48
MOBILEADMIN                            81
WIRELESS                              108
FLOWS_010500                          159
FLOWS_020000                          182
FLOWS_030000                          219
B2B                                  1528
APPS                                 4183

The important schema, APPS, uses 4183 triggers on its 1492 tables.
2.8 triggers per table, on average.

------------------------------------------------------------
------------------------------------------------------------

Initial Conclusions:
The Oracle Apps R12 sample database, VIS, uses hardly any primary keys.
Many non-unique indexes.
And a LOT of packages and triggers.
Lots of potential for data integrity and tuning issues.
Do the analysis and see for yourself!


Why I Like The Merge Statement

November 22, 2010
Over the past few years, I’ve stopped using the Update statement and started using the Merge statement for updates instead.  While many use Merge to do both inserts and updates at the same time, you can also use it for updates only, or inserts only.
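As a quick sketch of the insert-only case (the staging table new_emp here is a hypothetical stand-in; assume it holds rows not yet in emp):

```sql
-- Hypothetical example: insert-only Merge from a staging table.
-- new_emp is an assumed table keyed on empno, like emp.
Merge into emp e
Using (Select empno, ename, job, sal, deptno
       From   new_emp) src
On    (e.empno = src.empno)
When not matched then
  Insert (empno, ename, job, sal, deptno)
  Values (src.empno, src.ename, src.job, src.sal, src.deptno)
```

With no WHEN MATCHED clause, existing rows are simply left alone.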
Say I want to update some fields in the SCOTT schema.  Traditionally, I would first do the analysis by looking at the data.
Select
e.EMPNO
,e.ENAME
,e.JOB
,e.SAL
,e.COMM
,e.DEPTNO
,d.DNAME
,d.LOC
From  emp  e
Join  dept d
On    e.DEPTNO = d.DEPTNO
Where d.dname = 'SALES'
And   e.comm is not null

EMPNO ENAME      JOB              SAL       COMM     DEPTNO DNAME          LOC
----- ---------- --------- ---------- ---------- ---------- -------------- -------------
 7499 ALLEN      SALESMAN        1600        375         30 SALES          CHICAGO
 7844 TURNER     SALESMAN        1500          0         30 SALES          CHICAGO
 7654 MARTIN     SALESMAN        1250       1750         30 SALES          CHICAGO
 7521 WARD       SALESMAN        1250        625         30 SALES          CHICAGO

4 rows selected.
Eventually, I get a Select statement showing the exact rows I want to update.
If you know that the deptno was 30, you could use the traditional update statement in this way:
Update emp
Set comm = comm  * 1.25
Where comm is not null
And deptno = 30
/
4 rows updated.
Or, if you wanted to use a subquery:
Update emp
Set   comm = comm * 1.25
Where comm is not null
And   deptno in
      (select deptno
       from   dept
       where  dname = 'SALES'
      )
or
Update emp e
Set   comm = comm * 1.25
Where comm is not null
And   exists
      (select deptno
       from   dept d
       where  e.deptno = d.deptno
       and    dname = 'SALES'
      )
The Select needs to be rewritten into an Update statement, and you need to reproduce exactly the same Where clause.
In a lot of systems, things get complex, and you might actually have to write PL/SQL code to find different values and store them in multiple variables, instead of using a single SQL statement.
Of course, the classic error with an Update statement is using the wrong Where clause, or no Where clause at all.
Update emp
Set comm = comm  * 1.25
14 rows updated.
This will update all rows in the table.  Probably not what you want to do.
------------------------
Using the Merge statement, you can take the Select statement you used in your  analysis, and put it directly into the Merge statement.
You can include as many fields as you like in the Select clause, including fields that are not part of the primary key, or Where clause.
You can also include multiple tables in the Select clause.  In this case, Dept and Emp.
Merge into emp e
Using
(
Select
e.EMPNO
,e.ENAME
,e.JOB
,e.MGR
,e.HIREDATE
,e.SAL
,e.COMM
,e.DEPTNO
,d.DNAME
,d.LOC
From emp  e
Join   dept d
On e.DEPTNO = d.DEPTNO
Where d.dname = 'SALES'
And e.comm is not null
)  src
on   ( e.empno = src.empno)
When matched then update
Set e.comm = src.comm * 1.25
4 rows merged.
But the big advantage, the one I really like, is that the Merge statement forces you to do a clean join.
The join condition between the Select clause and the table you are updating MUST be correct.
For example, suppose you use a Select statement that gives a cartesian product…
Select
e.EMPNO
,e.ENAME
,e.JOB
,e.MGR
,e.HIREDATE
,e.SAL
,e.COMM
,e.DEPTNO
,d.DNAME
,d.LOC
From emp  e
,   dept d
Where e.comm is not null
...
7844 TURNER     SALESMAN        7698 1981-SEP-08 00:00:00       1500          0         30 ACCOUNTING     NEW YORK
7844 TURNER     SALESMAN        7698 1981-SEP-08 00:00:00       1500          0         30 RESEARCH       DALLAS
7844 TURNER     SALESMAN        7698 1981-SEP-08 00:00:00       1500          0         30 SALES          CHICAGO
7844 TURNER     SALESMAN        7698 1981-SEP-08 00:00:00       1500          0         30 OPERATIONS     BOSTON

16 rows selected.
Merge into emp e
Using
(
Select
e.EMPNO
,e.ENAME
,e.JOB
,e.MGR
,e.HIREDATE
,e.SAL
,e.COMM
,e.DEPTNO
,d.DNAME
,d.LOC
From emp  e
,   dept d
Where e.comm is not null
)  src
on   ( e.empno = src.empno)
When matched then update
Set e.comm = src.comm * 1.25
*
ERROR at line 1:
ORA-30926: unable to get a stable set of rows in the source tables
When the rows don’t line up exactly, you get the “unable to get a stable set of rows” error.
So, it forces you to do your data analysis and line the rows up correctly.  I have found this to be very useful, especially when you are dealing with bizarre datasets that don’t have a primary key, or even an effective key.
Once, this helped catch an error that would otherwise have gone unnoticed, where too many rows would have been updated.  Good one!
Hope this is useful.

Failure To Do Data Analysis

October 27, 2010
Databases are all about data.
So it’s totally bizarre that so many people working with databases never look at the data!
It’s really so fundamental that it’s hard to make a good analogy.  Imagine a tailor who makes your clothes but never looks at the fabric!  Can you think of a better analogy?
I’ve met programmers who charged ahead and wrote a thousand lines of code without ever looking at the data in the tables, the input, or the output!
———-
On one system that I worked on, a programmer wrote a java program that would:  take an input file, read it, find the associated row in the table, and flag a column in the table.  He told me on a Friday that he had run it, and it ran fine.  And on Monday, he told me the same thing.
I was curious as to what the results were, so I did a quick GROUP BY on the column.  A strange conversation ensued.
Me:  Hey, this field is NULL in every row of the table.
Java programmer:  Why are they NULL?
Me:  I’m asking you, it’s your program.  Why are they NULL?
Java programmer: I don’t know.
Me: Did you run a GROUP BY after you ran it to see how many got updated?
Java programmer: No.
Me:  How about before you ran the program?
Java programmer: No.
Me:  Do you have any debug in here to point out how many rows you are updating as you go?
Java programmer:  No.
Me:  What fields are you using to search for the row?
Java programmer: I don’t know, whatever they gave me.
Me:  How do you know you are not updating 100,000 rows instead of a single row when you run the update statement?
Java programmer: I don’t know.
Me:  We got the data in, row by row.   We are going to be flagging them, row by row.  You need to get a single row when you run the update statement.
Java programmer:  Really?
I found out what the fields were that he was using to search on.  I then did a GROUP BY, HAVING COUNT(*) > 1 to see if the combination was unique or not.   Fortunately, it was.  So, why were they not updating?
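Those two sanity checks are quick to run.  Here is a sketch, with hypothetical table and column names:

```sql
-- Before and after the program runs: how many rows hold each flag value?
select processed_flag, count(*)
from   customer_orders
group  by processed_flag;

-- Is the search-key combination unique?  Any rows returned are duplicates.
select order_no, line_no, count(*)
from   customer_orders
group  by order_no, line_no
having count(*) > 1;
```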
At this point, it went well beyond me advising, or helping him.  This was what I call, doing his work for him!
The guy had actually been programming for at least 10 years, and had the title, Senior Java Programmer.
———-
At the same job, there was another Senior Programmer who had been also working for at least 10 years.  He was writing a program that read the database for campaign responses, and created an output file, which was sent to an outside company.
We started a second campaign, with fewer responses.  The manager pointed out to me that they were suddenly getting strange results in the output file.  Instead of ones and zeros (0, 1), they found letters.  He read them to me, “N-U-L-L”  !!!!
The programmer had said there was a problem with the database.  Hmm.  A senior programmer who did not understand the concept of NULL.
———-
At another job, I made a package for the java team.  It ran fine.  But one guy kept bothering me.
Java programmer:  Hey, your program isn’t working.
Giving him the benefit of the doubt.  Really?  Show me.
He showed me his program.  Note, I said, his program.  Not, the input data.
Over an hour later, he saw that the problem was in fact his program.  He had failed to look at what the data output from the package was giving him, for the input into his program.
———-
So many people, like each in these examples, completely fail to look at the underlying data directly, say with SQLPLUS.  They don’t know what the data looks like, either before or after they run their program.  They don’t know what the data in the various columns looks like.  Or even if there is any data in the table at all.  I’ve seen this with java programmers, project managers, business analysts, DBAs, and even, get this, database developers!
To build a meaningful system, you MUST look at, and understand your data!  It’s really quite simple.
I’ll write about some of my typical data analysis techniques in subsequent posts.

Poor Or No Error Trapping

October 19, 2010
More Architecture Mistakes To Avoid
One thing that should be blatantly obvious to anyone who has ever had to fix some code on the weekend, or in the middle of the night, is the importance of excellent error trapping and logging routines.
When you need to fix something, now, you don’t want to be reverse engineering the code or the whole system.  You want to know what happened, and what to do.  Now.
You need what I call, instantly digestible error messages.  I don’t want to hunt it. I don’t want to kill it.  I don’t want to clean it.  I don’t want to cook it.  I just want to eat it.  I want both the cause, and the solution, in the same message.
The error trapping routine must be totally reliable.   Let me list some strange things that I’ve seen in production.
Complex and Useless:
I mentioned in another post, that one place actually split the error lookup table into two tables, vertically!  If that wasn’t enough, there was no way that errors that hadn’t previously been imagined, would ever make it to the error log.  The only errors that could make it to the error log were program logic errors, not underlying Oracle errors.
Any DBA should get chills at the thought.  There are all kinds of strange errors that Oracle can suddenly throw at you, such as a mysterious ORA-00600 error.  There is no way in the world that you can expect all the unexpected.
The third problem with this alleged error trapping routine was that they didn’t even allow adding a DBMS_OUTPUT debug statement in the middle of the program to say what was happening.
The fourth problem with this “system”, was that there were in fact, get this, SEVEN OBJECTS to do the error logging.  I can’t remember them all, but I do remember counting seven of them.  Two lookup tables.  At least one logging table.  A number of procedures.  Nested deeply.
I concluded it wasn’t possible to add more complexity.  How about you?  Can you think of any more complexity to add to the error trapping?
Nesting Error Trapping Routines, aka Not Doing Structured Programming:
In my early days I once went for an interview.  Until that time, I’d mostly worked with Pro*C, not PLSQL.  During the interview, they kept asking me, “What about exceptions?”.  I told them that I used Pro*C’s WHENEVER SQLERROR, and how I used it.  But they kept asking the question, which I thought was pretty odd.
When I returned, I looked up PLSQL’s EXCEPTION error handling.  It was pretty simple.  I kept wondering, did I miss something?
I didn’t get the job.  But later I realized what their problem was.  It wasn’t how to trap a PLSQL error.  It was that they were not doing standard structured programming.  They were adding more and more code, into the error handling routines themselves.  Nesting them deeply into spaghetti, without GOTO statements.   Was this procedure called from the main body of procedure x, or the error trapping routine of procedure y?  Or the error trapping routine of procedure z?  Or something else?  No wonder they had problems.
Writing The Error Routine So It Itself Crashes:
At a big financial firm that I was at, there was a lot of really bad code.  One program kept bombing.  Not only that, but the error trapping routine itself, bombed, as well as the procedure.
At the beginning of the PLSQL routine, there was:
Savepoint A;
Then, a procedure was immediately called, which had a commit in it.  This of course obliterated the Savepoint, created just a fraction of a second before.
There was no error trapping through the program, only a single catch all at the end:
Exception
When Others Then
Rollback to A;
Result:
When the program gave us trouble, we never knew the cause, or the solution.   We weren’t allowed to change code in production.  But moving new code through dev and test was problematic, because those environments weren’t maintained, and weren’t running.
One Catch All Error Routine For Everything:
The other mistake is just to use one error handling routine for everything.
Exception
When others then
Result:
This “works”, but usually leaves lots of questions.  Just where in the program did it bomb?  What were the parameters passed in?  What value was it processing?   What row was it on?   How do I recreate this error?  What’s the cause?  What’s the solution?
No Error Trapping At All!:
But the worst thing you can do, is to have no error trapping at all!
This was standard practice at one place. There were a number of application log tables, to log who logged in and who ran what program.  Developed by the front end team.  But there were no error trapping routines, and not even an error log table.
I asked one of the people who built it why there was no error trapping.  Her response:  “We don’t think we’ll get errors”!!!!  I said, “What are you, psychic!?”
Result:
The errors were not handled cleanly. The programs just bombed.  Constant reverse engineering of the crappy code every time there was an error.  Poking through both code, and input data.  Lots of wasted people time.
Correction:
Learn error trapping, and do it.   Create reliable error handling routines.  Do not put more and more complexity on this aspect which needs to be simple and reliable.
Give meaningful, instantly digestible error messages for this fast food culture.
In the error message, list what package and procedure is creating the error message, and where in the program it is.  And, the parameters passed in, the values of the variables, what row the program was on when it stopped.
Don’t have just one catch all error trapping routine.  Put error trapping around the important parts of the program, and trap the problems there.  So, if it’s really important that the big load finishes correctly before subsequent processing occurs, wrap that part up in an error trapping routine, and abort the program if it doesn’t work, before it proceeds.
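Here is a minimal sketch of that idea.  The package, table, and variable names are hypothetical, and the fragment assumes it sits inside a procedure where those variables are declared:

```sql
-- Wrap the critical load in its own handler, log enough context to make
-- the message instantly digestible, then abort before anything proceeds.
begin
  load_pkg.load_big_table( p_batch_id );
exception
  when others then
    log_pkg.log_error(
      p_location => 'LOAD_PKG.LOAD_BIG_TABLE, after the validation step',
      p_message  => 'Batch '      || p_batch_id   ||
                    ', last row ' || v_last_empno ||
                    ': '          || sqlerrm );
    raise;   -- stop here; do not let subsequent processing run on bad data
end;
```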

Using the Wrong Object

September 20, 2010
More Database Design Mistakes To Avoid
Given that you need more information in or from the database, what do you do?
Do you create one of the following?
a new query
a procedure or function to extract and calculate from existing data
a field, or series of fields.
a view
a snapshot / materialized view
a table
a schema
another database
Creating another database, when a schema would do:
Once I was asked to create another database.  I thought that this was for a completely new system.  In fact, it was closely aligned with the existing system.  All it needed was a separate schema in the same Oracle database.
Result:  Two databases to size, maintain, backup and recover.  Slower performance over database links.  After the fact, even though creating a new schema and importing the data would not have taken more than an hour or so, they still kept the dual databases!
Creating a table, when a function would do:
At another place, there were a lot of loads.  And then, reports and queries were run against the loaded data after the loads finished.  There was a “programmer” who was new to databases. Instead of sticking with this paradigm that he was already quite familiar with, he decided to be creative and do everything at once.  He created a load, and then retrieved the data in one procedure.  So, if you just wanted to retrieve the data, you had to run the load as well!
If this wasn’t enough, he also created a table for return codes.

The programmer created an unneeded Return Code Table
So, he would insert a success or failure code into a table.  And then read the table to see what the return code was.
A return code table might have made sense if the return code was needed for subsequent processing by other programs, but this was not the case.
All that was needed was a procedure that returned an OUT parameter.
Procedure_1 ( p1         in  varchar2,
              p_rtn_code out number )
Or, a function with a return code.
Function_1 ( p1 in varchar2 )
return number
Make sense?
Creating a series of tables and views when a single field would do:
At the same shop, another programmer wanted to exclude certain pieces of information (rows) from the web.
In programmer fashion, instead of consulting with me, he charged ahead and built something.  In addition to the existing table, they now wanted a second table of the rows to exclude.  And to make it work, they also made a series of three views that selected from the two tables.

The programmer incorrectly created a series of tables and views
They also speculated that “performance won’t be an issue”.  However, they had no tests or evidence to make this conclusion.
Intuitively, this was bad design just listening to them describe it.   I looked into what they were doing.
It turned out that the information they wanted to store, was actually determined by two fields. Which were already in the primary key of a particular table.  One that they were already reading from!  This was easily seen by looking at the very final view that they would use.  And by listening to what they were saying.
“So, these two fields determine this final one right?”  “Yes.”  “Why not just add this single flag field to the existing table?”  “We’ve already done it this way, and we will have to do it over.  Well, we want to do it this way.”  !!!
Correction:

The series of tables and views, reduced to a single field
All they needed was to add a simple flag field to the existing table that had the two determining fields in the primary key.   Update this field to Y, or N.  When they retrieved, add a condition to the WHERE clause:
Where The_Flag = ‘Y’
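A sketch of that correction, with hypothetical table and field names:

```sql
-- One flag field on the existing table replaces the extra table and views.
alter table web_content add ( the_flag varchar2(1) default 'N' );

-- Set the flag using the two determining fields already in the primary key.
update web_content
set    the_flag = 'Y'
where  key_field_1 = 'SOME_VALUE'
and    key_field_2 = 'OTHER_VALUE';

-- Retrieval just adds the one condition:
select *
from   web_content
where  the_flag = 'Y';
```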
The result was that they built this “design” and put it into production anyway.  It ran pretty slowly.  A few weeks later, the programmer came up to me and asked, “My program is running slow. Can you run optimizer to make it go faster?”.  However, optimizer didn’t help much, and I knew it wouldn’t.  Reading from a series of five tables and views, is of course going to be slower than reading a field from a single table.
Lessons:
Listen to your database expertise.
Use the correct object.
Rework bad designs.

Defining Recursive Relationships In Lookup Tables!

September 20, 2010
Another Database Design Mistake to Avoid
Recursive programming is used in a few places in computer science. Most notably in stack processing.  The big picture looks something like:
x program starts  (a)
which calls itself again (b)
which calls itself again (c)
then iteration c finishes
iteration b finishes
iteration a finishes
Recursive relationships are also sometimes used in database design.
The classic example is an organization that has employees.  And managers.   The managers are also employees.
Unfortunately, I’ve seen the recursion pulled out into another table.  A lookup table.


Recursive Relationship Incorrectly Defined in a Lookup Table
I’m convinced that the lookup table was created because someone learned how to normalize into first normal form.  And they came to believe that 1NF and making lookup tables is everything you need to know about database design.  However, database design is actually a pretty deep subject, and 1NF is just a start.
Correction:
Correct Way To Define a Recursive Relationship
Define recursive relationships in a single table.
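Using Oracle’s sample EMP table, the classic single-table form looks like this: MGR is simply a foreign key pointing back at EMPNO in the same table, and CONNECT BY walks the hierarchy.

```sql
-- The recursion lives in one table: mgr references empno.
alter table emp add constraint emp_mgr_fk
  foreign key ( mgr ) references emp ( empno );

-- Walk the org chart from the top (the president has a null mgr):
select lpad(' ', 2 * (level - 1)) || ename as org_chart
from   emp
start with mgr is null
connect by prior empno = mgr;
```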

Simple Rule Of Thumb To Determine the Number of Tables In A Relationship

September 19, 2010
More on Database Design Mistakes to Avoid
Here is my rule of thumb for the number of tables, for each relationship.
Relationship           Notation    Number Of Tables
Many to Many           M:N         3
One to Many            1:M         2
One to One             1:1         1
You would think this would be obvious.  But so many database people I’ve spoken to had never heard this concept.  Once I was in an interview, and mentioned this idea.  One of the interviewers took out a piece of paper, and made a note of it.
One exception would be supertypes and subtypes.  Different fields required for each subtype do not apply to the others.  I.e.
Supertype:  Phone_Call.
Subtypes:  Cell_phone_call, Land_line_call, Pay_phone_call, VOIP_Call, Skype_Call, Calling_Card_call
Roaming does not apply to a call from a land line.
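A sketch of that supertype/subtype exception, with hypothetical columns: the shared fields live in the supertype, and a field like roaming lives only in the subtype it applies to.

```sql
-- Supertype: fields common to every kind of call.
create table phone_call (
  call_id    number primary key,
  call_start date,
  duration   number );

-- Subtype: 1:1 with the supertype, holding cell-only fields.
create table cell_phone_call (
  call_id      number primary key
               references phone_call ( call_id ),
  roaming_flag varchar2(1) );
```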
Another exception would be putting large objects (LOBs) in a separate table.  Covered in another post.
Again, don’t go blindly following rules.  Read about the Dreyfus Model of Skill Acquisition.  At the other end, don’t go overboard on skepticism, speculating, and questioning everything.

Combining Identifying Data, Transaction Data, and LOBs in a Single Table



Another Database Design Mistake to Avoid
In another post, I mentioned that as a rule of thumb, if data has a 1:1 relationship, all the fields should be in the same table.
Of course, with many rules, you can go overboard.  Blindly following rules is the mark of a beginner.  Search for:
The Dreyfus Model of Skill Acquisition
Here is a caveat.
From a DBA standpoint, if the table has large objects in it, such as BLOB (Binary Large Object), CLOB (Character Large Object) or record objects, this is when I would consider splitting these into a separate table and even a separate tablespace.  This is in spite of the fact that under the covers, Oracle does move big objects into their own table.
Once I went on an interview and was told about the data design.  A java programmer insisted on putting a large object into each row in the main customer table.  This was to be used for online test results, and he wanted it object-like for his java programs.
It should have been obvious that the customer data was identification data, and the online test would be transaction data.  Even if it was a 1:1 relationship, intuitively, the test should be separated into another table.  What if the company decided to offer a new series of multiple online tests for their existing customers?  Of course the existing data model would not accommodate such a scenario.
A big problem had arisen.  For every row created, a 4 kilobyte LOB was created also.  Most of these LOBs were in fact never used.  Most of the database was empty.  Over 90% empty.
As the database grew in size (a terabyte I seem to recall), the whole system got slower and slower.  And so, in DBA fashion, they decided to implement Real Application Clusters to handle the work load.  A more complex and expensive option.
Solution:

Correction of Too Many Tables for a 1:1 Relationship, including a BLOB
My take was that they should have changed the database design.  Instead of trying to fix the Leaning Tower Of Pisa.  Only create a new row with a LOB if a test was actually taken.  The existing hardware would probably have worked fine.  Hire a data architect who understands database design.  And a java programmer who knows SQL well enough to insert a new row, and do table joins.

Database Design and the Leaning Tower of Pisa

September 14, 2010
Introducing a number of posts on database design.
A System’s Database Design is like the Foundation to a Building
Database Design is one of the most critical areas in databases.  It is like the foundation to your house.  If you have a bad foundation, you either cannot build at all.  Or, you have to do all kinds of things to compensate for the bad foundation.
A great example of a bad foundation is the leaning tower of Pisa.
The tower was not built with the right foundation for the local clay.  Three floors were built, and the tower began to lean because of the weight of the building.  Rather than tearing down the structure and building it again properly, they just decided to wait 100 years for the clay to settle.  And then they built four more floors, parallel to the ground, attempting to compensate for the bad angle of the foundation!   But more problems arose.
“In 1838, the architect Alessandro Della Gherardesca decided it was a perfectly good idea to dig out a walkway around the tower so that visitors could see its carefully crafted base. Predictably, this only exacerbated the tower’s lean. Benito Mussolini was the next to try his hand at straightening the tower. He ordered the foundation to be filled in with concrete, but the concrete sunk into the wet clay and the leaning tower continued its prolonged descent towards the ground.”
From:
“Several plans have been tried over the years to stop the tower from falling. Some of them have been almost disastrous. In 1934 an Italian engineer drilled 361 holes into the base and filled them with mortar. The tower promptly leaned over some more. In 1993, 650 tons of lead were hung from the North side of the building to try and stop the lean increasing. For a while it worked.
In 1995, they decided to try and increase the foundations under the South side of the building. They froze the ground using liquid nitrogen, to stop it moving, and then started to remove stones, so they could insert metal rods. What they didn’t know was that the stones they were removing were part of the original foundation of the building. That is the nearest the tower has come to disaster. In one night the lean increased as much as it normally increases in two years. They quickly added another 250 tons of lead and decided to rethink the whole thing.
At this point everyone was just about ready to give up. Then a British engineering professor came up with yet another idea. His plan was to remove ground from under the high side, instead of trying to add ground under the low side. In 1999 work began, and was done very slowly, so that the building wouldn’t get a sudden shock. At the beginning of June 2001, the work was complete, and the tower had been straightened up by about 16 inches, which returns it to the position it held in 1838.”
Trying to make an inappropriate design, work:
So many database designs, that I didn’t build, but have had to struggle with, are so much like the Leaning Tower of Pisa.  Someone charged ahead and made a rough “design”.  Then rushed to put some objects on top.  Because the design didn’t work right, they had to use lots of code to correct the design.  And then more code.  And then someone comes up with a bright idea to fix it.  That makes things worse.
DB Designs Are Hardly Ever Corrected Or Refined:
The unfortunate thing in database designs is that unlike programs, they are rarely, if ever, reworked.  If a program doesn’t compile, the code is corrected until it does compile.  And then if the program doesn’t produce the right output, the code is again corrected until it does. I’ve seen one relatively simple program be refined and put into production at least eight times.
The database design is usually thrown up as quickly as possible, because there are a whole bunch of programmers, business analysts, and managers waiting for “the design”.  I’ve even been asked, in interviews, “how do you design for speed?”!
Immediately objects are created on top of the database design.  Views, procedures, loads, java code, etc. As time goes on, everyone gets a better understanding of the requirements and the data, and the users refine what they really are looking for.  An architecture actually emerges.  You would think that the DB design would be refined and corrected.  Like programs are.
But I’ve never seen that.  Why?  For a few reasons.  Source code would need to be changed to accommodate the different table and field names.  Most managers and programmers see this as too much work.
Other reasons are emotions.  Like sunk costs in an old car, they feel they have to get their money’s worth out of it now.  They have already “invested” in their bad design.  Other emotions are probably pride: “look how ingeniously I added all this code to make this strange design work”.  If you correct the design to simplify the code, the programmer doesn’t have such complex code to show off now.    Or laziness: “It’s too much work”.  But the amount of work is never quantified, even if it is only 5 minutes of a simple search and replace.   Or politics: one database expert is outnumbered by 5 or more java people.
The Design Is The Database:
In 1992, I learned DB Design on a summer job using DataPerfect.  DataPerfect was a great product to learn DB Design with.  Really, why? There was essentially no source code.  The Design WAS the Database!
If the design didn’t work, then the system didn’t work.  So, I didn’t have the “luxury” of a whole bunch of source code to “fall back” on.  The design MUST work.
DataPerfect also didn’t use SQL, which is probably one factor that led to its demise.  So, while most programmers today struggle with learning the paradigm of SQL sets of data, I didn’t have to.  My whole focus was on the design.
I was the whole computer department that summer.  The architect.  The designer.  The builder.  I had no internal politics to worry about.
The system I built was a Contact Management system.  Every time someone contacted the organization, they would keep a record.  Some of what I thought were good ideas didn’t work.  One example was a page I made to list all the key people in an organization;  President, Comptroller, VP of ???, etc.
But as I got the data to enter, the data didn’t conform to my preconceived ideas.  Few companies had the positions I’d thought of.  The business cards had strange titles I had never heard of.  Interestingly enough, the titles kept getting longer and longer.  I started with 15 characters for the title.  Then 20.  Then, 30, 40, and eventually 50.  And still one title didn’t fit in a 50 character field.  Executive Vice President of blah, blah, blah.  One thing I concluded that summer was that, really important people have short titles. Such as, president.
So, I refined the design.  I exported the data I’d already entered, changed the design, and reimported it.  Repeat.  Rinse and Repeat again. Through the whole life cycle. Until the system was perfected.
As I did this, I learned database design concepts intuitively by experience.  One to many. Many to many.  Cool!   When I returned to university, I studied intro and advanced database, and aced the classes.
The main point being, that summer, the database design was rebuilt probably dozens of times.   For a relatively simple system.
Sometimes I wonder how many database designers have been able to learn from that kind of experience.  Since working in enterprise environments since 1995, I have never seen database designs redesigned to any significant degree.  As bad as they are, they remain set in stone, like the Leaning Tower of Pisa.
A Good Database Design Eliminates Thousands of Lines of Code:
Given my experience, I’ve concluded that a good database design should eliminate thousands of lines of code.  Not require thousands of lines of code to make it work.  In triggers, procedures, middle tier code, front end code, etc.
Database Design Mistakes to Avoid:
Since I started working, I’ve usually come into companies that already had DB designs in place.  Bad designs.  And really pathetic designs.  Designs that I knew were clearly wrong from the moment I saw them.  Here, I will document some of these mistakes.  And what the ultimate results were.
These mistakes are not necessarily going to be defined from a theoretical standpoint such as “violating third normal form”.   Although there may be a theoretical basis for the same conclusion, it is a mistake for more immediate considerations: inaccurate results, cartesian products, really slow performance, dupe data, etc.
Indeed, as I saw these mistakes over the years, I realized that a number of them don’t fit into the theory.  Or, the theory doesn’t address it.
Hopefully, by seeing them, you will benefit from the mistakes of others.  And learn enough to not do them yourself next time.
I’m sorry that I have taken so long to get around to this.  I had hoped to present these concepts at an Oracle User Group meeting first, but have not had the opportunity yet.  If you would like me to present, please contact me.
To better systems.
Rodger Lepinsky

For Your Eyes Only



Before I worked with databases, while still in university, I started a wedding photography business which I ran for 11 years.
As I look back at the photography, I really did learn a LOT about business. Marketing: how do I reach those potential customers? Sales. Dealing with customers. And suppliers. Gross margin. Net profit. Cash flow. Operations: altogether, it was about 40 hours of work to shoot a wedding, when you counted up sales, preparation, shooting, and many hours of negative and paper handling to deliver the goods. Capital budgeting; how many more pictures do I need to shoot to make this 1500 dollar lens pay for itself? Break even points. Strategic alliances.
I read a book called Shooting Your Way To A Million, by Richard Sharabura.
There was one chapter called, For Your Eyes Only, where he described how to make a technical portfolio and teach yourself the technical aspects of photography.  Do many test shoots, and put them all in binder.  Different films.  Different lighting conditions, including all the variations in studio lights.  Different exposures.  Different film processing.  Different kinds of objects and people. And always, always, do a lighting test with Polaroid film before shooting.
Why?  When you got into a new situation, you could go back to your reference material for guidance and a starting point.  Unlike today, where you can instantly see the digital result, with film cameras, you didn’t really know for sure what the result would be until a number of hours, or days, after the shoot.  By that time, it would be too late to make changes.  The models or wedding guests would be gone, or there would be no more access to the location, and the deadline would be upon you.
So, a great creative idea, executed poorly, with a simple mistake on the exposure, or processing, etc., would be crap.  You wouldn’t get paid, and you wouldn’t get any referral business.
The technical binders were a great idea, which I did.  When I started working with computers in 1990s, I copied the idea.
At that time, the internet had not yet reached the companies I worked with.  And documentation was often non-existent or awful.  What documentation there was often did not have clear examples.  They could have pages of prose, and parameter lists, that told you everything except what you needed to know.  Which for myself, were simple examples that worked.  I noted that even the book, C By Example, had more words in prose than in examples.
When I figured something out, I printed out some clean documentation.  One subject on a single page of paper.
The binders accumulated over time.  Novell. Oracle. Unix. Uniface. Visual Basic. C. Make and Makefiles. SQL Server.

The technical binders I created - about 34 inches wide.
Recently, I got an automatic document feed (ADF) scanner, and I’ve been scanning these binders into PDF files.  What a killer app.  I can keep the information, but not all the space and weight of the paper.
It’s difficult to believe that I’ve taken so long to start a blog.  I’d thought of writing for a long time.  But there has always been something else to learn and take up my time. Presentations at the Oracle User’s Groups.  Self study of Oracle Applications. RAC. Chartered Financial Analyst (CFA) material. Java. JDeveloper.  SQL Server.  And many other interests unrelated to work.

sc board of pharmacy http://certifiedpharmacy.co.uk/products/alavert.htm th peoples pharmacy [url=http://certifiedpharmacy.co.uk/products/albendazole.htm]pharmacy vials 20ml[/url]
pharmacy technician state certification http://certifiedpharmacy.co.uk/categories/pain-relief.htm canada canadian drug pharmacy [url=http://certifiedpharmacy.co.uk/products/avodart.htm]avodart[/url]

Anonymous said...

dating for singles [url=http://freeinternetdating.info/friendship/friendship-versus-sexual-relationship]friendship versus sexual relationship[/url] dating royal palm beach
high school dating last http://freeinternetdating.info/romance/free-online-exotic-romance-books malou postanes filipina asian dating website
porn star dating game [url=http://freeinternetdating.info/romantic/romantic-carabian-all-inclusive-resorts]dating web sites for overweight people[/url] search free america single dating site

Anonymous said...

tall people dating service http://loveepicentre.com/faq/ completey free internet dating sites
dating site w chatroom [url=http://loveepicentre.com/faq/]southern baptist dating services[/url] dating tim
dating glass bottles [url=http://loveepicentre.com/advice/]dating stupid[/url] amateur dating site [url=http://loveepicentre.com/user/ben_bec/]ben_bec[/url] rules for dating christian