OpenTV Tunes Into ER/Studio for Data Modeling

Embarcadero ER/Studio

Information Management Magazine, Nov/Dec 2010

Product Reviewer

REVIEWER: Neetu Jaiswal, senior database administrator for OpenTV.

BACKGROUND: OpenTV is one of the world’s leading providers of advanced digital television solutions dedicated to creating and delivering compelling viewing experiences to consumers of digital content worldwide. The company’s software has been integrated in more than 145 million devices around the world and enables advanced program guides, video-on-demand, personal video recording, interactive and addressable advertising and a variety of enhanced television applications.

PLATFORM: We run Embarcadero ER/Studio on the Windows XP operating system using Oracle databases.

PROBLEM SOLVED: In my role as a senior database administrator, there are occasions when I need to understand the structure of my databases, including a clear view of my database schema. This entails reverse engineering the database schema to create a database model that shows me how database elements such as tables and views relate to each other. Doing this by hand, without a tool to automate the process, would be exceedingly difficult.

PRODUCT FUNCTIONALITY: We primarily use ER/Studio to reverse engineer the database from the schema. There are several challenges involved in reverse engineering a data model from an existing database schema, and ER/Studio helps us overcome them. After reverse engineering, I do not have to arrange the tables myself because ER/Studio does it automatically and gives me a clear picture, which I appreciate. This reverse engineering process ultimately helps us improve the performance of our applications, which is the goal. We additionally use ER/Studio to generate schema scripts from the data models as data definition language (DDL) files so we can run them at a later time. My team at OpenTV also uses ER/Studio to generate different types of reports and some scripts.
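For readers curious about what reverse engineering actually pulls out of a database, here is a minimal illustrative sketch in Python. It is not how ER/Studio works internally; it simply queries Oracle's standard data dictionary views (user_tables and user_constraints) through the cx_Oracle driver to list tables and the foreign-key relationships between them. The connection details are hypothetical.

import cx_Oracle  # assumes the cx_Oracle driver is installed; connection details are hypothetical

def list_tables_and_foreign_keys(user, password, dsn):
    """Return the schema's table names and the foreign-key relationships between them."""
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()

        # Every table owned by the connected schema.
        cur.execute("SELECT table_name FROM user_tables")
        tables = [row[0] for row in cur.fetchall()]

        # Foreign keys: child table -> parent table, found by following each
        # referential constraint to the primary/unique key it references.
        cur.execute("""
            SELECT fk.table_name AS child_table,
                   pk.table_name AS parent_table
            FROM   user_constraints fk
            JOIN   user_constraints pk
                   ON fk.r_constraint_name = pk.constraint_name
            WHERE  fk.constraint_type = 'R'
        """)
        relationships = cur.fetchall()
    finally:
        conn.close()
    return tables, relationships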

STRENGTHS: One of my favorite features of ER/Studio is the way it presents the model. The tool gives us a wide array of layout options for viewing data models and producing diagrams, from circular to orthogonal to hierarchical. There is flexibility in the way we can change the layouts, which helps when studying the relationships between tables and is particularly useful when we’re reverse engineering from a source that does not contain diagram information. By switching layouts, we get a clear picture of the data models.

WEAKNESSES: An issue arose when I attempted to create a schema script from a schema with more than 1,000 tables. The process was progressing slowly and then it eventually failed. I have not duplicated this action since, so I am not sure if it was an isolated instance or not.

SELECTION CRITERIA: Our OpenTV team had been using DBArtisan, another Embarcadero tool, for our database administration functions. We were very happy with the way DBArtisan performed, so we opted for another Embarcadero tool when our need for data modeling arose. Embarcadero sealed the deal by offering us attractive pricing for purchasing two of its products at once.

DELIVERABLES: The outputs that I generate most frequently are data models, as well as schema scripts from the models. Additionally, I regularly generate both pictorial and tabular reports.

VENDOR SUPPORT: Although I have only had one occasion to interact with Embarcadero’s support team, I would rate my experience a five out of five. There was an instance when my machine crashed and, when rebuilding it, my ER/Studio license showed as expired. I called Embarcadero’s tech support and they immediately helped me remedy the situation.

DOCUMENTATION: I truthfully did not rely too heavily on the documentation that came with ER/Studio, mainly because I have used other data modeling products in the past and found it very simple to navigate through the tool and find what I needed on my own. The GUI felt familiar and intuitive to me.

Product reviews are customer testimonials. We thank the author of this review for taking the time to share his or her expertise.

 

Embarcadero ER/Studio XE Introduces More Scalable Environment for Database and Data Warehouse Modeling

Embarcadero partners with Netezza to expand data warehouse system support.

SAN FRANCISCO – Nov. 9, 2010 – Embarcadero Technologies, Inc. today announced the availability of ER/Studio® XE, its award-winning database design and data modeling tool suite. This latest version offers improvements in the user experience, enhanced scalability and performance, and expanded database management system (DBMS) support for Netezza’s TwinFin appliance running versions 5.0 and 4.6.

Embarcadero’s ER/Studio XE now offers the most collaborative way for data management professionals to build and maintain enterprise-scale databases and data warehouses. Built-in facilities automate routine modeling tasks so users can analyze and optimize databases and data warehouse structures faster and easier. With an improved and more scalable server-side repository for model management, ER/Studio XE introduces the most productive way to share, document and publish models and metadata to distributed teams.

“ER/Studio XE is the best solution for enterprise data modeling and data management,” said Karthikeyan Kaliyaperumal, data architecture leader at The Nielsen Company. “The ER/Studio shared metadata Repository is vital to the success of our enterprise projects, allowing all our teams to work in a collaborative fashion.  Now, with the added Netezza support, we can more accurately document, engineer and manage our BI-focused data. Above and beyond the functionality of the tools, Embarcadero provides exceptional support, helping our team be more successful.”

With the release of ER/Studio XE, Embarcadero is announcing its partnership with Netezza, a global provider of data warehouse, analytic and monitoring appliances. By adding support for Netezza 5.0 and 4.6, customers can leverage ER/Studio XE to more quickly design, understand and maintain their data warehouse architectures.

“More organizations are moving beyond traditional data warehouse and business intelligence approaches to more cost effective, high-performance alternatives,” said Matt Rollender, director of technology and strategic alliances, Netezza. “Our partnership with Embarcadero gives our customers industry-leading data modeling support that will help them better understand their data and provide even more meaningful information to their business users.”

Increasingly, Embarcadero customers are relying on business analytics to make sound business decisions, and are turning to Netezza for its state-of-the-art data warehouse appliances.

“With a faster and more highly scalable repository, and by expanding ER/Studio’s DBMS support for Netezza, our customers can better manage and reuse their existing data assets and ultimately optimize internal processes to save time and money,” said Jason Tiret, director of modeling and architecture solutions, Embarcadero Technologies.

In addition to better leveraging data assets, ER/Studio XE provides organizations the most advanced documentation, reporting and business intelligence available in a modeling tool. ER/Studio XE supports Oracle, DB2, Sybase, MS SQL Server, InterBase, Netezza, Teradata, MySQL, PostgreSQL, and more, all from a single window.

Embarcadero XE branded products are distinguished by three key features: direct, native support for multiple DBMS platforms and deployment environments from a single interface; Embarcadero® ToolCloud technology for centralized license management and on-demand tool access; and an easy upgrade path to Embarcadero® All-Access™ XE.

Pricing and Availability
ER/Studio XE is now generally available worldwide. Pricing begins at $5,995 and includes ER/Studio Data Architect, ER/Studio Business Architect, ER/Studio Software Architect, ER/Studio Repository, ER/Studio Portal, MetaWizard Import/Export, ToolCloud and multi-platform support.

As with all Embarcadero tools, ER/Studio XE will be available via the Embarcadero All-Access toolbox.

For complete product information, including videos, data sheets, feature and what’s new lists, and an FAQ, visit http://www.embarcadero.com/products/er-studio-xe.

About Embarcadero Technologies
Embarcadero Technologies, Inc. is a leading provider of award-winning tools for application developers and database professionals so they can design systems right, build them faster and run them better, regardless of their platform or programming language. Ninety of the Fortune 100 and an active community of more than three million users worldwide rely on Embarcadero products to increase productivity, reduce costs, simplify change management and compliance and accelerate innovation. Founded in 1993, Embarcadero is headquartered in San Francisco, with offices located around the world. Embarcadero is online at www.embarcadero.com.

Media Contact:
Michelle Baum
Chase Communications
303-284-8440
michelle@chasecomm.net

###

Embarcadero, the Embarcadero Technologies logos and all other Embarcadero Technologies product or service names are trademarks or registered trademarks of Embarcadero Technologies, Inc. All other trademarks are property of their respective owners.

 

Live Webinar: Ralph Kimball on Cost Effective Design Techniques

We have a great webinar lined up with Ralph Kimball and Jason Tiret on November 18th. Don’t miss this one. Ralph will be covering data warehouse design techniques and Jason will talk a bit about the new ER/Studio XE releasing next week. Here’s the abstract:

Designing, re-designing, or maintaining a data warehouse? Then join Dr. Ralph Kimball, one of the world’s foremost experts on data warehouse design, for an informative live webinar on “Cost Effective Techniques for Designing the Data Warehouse”.

Date: Thursday, November 18th, 2010
Time: 11:00 AM PT / 2:00 PM ET

A cost-effective data warehouse project doesn’t have to be an oxymoron. Rather than committing to a full-fledged waterfall-style approach, it is possible to build a data warehouse incrementally and inexpensively, while at the same time preserving high-level architectural goals that lead to a true enterprise data warehouse.

Join Embarcadero for a live webinar with Dr. Ralph Kimball and learn specific techniques for building and maintaining data warehouses more cost effectively. Dr. Kimball will describe design techniques for building cost-effective, highly scalable data warehouses. You’ll learn more about:

  • Dimensional schemas and how to use these proven data models with business users and implement them across BI delivery tools
  • Conformed dimensions and how these essential enabling structures can provide integration across separate data sources in the data warehouse
  • Time variance tracking techniques for representing all possible time tracking situations
  • Data quality management and how to tackle data quality problems at the source
  • Incremental development techniques for an agile approach to designing enterprise data warehouses

This webinar will also include a discussion and demonstration of some of these data warehouse design techniques with Embarcadero’s data modeling tool, ER/Studio.
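To make the "time variance tracking" topic above a little more concrete, here is a minimal, illustrative Type 2 slowly changing dimension update written in plain Python. It is not taken from Dr. Kimball's webinar material, and the table layout and column names (customer_id, effective_date, and so on) are assumptions made for the example.

from datetime import date

def apply_type2_change(dimension_rows, business_key, new_attributes, as_of=None):
    """Expire the current row for a business key and append a new current row."""
    as_of = as_of or date.today()
    for row in dimension_rows:
        if row["customer_id"] == business_key and row["is_current"]:
            row["is_current"] = False   # close out the old version instead of overwriting it
            row["end_date"] = as_of
    dimension_rows.append({
        "customer_id": business_key,
        **new_attributes,               # e.g. {"city": "Austin"}
        "effective_date": as_of,
        "end_date": None,
        "is_current": True,
    })
    return dimension_rows

# Usage: a customer moves, and history is preserved rather than overwritten.
dim = [{"customer_id": 42, "city": "Boston",
        "effective_date": date(2009, 1, 1), "end_date": None, "is_current": True}]
apply_type2_change(dim, 42, {"city": "Austin"})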

About the Speaker:

Ralph Kimball founded the Kimball Group. Since the mid-1980s he has been the data warehouse/business intelligence (DW/BI) industry’s thought leader on the dimensional approach and has trained more than 10,000 IT professionals. Prior to working at Metaphor and founding Red Brick Systems, Ralph co-invented the Star workstation at Xerox’s Palo Alto Research Center (PARC). Ralph has his Ph.D. in Electrical Engineering from Stanford University.

 

Master Your Database with Free Training

Get data modeling training with every new ER/Studio License

For a limited time, when you buy any new license of ER/Studio you will also get free access to the Embarcadero Data Modeling Essentials Training course. This new ten-part online course is worth over $600 and is a great way to enhance your data modeling knowledge alongside ER/Studio. Our experts will help you learn more about various modeling-related topics, including:

  • Visualizing the data and understanding the data sources and data flow
  • Improving communication and collaboration
  • Diagnosing and understanding the impact of changes with the Where-Used and Compare and Merge facilities
  • Collaborative modeling in the ER/Studio Repository, and more

This offer is for a limited time only, so act now and we will also send you a free Data Management Expert T-Shirt as an additional thank you for your purchase.

The offer for free Data Management Essentials Courses ends December 31st, 2010.

Getting started is as easy as 1, 2, 3…

(1) Download a Free Trial

Download ER/Studio to begin mastering your database

(2) Buy and Register

Buy and register any edition of ER/Studio before 12/31/10

(3) Get Free Training

Access and download the free training here from the registered users download page

Data Management a Top Priority for Wall Street Firms by Wall Street & Technology

A good article by Melanie Rodier from Wall Street & Technology magazine on Data Management and the finance industry. June 2010.

Data Management a Top Priority for Wall Street Firms

Jun 09, 2010

Data is integral to everything financial organizations do. Beginning with a business deal, transaction or trade, data is aggregated, manipulated, named, channeled into systems, renamed, processed, parsed, and analyzed by people and applications relying on its accuracy to determine the next decision.

But while data management was once considered strictly an IT issue, the credit crisis has changed the way data is viewed by financial organizations. Today it is recognized as the business driver that it is, one that is central to the enterprise and key to its survival.

“It’s no longer data management for its own sake, but it’s to serve business drivers,” says Fritz McCormick, a senior analyst with Aite Group.

Reflecting the cultural shift toward data as a business driver, organizations are hiring chief data officers and proactively reexamining their data management policies and procedures from end to end, experts say. “Businesses need to focus on cost and compliance at an individual and at a corporate level,” observes Ron Ruckh, managing partner at consulting firm LEGG Smart. “One component that flows in these directions is data — on the business side, not just on the IT side.”

Indeed, the responsibility for ensuring data quality has been transformed from an IT task to a pervasive business priority. That transformation largely has been driven by risk management and compliance demands.

“All that analysis about regulatory reform and systemic risk has led the world to understand the importance of data precision, complex processes and the ability to trust your models,” says Michael Atkin, managing director of the Enterprise Data Management (EDM) Council, a nonprofit trade association. “That has been a kick forward for the practice of data management.”

Driving Forces

Financial organizations are driven by profitability, fear of doing something wrong or of damaging their reputation, and regulation, Atkin suggests. And post-crisis, regulators are pushing hard for more transparency at financial organizations. It all begins with accurate data, he says. “It might be hard to make the case [for data management] on profitability or reputational issues [alone], but regulation has emerged as the current driver,” Atkin contends.

“Failed trades, errors on client statements, erroneous performance calculations and inaccurate risk measurements, along with a host of other costly, embarrassing and potentially disastrous outcomes, result from poor data quality,” wrote Fred Cohen, group VP and global head of outsourcing in consulting firm Patni’s capital markets and investment banking practice, in a recent report on reference data management. In the current regulatory climate, he tells WS&T, when clients are evaluating their reference data management, they are first preoccupied with reducing risk rather than cost. “Previously, cost was No. 1. Now it’s No. 3, after risk and efficiency,” Cohen says.

The risk of poor data extends beyond operations to investor relations. With regulators publishing new rules on the accuracy of data on advertising materials, financial firms have been forced to pay much closer attention to both the content they do put out to the market as well as the information they don’t put out, according to Conor Smyth, SVP of global sales for data management vendor MoneyMate, which last fall signed a deal with Schroders to provide the investment manager with a managed service to control the aggregation and cleansing of product information for presentation on its website. “Asset managers need to mitigate the risk of communicating incorrect or out-of-date information to the market,” Smyth comments.

Gerard Walsh, head of web and CRM at Schroders, commented at the time of the deal that the firm’s decision to hire MoneyMate stemmed from an understanding of the significance of the data management challenge the firm sought to address. “The focus of this initiative is to present accurate, up-to-date and error-free product information on our websites,” he noted.

Understanding Data in Context

Financial institutions generally recognize the operational importance of data precision and have been addressing data definition problems such as identification and tagging, says the EDM Council’s Atkin. But the concept of data content management rather than data processing management is still a relatively new idea for the financial industry, he points out.

“It’s about getting your arms around what it means to manage data where precision and comparability matter as opposed to something that [merely] feeds data systems,” Atkin relates. “People understand it, but it’s still not a given for all financial organizations, even though it’s something that’s been talked about for 10 years.”

Now, however, as risk management and regulation are increasingly driving data management, and as IT budgets begin to rise again, even firms that weren’t already motivated by the operational efficiencies that can be gained from automated data processes are beginning to reinvest in data infrastructure, Atkin suggests. But as they re-evaluate their data infrastructures, he stresses, firms must start by identifying the pertinent data, including the instruments that the firms trade, securities’ prices, market entity information and macroeconomic statistics, such as weather and interest rates.

“It’s not a technology problem — it’s a communications problem between business requirements, operational requirements and technology requirements,” Atkin asserts. “You have to translate it well. More than a technology problem, it’s an alignment challenge to speak the same language to each other.”

According to Sateesh Prabakaran, chief architect, managing director, BNY Mellon, even if an organization has a mature infrastructure, data management can be a challenge that requires careful balancing of business and technology requirements. “In a large organization like ours, where we play a critical role in the global markets, data is the premium ingredient,” he says. “BNY has made the effort to focus on data quality, migration and cleansing infrastructure, including [the ability to query data and its] reliability and availability, as well as the non-still nature of data. But it has to be the right balance between business pressures and the practicalities of implementing that.”

Prabakaran points to data lineage in particular as a key challenge. “There is always the context of data as it travels through different processing nodes that sometimes transcend boundaries,” he says. “Lineage sometimes gets morphed, undated, etc. Unless you have a holistic view of how it started, of the journey it took, it would be very hard to recognize the lineage of the data.”

Answering some of the key operational questions in the business workflow requires understanding the unique nature of data housed in various data processing units, Prabakaran continues. “These may have been developed specifically to answer very direct questions based on what we knew at the time, but as things change, some questions are completely new. In order to answer these questions, we have to improvise with whatever systems and data points we know and try to be intelligent in interpreting and getting the information out of different sources,” he explains.

“Since it’s the bread and butter of different operational issues, we can’t afford guesses,” Prabakaran adds. “You have to strive for inner consistency so that the data can withstand the object of scrutiny. That’s why lineage is important in solving unanticipated needs of business.”

Bridging Silos

Equally important is to have a holistic view of data at any point in time. “We strive for dashboards that give you a piece of information within the context of cross lines of business,” Prabakaran says. “If you look at some of the dashboards flight controllers use, it’s not how you direct one flight to its destination, but how you navigate around the others. You need to have a holistic view.”

As the market emerges from the recent financial crisis, there is a growing emphasis on treating core reference data management issues as an enterprisewide priority, confirms Patni’s Cohen, referring to a reference data management survey conducted by Patni and FSO Knowledge Exchange in December 2009 and January 2010. The survey questioned 52 senior financial industry executives, 48 percent of whom were from buy-side firms and 12 percent from brokerages.

“A significant number of firms acknowledge that siloed organizational structures and practices had resulted in duplicated reference data sources,” Cohen said in the report. “With firms’ focus on risk and efficiency, it is clear that firms should follow a holistic approach to reference data management on an enterprisewide basis.”

Cohen argues that an enterprisewide data management approach helps firms save money as well as identify precisely where their data resides and who is using it. “We found that 25 percent of all reference management spend in a financial institute is wasted in multiple data silos, and in managing and conditioning the data across all these,” he tells WS&T. “Our survey found that 80 percent of people said once the data has gotten into their institutions from small vendors, only a small percentage know where it is going. They don’t know who’s using it, who’s not using it anymore. There may be data that no one is using anymore.”

Still, while breaking down silos makes sense, many organizations have found obstacles to achieving an enterprisewide view of data and in coordinating efforts across different lines of business to ensure that data is managed properly, says Sajay Sethunath, an associate partner in the technology group at Capco, a consulting, technology and transformation services provider. As a result, many firms attempt an incremental approach to data management. According to Sethunath, “A lot say, ‘Let’s look at it from a targeted initiative. Let’s bite off portions of it. Let’s put the right kind of attributes in place, and let’s aggregate on the back end.’ ”

BNY Mellon’s Prabakaran suggests that some of the challenges to achieving an enterprisewide view of data originate from the boundaries and constraints that management use to define the enterprise view. “The question is: How elastic are you in adjusting to boundaries?” he suggests. “That will be one of the most consistent problems.”

In Support of Big Governance

In the meantime, firms increasingly are focusing on data governance, which details the ownership and management of a firm’s data assets. This includes monitoring the source of the data, tracking its progress through the cleansing process, identifying and mapping data elements, tracking which applications use the data and at what frequency, and controlling who is authorized to access and manipulate it.

While a relatively new concept, data governance quickly is becoming the foundation of a firm’s data management strategy, the Patni study asserts. In some cases, according to the report, data governance drives the decisions on the infrastructure, architecture and processes required to manage and deliver data to consumers.

“Governance is definitely an issue now,” Aite’s McCormick concurs. “The market has matured along the way, and you now see very well established data groups within organizations. There are industry associations and a lot of participation in those groups.”

As data governance matures, data management technologies also are evolving. “We have gone from a typical data implementation of fancy, highly scalable in-memory databases to what we see as storage of data — also known as the appliance model — playing more of a role in high-volume analytical databases,” reports BNY Mellon’s Prabakaran.

“We’re bringing some lessons learned on server consolidation techniques and bringing them into large-scale data storage,” he continues. “It’s not new, but it’s interesting how that model seems to permeate not just large-scale analytical information stores with hundreds of terabytes, but it is also moving into the mainstream where it can be used for transaction processing.”

BNY Mellon also is looking closely at data virtualization, Prabakaran reveals. “I don’t have to have another data warehouse. By using virtualization, I will bring [the data] in whatever shape or form answers my business questions,” he relates. “That concept might work on small-scale systems, but we’re planning to see if it can work on large infrastructures.”

Anticipated Benefits

Data management techniques and technologies are fundamental to the creation of applications, notes Rod Johnson, general manager of the SpringSource division of VMware, and with the rise of virtualization and cloud computing, the manner in which applications need to access data is evolving. “Cloud computing is a distributed deployment model, and for that reason, caching and data accessibility are of far greater strategic importance than before,” he explains.

As regulators continue to scrutinize companies’ data management practices, the industry will see more investment and innovation in that area, says the EDM Council’s Atkin. “If you put it into perspective, there have been three drivers for data management,” he says. “First, global terrorism, which caused the regulatory community to insist we validate customers, know them and follow them.” Then the Enron and WorldCom scandals forced the industry to understand the importance of entity relationships so it could understand the risk inherent in doing business with particular people, Atkin adds.

“Those first two drivers were the beginning of data activity,” he says. “But [regulation] is the big one. This will force firms to look [at data] across the organization in the same way that regulators have to look across the organization.

“This is the age of enlightenment in data management,” Atkin continues. “It’s no longer a poor, ugly stepchild of IT. It’s being recognized as an operational pillar of infrastructure.”

ER/Studio Public Training – September 13-17 – San Francisco

e-Modelers and Embarcadero Team Up to Provide ER/Studio Public Training Course
September 13-17, 2010 in San Francisco, CA

e-Modelers is pleased to announce that it will deliver public training on Embarcadero’s ER/Studio Enterprise, a powerful set of data modeling, business process modeling and UML modeling solutions. Registration is now open and the training will take place the week of September 13, 2010 at the Embarcadero headquarters in San Francisco, CA. It will highlight ER/Studio’s extensive modeling capabilities within an enterprise context, while providing best practices, an integrated case study and hands-on practice.

The classes will be led by Dr. Nicholas Khabbaz, Senior Enterprise Architect and Instructor, and Dan Weller, Senior Data Modeler and Instructor, both of e-Modelers. Dr. Khabbaz and Dan Weller will cover the following areas:

  • Organize ER/Studio key features according to a solid modeling process context
  • Show how to align process, data and software systems through ER/Studio
  • Highlight key ER/Studio modeling capabilities that result in high quality models
  • Demonstrate the collaboration and reporting aspects of ER/Studio
  • Incorporate modeling and ER/Studio best practices
  • Apply all the training to an integrated case study
  • Use hands-on practice to implement all the key features covered in the training

For pricing and additional information on attending this training, please visit the e-Modelers website or download the data sheet. For registration, please contact Becky Baird at (209) 609-7195 or via email at bbaird@emodelers.com.

To stay connected with the ER/Studio product team and get information on new products, training, resources and more, join us online at:

How Meaningful Use Impacts Healthcare Data Management Professionals

What Healthcare Data Architects, Developers, and DBAs need to know about achieving HITECH Meaningful Use and Certification

The HITECH Act, part of the American Recovery and Reinvestment Act Stimulus Bill passed in 2009, establishes 25 major “Meaningful Use” (MU) requirements that all electronic medical records systems must implement in order to have their users qualify for billions in government incentive money. These MU requirements will change existing and future implementations, affecting nearly all healthcare architects, developers and DBAs developing these information systems.

Join healthcare IT expert, Shahid Shah, for an insightful webinar on the impacts of meaningful use, certification of electronic health record systems, and what this means to data management professionals.

Date: Wednesday, July 14th, 2010
Time: 11:00 AM Pacific / 2:00 PM Eastern

In this one hour webinar, you’ll learn:

  • What ARRA, HITECH, Meaningful Use (MU), and Certification mean
  • How technical personnel such as data architects, DBAs and developers will be affected by MU
  • The best ways to determine the MU gaps that might be found in existing systems
  • Additional expertise technical personnel may need to meet MU
  • How quickly systems will need to be modified and deployed to meet MU rules

Registered webinar attendees will also receive a complimentary copy of Shahid Shah’s white paper, “How Meaningful Use Impacts Healthcare Data Management Professionals”.

Register today!

About the Presenter
Shahid N. Shah is an internationally recognized and influential healthcare IT thought leader who is known as “The Healthcare IT Guy” across the Internet. He is a consultant to various federal agencies on IT matters and winner of Federal Computer Week’s coveted “Fed 100” award, given to IT experts who have made a big impact in government. Shahid has architected and built multiple clinical solutions over his nearly 20-year career. He helped design and deploy the American Red Cross’s electronic health record solution across thousands of sites; he’s built two web-based EMRs now in use by hundreds of physicians; he’s designed large groupware and collaboration sites in use by thousands; and, as a former CTO of a billion-dollar division of Cardinal Health, he helped design advanced clinical interfaces for medical devices and hospitals. Shahid also serves as a senior technology strategy advisor to NIH’s SBIR/STTR program, helping small businesses commercialize their healthcare applications.

Shahid runs several successful blogs. At http://shahid.shah.org he writes about architecture issues, at http://www.healthcareguy.com he provides valuable insights on how to apply technology in health care, at http://www.federalarchitect.com he advises senior federal technologists, and at http://www.hitsphere.com he gives a glimpse of the health-care IT blogosphere as an aggregator.

Have a question?
Send us an email or give us a call
1-888-233-2224

Generating XML Schemas from a Canonical Model – A Practical Example

ER/Studio SIG On Demand Presentation – “Generating XML Schemas from a Canonical Model – A Practical Example”

Successful Information Management is critical for business success; Business Process Management, Web 2.0 and Service-Oriented Architecture can only succeed where there is a common understanding of the meaning, availability and provenance of data. How can your systems be integrated if you can’t agree on how long your account numbers are, how many lines there are in a customer’s address, or the mandatory data required when creating a customer? Reduce the risk and costs of integrating your business systems by generating your XML Schemas from a canonical data model.

  • Why you should use a canonical data model
  • Which canonical model should you use
  • What if you manage it all in XML
  • What if you leverage your enterprise or other common business model
  • A practical demonstration using a popular data modeling tool
  • What other approaches are available for generating XSDs
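As a rough sketch of the canonical-model idea covered in the presentation, the Python snippet below builds a tiny XML Schema from an agreed-upon “Customer” definition using the standard xml.etree.ElementTree module. It is not the presenter’s method or an ER/Studio feature; the entity and attribute names are made up for illustration.

import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XS)  # serialize with the familiar xs: prefix

# A canonical "Customer" entity with attribute names and types agreed upon once,
# so every integration point generates its XSD from the same definition.
canonical_model = {
    "Customer": [
        ("AccountNumber", "xs:string"),
        ("Name", "xs:string"),
        ("AddressLine", "xs:string"),
        ("CreatedDate", "xs:date"),
    ]
}

def model_to_xsd(model):
    schema = ET.Element(f"{{{XS}}}schema")
    for entity, attributes in model.items():
        element = ET.SubElement(schema, f"{{{XS}}}element", name=entity)
        complex_type = ET.SubElement(element, f"{{{XS}}}complexType")
        sequence = ET.SubElement(complex_type, f"{{{XS}}}sequence")
        for attr_name, attr_type in attributes:
            ET.SubElement(sequence, f"{{{XS}}}element", name=attr_name, type=attr_type)
    return ET.tostring(schema, encoding="unicode")

print(model_to_xsd(canonical_model))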

Send an email to ERSIG@EMBARCADERO.COM to join the ER/Studio SIG group, or join us on LinkedIn.

Click Here For Presentation

About the Presenter

George McGeachie
Metadata and Modeling Specialist
Metadata Matters Ltd

www.metadatamatters.com

There are many ways to Connect to the Embarcadero Community!
Stay in touch and keep your profile current at the Embarcadero Developer Network
Follow us on Twitter | Watch our YouTube channel | Join the ER/SIG on LinkedIn | Follow the ER/Blog

You’ve Just Inherited a Data Model: Now What

Presented by: Karen Lopez | InfoAdvisors | @datachick

Embarcadero recently held a three-day online event called DataRage, May 25th through the 27th, where dozens of experts presented various data and database related topics. One of my favorite presentations was from Karen Lopez of InfoAdvisors, where she went through the scenario of what to do when you inherit a data model. Many of you have faced this same situation: inheriting a third-party data model built from a pattern such as our own Universal Data Model template, substituting for a colleague, or taking over a data model from someone who is no longer with the organization. If you have ever found yourself in this precarious situation, then you will benefit from Karen’s presentation and the five steps she recommends taking.

You’ve Just Inherited a Data Model: Now What

Embarcadero Presentation with Karen Lopez

Don’t miss this presentation now streaming on Embarcadero’s Channel E.

You can find some of Karen’s other work on our website. She gave another great presentation last year called Five Classic Data Modeling Mistakes and How to Avoid Them. Karen also has a number of data modeling whitepapers available for download.

Karen López is Principal Consultant of InfoAdvisors. She has more than twenty years of experience in helping organizations implement large, multi-project programs. In her role as an IT project manager she has coached many architects and developers on how to stay focused on the real goals of their projects. She wants you to see your models used, enjoyed, and respected. She is also the Moderator of InfoAdvisors/ITBoards.com IRM discussion groups, an online community of several thousand data management professionals. For more information, visit http://www.infoadvisors.com.

About the ER/Studio Team

Follow us on LinkedIn, Embarcadero Community, YouTube