
A technology machine uses SQL Server Express to collect data such as temperature. The database is updated, say, every few seconds (i.e. low traffic). The machine and its SQL Server must work independently of everything else. Sometimes the technology computer (i.e. the SQL Server) is switched off together with the machine.

The central SQL Server Standard Edition should collect the data from many machines like the one above.

What scenario would you recommend?

  • The machine sends each new row of a table as soon as it is created (i.e. one row every few seconds).
  • The machine activates the sending process, say, once an hour, and sends all rows with the newly collected data.
  • Any other approach?

If I understand the Service Broker idea correctly, there will be a single request message type name, a single reply message type name, and a single contract name. The related databases on the SQL machines will each have one message queue and the related service.
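
To make the question concrete, here is a minimal sketch of how I imagine those objects (all the names are placeholders I made up; the message types and the contract would be created in both databases):

-- Message types and contract (create in both databases).
CREATE MESSAGE TYPE [//Machine/SensorData/Request] VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE [//Machine/SensorData/Reply]   VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [//Machine/SensorData/Contract]
(
    [//Machine/SensorData/Request] SENT BY INITIATOR,
    [//Machine/SensorData/Reply]   SENT BY TARGET
);

-- On each Express (machine) database: one queue and one service.
CREATE QUEUE MachineSenderQueue;
CREATE SERVICE MachineSenderService ON QUEUE MachineSenderQueue;

-- On the central Standard database: one queue and one service bound to the contract.
CREATE QUEUE CentralReceiverQueue;
CREATE SERVICE CentralReceiverService ON QUEUE CentralReceiverQueue
    ([//Machine/SensorData/Contract]);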

I am very new to Service Broker. The tutorial examples show how to send messages as XML fragments. Is that the real-world way to send the rows of a table? How can I reliably convert the result of a SELECT command to an XML fragment and back?

Thanks, Petr

Attention: The related question Service Broker — how to extract the rows from the XML message? was created.

pepr

2 Answers


I also recommend sending as the database is updated, one message per row inserted (one message per statement, sent from a trigger as Jānis suggests, also works fine if your INSERT statements do not insert a huge number of rows in a single statement).

The reason why it is better to send many small messages rather than one big message is that processing one big XML payload on the RECEIVE side is less efficient than processing many small XML payloads.

I've seen deployments similar to what you're trying to do; see High Volume Contiguous Real Time Audit And ETL. I also recommend you read Reusing Conversations.

Service Broker will handle the Express instance going on and off, or being disconnected for long periods, very well. It will simply retain the sent messages and deliver them once connectivity is restored. You just have to make sure the 10 GB maximum database size limit of Express is enough to retain the messages for as long as needed, until connectivity is back.

What Service Broker will handle poorly is the Express instance appearing under a different DNS name every time it connects back (think of a laptop that connects from home, from the office and from Starbucks). Service Broker uses database routing to send acknowledgements from the central server to your Express instance, and that routing requires a static DNS name. It is also important that the Express instance is reachable from the central server (i.e. it is not behind Network Address Translation). For a mobile instance (the laptop mentioned before), both the dynamic name and the fact that it will connect from behind NAT most of the time will cause problems. Using a VPN can address these issues, but it is cumbersome.
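
For illustration, a minimal sketch of the routes this implies, reusing the placeholder service names from the question and made-up host names; the address for the machine only works if it stays static and reachable:

-- On the central Standard server: a route back to the Express instance.
CREATE ROUTE MachineRoute
    WITH SERVICE_NAME = 'MachineSenderService',
         ADDRESS      = 'TCP://machine01.example.com:4022';

-- On the Express instance: a route to the central server.
CREATE ROUTE CentralRoute
    WITH SERVICE_NAME = 'CentralReceiverService',
         ADDRESS      = 'TCP://central.example.com:4022';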

Remus Rusanu
  • Thanks! OK. It seems reasonable to me to keep the sending side as simple as possible -- i.e. a trigger that SENDs each new row via Service Broker. Would it make sense for the Express machine to wait for any reply? The problem is rather one-way data sending. A side note: the Express and Standard servers will work in a kind of physically fixed environment. The 10 GB is also more than enough. No problem here. – pepr Jul 19 '12 at 08:32
  • The Express should handle replies, e.g. it should have an activated procedure attached to its sender queue (a minimal sketch follows these comments). Even in the one-way traffic you have, there is a reply: the EndDialog should be sent by your central Standard server. Read http://rusanu.com/2006/04/06/fire-and-forget-good-for-the-military-but-not-for-service-broker-conversations/ to understand why having the Express end the dialogs first is dangerous. – Remus Rusanu Jul 19 '12 at 09:07
  • BTW, you should never wait for a reply in SSB, even when handling two-way traffic. Never `SEND` then `WAITFOR(RECEIVE)` expecting a response. The reply may come minutes, hours or even days later (think of the central server crashing and taking 24 hours to bring back online...). SSB apps should be asynchronous, event driven. – Remus Rusanu Jul 19 '12 at 09:09
  • Please, notice the reference to the related question at the end of the updated question (http://stackoverflow.com/q/11563452/1346705). – pepr Jul 19 '12 at 14:55
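
Following up on the comment above about attaching an activated procedure to the Express sender queue, a minimal sketch (reusing the placeholder names from the question) could look like the following; it ends the conversation only after the central server has sent EndDialog or an error:

CREATE PROCEDURE MachineSenderQueueHandler
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER, @msg_type SYSNAME;

    BEGIN TRANSACTION;

    RECEIVE TOP (1)
        @handle   = conversation_handle,
        @msg_type = message_type_name
    FROM MachineSenderQueue;

    -- End our side of the conversation only in response to the target's
    -- EndDialog or Error message (see the fire-and-forget article above).
    IF @msg_type IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                     N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
        END CONVERSATION @handle;

    COMMIT;
END;
GO

ALTER QUEUE MachineSenderQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = MachineSenderQueueHandler,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER);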

Well, if you implement the row sending in a trigger, then the sending actually happens at that moment (not once in a while). You can always process the collected messages on the receiver side with an activated procedure (RECEIVE) once in a while.
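
For illustration, a minimal sketch of such an activated procedure on the central server, reusing the placeholder queue and message type names from the question:

CREATE PROCEDURE CentralReceiverQueueHandler
AS
BEGIN
    DECLARE @handle   UNIQUEIDENTIFIER,
            @msg_type SYSNAME,
            @body     VARBINARY(MAX),
            @xml      XML;

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;

        WAITFOR (
            RECEIVE TOP (1)
                @handle   = conversation_handle,
                @msg_type = message_type_name,
                @body     = message_body
            FROM CentralReceiverQueue
        ), TIMEOUT 1000;

        IF @@ROWCOUNT = 0
        BEGIN
            COMMIT;
            BREAK;
        END

        IF @msg_type = N'//Machine/SensorData/Request'
        BEGIN
            SET @xml = CAST(@body AS XML);
            -- Shred @xml into the central table here; the XML side of this
            -- is discussed in the comments below and in the related question.
        END
        ELSE IF @msg_type IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                              N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
        BEGIN
            END CONVERSATION @handle;
        END

        COMMIT;
    END
END;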

To send row(s) as XML, you can use this statement in the trigger:

-- Inserted holds the newly inserted rows inside the trigger.
DECLARE @msg XML;
SET @msg =
    (
        SELECT * FROM Inserted FOR XML RAW, TYPE
    );

And then just send a message.
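
A minimal sketch of that sending step, reusing the placeholder service, contract and message type names from the question; in practice you would typically cache and reuse the conversation handle (see Reusing Conversations in the other answer) rather than begin a new dialog for every row:

DECLARE @handle UNIQUEIDENTIFIER;

-- Start a conversation from the machine to the central server.
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE MachineSenderService
    TO SERVICE 'CentralReceiverService'
    ON CONTRACT [//Machine/SensorData/Contract]
    WITH ENCRYPTION = OFF;

-- Send the XML built above as one message.
SEND ON CONVERSATION @handle
    MESSAGE TYPE [//Machine/SensorData/Request]
    (@msg);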

Jānis
  • Thanks! If I understand it well, the `FOR XML RAW, TYPE` is a Microsoft-specific extension. What is the reverse operation that takes the generated XML rows (as a received message) and inserts the appropriate values into a table on the other side? If I understand it well, the result is one value of the XML type with possibly many rows expressed as XML elements. How can I split that XML value? – pepr Jul 19 '12 at 08:24
  • In my example, each row is an element and the columns are attributes, but you can rewrite the example by adding "ELEMENTS" and then the columns will also be elements. You can get the data by using the "value" method of the XML data type: @msg.value('(Row/@ID)[1]', 'int') (a sketch follows below). – Jānis Jul 19 '12 at 08:49
  • @Janis: Please, notice the reference to the related question at the end of the updated question (http://stackoverflow.com/q/11563452/1346705). – pepr Jul 19 '12 at 14:55
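
For reference, a minimal sketch of the reverse operation discussed in these comments; the target table and columns are made up, and note that FOR XML RAW names each element row (lowercase) by default:

-- A received message body with two made-up measurement rows.
DECLARE @msg XML = N'<row MachineId="7" MeasuredAt="2012-07-19T08:00:00" Temperature="21.5"/>
                     <row MachineId="7" MeasuredAt="2012-07-19T08:00:05" Temperature="21.6"/>';

-- Shred the XML back into relational rows.
INSERT INTO dbo.CentralMeasurements (MachineId, MeasuredAt, Temperature)
SELECT  r.value('@MachineId',   'int'),
        r.value('@MeasuredAt',  'datetime2'),
        r.value('@Temperature', 'decimal(5,2)')
FROM    @msg.nodes('/row') AS t(r);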