Netnews Working group
Minutes of the 1st meeting in Berlin, April 22, 1996

Chair: Felix Kugler
Scribe: Felix Kugler

0. Introduction

Welcome to the first meeting. The participants list was passed around; there were 21 attendees. Nobody wanted to risk being scribe, and it took some time to find a "volunteer". Unfortunately, it was impossible later on to obtain the notes, so these minutes are based on the chair's sparse notes.

There were no changes proposed to the agenda.

1. News traffic analysis

Felix Kugler presented a number of slides showing how News traffic grew from March 95 to March 96, both in terms of articles (+40%) and data volume (+400%). The measurement point was a SWITCH News server. The alt hierarchy now dominates News traffic with approx. 85% of the total volume, and it has by far the highest growth. The daily traffic pattern shows that most articles still originate in the US: we see traffic peaks during US "active hours", corresponding to night to mid-morning in Europe. Interestingly, weekend data volumes are much higher nowadays, and the average article size is significantly higher at those times. Slides are available from

Conclusion: News traffic is growing at a breathtaking pace and adds up to a considerable share of the total traffic of a typical network. The lion's share of News volume is in the alt hierarchy. The good news is that Netnews traffic peaks at times when Europeans use less bandwidth for interactive work.

1.2. Analysis of today's News distribution paths

Heiko Rupp (XLINK) presented the model of his path analysis system. A number of well-connected sites define a special channel on their News server which pipes header information of every incoming News article into a Perl script. The script parses the Path:-header line and fills a database of transmitted articles between adjacent news servers. An extract of this database is sent to Heiko's workstation on a regular basis, where the data is combined and analyzed. Fancy scripts create Postscript maps automagically. The resulting maps are available on Xlink's WWW server. They are considered examples only, because only three measuring sites - located at XLINK (DE), SWITCH (CH), and University of Pisa (IT) - were used for the proof of concept.
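The core of the analysis described above can be sketched as follows. This is an illustrative reconstruction, not Heiko's actual code (which was Perl); the sample Path: header and server names are invented:

```python
# Sketch of the Path:-header analysis described above. A Path: line
# lists servers right-to-left in order of traversal, so each element
# received the article from the one to its right.

def adjacent_pairs(path_header):
    """Split a Path: header into (sender, receiver) hop pairs."""
    hops = path_header.split("!")
    # hops[i] received the article from hops[i + 1]
    return [(hops[i + 1], hops[i]) for i in range(len(hops) - 1)]

# Database of article counts per adjacent server pair (a plain dict
# stands in for the real database mentioned in the minutes).
transfers = {}

def record(path_header):
    """Count one transmitted article for every adjacent server pair."""
    for pair in adjacent_pairs(path_header):
        transfers[pair] = transfers.get(pair, 0) + 1

# Example article path (hypothetical server names):
record("news.xlink.de!news.switch.ch!serra.unipi.it!not-for-mail")
```

An extract of such per-pair counts, merged across measuring sites, is what the map-drawing scripts would consume.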

This work got much attention. Heiko will improve his scripts and announce them when ready. He takes care that no patches to the installed news server software are needed, which would make the package difficult and dangerous to install.

Two potential weaknesses were pointed out: the maps are based on article count rather than on transmitted data volume, which is considered more important for the underlying network; furthermore, the graphs do not show the direction of newsflow. It was decided to stick with what we have now and make this operational. In a later phase the package could be enhanced to address the problems mentioned before.

Three additional ISPs volunteered to install and run the scripts for "production": ACONET (AT), DEMON (UK), TELIA (SE). For a reasonable coverage, more measuring sites will be needed. This can be accomplished offline.

In the "production phase", data shall be collected once per week. More details will be announced later. Heiko's slides are available from

2. Minimizing Netnews resource usage

2.1. Transatlantic newsfeeds

It was agreed that avoiding waste of transatlantic bandwidth has top priority. Low latency interconnections between European endpoints of US-feeds shall minimize multiple transmission of articles over the expensive transatlantic links.

In fact, part of this News Backbone already exists, and it even crosses the boundaries of the "big" European service providers. Unfortunately, the latency is in many cases clearly too high, mostly due to overloaded News servers and links. Only a few servers have online monitoring facilities, so the performance of many servers can often only be guessed after a certain time, once local statistics are available.
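Where no monitoring facility exists, a rough latency estimate can still be derived per article, as the gap between its Date: header and its local arrival time. A minimal sketch, assuming the common RFC 822 date format (clock skew between servers would distort real measurements):

```python
# Hedged sketch: estimate feed latency as the difference between an
# article's Date: header and its arrival time at the local server.
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

def feed_latency_seconds(date_header, arrived=None):
    """Seconds between posting time (Date: header) and local arrival."""
    posted = parsedate_to_datetime(date_header)
    if arrived is None:
        arrived = datetime.now(timezone.utc)
    return (arrived - posted).total_seconds()

# Example: an article posted at 10:00 UTC arriving five minutes later.
feed_latency_seconds("Mon, 22 Apr 1996 10:00:00 +0000",
                     datetime(1996, 4, 22, 10, 5, tzinfo=timezone.utc))
```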

Based on the newsflow maps and "human experience" the newsfeed topology should be improved and a News backbone be realized. As soon as this fast European News Backbone works well and reliably, a reduction of the number of US-feeds can be considered.

Gerhard Winkler (ACONET) points out the value of local peerings, because bandwidth costs are negligible in most cases. However, local peerings might be difficult to arrange for competitive reasons when the newsflow between the peers is heavily unbalanced.

2.2. Coordination of intra-ISP News distribution

An attempt was made to get an idea from the attendees of how News is distributed within the international ISPs' networks. The resulting picture is very incomplete and is to be interpreted with caution.

DANTE: No managed News service so far, but plans to start a project. Central NRNs with multiple international links usually have good, bilaterally coordinated newsfeeds; singly attached NRNs have more problems getting a feed. Contact: Felix Kugler, SWITCH

EBONE: No managed News service, no central coordination. It is reported that the informal coordination works well and that every connected network can get a full feed for free from a neighboring network. Contact: Gerhard Winkler, ACONET

PIPEX: Runs several News servers so that there is usually only one feed on an international link. Contact: Mark Turner, PIPEX UK

The contacts mentioned above do not necessarily denote responsible persons, but usually well informed people.

2.3. Other methods to save resources

Dropping newsgroups/hierarchies with excessive volumes (all kinds of binaries) is considered a bad move, though pushing binary stuff onto ftp/WWW servers would be a reasonable goal. The problem is how to filter out binaries. The obvious first approach would be to filter based on the newsgroup name. However, binaries are often posted to non-binary groups exactly to bypass such newsgroup exclusions. Filtering based on article size will most probably not work either: it just leads to smaller but more fragments and reduces the probability that a binary gets transmitted completely, thus triggering numerous "please repost" requests.
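The two rejected approaches can be made concrete with a small sketch. The group-name markers and the size threshold below are invented examples, not values discussed at the meeting:

```python
# Illustrative sketch of the two filtering approaches and why each
# falls short. Markers and the size cutoff are hypothetical examples.

BINARY_MARKERS = ("binaries", "pictures", "sounds")
MAX_ARTICLE_BYTES = 64 * 1024  # hypothetical size cutoff

def looks_like_binary_group(newsgroup):
    """Name-based filter: easily bypassed by cross-posting binaries
    to groups whose names carry no binary marker."""
    return any(marker in newsgroup.split(".") for marker in BINARY_MARKERS)

def over_size_limit(article_bytes):
    """Size-based filter: posters respond by splitting binaries into
    more, smaller fragments, so the volume largely gets through and
    incomplete transfers trigger "please repost" traffic."""
    return article_bytes > MAX_ARTICLE_BYTES
```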

Mikael Kullberg (TELIA) remarks that it was probably a mistake to enlarge the maximum fragment size on Usenet from 64kB to typically 1MB. It is probably not possible in practice to go back to smaller fragments in a coordinated way, so we have to live with the big
fragments we have now.

The use of intelligent (dynamic) servers and proxies is considered interesting for leaf sites. It is not a topic for the WG at the moment.

3. Making the backbone more reliable - News backbone server requirements

There was only a short discussion due to time shortage. This topic is to be pursued on the netnews-wg list after the meeting. There was consensus that the following issues should be tackled:

  • latency goals
  • minimum expire period
  • minimum set of newsgroups
  • control message handling
  • handling of national/regional hierarchies
  • status monitoring facilities

Some sites already have online-monitoring facilities in place. The following were mentioned at the meeting:

PIPEX: WWW-interface (ex:
SWITCH: VT-100 based (ex: telnet, login: shownews)

A list of News servers with available monitoring information is planned to be set up after the meeting.

4. Tools

This agenda item, too, was postponed due to time constraints. It was pointed out that Dave Barr maintains very complete INN pages on his WWW server, with a bunch of useful tools. Check for more info.

innfeed, a new INN backend program, is now in beta phase. Though it still has some rough edges, it is considered an important improvement over today's nntpsend/nntplink programs. It will be part of future INN releases.

5. The future of Netnews distribution

Apart from improvements of the current News transmission technology, fundamentally different ways to transport News are possible. Keywords are satellite transmission and IP multicast.

Satellite transmission is already in use at least in the US, but none of the attendees had detailed knowledge or experiences about this.

Heiko Rupp (XLINK) gave a short introduction about News transport via IP multicast. UUNET obviously has experimented with this technology some time ago, but gave up for the moment due to limitations in their implementation (a 9 kB size limit imposed by UDP code) and the fact that no reliable multicast network is in operation yet. There was a presentation at a USENIX conference about their system. Check the document "Drinking from the Firehose" by K. Lidl and J. Osborne for detailed info.
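The 9 kB limit mentioned above illustrates the basic constraint of this approach: a single UDP datagram has a hard payload ceiling, so any multicast news transport must fragment larger articles itself (and, since UDP is unreliable, add its own recovery mechanism). A minimal sketch with invented address and limit values:

```python
# Hedged sketch of fragmenting an article for UDP multicast transport.
# The multicast address and the payload limit are illustrative only;
# reliability (loss recovery, reassembly) is deliberately omitted.
import socket

MCAST_GROUP = ("239.1.2.3", 5000)   # hypothetical multicast group/port
PAYLOAD_LIMIT = 9 * 1024            # the per-datagram limit cited above

def fragment(article, limit=PAYLOAD_LIMIT):
    """Split an article (bytes) into datagram-sized chunks."""
    return [article[i:i + limit] for i in range(0, len(article), limit)]

def send_article(article):
    """Send each fragment as one multicast datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    for chunk in fragment(article):
        sock.sendto(chunk, MCAST_GROUP)
    sock.close()
```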

6. Central info point for Netnews distribution

This agenda item was dropped. The WG will take care of this as soon as there is a need for an info point.

7. Closing

The netnews WG is likely to meet again at the next meeting, but a final decision is postponed.