Congress papers from the Planning Institute of Australia 2011 National Congress

Abstract
The following five papers were presented and peer reviewed for the PIA National Congress in Hobart in March 2011. These papers have been included because the editors feel they are important and should be available to a broad audience. However, please note that they did not go through the academic peer review process for Australian Planner.


Introduction
When the Resource Management Bill (RMB) was introduced to the New Zealand Parliament, the sponsoring Minister and consistent proponent of the resource management reform process, Dr (now Sir) Geoffrey Palmer, called it 'a milestone in resource management in New Zealand' which would 'provide for considerably greater efficiency in planning and consent processes' (Palmer, 1990, pp. 94–95). The reference to 'greater efficiency in planning' was the first signal of the complete change that would come in the way planners would be able, and would be expected, to practise their profession. Throughout the act's formulation from 1987–1991 there was no real understanding of the vast change that it would bring. No one would have predicted a decade later that Perkins and Thorns would conclude that consent processing, a central planning function, was 'a contested site with environmentalists considering that the councils operate too liberal a process whereas developers and other business people see councils as being too restrictive' (Perkins and Thorns, 2001, p. 649). This change in the public's view of the RMA and the planning system it instituted, and its effects on the New Zealand planning practice community, are the main focus of this paper. These are explored through three separate but interrelated perspectives. The first focuses on the impact of the introduction of the Resource Management Act 1991 (RMA) and the ways in which the planning profession was required to transform itself to meet the new act's vision of the profession's role that was embedded, albeit opaquely, in the act and its systems. The second perspective looks at how planners responded to the new statutory requirements and how this shaped the way planners viewed themselves. Finally, the paper will focus on the public and political expectations of the RMA and how and in what ways it affected the profession.
This is a paper that sits, for some uneasily, between what might be called 'story telling' or reflective practice and an inquiry, shaped and formulated within a traditional academic framework. This positioning is, however, a direct consequence of the author's own professional history. I was a practising planner who wrote plans, appeared as an expert witness, acted as a consultant to adjoining local authorities, processed resource consent/planning applications and generally practised as a planner under the Town and Country Planning Act 1977 and the RMA. That professional career spanned some 16 years and a number of localities, all outside the major metropolitan centres, followed by 15 years as an academic. I have also been a full (corporate) member of the New Zealand Planning Institute (NZPI) since 1986, have chaired and been a member of a NZPI branch committee and since 2008 have been a councillor of the NZPI. As such, my academic research in this area is strongly informed by my own practice career and my continuing contact with a web of current and past practitioners. I also have regular involvement with practice issues through work I undertake mentoring senior staff at a large local authority and ongoing contact with past students.
This provides a complex source of what some may view as 'anecdotal evidence', or a term I prefer, 'professional conversation', to meld with more academic sources. The use of professional conversation and storytelling, as Sandercock (2004) acknowledges, is used in other disciplines. In terms of planning, she concludes 'that stories and storytelling are central to planning practice, [and] that in fact we can think about planning as performed story' (Sandercock, 2004, p. 26). If planning practice is performed stories, then the actors in that story inevitably have an experience to share about their role. Further, the use of 'embedded researchers' is common in disciplines such as social anthropology, where the researcher will commonly interact with the community or group they are studying, often over a significant period. This approach has been used in planning research. Abram (2004) explored the behaviours and performance of public sector planners in a Norwegian municipality by undertaking fieldwork that 'consisted of an "immersion" in the practice of the municipality' (Abram, 2004, p. 38). Thus, a professional conversation, while a novel means of academic research, has both validity and utility, particularly in terms of investigating New Zealand planners in practice. Given the size of the population (4.3 million) and the even smaller number of planners (total NZPI membership including student members is less than 2000) there are few alternatives, particularly given the nature of the issues being explored.
My own involvement is acknowledged and as far as possible any personal reflections are buttressed by the reflections of others, and by other more formal sources such as the work of Perkins and Thorns (2001), McDonald (2005) and Jay (1999). This approach is, however, one that provides the opportunity to bridge the inevitable gap between the planning practitioner and the planning academic by reflecting some of the voices of the profession, and in so doing, gives validity to their concerns.
The response of planners, the planning profession and planning systems in other jurisdictions has been well explored by academics and the profession. Allmendinger (2001), Campbell and Marshall (2005), Campbell and Henneberry (2005) and Swain and Tait (2007) provide views on the changes to the British planning profession. March and Low (2004) and March (2007) have also examined the Australian profession, the performance of which is made more complex by that country's federal system. At the practice level, the Planning Institute of Australia (PIA) has comprehensively examined the issues facing the profession in the National Inquiry into Planning Education and Employment (PIA, 2004). Inevitably, it is important to define a profession and what it does, although extensive discussion is beyond the scope of this paper and is well addressed in Campbell and Marshall (2005). The New Zealand planning profession emerged out of a semi-professional organisation in the 1930s, to become a professional body in 1949, including time as an RTPI branch. In 1969 it became an independent entity with the creation of the NZPI (Miller, 2007). By the 1980s, planning education courses were subject to accreditation procedures by the NZPI. As elsewhere, the profession is based on the traditional model of planners being future oriented, possessing a unique body of knowledge inculcated by their planning education and training, and working to achieve an overall concept of the public good. It is a fairly traditional conception of planning, though one that has always involved some concept and processes of public involvement. In terms of the arguments March and Low (2004) make, the New Zealand planning system, in contrast to that in the state of Victoria, has more elements of what they describe as 'democratic planning' (March and Low, 2004, p. 45).
On reflection, it is this democratic planning thread that has always existed in the New Zealand planning system and which has made the implementation of the RMA difficult, as it produced a number of groups with expectations of its performance beyond the expected environmentalist/developer split.

The nature of reform
It is useful to commence with a brief overview of the reform process that produced the RMA. Further and extended discussions of the reform process can be found in a number of sources including Young (2001), Ericksen et al. (2003), and Miller (2011), but it is best summarised as an attempt to bring together, in a single piece of legislation, all acts pertaining to the use of any natural resource. Some 59 pieces of legislation, usually single-resource-focused legislation such as the Clean Air Act 1972, were collapsed into legislation with a single philosophic base and a single processing system. The philosophic base, which any action under the RMA was to achieve, was sustainable management as defined in s5. The focus was on the biophysical environment including the 'life-supporting capacity of air, soil, water and land', intergenerational considerations and 'avoiding, remedying or mitigating adverse effects on the environment'. Sustainable management was essentially a truncated version of sustainable development that sidelined economic and social concerns to concentrate solely on the environment. In 1991, it seemed quite revolutionary but in 2010, in the light of the promotion of the more comprehensive and integrated sustainable development, it seems strangely inadequate. As a senior practitioner pithily observed, the RMA was 'legislation designed to impress visiting parliamentarians from Sweden' (Nixon, 1998, p. 2).
While the RMA reform process was extensive and exhaustive in terms of consultation, there was no practising planner on the team developing the legislation (Young, 2001). By the end of the reform process, there were already growing concerns about its workability, and Bill Williams, in an incisive article in Planning Quarterly, the journal of the NZPI, stated 'there is not and never has been any evidence that reform on the scale proposed is justified, and that as events developed, confusion of purpose became more obvious' (Williams, 1991). The RMA reforms were part of the neo-Liberalist reform of New Zealand's society and economy, commencing in 1985. These reforms moved the country 'from a peripheral Fordist mode of regulation to one dominated by market regulation' (McDermott, 1998, p. 631), in which 'a set of ideas and analysis was imposed on the rest of the government economists and ultimately the nation' (Easton, 1997). This neo-Liberalist agenda produced redundancies, unemployment, welfare reform and a political and government system that would not contemplate any change in that philosophic approach until 1999–2000.
Local government, a major employer of planners, was reformed as it was seen as 'an insuperable obstacle' in the path of 'improving autonomy, efficiency and accountability' (Bush, 1995, p. 82). Efficiency, effectiveness and accountability became the guiding principles of local government reform and remain embedded in the legislation to this day. Local government reform was put in place through amendments to the Local Government Act 1976 (LGA76) from 1987 to 1989. The first was a physical restructuring, which reduced 231 territorial authorities to 59 district councils, 15 cities and four Unitary Authorities, which combined district and regional functions. The existing 22 regions, which had had no effective planning role up to this point, were reduced to 12, with the new regional councils being given both functions under the RMA and roles in hazard mitigation, pest control and regional transport. The mantra at the time was that the bigger units of government would allow the new organisations to, for instance, have their own planning staff rather than relying on consultants. This positive aspect was largely unachieved, as many of the larger units were large in area but poorly resourced. In the Dannevirke area, six local bodies were amalgamated to form the new Tararua District Council but still could not afford to employ any planning staff, and so relied on consultants for planning advice tailored to meet budgetary constraints.
If the physical restructuring of local government served planners poorly, the administrative restructuring was positively inimical to the practice of planning. The pursuit of efficiency, effectiveness and accountability under the amendments to the LGA76 saw extensive restructuring of all local authorities to establish a clear split between policy and regulation to ensure that there was no 'capture' of the policy makers, now powerful entities, by the implementers.
The RMA and transformation of the profession
For planning this split was problematic, as the planners who wrote the district plan were separated from those who implemented and administered that plan. This effectively severed the usual feedback loops that good planning relies on to ensure that implementation problems are fed back into the plan formulation process. The new Palmerston North City Council, for instance, was zealous in its policy–regulation split. Policy planners were removed to the fifth floor and were given new titles emphasising their policy rather than planning roles. They were, after a short period, paid a higher salary, which quickly undermined the collegial relationship with the consents' planners. The consents' planners were left on the third floor in a regulatory unit with dog control officers, building inspectors, liquor control officers and parking wardens. They were gradually discouraged from 'bothering' the policy planners, and as long-term staff who maintained connections with former colleagues now in the other sections left, informal connections withered. The consents' planners had real trouble maintaining their professional identity. They had to fight to avoid being put in a corporate uniform and were not allowed qualifications or professional affiliations on their business cards. As Dixon et al. (1997, p. 607) observe, this had 'the unfortunate consequence of regulatory activities, such as the administration of resource consents, being regarded as the "poor cousin" of policy matters'. Similar changes occurred in Britain, reflecting the attempt of the neo-Liberals to redefine the 'ambit of planning', which was accompanied by a fragmentation of work and a 'culture of audit' (Swain and Tait, 2007, p. 242).
In New Zealand, the audit aspect was often expressed in KPIs, which frequently included the suspect concept of 'the number of reports accepted without modification'. This is surely an invitation to provide reports expected to be acceptable to the politicians rather than giving the best professional advice. Both groups of planners were also managed by non-planners, who often failed to understand the nature of what they did or what it meant to be a professional. The practical outcome was that senior practitioners often had to explain to a manager why something could not be done, sometimes leading to charges of being difficult or obstructionist. Planning career paths, which traditionally started as a planning assistant and ended as City Planner, were completely disrupted. After the reforms, most planners could only hope to ascend to senior planner, after which they had to decide to become a generalist manager, move to a consultancy or remain at that level.
This policy–regulation split had varying impacts on planners, usually depending on how 'purist' their council was in terms of instituting the split. The Wellington City Council, which in McDonald's (2005) research indicated it was happy with such a separation, had by 2009 re-amalgamated its planning functions. Within a decade of the split being put in place, the role of the regulatory planner had fallen to an all-time low, with a limited study by a recruitment consultant revealing that consent planners were seen as being on 'the slow track to nowhere' (Alach, 2004, pp. 23–26). A consents position, as McDonald's (2005) study reveals, was a place where you stayed for as short a time as possible, involved low-level planning functions and was not a place to create a career. The situation was made worse by the tendency in the 1990s to use ISO accreditation, supposedly to improve planning systems, which frequently reduced consents' planners to box tickers. This was a perverse presentation of the public face of planning; the part the public is most likely to come in contact with. All of McDonald's respondents noted the policy/regulatory split had been inimical to the practice of planning and had significantly affected consents' planners. The combination of local body reform with the new RMA created a fundamental instability and uncertainty in the workplace. In turn, this undermined the professionalism of the planner, by making the process, not the knowledge and expertise of the planner, the driver of the consents' system.
This raises the issue of how planners should operate as professionals in the face of such significant institutional change, while maintaining the profession's central aim of achieving the public good. Planners in Victoria faced a similar challenge to their professional practice. March's (2007) study characterised the planner as being separate to the state and 'complementing its aims and activities as a means of achieving the public good' (March, 2007, p. 385). He concludes that it is at best naïve to see planners as neutral experts achieving the public good, but rather that they should be seen as 'a profession-within-governance' (March, 2007, p. 386). There is evidence in the New Zealand situation that the governance and institutional frameworks within which planners work have, in turn, shaped the concept of professionalism and planners' everyday professional conduct. Consents' planners lament the policy planner's lack of practical knowledge of the issues that arise at implementation. In turn the consents' planners are often driven by a 'customer service ethic', which seems to have become a substitute for the earlier concept of the public good. New graduates quickly abandon what they were taught, replacing it with the ethos and practices of their employer. With this comes a loss of professionally defined achievement based on the quality of the performance and outcomes. Instead, planning success is measured by the number of consents processed, sometimes achieved by methods such as 'hot desking', where a planner has no personal workspace and is given a daily/weekly processing target, an approach used by the Auckland City Council in the early 2000s. The origins of this approach were evident in Jay's (1997) earlier study.
This concentration on defining planning success through the speed of consent processing was institutionalised by the introduction of the Ministry for the Environment's (MFE) Annual Survey of Local Authorities in 1996. It measured planning performance largely through a series of simple statistics such as the percentage of consents notified and the percentage that were processed within statutory timeframes. The latter measurement took no account of what type of development was being processed. Under the RMA all consents are subject to the same timeframes, so a side yard intrusion has the same timeframe as an application for a 150-turbine wind farm. There is, moreover, no evidence that the timeframes were developed from any research, so there is no indication that they were ever realistic. In 2009, a similar uninformed timeframe of 9 months to process a major infrastructural project deemed to be of national significance was imposed through the new Environmental Protection Agency (EPA). One of its first consents, the Waterview roading project, produced a 37-volume Assessment of Environmental Effects, suggesting the 9-month timeframe will only be met by compromises in the process, probably with regard to public involvement. Discussion with MFE staff and others over many years suggests that the results of the Annual Survey were intended to be used to discipline planners into improving the timeliness of the consents process. When this did not have the desired effect, the 2009 amendments to the RMA introduced a requirement for a discount policy, whereby local authorities failing to meet the consent processing timeframes would be forced to refund as much as 50% of the consent processing fee. This requirement was introduced just as the economic downturn saw development work slow or disappear, reducing the usual income streams for regulatory units. Once again the process and its timeliness were judged more important than the quality of the consent's assessment.

How planners view themselves
The arrival of the RMA had an immediate impact on how the profession saw itself and what it represented. The profession generally has a low profile given that there are only 600 full members of the NZPI. Moreover, for many years the profession, through the NZPI, has adopted a minimalist media profile, which it is now finding almost impossible to reverse. All professions lay claim to a unique body of knowledge, which in turn fashions how they conduct themselves and the goals they pursue. The RMA made terms such as 'town planning' unacceptable, so the very title of 'town planner' that many planners had used was abandoned. Planners instead morphed into generalist titles such as policy analyst or became resource managers despite having no management responsibilities. Knowledge from the previous practice was deemed inappropriate and the most damning assessment of a district plan was to say it used a 'town and country plan approach'. This was compounded by the RMA's lack of recognition of urban issues and its relentless biophysical focus. Planners found they lacked 'professional knowledge and judgment in relation to biophysical processes' (Jay, 1999, p. 469) and were equally open to the charge that they were ignorant of the functioning of the market and business. Most planners were educated in planning schools with a design or social science basis and had little time to build up the new knowledge and skills that they were soon being criticised for not having. Given that the bedrock of their profession had been fundamentally changed, many planners began to use the term 'generalist' in descriptions of what planners did. This brought with it the danger that in the absence of any specialised knowledge they could be replaced by anyone, and they were. The arrival of the RMA increased the numbers of untrained people using the title 'planner', particularly in regional councils, where planners were also displaced by scientists who were judged to have the requisite knowledge.
Qualified or not, the actions of the untrained reflected on the profession as a whole, and this has remained a persistent problem.
Other criticism stemmed from planners being blamed for planning decisions that were made by the politicians. Politicians, until the Making Good Decisions Programme and accreditation of decision makers were introduced in 2001, were the most untrained part of the process but were entrusted with decision-making, the most important part of the process. There is no statutory protection for planners in New Zealand. In 1976, the profession declined to become a statutory profession, principally because of the fear at the time that it would exclude some unqualified but competent planners (Miller, 2007). Campbell and Henneberry studied a similar change requiring British planners to adopt 'a market-sensitive approach to development planning' (Campbell and Henneberry, 2005, p. 56) which their education had not prepared them for. Their study found this failure affected planners' ability to negotiate successful planning obligations, and that planners were resistant to incorporating the required market knowledge. In contrast, in New Zealand planners were perhaps too eager to try to use the new vernacular and the new approaches, such as effects-based plans, that were inherently flawed, leaving the planners as scapegoats for the inevitable failures. In the British study, it was evident that national level guidance could help overcome this knowledge gap. The RMA's cooperative mandate that allowed central government, through the use of National Policy Statements (NPS) and National Environmental Standards (NES), to create an overall direction on resource use and how adverse effects on the environment were to be dealt with at the regional and district level (Ericksen et al., 2003), was an equivalent mechanism. There was an expectation that MFE, like its predecessors, would provide a stream of helpful information on implementing the act, which would have the positive effect of ensuring that there were consistent approaches across the country.
Instead, the under-funded MFE retreated from offering any guidance, with its staff and resources being downsized throughout the 1990s. Politically, the decision not to promulgate any NPSs and NESs beyond the compulsory National Coastal Policy Statement until the mid-2000s left local authorities to develop their own varied approaches. In a pre-web environment, planners faced major hurdles in creating a new knowledge base and ensuring the act was implemented in the manner its authors intended. The Quality Planning website that was created in 2000 as a joint venture between MFE and a number of professional groups carries some 80 advice notes, most of which came too late to influence the first generation of district and regional plans now requiring extensive and expensive review. The problems of both the consent and policy planners were aggravated by the almost annual amending of the RMA, with major amendments in 1995, 2000, 2005 and 2009. This created constant change that undermined the development of robust plans and consents systems.
There is also evidence that one of the profession's key roles has been adversely affected by the changes. In a survey of graduate members of the NZPI in 2001, it became evident that many young planners were not being adequately mentored. The survey was repeated in 2009 with additional mentoring questions, which, as Table 1 indicates, revealed a rather disturbing picture.
In addition, those who found their mentoring good to excellent declined from 61% in 2001 to 54.5% in 2009. These figures may in fact be optimistic, as extended comments from the respondents revealed that many thought mentoring meant having a programme of skills enhancement or a casual buddy system. Discussion after a presentation based on these results at the 2010 NZPI Conference (Miller and Sweetman, 2010) confirmed that mentoring had low priority in many workplaces, with senior staff often being poor mentors as they had not been well mentored themselves. Thus, one of the fundamental aspects of the development of a professional is of variable quality and is relegated behind supposedly more important concerns such as meeting processing targets.
The public, the politicians, the profession
The RMA brought with it substantial public expectations of planners following the new legislation. However, the new Act was a product of 'deep ideological tensions' (Gleeson and Grundy, 1997, p. 294) and was heavily influenced by an economic assessment of natural resources. Developers and business interests failed to look beyond the promise of a standardised consent process, and the promise the system would 'deliver faster, more efficient process of approvals' (Heeringa, 1997, p. 31), to the environmental framework of that consents system. That system also involved the production of an Assessment of Environmental Effects (AEE), with the associated time and monetary costs, and the expansion of submission rights. Under the RMA, anyone anywhere was able to submit on a resource consent or plan with all the accompanying appeal rights. A submitter who was unhappy with the outcome of their submission could then, for a cost of $50, lodge an appeal with the Environment Court, which was already over-burdened with work. As planners struggled to create new processing systems and to determine what an adequate AEE was, in the absence of legislative guidance, criticism of planners became common in the business press. The Business Roundtable, a group dedicated to ensuring market-led reform, in 1994 produced an assessment by planning barrister Allan Dormer, which was roundly critical of the performance of both planners and local government. Business publications, The National Business Review (NBR) and The Independent, were vicious in their attacks on planners, most of whom had no opportunity to reply to any criticism levelled at them. One of the most common complaints was about requests for further information, called by the NBR 'ridiculous demands on applicants' (Hosking, 1997), as the common assumption was that planners used such requests as delaying tactics to legally extend the RMA timeframes.
Costs were a regular focus of complaint, with John Pfahlert, of the New Zealand Minerals Industry Association, calling for planners to be reined in by way of a Cost Compliance Unit (Pfahlert, 1995, p. 11). With the criticism came constant agitation for change to the RMA, in the expectation that legislative change would improve planning practice. It was a solution used repeatedly since 1991, bringing with it almost unmanageable administrative/process change. Complainers took their lead from Simon Upton, a young lawyer with a formidable intellect who was the Minister for the Environment from 1990 to 1999 and thus had a significant influence on the development of the Act. Upton had little respect for the planning profession, stating 'unlike architects, engineers or economists, planners are not technically skilled in any readily defined or tangible way other than being knowledgeable and proficient in the planning process itself' (Edlin, 1997, p. 12).
Planners had to cope with constant criticism from the Minister, who would use his obligatory NZPI Conference appearance to berate planners, usually announcing another change to the RMA. He often bypassed his own ministry staff to use 'contestable advice', often from Owen McShane. McShane, a former planner then a commentator in NBR, constantly criticised both planners and the Act. His major contribution, Land use control under the Resource Management Act, came in 1998, in what the Minister called 'quite unashamedly, a piece designed to provoke debate' (McShane, 1998, p. i). It produced the usual proposals for changes to the RMA and angst for the profession. That change was largely forestalled by the 1999 election defeat of National, although the party did not abandon its dislike of the RMA. In political campaigns from 1999 onward, the RMA was always represented as a 'roadblock to development', and reform of the RMA was part of the first 100 days programme of the National-led coalition that was elected in late 2008. That has produced a series of rushed changes, with even more poorly focused change to come, which the profession is again discovering is hardly offering the 'simplifying and streamlining' of its title (NZPI, 2010). This politicising of the RMA has increased pressure on the Act and has emboldened pressure groups such as Federated Farmers to launch a concerted campaign based on its 'Six Pack of Reforms' at the national and local level. The public, confused by the complexity of a resource management system which seems to value the environment over people, well illustrated by the immediate conclusion that environmental concerns in some way contributed to the Pike River mining disaster, have become alienated from the Act. The pressure on planners to avoid publicly notifying applications (normally less than 6% of resource consents are notified) can mean the first indication of development is the arrival of the bulldozer.
Not surprisingly, given the potential impact on treasured amenity values, there has been an upsurge in judicial reviews of notification decisions, with planners inevitably being blamed, often through poor-quality media coverage. Planners are always useful scapegoats because no council will allow staff to respond to media criticism. If there is a response it is often from a media officer or manager who cannot be expected to understand the issues. Campbell and Henneberry's work on the British planning obligation system concluded that the new system created 'tensions between principles and practice' producing 'confusion and uncertainty among planners' (Campbell and Henneberry, 2005, p. 57). This suggests that the change induced by the RMA and local government reform would inevitably create problems for planners, which were never successfully resolved as the continued amendments to the RMA repeatedly changed the nature of the problem.

Conclusions
Legislative change is always difficult for a profession because it potentially requires changes to the profession's outlook, actions and processes. However, change has to stop at some point to allow an act, its practices and its processes to stabilise. The RMA has never been given adequate time to bed down, and this has placed constant pressure on the profession, which must deal with continual change while still keeping the system operating. This has serious implications for innovation in practice, because it makes innovation costly: new approaches may be swept away by subsequent changes. It has created a system that encourages conservatism and avoids the risk taking that is often at the heart of developing new approaches. Planners writing first-generation district and regional plans produced huge documents for fear of leaving out something judged important. Second-generation plans appear to be more innovative, although given the persistent attacks on Horizons' planners over the One Plan, particularly from Federated Farmers, this innovation could be terminated.
The creation of toxic workplaces has, as in Australia, reduced the attraction of planning as a profession, while disillusionment among practitioners is easy to see. The revelation of the poor quality of mentoring in the workplace points to systemic problems within the profession that threaten its continuation. The last 19 years have demonstrated the relative fragility of the New Zealand profession and the NZPI. The NZPI has been unable to protect its members from the often unfair criticism levelled at them, although, as Campbell and Marshall (2005) observe, this might have happened even if it were a stronger organisation. In Britain, the RTPI was polarised with regard to its role as a learned society and as a 'validating and qualifying framework to ensure a continued output of effective professional expertise' (Campbell and Marshall, 2005, p. 210). Maintaining a distinct professional identity for planning is also an issue as planners become technicians rather than experts and disappear behind the label of 'generalist'. Planners, rather than being future-orientated and innovative, become timid technocrats with poor levels of professionalism. Given Hawkins et al.'s (1975) US study, which found that professionalism was at the heart of good planning outcomes, this has significant implications for the ongoing success of planning. There are also implications for building the trust that Swain and Tait (2007) identify as a necessary aspect of any planning process, particularly those which, like the RMA, have strong collaborative elements. The RMA, with its negative impacts on the roles of planners, has undermined the trust relationship between planners and those they plan for. It has inflicted similar damage on the trust relationship between planners and the institutions in which they work.
In the long term this damage to the trust relationship needs to be addressed if the planning system is to achieve what it should. Those facing similar change elsewhere need to place themselves, early in the process, in a position of strength and to be willing, in a unified manner, to become a leading part of any legislative change.

Community engagement with time-poor and seemingly apathetic citizens continues to challenge local governments. Capturing the attention of a digitally literate community that is technologically and socially savvy adds a new quality to this challenge. Community engagement is resource and time intensive, yet local governments have to manage on continually tightening budgets. The benefits of assisting citizens to take ownership, in collaboration with planners and local governments, of making their community and city a better place to live are well established. This study investigates a new collaborative form of civic participation and engagement for urban planning that employs in-place digital augmentation. It enhances people's experience of physical spaces with digital technologies that are directly accessible within that space, in particular through interaction with mobile phones and public displays. The study developed and deployed a system called Discussions in Space (DIS) in conjunction with a major urban planning project in Brisbane. Planners used the system to ask local residents planning-related questions via a public screen, and passers-by sent responses via SMS or Twitter onto the screen for others to read and reflect upon, hence encouraging in-situ, real-time civic discourse. The low barrier of entry proved successful in engaging a wide range of residents who are generally not heard from owing to their lack of time or interest. The system also reflected positively on the local government for reaching out in this way. Challenges and implications of the short-texted and ephemeral nature of this medium were evaluated in two focus groups with urban planners. The paper concludes with an analysis of the planners' feedback, evaluating the merits of the data generated by the system to better engage with Australia's new digital locals.

Introduction
We created Discussions In Space (DIS) (Schroeter and Foth, 2009) as a design experiment offering a novel, additional mechanism for residents of Brisbane, Australia, to engage in the consultation phase of an urban planning project conducted by Brisbane City Council (BCC). DIS facilitates a discussion and opinion forum around urban planning-related topics, issues and questions, which are promoted on a large, situated public screen (Figure 1). Passers-by can directly submit messages to the screen using their mobile phone's SMS, Twitter and Internet capabilities. The user-generated messages are moderated and displayed on the screen in real time, hence providing a platform for collective expression and public discourse.
The hypothesis of this research project is that, compared with conventional online forums or wikis, in-place digital augmentation of this kind offers significant new benefits for public consultation. It lowers the barrier for users of urban environments to access place-specific, planning-related information and to provide collaborative input about the future of a place where and when it is most needed, that is, in place and right now. By using location-based social technologies that the new digital locals are accustomed to, this study specifically aims at new ways to engage effectively with residents whom local governments generally have difficulty engaging. In our pilot study (Schroeter and Foth, 2009), which informed this project, planners referred to them as younger, time-poor, transient or apathetic 'backyard buddies'.
The following research questions guided the study presented in this paper: In what way is this system and the short-texted and ephemeral data it potentially generates valuable to a local council or urban planning project? Can the application collect responses that are useful in improving the urban planning process? How is it different from other consultation tools? And how could such systems be improved for the purpose of urban planning? This study's evaluation of DIS followed up on a successful trial over a three-week case study, where 656 posts were received from 225 distinct users, mostly under the age of 30. Three 90-minute focus groups were held with a total of 13 planning professionals. All of the participants were experienced in the public participation processes for urban planning. We presented a selected subset of screen responses to the planners to consider the value and usability of this type of feedback. This paper presents an overview of the DIS system, followed by a discussion of the findings of the study's focus groups. In the conclusions we look at DIS as a community engagement tool in the context of a spectrum of public participation.

Related work
The International Association for Public Participation (IAP2, 2007) provides a Spectrum of Public Participation that reaches from 'lower' levels such as informing and consulting the public, to 'higher' levels such as involving, collaborating with and empowering the public. Our goal is the creation of a new location-based social media tool that fits into these lower levels of informing and consulting the public and is compatible with the IAP2's (2006) Public Participation Toolbox, which will be demonstrated within this paper.
The new communication technologies and the resulting information society have been cited as having a range of benefits, including a role as 'democracy enhancers' (Gilder, 1990; Masuda, 1990; Rheingold, 1993), through the extended freedom of thought (Huber, 1994) and a faster flow of information (Nesbitt, 1982). Effective tapping of this potential has been raised as an issue for community engagement at the top tiers of government (AGIMO, 2008), and is especially relevant to the discipline of urban planning, where governance directly impacts the way people live, work and dwell.
Today's leadership challenge is that of engaging citizens in governance processes, including planning issues along with other policy areas (Lukensmeyer and Torres, 2006). Dahlgren (2009) and McAllister (1998) both refer to declining citizen engagement and express concern about declining citizen knowledge, along with a disinterest and distrust in politics and representative institutions (Gibson et al., 2008). MacNamara (2010) argues for the process of public consultation and citizen engagement to 'examine uses and effectiveness of interactive internet communications more broadly', and Bittle et al. (2009) provide promising examples of current online practices.
This paper explores the affordances, opportunities and challenges of location-based social media accessed via mobile phones (Goodman, 2005) and public screens (Lane et al., 2005) in order to achieve Jacobs' (1962) vision of cities as places with 'something for everybody, ... created by everybody'.

Discussions in Space
DIS was deployed in conjunction with Brisbane City Council's Bright Ideas Stand. The stand featured an area of approximately 3 × 3 metres of tiled flooring with a large printed map of the inner Brisbane area, an information poster, and a senior and a junior urban planner on weekdays between 10 am and 2 pm. During those times, passers-by could ask the planners questions and talk about their 'bright idea for Brisbane's future', imagining visions for Brisbane in 2050. To visualise their idea, participants playfully engaged with the map by placing building blocks on it (Figure 2).
The DIS application ran in parallel as an additional engagement tool of the installation, both while the stand was manned by BCC staff and outside those hours, daily from 8 am to 9 pm. Residents who had something to say could therefore choose between talking to the planners directly or sending a message to the screen while staff were present. When staff were not present, residents could opt for sending a postcard (the stand presented a stack of postcards and an embedded mailbox) or sending their bright idea to the screen. DIS has been deployed at several locations. The data presented to the urban planners and evaluated in this paper, however, stem from one successful installation in conjunction with the Bright Ideas Stand at an inner-city university campus over three weeks in March 2010. This particular location (Figure 2) was conducive to a successful trial because (i) DIS was featured on a large projection screen in a shaded indoor environment where it was easy to read; (ii) students and university staff were waiting for lectures to start, so they had enough time to interact with the system; and (iii) the campus is situated close to the CBD, so users would be familiar with inner-city issues.

Content considerations and moderation
The following points had to be considered when forming the topics and questions:

- Size limitation: allow for short replies, as responses are limited to 160 characters;
- Time limitation: easy to understand, allowing users to quickly think about a possible answer;
- Excitement and positivity: encourage contributions with positive and constructive suggestions about the bigger picture and the future of the city; and
- Balance between all of the above and the needs of the urban planners, who seek information about inherently complex problems.

Considering these aspects, we had to arrive at a compromise between the specific questions that the planners wanted answered as part of envisioning Brisbane in the year 2050 and questions that would be suitable for the medium. Each theme had a main heading plus one or two specific sub-questions to give participants several options for responding to the broad topic. They also reflected what planners asked residents face-to-face at the Bright Ideas Stand. The three themes we eventually used during the case study were:

1. Is Brisbane cool/uncool? (see Figure 4 below): what's your favourite hang out spot and what makes it so special? What makes Brisbane a good place to live, work or use?
2. Bus, cycling, train or car? (see Figure 5 below): how did you get here today? How could your daily commute be improved? What would convince you to leave your car at home?
3. Share your bright idea (see Figures 1 and 6 below): how would you like to see the city grow? What would be the most positive change?
Moderation of the screen was shared between three distributed researchers and one university staff member. The aim was to have at least one moderator on duty during operating hours, with only one moderator during busy times. Outside work hours, moderation took place via a mobile phone. Both the desktop and mobile phone moderation application received instant notifications whenever a new message was sent to the screen.
Moderators had to immediately label a message as 'on topic', 'off topic' or 'declined' to ensure the real-time nature of the application. 'On topic' and 'off topic' posts were presented on the public screen as soon as they were approved, while 'declined' messages did not appear. The screen application animated a rotation through the four most recent messages plus a random selection of older 'on topic' messages, which changed every 90 seconds to keep the content dynamic and interesting. Further, 'off topic' messages expired after 15 minutes to encourage and emphasise 'on topic' posts.
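The display rules described above (a rotation refreshed every 90 seconds, showing the four most recent approved posts plus a random selection of older 'on topic' ones, with 'off topic' posts expiring after 15 minutes) can be sketched roughly as follows. This is a hypothetical reconstruction for illustration only; the class, method and constant names are our own, not the deployed DIS code.

```python
import random

ON_TOPIC, OFF_TOPIC = "on topic", "off topic"
OFF_TOPIC_TTL = 15 * 60   # off-topic posts drop off the screen after 15 minutes
ROTATION_PERIOD = 90      # the displayed selection is refreshed every 90 seconds

class ScreenRotation:
    """Selects which moderator-approved messages to show on the screen.
    'Declined' messages are simply never added to this store."""

    def __init__(self):
        self.messages = []  # list of (timestamp, label, text) tuples

    def approve(self, timestamp, label, text):
        """Record a message the moderator labelled 'on topic' or 'off topic'."""
        self.messages.append((timestamp, label, text))

    def visible(self, now):
        """Messages still eligible for display at time `now` (seconds):
        on-topic posts stay indefinitely, off-topic posts expire."""
        return [m for m in self.messages
                if m[1] == ON_TOPIC or now - m[0] < OFF_TOPIC_TTL]

    def display_set(self, now, extra=4):
        """Four most recent eligible messages plus a random sample of
        older on-topic ones, recomputed each rotation period."""
        pool = sorted(self.visible(now), key=lambda m: m[0])
        recent = pool[-4:]
        older_on_topic = [m for m in pool[:-4] if m[1] == ON_TOPIC]
        sample = random.sample(older_on_topic, min(extra, len(older_on_topic)))
        return recent + sample
```

A driver loop would call `display_set` every `ROTATION_PERIOD` seconds; the random sample is what keeps older contributions resurfacing.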
Over the three weeks, 656 posts were received from 225 distinct users, mostly university students under the age of 30. Six hundred and seven messages (from 194 users) were sent via SMS and 49 (from 31 users) via Twitter. The users who sent the most messages were also the ones who sent the most 'off topic' or 'declined' messages (Figure 3). Overall, 154 messages were considered 'on topic', out of which 80 were randomly selected for the focus group study. The distribution of messages, especially the ratio between 'on' and 'off topic' posts, should be taken with a grain of salt, as studies at other locations have shown that it depends largely on the community dynamics around the screen.

Evaluation approach
To assess the value and potential of DIS as a planning consultation tool, 13 participants were invited to three 90-minute focus group sessions at the university. Two researchers facilitated the sessions, which were videotaped and transcribed as part of the analysis. The invited participants all had experience in engaging with local residents in a variety of planning-related positions, such as strategic planning (SP) or statutory planning (ST) within a council, or urban design or architecture (UDA). Six participants were working for one of three city councils in South East Queensland (Brisbane City, Logan, Sunshine Coast Regional) at the time of the study; three had worked for a council in the past but were now working independently as planning consultants (PC); a further three had always worked in industry (UDA); and one participant was a community worker (CW) heavily involved in organising weekly community events for residents in the CBD.
All three focus groups were set up to feature a mix of council and industry professionals, as well as a mix of levels of experience and age, ranging from five to more than 30 years of experience in the sector. The aim was to create heterogeneous groups conducive to a healthy discussion, rather than participants agreeing with each other because of similar backgrounds.
During the first part of the focus group sessions, participants were asked to reflect on the toolsets they had used in the past to engage with citizens. Particular emphasis was placed on general advantages, disadvantages and challenges of using these tools to inform urban planning decisions. This pre-discussion not only served as an easy ice-breaking topic, but also allowed analysis of participants' attitudes towards public participation: whether they were negative, cynical, indifferent, positive, enthusiastic or open-minded in terms of new media, or whether they favoured more traditional ways.
The second part started with a brief 5-minute introduction and demonstration of the Discussions In Space system and an overview of the results. Planners were then given a selected subset of the data in random order. The subset included 80 posts, which were sent to the system via SMS or Twitter during the case study at the university location. The selected subset of 80 messages all came from the pool of messages that had been accepted as 'on topic' during the moderation process. The participants were made aware that 'off topic' messages such as a simple 'hi' or jokes like 'you just lost the game', as well as inappropriate messages that were declined, were not included in the subset.
The participants were given about 5 minutes to read and sort through the subset at their own pace, slowly grouping them into stacks, which they were then asked to label in the way they thought appropriate. Finally, everyone reflected on their initial thoughts and presented their way of grouping and labelling the messages, which eventually led to a discussion amongst the participants.

Findings
Overall, the initial reaction of the participants towards DIS as an engagement tool was positive. The two aspects that stood out as most appreciated by all participants were the positivity and the brevity of the submitted messages.

Positive, brief
The positive nature of the posts was identified by the focus groups and it was seen as a 'positive consultation... a way of getting people more interested and more involved in the process' (PC Alan) and 'people generally being overall very positive' (IP Paul). IP Paul was also excited to note that it was 'lovely' to see residents 'acknowledging the culture' and 'recognising the personality of the city' as well as 'defending the unique character' (Figure 4).
Indeed, of the 208 posts which were related to the topic, only 28 (13%) had dominating elements of a complaint (typical posts are depicted in Figure 5).
Most of these angry messages were related to transport issues. In general, transport was a central theme throughout the installation. Even the first topic 'Share your bright idea,' which was later followed by a specific transport-related topic, attracted many posts with a consistent message about accessibility and movement in the city.
This emphasis on transport is indicative of the number one issue in Brisbane, from heated discussions in online forums to community activist groups demonstrating against new roads or tunnels. One of the planners put it as follows: 'On any planning project, it's never a surprise that transport is the one thing that people will tell you the most about, whether it's complaining or solutions or the whole gambit. And I guess that's because transport affects people's lives every single day.' Every urban planner had been confronted with complaints from the public at some point in their career, so they were not surprised to see that 'people are complaining about traffic, or complaining about the bus' using DIS as well. Yet, IP Paul felt that people did not seem to be 'seeking validation,' but simply 'got a point to make.' He also assumed that the contributors 'didn't care if understood' and 'just wanted to feel good about themselves.' From his own experience, 'a long letter to the council is all about validating and being understood.' Overall, he felt the messages were constructive.
CP Dale summed it up as follows: 'when people write letters or submissions, they rant a lot. It comes down to one sentence, but they have written 10 pages about it. And ultimately those 10 pages... don't mean much... it's just a vent for them, an emotional outpouring.' Yet it is a legal obligation that all letters from the public are read and responded to in one way or another, which in circumstances like the above can be time-consuming and expensive. The design of the DIS system, on the other hand, does not allow messages longer than 140 characters for Twitter or 160 characters for SMS. As a result, users have to choose their words carefully to get their point across.
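The per-medium limits just described amount to a simple acceptance rule at submission time, which might look like the following minimal sketch. The function and constant names are illustrative assumptions, not the DIS implementation.

```python
# Length limits as described in the paper: 160 characters for SMS,
# 140 for Twitter at the time of the study.
LIMITS = {"sms": 160, "twitter": 140}

def within_limit(medium: str, text: str) -> bool:
    """Return True if the message fits the chosen medium's limit,
    which forces contributors to keep each post short and to the point."""
    try:
        return len(text) <= LIMITS[medium.lower()]
    except KeyError:
        raise ValueError(f"unknown medium: {medium!r}")
```

In practice the carriers enforce these limits anyway; a check like this simply makes the constraint explicit in the application.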
The evaluation of the focus groups showed a consistent agreement amongst all participants that this limitation was in fact one of the system's 'biggest strengths.' They greatly appreciated that the messages had a 'nice bite' and were 'really quickly [to] understand,' that they were 'succinct,' 'straight to the point,' and had a general 'focus on getting the idea out.' In this context, CW Anne raised the concern that 'writing short and pithy messages is a skill that not everybody in the community has.'

Light bulbs: brainstorming during the visioning stage
The participants also felt that the system succeeded in encouraging people to think about new ideas, to be innovative and positive about the future, rather than being negative about the present. The participants unanimously saw DIS as an ideal 'IN to a project, to start the conversation'. There was a sense that the DIS could be used at a lot of different stages of the planning process, especially brainstorming during the visioning stage.
Participants said that, with a lot of consultation, the feedback is confirming what the planners already know rather than new ideas, whereas DIS provided a blank canvas for their own ideas. It was different from presenting residents with predefined ideas that may actually inhibit the creative process. CP Carol confirmed that the ideas were more 'outside the box' than those gathered through more traditional means, such as CPTs (Community Planning Teams, aka Community Advisory Groups). This also excited PC Alan: '[DIS] strikes me as coming up with lots of information that's outside the scope of the potential project... which is quite exciting!' The deliberate selection of broad topics, in combination with the public, electronic, anonymous brainstorming nature of DIS, as well as specifically targeting a younger audience (compared with the usual demographics who attend CPT meetings), seems to be fertile soil for new exciting ideas to be born. 'Jumping castles on the Brisbane River' is a prime example of an outside the box 'bright idea'. A resident sent it in, probably thinking of it as more of a joke. Nevertheless, even those presumably 'silly' ideas triggered a thought process around how the meaning of such a message, providing more elements of fun around the river, could possibly be implemented. It was suggested that some of these bright ideas provide a 'spur' for further thought and development in terms of values people hold on certain issues, which could lead to a more informed planning process.

Catalyst for social innovation
While messages that are outside the scope of a planning project may not directly influence the project itself, SP Laura recognised the great opportunities that may arise from tapping into the local, crowd-sourced pool of creativity in this way. By ignoring the prompts asked on the screen, residents were 'pushing into things like the culture and the identity and social inclusion and all those other things that are integral to the way the city... changes and develops.' Through this, residents suggest further innovations and community development beyond the planning issues at hand, which may lead to enterprise opportunities or responses by non-profit organisations.

Uncovering the community's cool, natural lingo
There was also a sense in analysis of the messages that there was 'a language there that we don't really use.' While the contrived language that often comes with formal responses was not found, the natural language of the community in their discussion of the issues was revealed. Learning and using the language of the target group is useful in communicating planning issues. SP Dale also highlighted the importance of uncovering the community's language in this way. He noted that it was common practice to enhance bar graphs resulting from market research with 'revealing' comments from the public; that is, 'a qualitative piece of information that represents some other information that you've uncovered elsewhere'. He thought the DIS data offered such symbols of concepts in the community, which is not only interesting for a politician, but also for community members.

Community issues and hotspots
Most messages did not have a personal agenda, nor did they seek purely personal benefits: only 27 of all 656 messages (4%) showed such tendencies (examples are given in Figure 6). Despite the brevity of the messages, UDA Paul was surprised to find that 'a lot of people were joining the dots a bit too [and] making those connections between urban consolidation and so on,' as indicated by the following post: 'More affordable accommodation in the city means less cars on the road'. Furthermore, when observing posts in support of or against new bridges in Brisbane, SP Laura found DIS to be a good source for 'flagging... some potential hot spots [or] conflicts'.

Targeted and location specific
Planners also considered it an advantage, depending on the nature of the urban planning project, that DIS targeted a specific user group within a certain area. With other ICT tools, information about the users is often lost or reduced to residents' addresses, rather than revealing which parts of the city they use. For example, the most vocal residents complaining about changes to a city square on an online forum might in fact rarely visit the square; an online forum cannot tell whether posters actually use the urban place in question. DIS, on the other hand, engages residents in situ: the application runs on a screen situated at the site that is the subject of the discussion. As a result, feedback from the public is focused on a smaller, location-specific community.

Talking versus texting
During the case study, DIS was located next to the City Council's 'bright ideas' information stand, which was occupied by urban planners between 10 am and 2 pm. Residents were able to talk to them to ask questions about the planning project and tell them their ideas, which the planners would record. The collected data indicate that some users who contributed a thoughtful idea or message deliberately chose to send it in as a short text to be displayed on the public screen, rather than talking about it to the planners who were nearby.
For those users, DIS provided the freedom to contribute on their own terms, to spend as much or as little time on the issue as they wanted, without being tangled up in a conversation. They may simply have been too intimidated or shy to talk to a planner. Others did not have any questions, nor did they want a response from the on-site staff; they just wanted to say what they had to say, in a public way, and make sure it was recorded. As pointed out during the focus groups, talking to a planner does not necessarily mean that one's voice is accurately recorded: despite best efforts, misrepresentations are possible. The planners who had previously been involved in such engagement processes on the ground admitted that this was indeed a challenge and hence welcomed this authentic, user-generated input channel.
SP Kirstin also found the messages to be 'a lot more honest', which she attributed to the fact that these contributors did not have to talk directly to the planner, but could contribute more anonymously. SP Laura thought that communicating with a peer-based, broader pool of ideas might be more palatable for people. She noted this in light of the 'dynamics around citizens and their governments... the prevailing idea that people are losing faith in their governments and don't necessarily want to interface with their representatives or officers of governments.' The research of Cox (2002) supports this concern about declining trust and the disengagement of citizens from formal community processes within Australia. Cox noted the importance of building social cohesion and engagement for effective democracy and healthy civic involvement, supporting the concept of alternative avenues of communication. The ability of DIS to relay messages directly back to the community and share them for all who are there to see was seen as a definite strength of this tool: the community does not see it as a one-way feed but as a shared forum.

Challenges and improvements
Lacking scope

SP Kirstin thought DIS suffered from a 'lack of scope.' Its fast-paced nature is required to grab the attention of passers-by quickly; the trade-off is that there is neither enough time nor enough screen real estate to provide more background information on the plan or the question. On the other hand, as mentioned earlier, it is also this lack of scope that emphasises the opportunity to focus on the residents' free-flowing ideas, rather than the planners' ideas.
The challenge of finding questions that balance brevity and meaning should be addressed in close collaboration with marketing or communications professionals. In an interview during the pilot study of the project, an urban planner stated that there is a trend to work increasingly with the marketing and media team on these issues, whereas previously planners would struggle to convey information to the public that was too technical or analytical. Now 'there's a whole group of different people with a different technical set who are about communications'. Likewise, when developing the questions for the screen, we re-worded the planners' questions and topics into simpler, everyday language.

Lacking context
The urban planners, as discussed above, appreciated succinct messages with 'bite'. On the other hand, short texts may also lack context: what was the motivation behind an idea, why does the resident think the particular issue is important, and sometimes, simply, what did the resident actually mean? Just as SMS texts without depth and clarity can confuse their recipients, so can posts to the screen. In relation to some of the posts that asked for fewer or more bridges in Brisbane, SP Carol mentioned that the posts are not useful for planners unless the 'why' question is addressed: 'But if you can get the context for that, that could be because they don't like the Kurilpa Bridge [or] because you can't get from here to the pathway.' Some of the context of a message can be established by observing the space around the screen, e.g. whether a message was meant to be serious, while other context can be established by looking carefully at the data itself, for example, where several messages came from the same user.

Follow-ups
DIS allows users to be contacted afterwards to explore comments in greater depth if a planner decides an idea is worth investigating further. Follow-up questions can be sent to residents over the same medium they chose, SMS or Twitter. During the installation of the case study, the public screen highlighted in a footnote that users might be contacted for research purposes. This point was also outlined in the Terms and Conditions, which were promoted on a poster near the screen and accessible online. This unobtrusively allowed residents to respond on their own terms, in their own time, and in whatever detail they saw fit. A great advantage is that users do not have to go through a tedious registration process to leave their details, as in many online forums: their mobile number or Twitter account ID is automatically recorded and saved alongside their message.
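The record-keeping described above can be sketched as a minimal data structure. The field names and routing function below are illustrative assumptions, not the actual DIS implementation:

```python
from dataclasses import dataclass

@dataclass
class Message:
    """A screen post with the reply channel recorded alongside it (fields are hypothetical)."""
    text: str
    channel: str      # "sms" or "twitter"
    sender_id: str    # mobile number or Twitter handle

def follow_up(msg: Message, question: str) -> str:
    """Route a follow-up question back over the same medium the resident chose."""
    if msg.channel == "sms":
        return f"SMS to {msg.sender_id}: {question}"
    return f"Tweet to @{msg.sender_id}: {question}"

m = Message("More bridges please", "sms", "+61400000000")
print(follow_up(m, "Which crossing do you have in mind?"))
# → "SMS to +61400000000: Which crossing do you have in mind?"
```

Because the channel is stored with every post, no registration step is needed; the reply address is implicit in the message itself.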
Users who were serious about the idea they sent to the screen responded quite positively to a short follow-up. During the study, nine out of 20 users agreed to be contacted via phone; considering that this meant being called up by a stranger, this is a good return. Follow-ups can also occur in real time and be made publicly visible. For example, one way to improve the system in the future is for planners to take a more active role in running a DIS application themselves, rather than leaving it to the researchers. As part of this role, planners would not only take care of the moderation process, but also post follow-up questions, replies or clarifying information directly to the individual, or even redirect them back to the screen for the community to see.

Lack of control
One of the concerns expressed by planners was the lack of control over this medium compared with others, in particular face-to-face interaction. UDA Dan, for example, who was sceptical about electronic media in general, said: 'The problem with all electronic media is [that] you are going to get bombarded with all these answers.' He preferred face-to-face settings, where 'you can control a little bit more', e.g. by directing residents towards a certain issue. The value of spontaneity in adjusting questions in the face-to-face context was also noted: 'Whereas these [questions in DIS] being something that's published, they are almost set in stone' (CW Anne). DIS does allow questions to be changed relatively easily, but how the questions are being interpreted may not be immediately obvious.
The aim of the system is not to provide the same kind of control and richness one can achieve in a face-to-face conversation within a public meeting, workshop, focus group, etc. Within the spectrum of public participation, such tools are typically used to more actively involve or collaborate with a subset of selected residents. DIS is a novel consultation tool that aims at broadening the toolset for gathering public feedback. Nevertheless, ways to increase the level of control, beyond the improvements discussed above, will be investigated further.

Quantitative analysis and ratings
When sorting through the message data, UDA Dan noted: 'It can be potentially overwhelming when you get all those responses.' He was not referring to the sheer volume of messages that DIS generated, which is much less of a problem than with other electronic media because the location constraints of the application limit the user base. He was referring to the fact that, within the given time, he could not establish any common themes (besides public transport) quickly enough. He further pointed out that targeted 'micro' messages such as 'My favourite hang-out spot is the Leaky Cauldron. Or maybe Florean Fortescue's Ice Cream Parlour' might 'at first glance . . . seem insignificant, but actually it may mean something if it is related to specific areas that people are using', and that these messages are only useful if they are part of a recurring theme. In other words, such a message only conveys planning-related information to an urban planner if similar messages keep recurring.
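Dan's point that 'micro' messages only become meaningful as recurring themes could be operationalised with a simple keyword tally. The theme keywords and messages below are invented for illustration and are not part of the DIS system:

```python
from collections import Counter

# Hypothetical keyword lists for a few planning themes
THEMES = {
    "public_transport": ["bus", "train", "ferry", "transport"],
    "green_space": ["park", "trees", "garden", "green"],
    "hangouts": ["hang-out", "hangout", "cafe", "spot"],
}

def count_themes(messages):
    """Tally how many messages touch on each theme (at most once per message)."""
    counts = Counter()
    for msg in messages:
        text = msg.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

msgs = [
    "More trains to the city please",
    "My favourite hang-out spot is the cafe on the corner",
    "We need better bus services at night",
]
print(count_themes(msgs))  # → Counter({'public_transport': 2, 'hangouts': 1})
```

A tally like this would surface the recurring themes Dan was looking for far faster than reading each post in sequence.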
Some non-users who were interviewed at the site during the case study revealed that they at first intended to send in a message, but upon observing the screen for a while, found that someone else had already sent in a similar one. Hence, they refrained from their original intent. This indicates a problematic tension between the user behaviour and the planner's needs. Therefore, a simple, unobtrusive mechanism for users to indicate that they support a specific post will be investigated, possibly similar to how ratings in online forums provide quantitative clues about the messages most supported by a forum's community.
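The proposed support mechanism could be as simple as a deduplicated vote tally per message; a minimal sketch, with hypothetical identifiers, of how such ratings might be recorded:

```python
from collections import defaultdict

# message_id -> set of supporter ids; using a set gives one vote per user
supports = defaultdict(set)

def support(message_id, user_id):
    """Record that a user supports an existing post (duplicates are ignored)."""
    supports[message_id].add(user_id)

def tally(message_id):
    """Number of distinct supporters for a post."""
    return len(supports[message_id])

support("msg-42", "user-a")
support("msg-42", "user-b")
support("msg-42", "user-a")   # duplicate vote, ignored
print(tally("msg-42"))        # → 2
```

This would let the non-users described above register agreement without posting a near-duplicate message, giving planners a quantitative signal alongside the text.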

Fitting into the IAP2 toolbox
The IAP2 Public Participation Toolbox (2006) lists a wide range of techniques, grouped into three categories: to share information; to compile and provide feedback; and to bring people together. DIS addresses a gap among the tools to 'compile and provide feedback': none of the tools listed gathers comments at an interactive information kiosk. Only four tools (comment forms; resident feedback registers; internet surveys/polls; telephone surveys/polls) are listed as 'providing input from individuals who would be unlikely to attend meetings' or as being 'useful in gathering input from "regular" citizens'. None of them, however, provides the same unobtrusive level of accessibility, directly situated within an urban place, as DIS. Based on the structure and vocabulary of the IAP2 toolbox, Table 1 analyses DIS against three criteria: 'Think it through' describes critical information that is useful for any urban planning project prior to setting up the system; 'What can go right?' evaluates the opportunities of DIS as a public participation tool; and 'What can go wrong?' highlights potential risks and pitfalls.

Discussion
The overall positivity the planners noticed within the given message data set cannot be attributed exclusively to the system itself and the demographic of users it attracts; many factors influenced this outcome. First and foremost, the topics and questions on the public screen were specifically chosen with the goal of triggering a positive response from users. A more provocative or controversial topic, about which the community feels more strongly, would possibly generate more rants or complaints, or be more deliberative in terms of views from either side of an opinion spectrum. However, the number of posts that replied to a previous user's post (12) was small across all 607 messages received over the course of the case study. Secondly, the selected subset of 80 'on topic' messages handed to the planners was not representative of all messages. Note, however, that this does not imply that 'off topic' posts were mostly negative. On the contrary, the system received everything from love messages to marriage proposals, and generated a lot of fun amongst the users. The offensive messages came from a small minority of users (Figure 3).
Nevertheless, in the discussion leading up to evaluating the dataset, the urban planners mentioned the general positivity they experience from residents in their 20s and early 30s when thinking about the future of the city: 'They are usually the voice of reason, [who] temper the ideas we get from the older generation' or who are more likely to think that change 'might not be such a bad idea' (SP Ron). So although the positivity the planners appreciated so much cannot be attributed exclusively to the system itself, the combination of a broad topic and a young demographic without emotional attachments to an urban place likely contributed to these findings.
This study was undertaken to reveal the potential, merits, opportunities, challenges, implications and risks of real-time, user-generated public screen systems like DIS for public participation purposes. This paper focuses on the response of planners from a range of backgrounds to the output generated by this technique. From this perspective, the findings reveal a tension between the (appreciated) brevity of the messages and the lack of context, a tension inherent in any trade-off between the length and depth of feedback. If richer feedback from 'regular' citizens is required, better tools are available, such as telephone or in-situ interviews; however, these are also more resource intensive and expensive. DIS may have the bigger price tag for initial set-up (secured urban TV screens are about 7–10 times more expensive than consumer TVs), but if a local authority already has access to well positioned public or urban screen infrastructure, e.g. event screens, the cost of running the system is very small (SMS gateway and web hosting are under $100 per month). The cost of the real-time moderation largely depends on how busy the system gets, which in turn depends on the location and positioning of the screen. In busy environments, one dedicated moderator is recommended for the task. In other circumstances, for example when expecting around one message per hour, several distributed, trained moderators, who might be planners or professional staff going about their usual duties, can easily share the task from anywhere with an Internet-connected desktop or mobile phone.
The benefits of a more active involvement of the moderators in following up specific posts were touched on in the previous section. Interviews with the users revealed that some were indeed confused about the relationship between the public screen and the information kiosk, and questioned whether their contribution would lead to anything beyond its appearance on the screen. The system in its current form does allow moderators to respond via the screen in a way that is highlighted to the users; however, the case studies undertaken to date have not leveraged this opportunity. Future work should investigate the effects this could have on the perceived value of the system in public, as well as on the data generated.
In its current form, DIS delivers useful results as an electronic, public brainstorming tool of the kind that may be required during the visioning process of an urban planning project. The crowd-sourced creativity is encouraged by the system's anonymity, the low barrier to entry, and by demonstrating to users that any non-offensive message is valued. Future installations should examine the influence of branding the installation as a local government initiative (rather than a university research project), as well as of active participation and engagement in the public conversation on the urban planners' behalf. Mounted on a truck, the system could serve as a mobile, interactive probe to gauge public opinion about city issues in various neighbourhoods or suburbs.

Conclusion
DIS is a new tool that offers an innovative way to seek feedback from a more general, possibly more apathetic or time-poor public, who nonetheless hold valuable opinions about how a local urban place could be improved. It provides a low barrier to entry, with input through the personal devices users feel familiar and comfortable with. By making the two-way communication between local government and its residents more publicly accessible than a purely online channel, providing a physical, situated window into the local digital conversations that is open to all ages and levels of interest, it helps a more informed community emerge.
In a world where social connections are maintained across continental boundaries and globally accessible information creates information overload, DIS emphasises the importance of place. IAP2 provides a valuable toolkit for planners to assist with their responsibility to engage and consult the public on a range of issues. DIS offers a new tool for the kit, aimed at engaging some of those citizens who are not effectively reached by other tools, specifically younger people. As the use of interpersonal communication technologies evolves and mobile access to the Internet becomes more widespread, DIS takes advantage of these developments and the shift in popular communication methods. It also places the interaction visibly within a public place, opening the forum to all present, with real benefits for open and interactive communication with the located audience.

PAPER 3
The compact city and sustainable transport: another look at the data Paul Mees*

RMIT University, Australia
The suite of policies known as the 'compact city' has emerged as the most popular prescription for reducing automobile dependence. The evidentiary base for the compact city draws on previous studies that compared population density and automobile use in a range of metropolitan areas, and that concluded that density is strongly related to automobile use. This paper re-examines the data on transport and density in US, Canadian and Australian cities, using census data on mode share for the journey to work, and on the density of 'urbanised areas'. This comparison is possible because the three countries' census agencies collect density and mode share data on a comparable basis, although the Australian and Canadian agencies only publicly released density data following their 2006 censuses. These data allow cross-city and cross-national comparisons to be performed on a more accurate basis than was possible at the time of the earlier studies. Standard statistical techniques are used to examine the relationship between density and the mode share for automobiles, public transport and walking/cycling. The relationship turns out to be different from that reported in previous studies: public transport and automobile use are only weakly correlated with density, while walking and cycling show no correlation at all. The significance of these results is discussed in light of equivalent British and European density and travel data, and some figures on centralisation of employment. The conclusion is that the effectiveness of the compact city model has been overstated, and the effectiveness of transport policy itself understated.

Introduction: sustainable transport and the compact city
Growing concerns about climate change and insecure oil supplies have highlighted the problem of automobile dependence in cities. There is widespread agreement that cities must become more sustainable, by reducing trip lengths and shifting travel to walking, cycling and public transport (e.g. Garnaut, 2008; MCU, 2010; Mees, 2010).
The most popular recipe for sustainable urban travel is the suite of policies that have come to be known as the compact city. Low population densities are regarded as the main cause of automobile dependence, with Los Angeles held up as the pre-eminent example of the connection between 'sprawl' and excessive automobile use. By contrast, Portland, Oregon is hailed as a model of 'smart growth', with increased urban densities due to Transit-Oriented Development. One recent report puts the argument this way:

The garden city movement promised us the dream that we could live in the countryside and work in the city . . . Overlay this mindset with an over-reaction to the ills of the industrial city and the emergence of the motor car and you have the root causes of the current form of our cities, namely low density, widely spread, activity zoned cities where the motor car dominates our public realm and public transport has been largely marginalised. (MCC, 2009, p. 7)

The report's solution is to adopt what it claims are 'the six key ingredients of successful cities', of which 'the question of city density is arguably the most important' (MCC, 2009, pp. 10–11).
The idea that sustainable transport depends on high urban densities leads many urban analysts to advocate increased densities, but it can also be applied in reverse: in the absence of sufficiently high densities, it is often argued, alternatives to the car are impossible to achieve. For example, during last year's Victorian election campaign, in which the poor state of public transport was a major issue, Monash University art critic Robert Nelson argued in the Melbourne Age:

We shouldn't blame public transport . . . No public transport system can cope with low density. All the good systems in the world belong to dense cities; and none of the sparse cities has a good system. (Nelson, 2010)

Density-based responses to the environmental problems of transport tend to downplay the importance of transport policy itself, yet transport policy can be changed much more rapidly than urban form, and the financial and political costs may also be lower. Before accepting that sustainable transport requires changes in urban form that may be impossible to achieve, policy-makers should carefully scrutinise the evidence supporting these arguments. This has not generally been done.

The compact city in Australia
Most contemporary commentary on the density–transport connection takes as its starting point the multi-city comparisons of Newman and Kenworthy (1989, 1999), particularly their famous graph in which transport energy use (an indicator of automobile dependence) and density are mapped across a range of international cities: see Figure 1. The graph shows a strong correlation between the two variables, and a threshold of 'about 20 to 30 persons per hectare' below which automobile use appears to increase exponentially (Newman and Kenworthy, 1999, p. 100).
Critics of Newman and Kenworthy's work point out that correlation is not the same as causation, and that other factors also influence automobile use. A recent report by the US Transportation Research Board argues:

Aggregate analyses such as Newman and Kenworthy's mask real differences in densities within metropolitan areas, as well as in the travel behaviour of subpopulations, that vary on the basis of socioeconomic characteristics. For example, central cities may house disproportionate shares of lower-income residents, who are less able to afford owning and operating an automobile, and younger people and older households without children whose travel is below average. On the other hand, suburban areas tend to include a disproportionate share of families, who are often in higher-income groups with higher levels of automobile ownership and travel demands for jobs, education and extracurricular events. (TRB, 2009, pp. 33–34)

Although the general criticism that correlation is not the same as causation is undoubtedly correct, the TRB's specific examples reveal a US bias. In Australian cities, as well as much of Canada and Europe, inner cities house the wealthiest sections of the community, with lower-income groups increasingly forced to middle and outer suburban locations. But the pattern of lower automobile use in the inner city and higher rates in outer areas can still be found, especially in Australia, where the dependence of low-income outer-suburban residents on cars is a subject of increasing concern (Dodson and Sipe, 2008).
A more cogent criticism is that closer scrutiny of the Newman and Kenworthy data suggests that the pattern may be less simple than has been suggested. Mindali et al. (2004) analysed the original data set from the 1980s (Newman and Kenworthy 1989), paying particular attention to a cluster comprised of US and Australian cities. Among this cluster, there was no relationship between density and automobile usage: Australian cities had similar densities to their US counterparts, but dramatically lower car travel. A similar pattern can also be found in the 1990 data shown in Figure 1. The Australian cities are all to the left of the trend line, with lower automobile use than their densities would suggest; US cities are on the line or to the right. In fact, the Australian cities' automobile usage rates are slightly closer to the denser European cities than to the US cities, which have identical average densities to their Australian cousins (Canberra is an exception). Canadian cities also appear to the right of the trend line in Figure 1, with similar automobile usage rates to Australian cities, but much higher densities.
The two biggest anomalies in Figure 1 are Toronto and Los Angeles, both of which are well to the right of the trend line, because they have much higher densities than their peers but similar levels of automobile use. In the case of Toronto, the main reason for the anomaly appears to be the use of the Municipality of Metropolitan Toronto (which became the City of Toronto in 1998) as a proxy for the metropolitan area. As Newman and Kenworthy note, the City houses barely half the population of the wider area, which had a significantly lower density in 1991: 26 per hectare compared with 41 for the City (Newman and Kenworthy, 1999, p. 96).
These anomalies suggest a need to examine the data for Australian, Canadian and US cities more closely. Another reason for focusing on these countries is the broad similarity in their forms of urban development. By contrast, while comparing Australian cities with Hong Kong or Seoul may be possible, it is not a very useful task for policy-makers, since there is no conceivable scenario under which the densities of Australian cities could be increased to levels comparable with those cities. Fortunately, the census data collected by the three countries' statistical agencies enable the comparison across Australia, Canada and the US to be attempted.

X-ray the city!
The problem of Toronto discussed above illustrates the importance of ensuring that density comparisons are made on a consistent and rigorous basis. Failure to do so will produce results that are at best meaningless, and at worst downright misleading.
The problem is not new. More than six decades ago, Ernest Fooks published a little book titled X-ray the city! Fooks arrived in Melbourne as a refugee from Nazism in 1939. He was the first person in Australia (and possibly the English-speaking world) to hold a doctorate in town planning, which he had obtained in Vienna with an investigation of linear cities. Fooks was the first lecturer in town planning at the Melbourne Technical College, now RMIT, although he ultimately ended up working as an architect (Townsend, 1998). Fooks wanted to place Australian town planning on an intellectually rigorous footing, and wrote the book to show how this might be done.
The central argument of X-ray the city! is one that still needs to be made in the twenty-first century. Most reported measurements of urban density are calculated by dividing the population of a municipality or other administrative region by its gross area. 'It is of the utmost importance,' Fooks says, 'to stress the major defect of such figures: THE ARBITRARY NATURE OF URBAN BOUNDARIES' (Fooks, 1946, p. 43; capitalisation in original). Municipal and administrative boundaries rarely correspond to actual urbanised areas. Some cities (e.g. Brisbane) contain large areas of vacant land within their boundaries, while others (e.g. the City of Toronto) occupy only the inner part of the urbanised area. Therefore, more accurate density measures are needed: Fooks proposed a series of them, linked to form a 'density diagram' that could be used to 'X-ray the city'.

Fooks' efforts to introduce rigour and consistency into Australasian discussions of density were unsuccessful. Nearly half a century after Fooks' book, Brian McLoughlin (1991) lamented the shallowness of local analysis, arguing that British town planners had established rigorous definitions of density that could be used for comparative purposes, but were being ignored.
The key point Fooks and McLoughlin make is that useful measures of density should be based on the area of urbanised land, not on arbitrary administrative boundaries. The whole urban area should be counted, not just the portion lying within the boundaries of a central municipality: urbanised New York extends far beyond the five boroughs of New York City, into Long Island and even the neighbouring states of Connecticut and New Jersey. Conversely, only urbanised land should be counted when measuring density, so measurements must exclude non-urban land that happens to lie within city boundaries.
Density can be examined in more detail by distinguishing between residential and non-residential land. Using McLoughlin's nomenclature, net residential density is calculated by considering only the residential blocks on which houses are built. Gross residential density includes non-residential uses found within residential neighbourhoods, such as local schools and parks. Overall urban density includes all other urban uses, such as industrial areas, transport terminals and regional open space. Different definitions of density will naturally produce different figures, so when comparing the densities of different cities, or parts of cities, it is important to use consistent definitions, count only urbanised land, and count all the urbanised land.

Most discussions of density by urban planners have failed this test. Countless discussions of metropolitan areas have compared 'densities' of inner and outer municipalities based on the whole area within municipal borders. Since outer municipalities often incorporate large areas of non-urban land, the result always appears to be a steep decline in density with distance from the centre. But this decline is likely to be exaggerated or even completely illusory: Max Neutze's careful analysis of Adelaide three decades ago found that the apparent decline in density was a statistical artefact, with residential densities actually highest on the urban fringe, and overall urban densities roughly constant throughout the metropolis (Neutze, 1981, p. 67).
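McLoughlin's three measures differ only in the area used as the denominator. A minimal arithmetic sketch, using invented figures for a hypothetical suburb of 12,000 residents:

```python
def density(population, area_ha):
    """Persons per hectare."""
    return population / area_ha

# Invented figures for a hypothetical suburb of 12,000 residents
residential_blocks_ha = 300   # land occupied by dwellings only
neighbourhood_ha = 400        # adds local schools, shops and parks
overall_urban_ha = 600        # adds industry, terminals, regional open space

net_residential = density(12_000, residential_blocks_ha)   # 40.0 per ha
gross_residential = density(12_000, neighbourhood_ha)      # 30.0 per ha
overall_urban = density(12_000, overall_urban_ha)          # 20.0 per ha
```

The same population yields three very different 'densities', which is exactly why comparisons must hold the definition constant.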
Newman and Kenworthy expressly attempted to avoid problems of this kind in their multi-city comparison, by using a definition that corresponds to overall urban density in the above discussion. They were successful in most cases, but not all. In some cities, especially in Europe, land use data for complete urbanised areas proved difficult to obtain, and only the central municipality was studied. Because the central municipality is the most densely-populated part of the region, the density figures are overstated for all such cities. In the case of the 1999 study, this means Amsterdam, Brussels, Frankfurt, Hamburg, Munich, Stockholm and Vienna: the majority of the European cities shown on the graph in Figure 1 (Kenworthy and Laube, 1999, pp. 27–32).
A similar problem affected Newman and Kenworthy's density data for Toronto, which as we have seen was confined to the City of Toronto. The resulting overstatement of density was magnified by the fact that the gross residential area was inadvertently used as the basis for calculating density, instead of the overall urban area. This can be seen clearly from the map of urbanised Toronto in Kenworthy and Laube (1999, p. 375), which shows Toronto and York Universities, two large cemeteries, the main racecourse and numerous parks as non-urban.

The density and transport table
Newman and Kenworthy had little difficulty specifying the densities of cities in the United States, because that country's Census Bureau has been calculating overall urban density figures for some time (see US Census Bureau, 2007, p. A-22). An 'urbanized area' is defined for each metropolitan region by combining adjacent 'census blocks' (the smallest units for which data are collected) with more than 1000 residents per square mile, or 386 per square kilometre, regardless of how many municipal or even state boundaries are crossed. Less-dense census blocks that are surrounded by 'urban' blocks are also included. The urbanized area generally contains most of the population of the equivalent 'metropolitan statistical area', which covers non-urban as well as urban land. The main exception is free-standing settlements within the boundaries of the census area, which are counted as separate urbanized areas if sufficiently distant from the main area: for example, San Bernardino is counted separately from Los Angeles.
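The block-merging rule can be sketched as a flood fill over a grid of census blocks. The grid values are invented, and this sketch omits the refinement that also absorbs less-dense blocks surrounded by urban ones:

```python
from collections import deque

# Hypothetical 4x4 grid of census blocks: persons per square kilometre
block_density = [
    [900, 700, 120, 80],
    [650, 500, 400, 60],
    [100, 420, 390, 50],
    [40,  30,  20,  10],
]
THRESHOLD = 386  # the US 'urban' cut-off of 1000 persons per square mile

def urbanized_area(seed):
    """Flood-fill: merge the seed block with every adjacent block above the threshold."""
    rows, cols = len(block_density), len(block_density[0])
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and block_density[nr][nc] >= THRESHOLD):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

area = urbanized_area((0, 0))  # the connected cluster of 'urban' blocks
```

The merge crosses any row or column of the grid, just as the census definition crosses municipal and state boundaries.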
Newman and Kenworthy used the urbanized area density figures for US cities, but did not use their equivalents for Australian and Canadian cities, possibly because these were hard to locate until recently. Statistics Canada defines 'urban areas' on an almost identical basis to the United States, using a density threshold of 400 per square kilometre (Puderer, 2009, pp. 5–6). The Australian Bureau of Statistics does the same for 'urban centres', although with a threshold of 200 per square kilometre (ABS, 2006, chapter 6), which means that Australian urban densities will be slightly understated relative to the other two countries.
Each country's statistical agency also asks a question in the census about the method of travel to work, in a manner that enables the answers to be compared. While work trips only account for a minority of urban travel, they are the only kind for which this kind of consistent information is available across such a range of cities. Surveys of overall travel are usually conducted locally, in different years, and often with inconsistent methodologies.
Despite the limitations of these census data, they enable a more rigorous comparison of urban densities and transport patterns across the three countries than has been made previously, partly because not all the information was available at the time Newman and Kenworthy collected their data. The Canadian census has only included a question on the method of travel to work since 1996, while the land areas of Canadian urban areas were not published until the 2006 census (the Australian urban centre areas were released for censuses up to 1991, but not released again until the 2006 census).
One difference from Newman and Kenworthy's methodology was made necessary by time and resource constraints. Newman and Kenworthy included all urban areas within the boundaries of the broader statistical regions in their density figures, for example including San Bernardino in Los Angeles. Because there are so many smaller urbanised areas, the following data are based on the central urban area only, which usually accounts for the great majority of the urban population. This difference makes the density figures for the US and Australia slightly higher than those of Newman and Kenworthy, but is unlikely to significantly affect the rankings of different urban areas.
The results are set out in Table 1, using figures from the most recent census in each country: 2006 in Australia and Canada, 2000 in the United States. Because there are so many metropolitan areas in the USA relative to Canada and Australia, only the largest have been included. The urban areas have been arranged in order of overall urban density, from highest to lowest.
The results are very different from what might have been expected. Far from being the archetype of sprawl, Los Angeles has the highest density of any urban area in the table, just edging out Toronto and San Francisco, and significantly higher than other Canadian and US cities. LA is considerably denser than all Australian cities, even allowing for the understatement of the Australian figures created by the differing definition of urban areas. By contrast, Portland, Oregon has less than half the density of the City of the Angels, with a lower figure than most Australian cities. And there are other surprises: Boston's density is much lower than that of Las Vegas or Phoenix, as is Brisbane's.
The US and Australian results are consistent with those reported by Newman and Kenworthy: all editions of their data set show Los Angeles having a higher density than any other city in the US or Australia. The big difference comes with the Canadian figures, which, it should be recalled, are compiled on a virtually identical basis to those for US cities. The problem here seems to have been that Newman and Kenworthy's Canadian city densities were calculated on a 'gross residential' rather than an 'overall urban' basis, as we saw above in the case of Toronto. This made the Canadian densities seem much higher than those in Australia and the United States, when in reality they are much the same.
One thing the results make clear is that high-rise city cores are not necessarily good predictors of overall urban densities. New York City does have a high urban density, but its 8 million residents are surrounded by 13 million suburbanites, many of whom live in very spacious surrounds. The City of Los Angeles is less dense than New York City, but its suburbs are considerably more dense than those of the Big Apple. In each case, the suburbs, which house the majority of the population, have the biggest impact on the overall result. Robert Bruegmann (2005, pp. 67–68) points out that the high suburban densities of West Coast US cities are partly due to their dependence on piped water, which prevents the very scattered, 'ex-urban' development found along much of the East Coast.
Australian cities are more like Los Angeles than New York. Their central regions have lower densities than those of older North American cities, but their suburbs generally have higher densities, thanks to stronger regional land-use planning, which has restricted scattered fringe development. Brisbane, with a weaker tradition of regional planning, has a significantly lower density than any other large Australian urban area.
The densities of Australian, Canadian and US cities are more similar than has generally been believed, and bear little relationship with the amount of high-rise development in their centres. They also show little relationship with public transport use. Los Angeles is three times as dense as Brisbane, but public transport's share of work trips is only a third as high. Portland, Oregon has a higher public transport mode share than Los Angeles despite its much lower density, but with only 6% of workers using public transport, Portland is less successful than any Australian or Canadian city.

P. Mees
The US cities, apart from New York, have the lowest rates of public transport use and the Canadian cities the highest, with Australia in between. The same national patterns are apparent for walking rates, which are generally highest where public transport use is highest. Smaller cities tend to have more walking than larger ones; they also tend to have lower densities. Cycling is of negligible importance across all three countries, but follows a similar pattern to walking: the Canadian figures are highest, despite the country's inclement weather.
Car usage rates are, naturally, the reverse of the other modes: lowest in Canadian cities and New York, highest in the United States. Again, density is a poor predictor of car usage rates: New York and Ottawa are the only cities where the figure is below 70%, yet neither has a particularly high density. Victoria, capital of the Province of British Columbia and a small, relatively low-density city, is noteworthy for its high walking and cycling rates, which together with respectable public transport usage produce a comparatively low rate of automobile use. The comparison with Canberra, which has a similar population and density, is instructive.

Studying the data using regression analysis
The data in Table 1 can be analysed more closely using regression analysis, the same methodology employed by Newman and Kenworthy to create the graph in Figure 1 and its predecessors. Figures 2–5 set out the results, with density plotted against the share of work trips made by public transport (Figure 2), walking and cycling (Figure 3), all 'sustainable modes', that is, walking plus cycling plus public transport (Figure 4), and private cars (Figure 5).
The results are very different from those obtained by Newman and Kenworthy. There is little correlation between density and the use of public transport, sustainable modes or the car, with R² values around 0.3, compared with the 0.8 to 0.9 found by Newman and Kenworthy. And there is no correlation at all for walking and cycling, with an R² below 0.1.
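An R² of the kind quoted here comes from an ordinary least-squares fit of mode share against density. The sketch below uses invented density and mode-share pairs rather than the paper's Table 1 data, to show how the statistic is computed and how widely varying mode shares at similar densities produce a low R².

```python
import numpy as np

# Invented (density, public transport mode share) pairs for six
# hypothetical cities; not the data behind Figures 2-5.
density = np.array([12.0, 15.0, 18.0, 21.0, 24.0, 27.0])   # persons per hectare
pt_share = np.array([5.0, 14.0, 8.0, 22.0, 6.0, 11.0])     # % of work trips

# Least-squares line: pt_share ≈ slope * density + intercept
slope, intercept = np.polyfit(density, pt_share, 1)

# R² = 1 - (residual sum of squares / total sum of squares)
predicted = slope * density + intercept
ss_res = np.sum((pt_share - predicted) ** 2)
ss_tot = np.sum((pt_share - pt_share.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(round(r_squared, 2))  # → 0.03: density explains almost none of the variation
```

A curved (for example exponential) fit of the kind Newman and Kenworthy used can be obtained the same way by transforming the variables before fitting; with scattered data like these, the R² remains low either way.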
With such low figures, the shapes of the curves are largely irrelevant, but it should be noted that there is no evidence of a threshold at which automobile use takes off or sustainable modes collapse. In fact, the pattern is the reverse of this, with sustainability declining above 20 to 25 persons per hectare due to the influence of relatively dense US cities such as Los Angeles and New Orleans. It is also noteworthy that all bar one of the cities below the curves for sustainable modes (Figures 2, 3 and 4) and above the curve for automobile use (Figure 5) are US cities: the Australian and Canadian cities lie on the other side of the curve, with the Canadian cities further from it.
The regression analysis confirms that density is not responsible for the differing transport performance among the three countries' cities. Instead, it confirms that US cities apart from New York perform poorly from a sustainability perspective, Australian cities are somewhat better and Canadian cities perform best – regardless of density.

Urban form or structure?
So what is responsible for the differences in transport performance if not density? Mindali et al. (2004) found that the share of employment in the Central Business District (CBD) was strongly (negatively) correlated with automobile usage, suggesting that urban structure is more important than urban form. It has not been possible to assess this connection using the data and methodology employed here, because while the Australian, Canadian and US census agencies have agreed on comparable definitions of urbanised areas, they have not done so for the CBD; in addition, only the Australian Bureau of Statistics publishes census data giving CBD employment levels.
Australian and Canadian cities are more strongly centralised than US cities, except for New York, which in this respect is more like an Australian or Canadian urban area. It is easy to understand why strongly centralised urban regions might see greater usage of public transport, although the connection with walking and cycling is less obvious. But Australian cities are more centralised than their Canadian counterparts (see Mees, 2000, chapter 7 for a detailed discussion of Melbourne and Toronto), while Vancouver, which is not the provincial capital and has an awkwardly-sited CBD, is a 'weak-centred city' like many US counterparts.
There is some evidence that the superior performance of Canadian cities relative to their Australian counterparts is due to higher use of sustainable modes, particularly public transport, by workers employed in non-central locations, rather than the share of workers employed in the CBD or their travel behaviour. In all metropolitan areas across the three countries, the great majority of workers are employed outside CBDs, and an even greater majority of non-work travel is to non-central locations. So the travel choices of suburban workers have more influence on the overall result than do those of CBD workers.
A Statistics Canada report on the 2001 Canadian census gives detailed data on mode choice for CBD and suburban workers. The share of CBD workers using public transport was 59% in Toronto and 55% in Montreal. But the share of workers in suburban employment clusters using public transport was much higher in the Canadian cities, ranging from 9 to 36% in Toronto and 11 to 28% in Montreal, compared with a range of 3 to 8% for Melbourne. As a result, while CBD workers accounted for 43 and 38% of public transport commuters in Toronto and Montreal respectively, they accounted for 51% in Melbourne. The reasons for public transport's greater effectiveness in serving the suburban market in Canada, and the links to higher rates of walking and cycling, are beyond the scope of this paper, but are discussed in Mees (2010).
Space does not permit a detailed analysis of UK and European density figures, which are discussed in Mees (2010, chapter 4). However, the broad pattern is that British urban densities are higher than those in Europe, thanks to the stronger control over suburban sprawl provided by the UK planning system. British cities are more like Los Angeles, with relatively even densities; many European cities are more like New York, with high central densities surrounded by low-density, scattered suburban development (EEA, 2006).
The UK Office for National Statistics defines a series of 'major urban areas', but uses a different methodology from that employed in Australia, Canada and the US. Interestingly, the Swiss Federal Statistical Office appears to use a similar methodology to the ONS in delineating urban land areas, so British and Swiss density figures can be compared, at least broadly. The densities of English major urban areas range from a low of 32 persons per hectare in Teesside to a high of 51 per hectare in London (Mees, 2010, p. 63). By comparison, the density of the urbanised portion of the Canton (State) of Zurich – which covers Zurich city, its suburbs and ex-urbs – is approximately 38 per hectare (calculated from SFSO, 2009), towards the lower end of the UK figures, and below Merseyside (44 per ha) and Greater Manchester (40). But the share of work trips made by public transport in Canton Zurich is more than double the average for UK cities, except for London.
Further support for these conclusions comes from a recent study by Ewing and Cervero (2010, p. 275), who performed a 'meta-analysis' by aggregating the results of a large number of previous studies of the relationship between transport and land use in US cities. They concluded that while 'conventional wisdom holds that population density is a primary determinant of vehicular travel . . . [t]his does not appear to be the case once other variables are controlled'.

Conclusions
The data used in this paper have multiple limitations, arising from the following factors:
- the US figures date from 2000, while those for Australia and Canada date from 2006;
- the three countries do not employ exactly the same definitions of urban areas, which means the Australian density figures are understated relative to the other countries;
- densities have been calculated for the principal urbanised area within each statistical region, whereas ideally 'satellite' areas should also be included; and
- mode share figures are for the journey to work only, rather than for all travel.
Nevertheless, despite the limitations, the data in Table 1 suggest the need for a serious re-examination of the 'compact city' solution to mode shift. This will require additional work to address the limitations mentioned above, and will become easier once data from the 2010 US census and 2011 Australian and Canadian censuses become available.
There is no doubt that very large differences in density can influence transport patterns. Hong Kong's very high density is a major reason why automobile use is so low: if the city somehow became as spacious as Brisbane, car usage rates would increase. But the question for policy-makers is whether changes in density of the kind that might be possible in real urban environments will significantly influence mode share.
On this question, the answer appears to be in the negative. The compact city is not the solution to the problem of automobile dependence, although it may still have a role to play, particularly at the local scale. Many decades of compact city policies might make Melbourne as dense as Los Angeles is now, or Brisbane as dense as Las Vegas, but changes like this are unlikely to produce significant shifts in metropolitan-wide travel patterns. This analysis supports the suggestion made 15 years ago by the UK Royal Commission on Environmental Pollution: 'there is no single pattern of land uses that will reduce the need for travel, and so reduce the effects of transport on the environment' (RCEP, 1994, p. 151).
These findings should be good news for policy-makers and others concerned about problems like global warming and oil security. They suggest that transport policy, which can be changed more rapidly and with less expense and controversy than urban density, is a more important influence on outcomes (see Mees, 2010). So the sustainable urban transport problem might be easier to solve than we think.

PAPER 4
Urban poverty in Pacific towns and cities and the impact from the global financial crisis: insights from Port Moresby, Papua New Guinea

Introduction
The Pacific Island Countries (PICs) are experiencing rapid and unprecedented change, providing donors, governments and communities with a range of challenges and opportunities (UNESCAP and UN Habitat, 2010). This change is set against a backdrop of increasing urbanisation, a recent phenomenon in the Pacific Islands Region (Figure 1) involving the movement of people from rural areas to towns and cities, and accompanied by major economic, social and environmental transformation (Jones, 2007). According to UN Habitat (2009), the bulk of the world's projected population increase to 2030 will be located in growing urban areas. In 2008, over half of the world's population lived in urban areas, and this is projected to increase to 6.4 billion people or 70% of the world's population by 2050. Importantly, most of the population increase will happen in the Asia-Pacific Region and, significantly, much of the growth will occur in smaller settlements of around 100,000 to 250,000 persons (UN Habitat, 2009). Such smaller towns and cities are a feature of Pacific urbanisation patterns, with only two of the 21 PIC capital towns and cities, namely Port Moresby and Suva, exceeding 250,000 persons. While Pacific towns and cities are small compared with the large and mega city regions of Asia, there is unanimous agreement that the future of the Pacific Region is clearly one focused on growing urban areas, including rapid growth in squatter and informal settlements (Jones, 2007; Storey, 2006, 2010; UNESCAP and UN Habitat, 2010).
The growth of towns and cities in PICs is being subjected to new and influential forces, which are requiring stakeholders to rethink how best to manage the urbanisation process and its consequences. The current millennium, especially the post-2005 period, has seen the elevation of new drivers of urban change in PICs, specifically climate change, natural disasters and, more recently, the GFC (references to the global financial crisis in this paper also encompass the wider 'global financial and economic' crisis) (Jones, 2010). These drivers have amplified and highlighted the adverse symptoms and issues associated with the urbanisation process, including rising urban poverty, food insecurity, the growth of settlements and informality, resource depletion, declining law and order and environmental decline (Jones, 2005).
Against a backdrop of mediocre PIC economic performance during the last decade, the recent GFC has seen PICs confront lower economic growth rates, a declining macroeconomic outlook and falling government income, as well as increased levels of poverty, in the 2008–2009 period (AusAID and New Zealand Government, 2009). In the context of urban poverty in the Pacific Region, there have been increasing calls for evidence-based research on the dimensions of urban poverty in PNG (see, for example, Storey, 2010) as well as the need to mainstream such work into more effective planning in practice, including addressing unplanned squatter and informal settlements (Office of Urbanisation, 2010). In this context, the purpose of this paper is to:
- outline the diversity of urban settings in the Pacific Region;
- overview the GFC and its impact on PIC economies;
- examine trends in poverty and urban poverty in the Pacific Region, including those generated by the GFC; and
- explore the results of household interviews in a squatter settlement in Port Moresby, PNG, so as to understand how households have adapted and changed to cope with the effects of the GFC on poverty and hardship.
This is important so as to understand the genesis of urban poverty, and the context of its entrenchment in the urban setting.
The rationale for undertaking the field research in Port Moresby is threefold. First, Port Moresby is the Pacific Region's largest city. Secondly, PNG contains the largest number of urban residents in the region and, thirdly, the largest number of cities and towns (three cities and seventeen towns). Furthermore, in 2010, PNG completed the most recent national urbanisation policy in the Pacific Region, namely the National Urbanisation Policy for PNG, 2010–2020, to address rising urban growth issues.

The diversity of Pacific towns and cities: urban population trends
The Pacific Region contains some 7500 islands grouped into three main social, cultural and geographic areas – Melanesia, Micronesia and Polynesia (Jones, 1997). In 2010, the midyear population estimate for the Pacific Region was 9.8 million persons (Secretariat of the Pacific Community, 2010) (Table 1). Based on the last census populations of PICs, the average percentage share of urban populations across individual PICs was approximately 50%. In terms of actual persons living in Pacific towns and cities, in 2010 just over 2.5 million persons (26% of the regional total) were residing in PIC urban areas. However, due to the confined nature of local government jurisdictions and expanding peri-urban areas, the actual number of persons in PIC urban areas is under-enumerated, and regionally is likely to be much higher (Jones, 2007).
The number of Pacific islanders living in urban areas is skewed by the impact of the larger Melanesian populations of PNG, Solomon Islands and Fiji. Most Pacific island urban residents are to be found in the PICs that comprise Melanesia (1.6 million persons out of a total sub-regional 2010 population estimate of 8.6 million). The largest proportions of urban populations are found in Micronesia, followed by Polynesia and Melanesia (Figure 2). The difference in scale between the largest and most populated PIC land mass, PNG, and the remainder of the PICs is reflected in the fact that the region's largest urban populations and the largest city, Port Moresby, are to be found in PNG. In 2010, the urban population of PNG was approximately one million persons (Office of Urbanisation, 2010). The PNG urban population is more than the entire 2010 populations of the Pacific sub-regions of Polynesia (663,795 persons) and Micronesia (547,345 persons).

The global financial crisis and its impact on Pacific island economies
Against a backdrop of strong global economic growth, the GFC emerged in the second quarter of 2008, and has been officially considered 'over' since late 2009. However, since that time there have continued to be stuttering patterns of global financial and economic growth (Asian Development Bank, 2009). Beginning in the advanced economies of North America and Europe, borrowers of subprime housing loans in the United States (US) started defaulting on mortgage loan repayments around August 2008, leading to a housing and banking sector crisis. The risk-taking behaviour of commercial banks and lending institutions brought the global financial system to the edge of collapse, primarily in developed and industrialised countries, as trade, investment, access to finance, commodity prices and production rapidly declined (Parks and Abbott, 2009).
The joint 2009 AusAID and New Zealand Government review on the impact of the GFC in the Pacific Region described the slowdown of the global economy as the worst in 75 years. The GFC led to falling demand for goods and services, lower outputs, declining trade volumes and values, and increases in unemployment rates. These changes followed earlier increases in food and fuel costs in PICs in 2006 (Asian Development Bank, 2010). GDP growth for PICs in 2009 was 2.8%, compared with 5.2% and 3% in 2007 and 2008 respectively (AusAID and New Zealand Government, 2009). These Pacific Region growth rates were skewed by the stronger performance of PNG and Timor Leste (UNDP Pacific Centre, 2009).
The adverse impact of the GFC in PICs continues to be played out through falling demand for exports, declining tourism, reduced overseas investment, loss of jobs and a decline in remittances (Duncan and Voigt-Graf, 2010). The GFC has seen all PICs confront lower economic growth rates, a declining macroeconomic outlook, falling government income plus increased levels of poverty. In some PICs, such as Fiji, Samoa, the Cook Islands and Palau, there was recession; in PNG, Timor Leste and Vanuatu there has been deceleration; while there has been stagnation in the remaining PICs (AusAID and New Zealand Government, 2009; Chibber, 2009; UNESCAP, 2010).

Poverty and urban poverty in the Pacific Region and impacts from the global financial crisis
The concept of poverty in the Pacific Region

At the Pacific Region and PIC levels, considerable progress has been made in the new millennium in defining the nature of poverty, developing explanatory frameworks and collecting, analysing and interpreting baseline data. The latter has been undertaken as part of encouraging more appropriate national and international development responses, gaining a better understanding of the causes of poverty, and importantly, measuring PIC progress towards the achievement of national goals and the Millennium Development Goals (MDGs) (see, for example, Abbott and Pollard, 2004; AusAID, 2009a; Parks and Abbott, 2009; UNDP Pacific Centre, 2009). In the late 1990s and early 2000s, debate over the meaning of poverty in the Pacific Region led to poverty being equated with hardship. Poverty and hardship in the Pacific Region have been couched as issues and concerns associated with the achievement of sustainable human development, including adequate income levels. Poverty in the Pacific Region is now accepted as being an inadequate level of sustainable human development underpinned by:
- a lack of access to basic services and infrastructure such as health, education, power and water supply;
- a lack of opportunities to participate fully in the socio-economic life of the community; and
- a lack of access to productive resources and income generation systems to meet the basic needs of the household, including the extended family, clan, tribe and village community (Abbott and Pollard, 2004; Parks and Abbott, 2009).

Measuring poverty in the Pacific Region
Measuring and understanding poverty using indicators based on a standard of living and/or income level below which one is considered to be poor or in hardship remains problematic (Abbott and Pollard, 2004; O'Collins, 1999; Storey, 2010; World Bank, 2005). Poverty means different things to different people, and poverty lines, the cut-off points that separate the poor from the non-poor, can be set in both relative and absolute terms.
The main measure of national poverty and hardship used in the Pacific Region is the national basic-needs poverty line (BNPL) (Abbott and Pollard, 2004; Kiribati National Statistics Office and UNDP Pacific Centre, 2010). Building on food poverty lines, the BNPL assesses the basic per capita costs of a minimum standard of living in a particular country, society or sub-region, and measures the number of households and the proportion of the population that are unable to meet these needs. Poverty is measured at the household level in terms of the costs of food and expenditure on essential non-food items such as clothing, transport, power, education fees, shelter and the like. If the average per capita expenditure and income of a household falls below the BNPL, then all members of that household are deemed to be poor.
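The household classification rule just described can be sketched in a few lines of Python. The poverty line and the households below are hypothetical figures, chosen only to illustrate the rule that a household whose per capita expenditure falls below the BNPL has all of its members counted as poor.

```python
# Hypothetical weekly BNPL per person; real BNPLs are set per country from
# the cost of a minimum food basket plus essential non-food items.
bnpl_per_capita = 30.0

# (household size, total weekly expenditure) for four invented households
households = [(4, 100.0), (6, 210.0), (2, 70.0), (5, 120.0)]

poor_people = 0
total_people = 0
for size, expenditure in households:
    total_people += size
    # Every member of a household below the per capita line is deemed poor
    if expenditure / size < bnpl_per_capita:
        poor_people += size

headcount_ratio = poor_people / total_people
print(round(headcount_ratio, 2))  # → 0.53
```

Note that the measure is a headcount of persons, not households: two of the four invented households are poor (50%), but because they are the larger ones, 53% of people are counted as poor.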
Poverty line estimates undertaken in the Pacific Region indicate that the average incidence of households below the BNPL for PICs is around 25% (excluding PNG, which has a higher rate and skews the data). That is, at least one in four households, and approximately one in three Pacific islanders, are below their respective national poverty lines in terms of having sufficient income and expenditure to meet a basic minimum diet, as well as having adequate monies to meet priority non-food items (Parks and Abbott, 2009) (Figure 3). In other words, one third of the 2010 Pacific Region population of 9.8 million persons, namely 3.23 million persons, fall below national poverty lines. The PICs where the proportion of the population in hardship is estimated to be highest are Kiribati, Fiji, FSM, Timor Leste and PNG.

Urban poverty levels in the Pacific Region
The national poverty line estimates undertaken for urban and rural areas in PICs are shown in Figure 4. The estimates show that eight of the 12 PICs (where data are available) have greater proportions of their urban populations below the BNPL than of their rural populations. Only four PICs – Timor Leste, Palau, Fiji and PNG – have greater proportions of their rural populations below the BNPL. Importantly, the proportion of those living below the BNPL in urban areas would be higher, and in rural areas lower, if PIC censuses were properly enumerated to reflect the existing built-up urban areas, including peri-urban areas.
The impact of the global financial crisis on urban poverty

Constrained rates of economic growth, a lack of new productive investment and rising levels of unemployment, combined with increased prices for basic food items, have seen poverty levels increase for many households in the Pacific Region (AusAID and New Zealand Government, 2009; UNDP Pacific Centre, 2009). The GFC built on a period of disappointing economic performance by PICs, during which the pace of poverty reduction in the Pacific Region was negligible (Hasan et al., 2009).
In the context of understanding the GFC and its impact on poverty levels, the key trends in the Pacific Region are fourfold:
- those who are already in need, including those living in poorer households, are the ones who are pushed further into hardship (Parks and Abbott, 2009);
- poverty rates and inequalities will increase well after the GFC and are likely to persist, remaining high compared with pre-crisis levels;
- the household unit will become more prominent as the 'engine room' in meeting day-to-day living needs; and
- it is the urban poor who will struggle most during a major crisis such as the GFC (AusAID and New Zealand Government, 2009).
The 2010 Vanuatu Outcome Statement (Asian Development Bank et al., 2010) estimated that some 6.4 million people, or approximately 67% of the estimated 2010 population of the Pacific Region, were potentially vulnerable to the impacts generated by the GFC. The day-to-day impacts of the GFC being played out in poorer households are diverse and include job losses, reductions in personal and household incomes, longer hours of work for the same income or less, increased costs of food and services, modification of diets and social behaviour, and changing expenditure and consumption patterns (Jones, 2010). All of these further exacerbate the varying levels of poverty already being experienced in Pacific towns and cities.
The most recent estimates of national BNPL incidence in the Pacific Region indicate that an average of an additional 5% of PIC residents will have fallen into poverty during the GFC period (UNDP Pacific Centre, 2009). The growth of urban and peri-urban settlements, as is being seen in Suva, Port Vila, South Tarawa, Honiara and Port Moresby, is one sign of poverty emanating from the GFC (Duncan and Voigt-Graf, 2010; Office of Urbanisation, 2010).
The GFC has shown that those already in urban poverty tend to be more vulnerable in such crises, given the ability of poorer rural households, with little or no reliance on the urban economy for their survival, to produce and access food. The recent Kiribati poverty analysis shows that subsistence production accounted for 43% and 60% of food consumed by the poorest households in rural areas and the outer islands respectively. This contrasts with the capital, South Tarawa, where subsistence contributed only one third of the food consumed by poorer urban households (Kiribati National Statistics Office and UNDP Pacific Centre, 2010). Urban populations are likely to be more vulnerable to poverty in PICs given their increased dependence on cash and services, nil or minimal access to subsistence foods, and reduced social support processes and mechanisms (AusAID and New Zealand Government, 2009; ADB et al., 2010).
One consequence of the GFC is that it has prompted overdue recognition that those in poverty and hardship in PICs are also concentrated in urban areas (Parks and Abbott, 2009). Rural areas have received the bulk of attention in poverty analysis in the Pacific Region, despite the fact that urban hardship is on the rise and that the future population of PICs is an urban one (Storey, 2010).

Insights into poorer households in Four Mile Settlement, Port Moresby
The development setting of Papua New Guinea

With a total landmass of approximately 465,000 km², PNG is the largest and most populated of all PICs (Figure 5). In mid 2010, PNG had an estimated population of approximately 6.75 million persons (Secretariat of the Pacific Community, 2010). Topographically, PNG is one of the most rugged and diverse countries in the world, with vast natural resources, especially mineral, forest and marine resources. Customary landowners exercise rights over 97% of the total land area, and there are over 800 distinct languages spoken.
Despite such assets and a rich socio-cultural diversity, in 2009 the UK-based Chronic Poverty Research Centre rated PNG as a 'chronically poor' country (Cammack, 2009). PNG falls at the lower end of the United Nations Human Development Index, being 149th out of 179 countries, and in 2010 was rated amongst the most corrupt countries in the world, ranked 154 out of 178 countries. This ranking has been constant since 2005 (Transparency International, 2010).
PNG contains the bulk of the poor in the Pacific Region and, in terms of reducing poverty, the situation has deteriorated: the national poverty rate has risen from 24% in 1996 to 37.5% or higher in the new millennium, depending on which estimate one uses. Many assessments recognise that poverty levels in PNG are the highest in the Pacific Region, and that progress towards their reduction continues to be slow (Government of PNG, 2004; UNESCAP, 2010). While the emphasis on poverty in PNG has focused primarily on rural poverty (Chandy, 2009; Feeny, 2003; Haywood-Jones and Copus-Campbell, 2009; Jha and Dang, 2008), urban poverty will become one of the most important development challenges in PNG. Continued urban growth will see poverty and its symptoms undermine progress towards a number of national targets, including the MDGs (AusAID, 2009b; Storey, 2010).

Urban growth trends in Port Moresby
Based on the 2000 census, the population of Port Moresby was 254,158 persons, or just over one third of the then PNG urban population of 675,403 persons (National Statistics Office, 2003). In 2008, the population of Port Moresby was estimated at approximately 410,000 persons (UN Habitat, 2008). Between 1980 and 2000, the annual average growth rate of Port Moresby was 4%, with some 58% of the population being migrants from other provinces. The 2000 census estimated that 90% of these migrants had moved to Port Moresby in the 1990–2000 period, with most taking up residence in the settlements (Chand and Yala, 2008a).
Settlements are a permanent feature of urban life and continue to expand and develop in PNG without adherence to any formal rules and regulations (Alaluku, 2010). The Port Moresby settlements are effectively ad hoc areas of squatter housing, and are located on both State and customary land. Approximately 40% of land in Port Moresby is customary, while 60% of land is freehold or State land (Chand and Yala, 2008a). The settlements are characterised by little planning, poor quality housing and minimal infrastructure, primarily water, sanitation and power (Office of Urbanisation, 2010).
In 2000, there were only 55 settlements; by 2008, it was estimated that 45% of Port Moresby's population, around 185,000 persons, lived in 99 settlements comprising 20 planned and 79 unplanned settlements (UN-Habitat, 2008). This accords with earlier estimates that nearly 50% of the Port Moresby population lived in squatter settlements throughout the city (UN-Habitat, 2004). The planned settlements are essentially low-cost self-help settlements, with basic services provided after development (National Capital District Commission, 2006).
The settlements of Port Moresby have been described as 'cosmopolitan networks of tribal groupings or anarchical sub-cultures, which have been defined by ethnicity and regionalism within an urban context' (Muke et al., 2001, p. 7). Low income and poor households dominate the settlements of Port Moresby, with the stereotype profile of low income inhabitants slowly changing as those in formal sector employment are forced to live in the settlements (Gouy et al., 2010).

Settlements and urban poverty in Port Moresby
Urban poverty in Port Moresby has been increasingly acknowledged as a growing development issue over the last 20 years (see, for example, Yala, 2008a, 2008b; Jones, 2010; Mawuli and Guy, 2007; Muke et al., 2001; National Capital District Commission, 1996, 2006; Office of Urbanisation, 2010; Storey, 2010). Poverty in Port Moresby has historically been linked with the range of households that comprise the city's settlements (Anis, 2010; Mawuli and Guy, 2007; National Capital District Commission, 1999, 2006; UNESCAP and UN-Habitat, 2010).
The Port Moresby settlements exhibit a plethora of social, economic and governance issues. These include unemployed youth, HIV, crime, raskol gangs, inter-clan and tribal disputes, and unresolved land tenure disputes. (Raskols are young unemployed men, primarily city and town based, who engage in robberies, rape, alcohol-related violence and murder (PNG National Council of Women, 2010).) In October 2010, it was reported that crime in Port Moresby had reached epidemic proportions, with 'thugs mugging people at bus stops and public venues, and prostitution and HIV infections having become rampant and widespread in our society' (PNG Post-Courier, 2010). It is not surprising, therefore, that Port Moresby has a reputation as being amongst the most dangerous cities in the world (ranked third last out of 140 cities), with deep-seated crime, law and order problems linked to patterns of inequitable economic and social development.

The fieldwork in Four Mile Settlement
The Four Mile Settlement sits within a developed small hill area (Boroko Hill) in Port Moresby. The population of Four Mile Settlement is estimated at approximately 800 persons, and has been formally designated as a squatter settlement (National Capital District Commission, 2006). The settlement is on State lands and, as such, occupation has been the basis for claiming land rights. The settlement is primarily occupied by rural urban migrants who come from the ethnic and kinship groups of the Southern Highlands Region of PNG. There are also some settlers from the lowlands and coastal areas, with these occupants representing some 15 to 20% of the Four Mile Settlement population.
In the post-2005 period, Mawuli and Guy compiled a series of short papers on PNG social and economic support systems, including those in the settlements (Mawuli and Guy, 2007), while Chand and Yala examined how land is accessed for development in the settlements of Port Moresby (Chand and Yala, 2008a). Others have noted the need to develop small-scale household surveys as a basis for more accurately understanding poverty in PNG (Chandy, 2009), while some have argued for greater evidence-based research into understanding the multidimensional nature of PNG urban poverty (Storey, 2010). More recently, commentators have highlighted (i) the high prices of improved and unimproved property in Port Moresby that constrain Papua New Guineans from gaining access to formal land supply; and (ii) the adverse impacts of the PNG economy on the rising cost of goods and services, especially those produced domestically (Gouy et al., 2010).
Given the Port Moresby setting, and acknowledging the broader GFC impacts documented at the Pacific Region level, the field research was aimed at exploring the two main economic indicators of household wellbeing and welfare, namely, patterns associated with household income and household expenditure. The field research explored how income and expenditure patterns were modified and adapted by households as a result of the crisis. Within this setting, the aims of the fieldwork were to:
- understand changes experienced by households over the last 2 years in meeting basic needs for survival; and
- explore the range of coping and adaptation measures used by households to adjust to such changes, so as to minimise hardship and the risk of falling further into poverty.
Given resource limitations and safety considerations, some 24 household heads were interviewed via focus group interviews in September and October 2010, at a range of randomly selected locations in the settlement. The dialogue was undertaken concurrently with work being carried out by the PNG Office of Urbanisation in assessing options to upgrade informal settlements.
While not all household heads were familiar with terms such as the GFC, households were aware of the wider changing economic circumstances and the increased pressure placed on households over the past two years to meet basic needs. The range of informal income-producing and entrepreneurial activities in which households in Four Mile Settlement participate is diverse: cooked food, drinks including alcohol, marijuana, stolen goods, second-hand clothing, prostitution, animals (raising of dogs, pigs, cats, chickens and ducks), store food and goods, buia and betel nut, cigarettes, tobacco, fish and crabs, vegetables, fruits, sago, sweets and lollies, gambling (cards, darts, bingo), billums (shoulder bags), string making, coconut brooms, illegal household connections to water and power, land 'sales' and allocation, and petty crime. In regard to understanding changes to household income, the results emerging from the discussion with households are summarised in Table 2.
The trends in household income indicate the range of coping and adaptation measures that households have used to address the decline in their real incomes. Members of households are working longer hours for the same income, less income or just 'some' income. Labour and time inputs are irrelevant to many – some income is sufficient for survival. There has been a rise in the occurrence of 'risk' income-generating and entrepreneurial activities – activities that household members would not have entertained two years earlier, such as prostitution (at all levels, from street to clubs), crime and gambling. The pattern of uneven income means an increasing focus on 'here and now' income generation, such as the rise in the number of buia (a spicy fruit) and cigarette sales. This also means more requests from households and their members for informal credit, and vice versa. With regard to understanding household expenditure on the consumption of goods and services, the results emerging from the households are summarised in Table 3.
The trends in household expenditure indicate that approximately 70 to 80% of all expenditure is for meeting basic food needs. Acquiring food and securing adequate shelter are the two priority household needs. Households are increasingly buying goods and services from informal sector markets and stores, rather than larger stores and supermarkets, as goods are cheaper in the informal sector: food for sale may be past its use-by date, goods may have been illegally acquired (especially mobile phones), and goods are available in individual quantities rather than packaged. For some goods and services, such as additional clothing, many households indicated they now defer expenditure. For public services such as water and power, most cope by paying someone to access them illegally.
In terms of the adequacy of social safety nets, households were asked how they were coping as a social unit, in terms of being unified and their connectivity with other groups. This parameter was explored because some observers (for example, ADB et al., 2010; Parks and Abbott, 2009) have indicated that in times of hardship and poverty there has been a decline in social safety nets and support. In the Four Mile Settlement, households indicated that the increasing conditions of hardship and poverty had in fact strengthened the safety nets and social support systems, both within and outside the settlement. The ability of settlers from the Highlands Region to marshal ethnic and kinship ties via the wantok system – namely, the seeking of assistance, usually in the form of cash or food, from friends and from ethnic and kinship relations who are normally in wage and regular employment – was in itself a safety net, both within the settlement and with other wantoks within Port Moresby (Table 4).

Table 2. Reported changes to household income:
- income stayed the same or fluctuated (mainly decreased), especially in informal activity
- overall decline in the value of real income due to cost-of-living increases
- more informal activity (buia, cigarettes, alcohol, kerosene and so on)
- longer hours of work (formal and informal)
- rise in 'risk' activities (e.g. card gambling and prostitution)
- rise in petty crime
- reduction in 'non-income' contributing household members
- increase in paying/contributing household members, e.g. room rentals
- sale of non-essential assets (electrical goods, ice box, chairs and so on)
- more borrowing and an increase in debt levels – a fortnightly cycle of debt and credit
- more income/credit requested from wantoks with regular income (either within or outside the settlement)
- minor subsistence food production (seasonal, depending upon rain)
Thus, the kinship system is used to alleviate poverty, with culture, demographics and other factors affecting the distribution of income and access to food and resources, both within each household and the wider ethnic and kinship group.
In terms of households considered to be the poorest in the settlement, the results from interviews were unanimous. Three themes emerged.
1. Those households in most hardship are those that cannot generate and sustain a regular source of income. The monetisation of urban life and the need for cash are inescapable realities in meeting basic needs within the urban setting.
2. Of those who cannot generate a regular source of income, the most vulnerable households were those containing a large number of women and children and, to a lesser degree, men. Men were considered to be physically and socially capable of surviving in a 'male dominated' society.
3. The next group considered to be poorest are those that cannot access a reliable and ongoing supply of services, namely, water and power.

Conclusions
This paper has examined the impact associated with a 'new' driver of urban change, namely the GFC, on poorer urban households in Four Mile Settlement, Port Moresby. While the literature indicates a range of changes emanating from the GFC at the global, regional and, to a lesser degree, PIC level, there is a need for evidence-based research on how households in settlements cope with and adapt to poverty, and how such findings can be supported. With settlements increasingly comprising the bulk of PIC urban residents and the urban poor, such findings are important for both planning knowledge and planning in practice. Such insights will assist in leveraging overdue policy and project interventions by governments and development partners, such as AusAID, for support to livelihoods, skills training and provision of basic water and sanitation. The household focus interviews undertaken in the Four Mile Settlement in Port Moresby indicate two key messages. First, while some households are vulnerable to the circumstances associated with declining real incomes, many find ways to cope, being adaptable and resilient in sourcing new income-producing activities. Households manoeuvre their way through hardship by adopting different coping strategies, subject to their collective abilities, including maintaining exchange relationships and fulfilling social obligations. Households, rather than individuals, form the basis of the informal economy, including exchanges between households. In the case of the Four Mile Settlement, the large number of Southern Highlanders living in one enclave and sub-enclaves (smaller groupings of households), linked by ethnic and kinship ties, has provided a social and economic safety net for many.
Table 3. Reported changes to household expenditure:
- expenditure patterns mainly focused on food needs (70–80% of expenditure), followed by shelter
- food – a focus on the cheapest items and on quantity, such as rice, tinned fish, biscuits and noodles (not on quality or nutrition)
- expenditure uneven, especially for households with income derived mainly from informal employment
- deferral of expenditure
- illegal access to services
- one meal a day (evening), or meals forgone
- shelter – increase in demand for rooms or shared accommodation
- goods and services bought more from the informal sector than from larger stores – cheaper, and can be purchased in smaller quantities
- clothing – focus on used second-hand clothing
- water – illegal connection, renting from an outside source (legal or illegal), or use of a public stand pipe
- power – illegal connections or renting from an outside source; use of kerosene and firewood
- sanitation – pit toilet
- alcohol – homemade brew
- school fees – no fees means no school

Ties to kin, land and place of origin are the glue that holds many households together in addressing poverty concerns, with such unifying elements consolidated and reinforced by households in need during a crisis such as the GFC. In the Four Mile Settlement context, the role and resilience of social and kinship networks in addressing poverty and crisis concerns cannot be overstated.
Secondly, over 75% of households indicated their income and expenditure fluctuated on a weekly and, for some, a daily basis. As such, a proportion of households, especially smaller ones, move above or below the poverty line on a regular basis. Understanding this 'poverty transition' zone and the associated dynamics of households whose incomes decline and fluctuate is just as important as understanding those who fall below the poverty line.
Building on Pacific Region and PNG work by Yala (2008a, 2008b), Jones (2005, 2007, 2010), Mawuli and Guy (2008) and Storey (2006, 2010), the research adds weight to the urgent need to address the condition of settlement households and the permanency of the settlements now entrenched in the urban form of Pacific towns and cities. Port Moresby, with the greatest number of settlements in the Pacific Region, is symptomatic of the declining urban security and the broader social, economic, governance and environmental decline now being reflected across all Pacific towns and cities (UNESCAP and UN-Habitat, 2010). At the broader Pacific Region level, the research highlights the limited attention that has been given to urban poverty in the context of reducing overall poverty levels (see Jones, 2007, for a discussion of why urbanisation and urban development issues have not figured consistently on the Pacific development agenda). The research raises many questions, including what types of economic growth and required downstream processing will best favour and reach those who are poorest in PICs, and what sort of planning education, awareness and knowledge sharing is appropriate for Pacific planning practice (and PIA) in responding to these major human development issues sitting right on Australia's doorstep.

Introduction
Coastal hazards are expected to increase with rising sea levels and increased storminess. While there are a range of estimates depending on mitigation effectiveness, climate sensitivity to greenhouse gas (GHG) concentrations and ice sheet dynamics, a sea level rise (SLR) of about a metre by 2100 is considered possible or likely (Cazenave, 2006; Church and White, 2006; Rahmstorf et al., 2007). Even with effective GHG mitigation, some amount of sea-level rise is now unavoidable. Climate change is also expected to lead to more extreme weather, increasing the risk of higher storm surges and larger waves, and the frequency and severity of coastal flooding. In susceptible locations it is expected to contribute to increased erosion (IPCC, 2007). This will cause property damage as well as risk of injury, loss of life and other economic losses. While some of the costs may be borne by the wider community, much will be borne by the owners of the affected properties. Some may suppose these increasing risks to coastal properties could discourage investment in these risky places, with property losses from inundation and erosion leading to declining property values, potentially discouraging coastal living. This paper looks at the evidence for the impacts of increased risks on property prices in flood-affected areas facing sea level rise. This shows that, due to a range of factors, even regular flooding is unlikely to lead to property devaluation and retreat without other forms of intervention. It then compares this to the impacts of two approaches to planning regulation. The paper presents a case for an active risk management approach to managing property in hazardous coastal areas, taking into account both the consideration of a smooth transition in property values and the interests of the wider community.
Evidence for property price effects of current day flooding and other risks
The literature suggests there is often a negative impact on property prices following significant flood or erosion events. However, there also appear to be numerous exceptions where events have not led to discernible impacts on property prices. Context and broader market trends are among the contributors to variations in results. Analysis by Lamond et al. (2010) of a number of frequently flooded locations in the UK suggests property price impacts are small and have been dwarfed by the impact of inflation over their study period. They conclude that house purchasers behave 'in an entirely reactive manner' and evaluate risks 'based on recent experience rather than scientifically calculated probabilities' (Lamond et al., 2010, p. 350). In contrast, a study in the Netherlands that investigated the impact on property prices of the flooding of the Meuse River in 1993 and 1995 (Daniel et al., 2009, p. 574) found no signs 'that the flood effect gradually became smaller as the memories of the second flood faded'. The authors suggested that 'the second flood underscored the necessity for people to account permanently for the risks associated with river flooding'. Between the two floods the decrease in house value was 4.6%; after the second flood the effect doubled to 9.1%. However, the two flood events occurred over a two-year interval, so the period of time in which to forget was short.
A recent study by Eves et al. (2010) was conducted to measure the difference in the price of homes in flood-affected areas and non-flood affected areas in the same suburbs of Brisbane, based on analysing house sales between 1990 and 2009. Price differentials between flood-affected areas and non-flood affected areas were found to be influenced by the number of significant flood events that occurred, rather than by the fact that the residential property may be in a floodable location.
Nevertheless, Eves et al. (2010, p. 12) indicate hazard designation might have had some influence on property prices. They attribute a widening gap between the flood and non-flood average price in their study area to the 'change in the awareness of new flood heights' as well as a slower property market. However, it appears that the impact of an event occurring is greater than the impact of properties being designated as at risk of an event occurring. In their review of relevant literature, Lamond et al. (2010, p. 338) note that 'the nature of the disclosure of flood risk designation was seen to be important, in particular whether disclosure of flood risk was mandatory at the point of property sale'. Eves et al. (2010) also note significant variation in the degree of impacts between market sub-sectors. For example, they found that 'higher value areas of Brisbane often have value factors such as views, housing quality and location to services that outweigh the potential impact of flood inconvenience'. In contrast, 'many of these factors are not present in the lower value suburbs so the effect of flooding is a greater issue in these sub-sectors' (Eves et al., 2010, p. 14).
The impact of designating properties as being at risk of an event occurring is likely to vary depending on whether restrictions are imposed on future development of the property, and the nature and severity of those restrictions. Byron Shire Council's policy of coastal retreat (Byron Shire, 1988) in the face of coastal erosion was claimed by a real estate spokesperson quoted in the press (Jackson, 2010) to have 'knocked at least $1 billion off the value from Byron's beachfront land'. Even allowing for a degree of hyperbole and self-interest, there is clearly a strong perception that a requirement to retreat will affect the value of properties. The policy, which prohibits property owners taking certain action to protect their property against erosion, was the subject of a protracted legal battle between Byron Shire Council and residents of beachfront properties at Belongil Beach. An agreement was reached in February 2010 between the parties 'that maintenance and repair of the interim protection works originally constructed by Council are permissible by the property owner on their Belongil property under Council's existing Development Consent' (Byron Shire, 2010).
Studies estimating flood damage consistently report that communities experiencing regular flooding are able to reduce direct losses to property and contents, and have better prepared and executed emergency response plans (for example, Natural Resources and Mines, 2002). Thus, to the extent that the price impact on property values is 'rational', increasing flood frequency can be expected to result in less than proportional increases in flood damage, and therefore less than proportional reductions in property values.
The pattern of price response to flood and other risk is subject to many influences and while not always clear cut, it suggests, and is largely consistent with, the following general patterns.
- Flood events may have a negative effect on property prices, but with a high level of variability.
- Purchasers act in a reactive manner and do not price flood impacts 'rationally'.
- Repeated flooding has different effects in different situations – in some cases it leads to further depression of values, in others to preparedness and increased tolerance.
- Actual floods have a greater impact than designating areas as flood prone, although flood designation does have some impact. The effect of flood designation depends significantly on the nature of the designation and disclosure.
- The impact on property values of designating an area as flood prone may be associated with the nature of the restrictions applying to the property, rather than simply the identification of risk.
- For waterfront or other high-prestige or high-value locations (e.g. views, direct beach access), the impacts of flood risk are generally outweighed by the substantial price premiums these properties obtain.
The evidence suggests that prices would be depressed just after flooding but may even be unrealistically high – that is, with little if any discount for risk – if there has been no damaging flood recently. To the extent that these dynamics apply, observed property prices may be expected to oscillate around a 'rational' property valuation that makes appropriate allowance for expected damage and inconvenience.
The response to erosion risk has many similarities, but erosion is more likely to result in damage that is hard to repair, particularly where it leads to permanent loss of useable land. Thus, price impacts after an erosion 'event' that actually damages property are more likely to be lasting. Far more extensive and detailed study would be required to produce conclusive evidence of the dynamics and all the factors that contribute to price effects associated with flood or erosion risk.

Private flood costs
A review of flood damage versus depth of flood from different sources (DECCW, 2007; DNRE, 2000; Nadal, 2007; Natural Resources and Mines, 2002; URS, 2002) suggests that direct damage to property from flooding is, on average, a modest portion of the total improved value of the property. Even floods up to 1–2 m above floor height are generally estimated to result in damage equal to, on average, about 25–35% of the improvements and contents, which in turn is about 65% of the total property value (SGS Economics and Planning, 2010). Thus, when damage occurs from a severe flooding event, the direct cost to property is of the order of less than 25% of market value. For minor flood events, which may occur more frequently, costs will be much less. Indeed, if a property is flooded but the flood does not go above the floor height, costs are mostly associated with damage to landscaping and yard fixtures, clean-up cost and inconvenience, or, in some cases, disruptions due to lack of access.
Given that floods and particularly severe floods generally occur infrequently, the 'expected' cost of the damage is reduced by the probability of occurrence. Given that floods may not occur for some years, the net present value also would incorporate some discount rate, further diminishing the cost in current terms. Thus, the relatively modest property value reductions described in the previous section are not necessarily inconsistent with a rational approach to risk assessment, as we shall see in a more detailed analysis in a later section.
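A rough numerical sketch can make this reasoning concrete. All figures below are hypothetical assumptions chosen to match the orders of magnitude discussed above (a 1% annual exceedance probability for a severe flood, damage of about 30% of improvements and contents, improvements at about 65% of total value), not values drawn from the cited studies.

```python
# Hypothetical illustration: why the 'rational' expected cost of flood risk
# is small relative to market value. All parameters are assumptions.

def expected_annual_cost(property_value, improvements_share=0.65,
                         damage_fraction=0.30, annual_probability=0.01):
    """Expected yearly flood cost: damage to improvements and contents,
    scaled by the chance of a severe flood (1-in-100-year event, 1% AEP)."""
    damage_if_flooded = property_value * improvements_share * damage_fraction
    return damage_if_flooded * annual_probability

def present_value_of_risk(property_value, years=50, discount_rate=0.06, **kw):
    """Net present value of the expected annual cost stream over a horizon."""
    annual = expected_annual_cost(property_value, **kw)
    return sum(annual / (1 + discount_rate) ** t for t in range(1, years + 1))

value = 1_000_000                     # hypothetical market value
damage = value * 0.65 * 0.30          # ~19.5% of value if a severe flood hits
annual = expected_annual_cost(value)  # expected cost per year at 1% AEP
npv = present_value_of_risk(value)    # capitalised risk over 50 years
print(damage, annual, round(npv))
```

Under these assumptions, even a severe flood that destroys about a fifth of the property's value capitalises to an expected risk cost of only around 3% of market value over 50 years, which is broadly consistent with the modest price discounts observed in the studies reviewed earlier.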

Private costs versus private value
There is a strong focus on avoiding costs from identified risks in consideration of the impacts of floods and erosion. Discussions in planning policy are commonly framed in terms of avoiding future costs to the property owners, the precautionary principle and intergenerational equity, where the costs of damage saved, either by preventative spending or by retreat from hazardous areas, represent the scale of the gain to the community (VCAT, 2010).
Such a perspective overlooks the compelling reason why people take these risks – they value the attributes of living in some of these hazardous locations very highly and are prepared to pay a high price to gain that value. For example, for a property facing a 40% risk of severe flooding or erosion, these risks may 'rationally' reduce property value by 10%. But if the property has direct waterfront access and views it may be worth 100% more than the property just across the street that lacks these attributes and is elevated on rock and free of erosion risk. Even after the risk discount, that property at risk would still be worth 80% more than the property across the street in spite of the risks being priced in.
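The arithmetic in this example can be spelled out directly (the dollar figure is hypothetical; the 100% premium and 10% discount are the values used in the text):

```python
# Hypothetical illustration: amenity premium versus risk discount.
base_value = 500_000           # elevated, risk-free property across the street
waterfront = base_value * 2.0  # direct waterfront access and views: +100%
risk_discount = 0.10           # 'rational' discount for flood/erosion risk

risky_value = waterfront * (1 - risk_discount)
premium_after_risk = risky_value / base_value - 1
print(premium_after_risk)  # 0.8 -> still worth 80% more despite the risk
```

The point of the calculation is that a multiplicative amenity premium can dwarf a realistic risk discount, so pricing risk 'in' does not by itself make hazardous waterfront locations unattractive.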
For properties that are flood prone but lack any compelling virtues, the impacts of flood are more likely to be significant. Thus, flood-prone, low-lying 'swampy' ground well back from the coast and not near or facing an attractive riverscape would suffer the property value discount without attracting an offsetting price premium.

A broader perspective: including public costs
Only part of the damage from floods occurs directly to private property (DNRE, 2000;Messner et al., 2007;URS 2002). A significant portion of the damage after a flood event is to public infrastructure, other private infrastructure (power, communications) or in the costs of rescue and recovery. There may also be wider disruptions and consequential losses from economic activity. These wider costs are mostly not seen directly by individual property owners, or where they are, it may be as a broad-based rate increase, not generally applied to those that occupy the flood prone areas.
If flooding becomes frequent or severe, results in failure of critical infrastructure or otherwise reduces services below acceptable levels, there will be a need to upgrade services to overcome this. If this further cost is borne by the public, not the private property owner, it further reduces the effect of true flood costs on the pricing of individual properties.
These public costs are one of the main reasons why even a fully priced, rational response to expected flood damage to private property would provide insufficient incentive for private property owners to avoid areas with a high flood risk.

The situation with climate change
All of the preceding discussion applies to property that may face a significant present day risk and occurs even without climate change. With climate change, risk increases over time. The rest of the paper considers how that may affect property prices over time, especially as risk rises to the extent that the property may ultimately become uninhabitable.
In principle, a 'rational' evaluation can be made of the likely future costs of flood or erosion risks including risk escalating with climate change. This would take into account the increasing probability of floods or erosion over time, and the expected cost including inconvenience and emotional impacts. In practice, this is difficult because there are so many uncertainties, including actual rates of sea level rise among many others. It assumes that actual property values are not confounded by changing context, large changes in market responses and other effects. The analysis provided below should be seen as illustrative.
When the risk is not significant in the present day, but is expected to become significant in the future, the market discounts the costs of those future risks. Discounting of future cash flows (whether costs or benefits) is not just an arcane device employed by economists but a practical judgement made by ordinary consumers. For example, buyers will pay a premium in choosing a more energy efficient appliance or building design, where the premium reflects the expected value of energy savings compared with alternatives. Studies that have explored this, and other cases where a current day expenditure will reduce future costs, show that the effective discount rate for most consumers is very high, typically 20–40% (see for example Meier and Whittier, 1983).
Part of the selection of a very high discount rate by consumers is the uncertainty that the expected energy savings will occur (or, in the case of flood avoidance, that the flood and resulting damage will occur). However, even applying a much more modest discount rate of 6%, damage that is not expected to occur for 25 years will be discounted by nearly 80%, and damage not expected to occur for 50 years will be discounted by about 95% of present day value. Damage 90 years from now would count for only about 0.5% of the present day value.
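These figures follow from the standard present-value factor 1/(1 + r)^t. A minimal check at the 6% rate used in the text:

```python
# Present-value factor of a cost incurred t years in the future,
# at the illustrative 6% annual discount rate used in the text.
def pv_factor(t, rate=0.06):
    return 1 / (1 + rate) ** t

for t in (25, 50, 90):
    print(t, round(pv_factor(t), 4))
# 25 years -> ~0.23 (discounted by nearly 80%)
# 50 years -> ~0.05 (discounted by about 95%)
# 90 years -> ~0.005 (about half a percent of present value)
```

The exact figures depend entirely on the rate assumed; at consumer-typical rates of 20–40%, future damage is discounted to near zero even over much shorter horizons.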
However, during this time, the value of the (improved) asset would have greatly declined, unless there had been reinvestment. A 90-year-old asset would more than likely have been written off if there had been little or no reinvestment beyond basic maintenance over the period. By year 90, any damage to the property would more than likely be to the reinvestment and improvements made in the interim. The risk to these investments would be judged according to when they were made – with a much shorter time horizon than 90 years. SGS Economics and Planning (2010) modelled the expected future value of damage to a single-storey dwelling over time as flood risk develops, relative to its risk-free value (that is, in a non-flood-prone location but otherwise of equal value), for three scenarios, to compare the effects of three different approaches to managing risk:
- built at the 100 year average return interval (ARI) flood level in 2010;
- built at the 100 year ARI flood level estimated for 2100; and
- built at the 100 year ARI flood level estimated for 2050, but with an obligation to upgrade in the face of rising flood risk and to meet all additional costs of services.
The implications of each of these scenarios are drawn out and compared below.

Scenario 1
The first scenario assumed that new dwellings are built at a height above the current 100 year ARI flood level (or 1% annual exceedance probability (AEP)). This was 2.3 m Australian Height Datum (AHD) for a particular coastal locality, Wyong, where the AEP for various flood heights has been well documented for current day conditions. Sea level was assumed to rise by 0.4 m by 2050 and 0.9 m by 2100 relative to 1990 levels, and estimates for future flood heights above these higher baseline levels had been modelled. Thus, the findings shown in the modelling are, strictly speaking, specific to Wyong, but the broad trends and conclusions are applicable to many coastal areas where estuarine inundation has weak hydraulic linkage to the open ocean.

Australian Planner 229
Key assumptions were as follows (a full list of modelling assumptions may be obtained by contacting the author).
- Any building damaged by flooding can be repaired and reoccupied, even if the destruction is total and it needs to be fully rebuilt. That is, even very high levels of future flood risk, frequency or depth do not stop the right to occupy the land.
- The land will continue to be serviced, and the cost to the householder of those services will remain at general community norms.
- No special assistance is provided to homeowners to compensate for any damage after a flood event; that is, the property owner bears the full cost. (If property owners can obtain insurance, it is assumed that the insurance rates are set by the insurance company to reflect risk and payouts, not subsidised, so insured flood costs are met on average by the occupants through premiums.)

In the first 30 years, the cumulative probability of any flood above floor level would be less than 40%. A minor flood only just above floor level (<0.1 m) is nearly as likely as a moderate flood of 0.1–0.6 m above floor level. There is a very low probability of a more severe flood in the first 30 or 40 years.

Lifetime probability of a flood
The longer the lifespan of the building, the greater the risk of a flood being encountered: the chances of encountering an extreme event increase with a longer period of exposure; and if sea level rise occurs as expected, floods will occur more often and be more severe.
By 2080, the chances of a flood occurring above floor level approach 100%. A moderate depth flood is significantly more likely than a minor flood and even a major flood has a 20% chance of occurring by this time. By 2100, the chances of a major flood occurring over the building's life exceed 50%.

Cost of flood damage
The calculated net present value (NPV) of flood costs is the percentage discount that a 'rational' person might apply to their offer on a dwelling due to expected future flooding costs. Based on the figure above, the cost might be expected to rise rapidly with longer lifetimes. However, as shown in Figure 2, the NPV of future flood damages expressed as a percentage of property value is quite low, rising from just over 1.5% for the first 30 years to 3.2% for a 90 year building life (2100). That is because:
- flood damage in the early years is of low probability, and the damage from floods that do occur is relatively minor; and
- flood damage farther into the future, while larger, is discounted substantially compared to current values and adds only a small amount to the present day cost.
As this same house is bought and sold over its lifetime, each subsequent buyer will be faced with a new estimate of NPV for flood damages. For later buyers, the flood damage they face in the early years of their ownership will be higher than for the previous owner, increasing their required discount. The more severe flood damage is coming closer, with less discounting, and will have a bigger influence on the next buyer's willingness to pay. Figure 3 shows how the NPV of expected flood damage moves up over time, assuming each buyer uses a 50-year time horizon when considering future damages. (A 50-year time horizon captures over 70% of the future discounted damage cost of using a 90-year time horizon for the case modelled.) Thus, while the buyer with a 50-year time horizon in 2010 would discount the price only 1.5% for future flood risk, the buyer in 2090 would seek to discount the price by 37% for future flood risk relative to a property with the same characteristics and amenity but that was flood risk free.
Assuming the house changes hands every 20 years, each subsequent buyer will apply a higher discount due to increased flood risks. Table 1 shows the discount each subsequent buyer would expect relative to the amount they would have paid (each buyer already received some discount when the property was purchased).
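The rising-discount pattern can be illustrated with a simple NPV calculation over a rolling 50-year horizon. The damage schedule below is purely hypothetical (a base expected annual damage of 0.05% of property value, growing at 4% a year as floods become more frequent and severe); it is calibrated only to show the mechanism, not to reproduce the SGS model:

```python
def npv_expected_damage(annual_damage, horizon=50, rate=0.06):
    """NPV of an expected annual damage stream (as a fraction of
    property value) over a buyer's time horizon."""
    return sum(annual_damage(t) / (1.0 + rate) ** t
               for t in range(1, horizon + 1))

def damage(t, start_year=0, base=0.0005, growth=0.04):
    """Hypothetical expected annual damage fraction, growing as sea level rises."""
    return base * (1.0 + growth) ** (start_year + t)

for start in (0, 20, 40, 60, 80):  # buyers in 2010, 2030, ..., 2090
    disc = npv_expected_damage(lambda t: damage(t, start))
    print(f"Buyer in {2010 + start}: discount of {disc:.1%}")
```

Under this particular calibration the 2010 buyer's discount comes out at about 1.6% and the 2090 buyer's at about 37%, broadly in line with the trajectory described in the text, but the numbers depend entirely on the assumed damage schedule.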
Each new buyer expects a higher discount than the seller received when they bought. The total (cumulative) discount adds up to the final buyer's total discount expected relative to an equivalent flood-risk-free property. However, the capital loss for any buyer in this example never exceeds 15% compared with what they paid. The expectation is that property prices increase in value with inflation, whether flood prone or not, and the 'losses' are relative to the gain that would otherwise have occurred for a flood risk free dwelling. Even with the discounts, prices may increase in dollar terms between each transaction right up to 2090 or beyond.
All of these curves are based on a single flood damage curve for a single-storey dwelling. Flood damages as a percentage of total value will generally be somewhat lower for buildings of two or more storeys than those shown.
Although the selling price of the house is reduced relative to a flood-risk-free property, the house is not actually getting 'cheaper'. The house costs nearly as much as a flood-free house, but the difference in capital cost is being paid in flood damages, inconvenience and other costs. In between floods, the amenity is presumed to be the same. However, there is a difference between the discounted flood-prone house and a flood-risk-free house: the owner of the flood-prone dwelling is taking a risk and may win or lose. Flood risk is statistical, and the owner may suffer more or fewer floods and damage than expected. The flood-risk-free property offers a relatively certain outcome. The flood-prone house would have relatively more appeal to a risk taker and rather less to a risk-averse buyer.
As properties age, it is common for some reinvestment to occur. This could be as basic as repainting walls and replacing floor coverings, or larger expenditure on renovating the kitchen and bathrooms. Many houses may get an extension or other major structural change during their lives. In facing increasing risks, these reinvestment decisions may be taken differently.

Inundation risk in different locations
Even with significant costs from flood damage, prime property on the waterfront is still likely to carry a premium compared with property without these views and settings. The costs of floods and repairs (and living with the uncertainties) are then simply the premium paid for occupying the valued location. Access and services would need to be maintained to the properties if values are to be maintained.
In flood-prone areas that are not highly attractive, disinvestment is more likely. As frequent or more severe flooding occurs, reinvestment in the property may give poor returns relative to better-located or flood-free areas, and after a severe event the properties may be abandoned. If less well-off homeowners occupy these sites in the first instance, they may simply lack the resources to either upgrade on site or to relocate elsewhere when they will receive very little for their current asset.
While the analysis suggests that the costs of hazards to individuals, averaged over the life of the dwelling and across all events, are modest, in fact the impacts fall very unevenly. While some will be lucky, others will potentially face ruin, likely imposing costs on the wider community. The assessment also excludes consideration of community costs to maintain access and services. The cost of maintaining these to acceptable levels of service may well exceed the cost of flood damage to private individuals, although this varies greatly by context. However, if the community fails to provide access and services, values could be cut by far more, and potentially sooner, than by flood risk costs to the property directly, particularly if expectations are for continued declines in service levels.

Scenario 2
Scenario 2 considers a requirement to place floor levels above the 100 year ARI flood level for floods expected in 2100 after a 0.9 m sea level rise, greatly reducing risk from the identified hazards and increasing certainty. It will shift the 'rational' valuation of the property upwards by the extent of the reduced risk, but with additional costs for initial construction and other effects. The impact on lifetime flood risk is shown in Figure 4. This is a dramatic reduction in risk compared with the equivalent figure in Scenario 1. Up to 2070, flood risk is close to zero. Even by 2100, the lifetime accumulated risk of any flood is less than 20%.
A notable comparator is that a dwelling built to a 100 year ARI flood level with unchanging sea level (or unchanging riverine flood regime) would face a 60% risk of being flooded at least once over a 90 year lifetime. This is deemed an acceptable level of risk for domestic dwellings under many standards. Thus, lifting the dwelling to meet the 2100 100 year ARI level can be seen as extremely conservative even near the end of the 90 year period.
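The 60% lifetime figure follows from the standard cumulative exceedance calculation. A minimal sketch for the static (no sea level rise) case:

```python
def lifetime_flood_probability(annual_p, years):
    """Probability of at least one flood over `years` of exposure,
    assuming a constant annual exceedance probability (static sea level)."""
    return 1.0 - (1.0 - annual_p) ** years

# A dwelling at the 100 year ARI level (1% AEP) over a 90 year life:
print(f"{lifetime_flood_probability(0.01, 90):.0%}")  # prints 60%
```

With rising sea levels the annual probability itself grows each year, so the true cumulative risk in Scenario 1 climbs faster than this static formula suggests.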

Negligible cost of flood damage
The NPV cost of future flood risk is essentially zero, making the market value akin to that of buildings with equivalent amenity in flood-risk-free locations over most of the next 90 years. Analysis by SGS Economics and Planning shows that the options examined for raising buildings above flood levels add 1% to 10% to the initial cost, which can be more than the relatively modest 1.5%–3% NPV of flood risk cost saved in 2010. The additional cost of the less expensive options is arguably close to or less than the future damages they avoid, but these options have their own limitations.
The curve showing future discounts as sea levels rise (Figure 5) has a lower start but a faster rise compared with the equivalent figure in Scenario 1. However, the scale is dramatically reduced, with the discount at 2090 reaching just 1.5%. Thus, the 37% discount that a buyer in 2090 would seek relative to a flood-free property in Scenario 1 might be reduced to only about 1.5% in Scenario 2. Clearly, this large improvement may make a modest up-front investment in elevating the property seem justified in financial terms, even if not required by regulation. The higher sale value (from the reduced flood damage risk discount) may or may not be greater than the cost in the short term, but the difference would become positive over the life of the dwelling. However, unlike Scenario 1, all of the extra cost is borne by the initial owner, none by future buyers.
However, the economic argument is that if the short-term savings were invested at 6%, then by the time the larger savings in flood risk were realised, one would be better off with the funds accumulated from the money saved up front. That is, even those large future savings are not sufficient to cover a relatively modest additional outlay if the savings had been earning interest and accumulating over the long period before they are needed. For example, if a 1% saving in the initial cost is invested (tax free) at 6% real for 80 years, it would grow to equal the full initial capital cost as at year one, and could be used to 'write off' the investment if the investment itself had not had real capital gains. In practice, the capital gain in real estate is in the land, not the structures, which depreciate. So the invested 1% saving would pay off the depreciated value of the building much sooner.
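The compounding claim can be checked directly. A minimal sketch, assuming a constant 6% real, tax-free return as in the text:

```python
def future_value(fraction_saved, years, rate=0.06):
    """Real future value of an up-front saving (expressed as a fraction
    of initial capital cost) invested at a constant real rate."""
    return fraction_saved * (1.0 + rate) ** years

# 1% of the initial cost, invested for 80 years at 6% real:
print(f"{future_value(0.01, 80):.2f}")  # prints 1.06
```

That is, the 1% saving grows to roughly 106% of the original capital cost, matching the claim that it could 'write off' the (depreciated) building well before year 80.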

Other impacts on value and costs
In addition to the cost of raising the dwelling there may be additional costs for ramp or stair access, decks to achieve comparable amenity, raising driveways, vehicle parking areas and outbuildings, if warranted, and landscaping impacts. Other considerations when raising a dwelling above flood heights may include:
- raised buildings may be subject to scouring where there is water flow; may impede water flow; may change the character of the dwelling (lack of a walk-out ground-level patio); and, if changed from slab-on-ground to a raised timber frame, may have reduced thermal mass and hence building energy efficiency and comfort;
- change in development potential (if floors must be lifted while a height constraint remains, development potential may be reduced from two storeys to one);
- accessibility to the site in flood/erosion events if roads are closed or driveways and footpaths are not raised;
- some level of damage to landscaping and outbuildings in flood events; that is, not all flood risk costs are avoided, so the full calculated savings may not be achieved;
- continued availability of services (and how these are funded); and
- a remaining need for emergency planning or response in hazard conditions that exceed floor levels.
All of these affect perceived net value (gains less expenses) of any adaptation chosen. If the council finds the provision of services to be cost prohibitive at some time in the future, and withdraws services as flood risk and service costs become excessive, the requirement to raise floor levels above the levels supported by the council's infrastructure would be seen as inappropriate.

Scenario 3
Scenario 3 is based on a more reactive approach while still seeking to ensure risks are managed as required. For example, property at present-day risk, or likely to face significant risk within the next 50 years (that is, a reasonable service life for the building), will be required to be built to deal with the coastal hazards over that period. As the risks are current or imminent, the case for dealing with them is more easily made.
A condition of development in all locations subject to hazards by 2100, even if not now, is that when the sea has risen so that these properties are at risk, or about to be at risk, they will be responsible for the costs of managing risk to acceptable levels. Further, all dwellings in hazard areas will be responsible for contributing to any additional costs (over and above 'normal' cost) for provision of infrastructure or community protection works as well as an emergency services levy and compulsory insurance.
For this scenario, the benchmark may be that the dwelling needs to be elevated above the 100 year ARI flood level for 2050 (allowing for a sea level rise of 0.4 m to 2050), rather than the 2100 level proposed in Scenario 2. The 'lifetime' risk of flooding for this option is shown in Figure 6.
Again referring to the comparator of a dwelling built to a 100 year ARI flood level with unchanging sea level (or unchanging riverine flood regime), the 90 year lifetime risk of a flood event is 60%, compared with 85% for this Scenario 3 proposal. However, while the lifetime risk over 90 years is higher, the cumulative risk for the first 70 years is very much lower than for a building facing a static 100 year ARI hazard. After that, the risk rises relatively rapidly and continues to do so. This provides up to 70 years to consider the desirability of reinvesting, to adopt any new technologies that may arise in the interim, or simply to write off a good investment and retreat if conditions require it (e.g. access and services cannot be maintained).
Scenario 3 presents a prospect of lower costs to meet acceptable standards (e.g. a lower floor height threshold) in the short term, but comes with proposed balancing considerations. Homeowners in hazard areas might:
- be expected to contribute to the additional costs of raising roads, bridges or other infrastructure to levels that can provide acceptable continuing services (where this is practical and acceptable for environmental and other good planning reasons);
- be restricted from major reinvestment in their dwellings (extensions, other structural changes and so on) without meeting the then current flood standards;
- be required to have an approved flood response plan, perhaps registered with council, and to make a contribution to a flood relief and assistance fund if their dwelling falls below the then current standard for new structures;
- be required to contribute to any community protection works undertaken for joint protection of property; or
- be expected to obtain flood insurance or, if not, to contribute to a community flood risk fund in the event of a flood that exceeds expectations and preparedness.
The effect of such requirements would be to put the community costs of occupying hazardous areas onto those that choose to live there. To do otherwise effectively subsidises people to live in hazardous locations. While the lesser initial cost of development to a lower standard does not create excessive risk in the short term, it provides an opportunity to decide, as the dwelling ages, to either redevelop to a higher standard or to invest in the protective technology that may be available in 50 years' time.
It also addresses the real uncertainty of future sea level rise. Should sea levels rise at a different rate than currently expected, either faster or slower, the future obligations would be tied to future assessed flood risk based on actual changes to sea levels and/or rainfall intensities. This reduces the need to get it right far in advance, and provides a more responsive approach to actual conditions. Further, by deferring costs until the expenditure gives more immediate value, it is easier to ensure the gains are greater than the cost.
Under this approach, landowners would retain the right to redevelop their land, even if it is underwater, as long as they pay all hazard-related costs and actively manage risks, and provided occupancy and redevelopment continue to meet environmental and other basic planning criteria.

Table 2 summarises some of the key features associated with each of the three scenarios and the regulatory approach they imply for the inundation case examined. Scenario 3 provides benefits exceeding costs for private property in the short run and passes on the costs of risk management for supporting public assets in the long run, avoiding subsidising households that occupy flood-prone areas. It also allows greater flexibility to respond to rates of sea level rise that differ greatly from those currently expected.

Conclusion
Market prices alone exclude too many costs to provide the incentive for property owners to avoid hazardous locations. However, requiring a response to expected conditions in 2100 may be unduly cautious, imposing costs on householders that exceed the small gains in damage and other private costs saved, while not addressing the future costs of providing or repairing public services or of providing shared protection works.
Taking an approach that provides protection from expected changes over a shorter period of, say, 40 years, but places requirements for responding to risks later as they develop, provides greater benefits from the adaptation investment while ensuring risks remain low, because the response comes closer to when the works are required. It also reduces the potential loss of value from flood risk, can address the extra cost incurred by services in flood-prone areas, and makes it more likely that the gains will exceed the total cost of adaptive works.