Socw 6361 WK 11 Discussion 2: Becoming a Lifelong Advocate
As this course comes to a close, consider and reflect on how you can become a lifelong advocate for social change in your future social work practice. As a motivated policy advocate and social worker, your actions in your chosen profession will reflect your motivation to help relatively powerless, disenfranchised groups of people improve their resources, their opportunities, and their quality of life.
In this Discussion, you reflect upon your responsibility as a social worker, politically and professionally.
Post your thoughts on the following questions (state each question in bold, then answer it):
As a social worker, what is your responsibility to engage in political action?
Identify an area of social welfare where social work policy advocacy is needed.
Resources
Jansson, B. S. (2018). Becoming an effective policy advocate: From policy practice to social justice (8th ed.). Pacific Grove, CA: Brooks/Cole Cengage Learning.
Chapter 14, “Assessing Policy: Toward Evidence-Based Policy During Task 8” (pp. 488-503)

Community Tool Box. (2016). Chapter 8, Section 6: Obtaining feedback from constituents: What changes are important and feasible? Retrieved from https://ctb.ku.edu/en/table-of-contents/structure/strategic-planning/obtain-constituent-feedback/main
Learn how to obtain feedback from constituents in order to prioritize which changes are most important and feasible to pursue.




Obtaining feedback from your community is vital to understand what the community truly needs and how it perceives your organization. This section explores how to obtain formal and informal feedback from members within your community so that your group may improve its program.

WHAT DOES IT MEAN TO OBTAIN FEEDBACK FROM CONSTITUENTS?
By obtaining feedback, we simply mean asking questions to determine something you want to know. Most often, feedback is sought to determine how well people feel your organization is doing, and also how important they believe the goals of your agency are. Feedback may be obtained in a number of ways, some as simple as having a casual conversation or reading articles and editorials in the newspaper. Formal feedback (data that you can measure) is usually obtained through one of the following methods:

Personal interviews
Phone surveys
Written surveys or questionnaires
The term constituents, as we use it here, may refer to a variety of people, including those who are affected (directly or indirectly) by your agency’s work, elected officials, members of your coalition, journalists, community leaders, and others.

WHY SHOULD YOU OBTAIN FEEDBACK FROM CONSTITUENTS?
To understand how your organization is perceived
To get a better understanding of what the community really needs
To help prioritize tasks
To generate renewed excitement and interest in your program
To have the information ready for future use (such as grant proposals and questions from the press)
To increase community awareness of who you are and what you do
And overall, to improve your program
WHEN SHOULD YOU OBTAIN FEEDBACK FROM CONSTITUENTS?
You should try to obtain informal feedback as an ongoing, continuous process. Formal feedback may be done at differing times, including:

As part of the planning process when you start your initiative
Any time you start (or are considering starting) a new program
At the end of a certain program sponsored by your group, such as a two-day workshop discussing the risk factors for alcoholism, or a summer bicycle-helmet program for youths
Periodically throughout the life of your initiative (perhaps once a year or every two years)
However, you should always be sure you know how you will actually use the information you obtain. Nothing is more frustrating to your participants than to give feedback that is not used.

HOW TO OBTAIN FEEDBACK FROM CONSTITUENTS
ASK YOURSELF THE RIGHT QUESTIONS
What do you want to know?

Some information that you could gather just won’t be used, and so it’s simply not worth the staff time to gather it. For example, perhaps you have received a grant to reduce teen pregnancy in your community. Whether or not the community perceives teen pregnancy as a problem may be less important to you than other issues, because the program is going to be implemented either way. In such a case, it might make sense for your group to use your resources in a different way, such as to determine what specific needs regarding teen pregnancy need to be addressed.

Who has already done this?

Check to see if someone, such as researchers or another agency, has already done a survey in your community asking the same questions that you would like answered. Your coalition is undoubtedly busy enough; don’t try to reinvent the wheel.

Who do you want to ask?

Decide whom you would like to survey. There are a variety of people you might decide to question, depending on what you would like to find out.

Possible respondents might include:

The targets of change, or those whose actions you would like to change
The people most affected by the problem you are addressing
Professionals in your area
Local administrators (directors, coordinators, principals, etc.)
Possible or current funders for your program
Elected officials
Journalists
Researchers and field experts
Members of your coalition
Further, decide if you want to obtain your information in a closed manner (surveying a select group of people) or in an open manner (anyone who is willing to pick up a pencil or open their mouths for a few minutes). Be careful not to ask administrators to tell you the needs of those most affected; rather, ask those who are most affected themselves.

How many people would you like to ask?

If you are only surveying the active members of a small coalition (say, fewer than 50 members), you might try to survey everyone. If you would like to learn about the feelings of the teenagers in your coalition with regard to drug abuse, however, you might find it unfeasible to survey every teen, and instead randomly choose a smaller, more workable group to question.
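If your group keeps its roster electronically, the random selection described above can be done with a few lines of Python. This is a hypothetical sketch, not part of the Community Tool Box material; the roster names, sample size, and seed are invented for illustration:

```python
import random

def draw_sample(roster, sample_size, seed=None):
    """Randomly select `sample_size` people from a roster, without replacement.
    If the roster is small enough, simply survey everyone."""
    if sample_size >= len(roster):
        return list(roster)  # small group: survey everyone
    rng = random.Random(seed)  # fixed seed makes the draw repeatable
    return rng.sample(roster, sample_size)

# Illustrative roster of 200 coalition teens (names are made up)
teens = [f"teen_{i:03d}" for i in range(1, 201)]
chosen = draw_sample(teens, 30, seed=42)
print(len(chosen))  # 30 respondents drawn at random from 200
```

Drawing the sample at random, rather than picking the most convenient respondents, helps keep the feedback representative of the larger group.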

How do you want to ask people?

This may be done in a variety of ways, including:

Listen to the opinions of people you know, researchers at planning agencies, people who work in the same or a similar field, and anyone else you can think of
Suggestion boxes
Noting chance meetings or comments in a log
Feedback forms on publications such as brochures or on an agency newsletter
Comment logs by the phone
Designated “critique times” at meetings
A formal survey: either by personal interviews, a phone survey, or a written survey
GOOD TIPS:
Keep it secret. Always try to provide instructions that minimize any possibility of bias. For example, don’t discuss what you hope to learn, what you believe to be true, or what earlier surveys have told you when you are writing the instructions. When possible, allow surveys to be anonymous.
Keep your eyes and ears open. Be responsive to all possible means of obtaining data, such as learning what has been said at public protests, what complaints have been lodged or actions taken, etc.
Make the best of it. If the response you get from constituents isn’t what you hoped for (for example, if they respond that what your coalition is doing isn’t really important), reassess what you are doing, and brainstorm ideas of what else you might do to sway public opinion.
OBTAINING FORMAL FEEDBACK: CONDUCTING A SURVEY
You’ve decided to take the plunge and go all out with a formal survey. But where do you start? How do you format your work and frame your questions? There are volumes upon volumes of information suggesting how you might do this, but please consider the following information as a starting point when putting together your survey.

DECIDE HOW YOU WOULD LIKE TO CONDUCT YOUR SURVEY
First, should it be written or oral?

There are several advantages and disadvantages of each that you should take into account:

An oral survey (in person, on the phone) is often less formal, and may be easier to initiate and conduct. However, the body language or tone of the interviewer may affect the respondent’s answers, and of course, anonymity is not an option for spoken interviews. Further, responses from an oral interview are more likely to be vague and rambling, taking up valuable time as well as being difficult to chart.
A written survey may be formal and exact, and thus in the long run more efficient. However, it may be more difficult to convince people to respond to a mailed written survey than to respond orally, even though the actual time required is similar. Just think: if someone called and asked you to answer a few questions, you’d probably say yes, unless you were really pressed for time. However, if you got the same list of questions in the mail, you might think about answering them, and then forget, misplace the letter, or just throw it away. To get around this barrier, consider giving a survey to a “captive audience,” such as a group at a meeting or in a class.
DECIDE HOW TO FORMAT YOUR QUESTIONS
They may be written using open or closed questions:

Closed questions allow the respondent to answer from a menu of different choices. This menu might be as simple as responding to a yes/no question. It also might take the form of several words (for example, “Which of the following seems to be the biggest health concern in our community?”), or a rating scale (“On a scale of one to five, with five being most important, how would you rate the importance of stopping merchants from selling alcohol to minors?”). A rating scale is often a simple yet very effective way to learn the feelings of the people taking the survey. Five point scales (between one and five) and seven point scales are often the norm when doing a survey in this manner.
Open questions allow the respondent to answer questions in their own words, without prompts from the survey. An example of an open question would be, “What do you think is the most important health concern facing our community, and why do you think so?” The advantage of using open questions is that you are able to get deeper, more thoughtful answers than from closed questions. However, open questions may also lead to vague answers that are hard to interpret and use.
To get the best of both worlds, you might consider using a survey with closed questions that leaves room for additional comments.
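Once rating-scale responses are collected, summarizing them is simple arithmetic: tally each score and compute the average. The Python sketch below is illustrative only (the question and the scores are invented, and a five-point scale is assumed, as in the example above):

```python
from collections import Counter
from statistics import mean

def summarize_ratings(scores, scale_max=5):
    """Tally 1..scale_max ratings; report how many valid responses there were,
    the count for each score, and the average rating."""
    valid = [s for s in scores if 1 <= s <= scale_max]  # drop out-of-range answers
    return {
        "n": len(valid),
        "counts": dict(Counter(valid)),
        "average": round(mean(valid), 2) if valid else None,
    }

# Invented responses to: "How important is stopping merchants
# from selling alcohol to minors?" (1 = least, 5 = most important)
responses = [5, 4, 4, 3, 5, 2, 5, 4]
summary = summarize_ratings(responses)
print(summary["average"])  # 4.0
```

Reporting both the distribution of scores and the average gives a fuller picture than the average alone, since two very different communities can produce the same mean rating.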
TO THE EXTENT THAT IT IS POSSIBLE, REMOVE ALL POSSIBILITY OF BIAS FROM YOUR SURVEY
This includes:

When possible, don’t require (or even ask for) the names of the respondents
Avoid discussing any expectations you might have for this survey
Don’t discuss previous survey results
DON’T FORGET YOUR MANNERS
If your mother were going to respond to this survey, what would she want to see? Be sure to thank respondents ahead of time, let them know how you will use any information that you gather, and thank them again afterwards.

MAKE IT EASY
The less respondents are directly involved in your project, the less likely they are to be willing to take a lot of time filling out a survey or discussing an issue. Keep your survey as short as possible while still getting the information that you want to know. A good rule of thumb is simply, don’t ask questions you’re not going to use.

MAKE IT EASIER
If you are mailing your survey, make it easy to return. Always include a self-addressed stamped envelope.

KEEP YOUR COOL
Don’t be frustrated if only a small number of mailed surveys are returned to you; in fact, you should probably expect this. A “normal” return rate might mean that only about half of the surveys you send out are actually completed.

Community Tool Box. (2016). Chapter 12: Evaluating the Initiative. Retrieved from http://ctb.ku.edu/en/evaluating-initiative
1. Identify key stakeholders and what they care about (i.e., people or organizations that have something to gain or lose from the evaluation). Include:

a. Those involved in operating the program or initiative (e.g., staff, volunteers, community members, sponsors, and collaborators)
b. Those prioritized groups served or affected by the effort (e.g., those experiencing the problem, public officials)
c. Primary intended users of the evaluation (e.g., program or initiative staff, community members, outside researchers, funders).

Related resources:
Developing a Plan for Identifying Local Needs and Resources
Understanding and Describing the Community
Understanding Community Leadership, Evaluators, and Funders: What Are Their Interests?
Choosing Evaluators

2. Describe the program or initiative's framework or logic model (e.g., what the program or effort is trying to accomplish and how it is doing so). Include information about:

a. Purpose or mission (e.g., the problem or goal to which the program, effort, or initiative is addressed)
b. Context or conditions (e.g., the situation in which the effort will take place; factors that may affect outcomes)
c. Inputs: resources and barriers (e.g., resources may include time, talent, equipment, information, money, etc.). Barriers may include history of conflict, environmental factors, economic conditions, etc.
d. Activities or interventions (i.e., what the initiative will do to effect change and improvement) (e.g., providing information and enhancing skills; enhancing services and support; modifying access, barriers and opportunities; changing the consequences; modifying policies and broader systems)
e. Outputs (i.e., direct evidence of having performed the activities) (e.g., number of services provided)
f. Intended effects or outcomes

i. Shorter-term (e.g., increased knowledge or skill)
ii. Intermediate (e.g., changes in community programs, policies, or practices)
iii. Longer-term (e.g., change in behavior or population-level outcomes)

Related resources:
Developing an Evaluation Plan
Proclaiming Your Dream: Developing Vision and Mission Statements
Developing a Plan for Identifying Local Needs and Resources
Identifying Community Assets and Resources
Identifying Targets and Agents of Change: Who Can Benefit and Who Can Help

3. Focus the evaluation design: what the evaluation aims to accomplish, how it will do so, and how the findings will be used.
Include a description of:
a. Purpose or uses: what the evaluation aims to accomplish. Purposes may include: 1) Gain understanding about what works, 2) Improve how things get done, 3) Determine the effects of the program with individuals who participate, 4) Determine the effects of the program or initiative on the community
b. Evaluation questions: indicate what questions are important to stakeholders, including those related to:

i. Process measures: planning and implementation issues (e.g., How well was the initiative planned and implemented? Did those most affected contribute to the planning, implementation, and evaluation of the effort? How satisfied are participants with the program?)
ii. Outcome measures:
- Attainment of objectives (e.g., How well has the program or initiative met its stated objectives?)
- Impact on participants (e.g., How much and what kind of a difference has the program or initiative made for its prioritized groups?)
- Impact on community (e.g., How much and what kind of a difference has the program or initiative made on the community? Were there any unintended consequences, either positive or negative?)

c. Methods: what type of measurement and study design should be used to evaluate the effects of the program or initiative? Typical designs include case studies and more controlled experiments. By what methods will data be gathered to help answer the evaluation questions? Note appropriate methods to be used, including:

- Surveys about satisfaction and importance of the initiative
- Goal attainment reports
- Behavioral surveys
- Interviews with key participants
- Archival records
- Observations of behavior and environmental conditions
- Self-reporting, logs, or diaries
- Documentation system and analysis of the contribution of the initiative
- Community-level indicators of impact (e.g., rates of HIV)
- Case studies and experiments

Related resources:
Our Evaluation Model: Evaluating Comprehensive Community Initiatives
A Framework for Program Evaluation: A Gateway for Tools
Measuring Success: Evaluating Comprehensive Community Health Initiatives
Providing Feedback to Improve the Initiative
Gathering and Using Community-Level Indicators
Rating Member Satisfaction
Conducting Interviews with Key Participants to Analyze Critical Events
A Framework for Program Evaluation
Reaching Your Goals: The Goal Attainment Report
Constituent Survey of Outcomes: Ratings of Importance
Rating Community Goals
Gathering Information: Monitoring Your Progress
Behavioral Surveys

4. Gather credible evidence. Decide what counts as evidence, and what features affect the credibility of the evaluation, including:

a. Indicators of success: specify criteria used to judge the success of the program or initiative. Translate these into measures or indicators of success, including:
- Program outputs
- Participation rates
- Levels of satisfaction
- Changes in behavior
- Community or system changes (i.e., new programs, policies, and practices)
- Improvements in community-level indicators
b. Sources of evidence (e.g., interviews, surveys, observation, review of records): indicate how evidence of your success will be gathered.
c. Quality: estimate the appropriateness and integrity of the information gathered, including its accuracy (reliability) and sensitivity (validity). Indicate how the quality of measures will be assured.
d. Quantity: estimate what amount of data (or time) is required to evaluate effectiveness.
e. Logistics: indicate who will gather the data, by when, from what sources, and what precautions and permissions will be needed.

5. Outline and implement an evaluation plan. Indicate how you will:

a. Involve all key stakeholders (e.g., members of prioritized groups, program implementers, grantmakers) in identifying indicators of success, documenting evidence of success, and sense-making about the effects of the overall initiative and how it can be improved.
b. Track implementation of the initiative's intervention components.
c. Assess exposure to the intervention.
d. Assess ongoing changes in specific behavioral objectives.
e. Assess ongoing changes in specific population-level outcomes.
f. Examine the contribution of intervention components (e.g., a program or policy) and possible improvements in behavior and outcomes at the level of the whole community/population.
g. Consider the ethical implications of the initiative (e.g., Do the expected benefits outweigh the potential risks?).

6. Make sense of the data and justify conclusions. Indicate how each aspect of the evaluation will be met:

a. Standards: values held by stakeholders and how they will be assured. Indicate how each key standard will be met:

i. Utility standards: to ensure that the evaluation is useful and answers the questions that are important to stakeholders, including:
- Information scope and selection: information collected should address pertinent questions about the program, and it should be responsive to the needs and interests of clients and other specified stakeholders.
- Report clarity: evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation.
- Evaluation impact: evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the evaluation findings will be used.

ii. Feasibility standards: to ensure that the evaluation makes sense and its steps are viable and pragmatic, including:
- Practical procedures: the evaluation procedures should be practical, to keep disruption of everyday activities to a minimum while needed information is obtained.
- Political viability: the evaluation should be planned and conducted with anticipation of the different positions or interests of various groups.
- Cost effectiveness: the evaluation should be efficient and produce enough valuable information that the resources used can be justified.

iii. Propriety standards: to ensure that the evaluation is ethical and that it is conducted with regard for the rights and interests of those involved, including:
- Service orientation: evaluations should be designed to help organizations effectively serve the needs of all participants.
- Formal agreements: the responsibilities in an evaluation (what is to be done, how, by whom, when) should be agreed to in writing, so that those involved are obligated to follow all conditions of the agreement, or to formally renegotiate it.
- Rights of participants: evaluations should be designed and conducted to respect and protect the rights and welfare of all participants in the study.
- Complete and fair assessment: the evaluation should be complete and fair in its examination, recording both strengths and weaknesses of the program being evaluated.
- Conflict of interest: conflicts of interest should be dealt with openly and honestly, so that they do not compromise the evaluation processes and results.

iv. Accuracy standards: to ensure that the evaluation findings are considered correct. Indicate how the accuracy standards will be met, including:
- Program documentation: the intervention should be described and documented clearly and accurately, so that what is being evaluated is clearly identified.
- Context analysis: the context in which the initiative exists should be thoroughly examined so that likely influences on the program's effects can be identified.
- Valid information: the information-gathering procedures should be chosen or developed and then implemented in such a way that they will assure that the interpretation arrived at is valid.
- Reliable information: the information-gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable.
- Analysis of quantitative and qualitative information: quantitative information (i.e., data from observations or surveys) and qualitative information (e.g., from interviews) should be appropriately and systematically analyzed so that evaluation questions are effectively answered.
- Justified conclusions: the conclusions reached in an evaluation should be explicitly justified, so that stakeholders can understand their worth.

b. Analysis and synthesis: indicate how the evaluation report will analyze and summarize the findings.
c. Sensemaking and interpretation: how will the evaluation report communicate what the findings mean? How will the stakeholders use the information to help answer the evaluation questions? How will the group communicate what the findings suggest?
d. Judgments: statements of worth or merit, compared to selected standards. How will the group communicate what the findings suggest about the value added by the effort?
e. Recommendations: how will the group identify recommendations based on the results of the evaluation?

7. Use the information to celebrate, make adjustments, and communicate lessons learned. Take steps to ensure that the findings will be used appropriately, including:

a. Design: communicate how questions, methods, and findings are constructed to address agreed-upon uses.
b. Preparation: anticipate future uses of findings and how to translate knowledge into practice.
c. Feedback and sense-making: facilitate communication and shared interpretation among all users.
d. Follow-up: support users' needs during the evaluation and after they receive findings, including to celebrate accomplishments and make adjustments.
e. Dissemination: communicate lessons learned to all relevant audiences in a timely manner.

Journal of Public Child Welfare, Vol. 3, pp. 213–234, 2009

Copyright Taylor & Francis Group, LLC

ISSN: 1554-8732 print/1554-8740 online

DOI: 10.1080/15548730903129764

Fatherhood in the Child Welfare System: Evaluation of a Pilot Project to Improve Father Involvement

DIANA J. ENGLISH
University of Washington, Seattle, WA, USA

SHERRY BRUMMEL
Washington State Department of Social and Health Services,

Friday Harbor, WA, USA

PRISCILLA MARTENS
National Family Preservation Network, Buhl, ID, USA

Fathers provide emotional and physical, as well as financial, support to their children. However, little is known about public child welfare policies and practices related to involving fathers and fathers' families in case planning and services to children involved in child welfare services. This article reports on the results of a pilot project designed to improve child welfare principles, policies, and practices related to the involvement of fathers in the lives of children served in one Northwest public child welfare agency. The pilot project provided training on father involvement in child welfare decision processes and evaluated changes in practice over time. The evaluation included an assessment of agency policy and practice, an assessment of social workers' perceptions regarding fathers' involvement in the lives of their children, and examination of actual social work practices related to father involvement over time. Changes in key areas of policy, beliefs regarding father involvement in child welfare case practice, and changes in actual involvement of fathers and fathers' families in practice were suggested.

KEYWORDS father involvement in child welfare, child welfare practice, family centered practice

Received: 1/6/09; revised: 11/3/06; accepted: 2/6/09
Address correspondence to Diana J. English, Child Welfare Research Group, School of Social Work, University of Washington, 4045 Delridge Way S.W., Ste. 400, Seattle, WA 98195.



Research findings on the role of father involvement and child well-being are complex, including findings that suggest both direct and indirect effects of father involvement on child well-being. Most research on father involvement has focused on samples in the general population. In a review of the research literature on the effect of father involvement on child development, Lamb (1997) found that parental warmth, nurturance, and closeness are associated with positive child outcomes regardless of child gender. This review also found that individual characteristics of fathers, rather than the presence or absence of a father figure, are key factors related to child well-being. However, Lamb's (1997) review also reported research suggesting that individual father characteristics are less influential than overall family atmosphere in terms of child well-being.

Little research has focused specifically on the relationship between father involvement and child well-being in child welfare populations. A few recent studies from the Longitudinal Study of Child Abuse and Neglect (LONGSCAN) have focused specifically on maltreated samples. LONGSCAN is a multi-site, 20-year longitudinal study examining the antecedents and consequences of abuse or neglect on children's growth and development. These studies found different relationships between father involvement and child well-being based on the type of the father's relationship to the child (e.g., biological or step-parent) (Radhakrishna et al., 2001), type of maltreatment (Dubowitz et al., 2001), and differences based on whether the child perceived the relationship with the father or father figure as supportive (Dubowitz et al., 2001). Direct and indirect effects of father involvement have also been found. For example, Marshall and English (2001) found that higher levels of father involvement are associated with lower maternal depression, which was in turn associated with less severe physical and verbal discipline by the mother, as well as improved child outcomes. Dubowitz et al. (2001) examined child neglect and found that families with a higher degree of father involvement (duration and parenting efficacy) were less likely to be neglectful. In a separate study of home visiting programs, Duggan et al. (2004) concluded that the family context needs to be carefully considered before including fathers in family decision-making, especially when domestic violence is involved.

Findings from studies such as these are important because they expand
our understanding of the dynamics of relationships between fathers and
mothers and their children. However, research has not yet fully addressed
the complexity of father or mother involvement in cases where children are
victims of abuse or neglect. In addition to gaps in knowledge on specific
effects of father involvement in families where abuse or neglect is an issue,
there is virtually no information on how child welfare policies and practices
address the father involvement issue.

Despite significant gaps in knowledge related to the involvement of fathers in cases where abuse or neglect has occurred, emphasis on fathers' involvement in the lives of their children has increased dramatically during the past several years (see, for example, Strug & Wilmore-Schaeffer, 2003). While early interest in the involvement of biological fathers with their children focused on child support, more recently there has been a shift in policy that includes a focus on fathers' involvement with the physical and emotional support of their children as well (United States Department of Health and Human Services, 2001).

The passage of the Adoption and Safe Families Act (ASFA) in 1997 has the potential to dramatically influence child welfare practice related to the involvement of fathers (and extended family) in the lives of children. The 1997 ASFA provisions include policies regarding concurrent case-planning, increased use of kinship placements for children removed from parental custody, and the use of family decision-making in case planning. In the concurrent planning process, child welfare agencies are required to routinely identify and assess non-custodial parents (usually fathers) and extended family members as potential support, as well as a placement resource, if a child is removed from parental custody. Abbreviated timelines for the development of permanent plans for children include provisions for locating and involving a child's non-custodial father at early stages in the process in order to avoid delays in achieving permanency for the child.

It is unclear how many children involved in the public child welfare system have non-custodial fathers, although annual federal statistics on child abuse and neglect indicate that 55% of all substantiated abuse and mistreatment is related to single-parent coping/resource issues (Administration for Children and Families, 1997). Although many children served by the public child welfare system live with their mothers as primary caregivers, available data also suggest that in the majority of cases the father of the child has been legally identified (National Survey of America's Family, 1999) and could be a participant in decision-making regarding a resource for the child.

Until recently, the major effort in public child welfare has been to identify fathers as a source of financial support for children. In general, the primary focus of family-centered child welfare services continues to be oriented toward the child's mother (National Child Welfare Resource Center for Family Centered Practice, 2002; Franck, 2001). Studies that have examined overall father involvement in child welfare cases have found that fathers are less involved in cases than mothers, that social workers directed their attention to the mother as the primary parent, and that caseworkers did not identify the absence of father involvement as an issue (Franck, 2001; O'Donnell, 1999). A recent review by Sonnenstein et al. (2002) found that child welfare agencies are making greater efforts to identify fathers for the purpose of complying with Temporary Assistance for Needy Families (TANF) and ASFA requirements.

The National Family Preservation Network (NFPN) (2001) conducted a series of focus groups with child welfare professionals, family service workers, court officials, and fatherhood employees to learn more about father involvement in child welfare. Data from these focus groups suggest that fathers, when considered at all, were generally viewed negatively.