Grappa is an Italian spirit, made from grapes. It is similar to brandy. It contains between 35 and 60 percent alcohol by volume. Alcoholic spirits Italian food
Polyacrylic acid-coated cerium oxide nanoparticles: An oxidase mimic applied for colorimetric assay of organophosphorus pesticides. It is important and urgent to develop reliable and highly sensitive methods that can provide on-site and rapid detection of extensively used organophosphorus pesticides (OPs) because of their neurotoxicity. In this study, we developed a novel colorimetric assay for the detection of OPs based on polyacrylic acid-coated cerium oxide nanoparticles (PAA-CeO2) as an oxidase mimic and OPs as inhibitors that suppress the activity of acetylcholinesterase (AChE). First, highly dispersed PAA-CeO2 was prepared in aqueous solution; it catalyzes the oxidation of TMB, producing a color change from colorless to blue. The enzyme AChE was used to hydrolyze the substrate acetylthiocholine (ATCh) to produce thiocholine (TCh). As a reducing, thiol-containing compound, TCh decreases the PAA-CeO2-catalyzed oxidation of TMB. Upon incubation with OPs, the enzymatic activity of AChE is inhibited and less TCh is produced, so more TMB is catalytically oxidized by PAA-CeO2 and the blue color deepens. Two representative OPs, dichlorvos and methyl-paraoxon, were tested using the proposed assay. The assay showed a notable color change in a concentration-dependent manner, and dichlorvos and methyl-paraoxon could be readily detected at levels as low as 8.62 ppb and 26.73 ppb, respectively. Therefore, taking advantage of the oxidase-like activity of PAA-CeO2, the proposed colorimetric assay can potentially serve as a screening tool for the precise and rapid evaluation of the neurotoxicity of a wide range of OPs.
Theneuille is a commune. It is found in the Allier department in the center of France.
A disgruntled US employee walked back into the factory that fired him and fatally shot five ex-colleagues, before killing himself, police say. John Robert Neumann, 45, was armed with a semi-automatic handgun and hunting knife when he entered the business near Orlando, Florida, on Monday morning. The US army veteran was sacked in April, police say. There is no suggestion he was a member of a subversive or terrorist organisation, they add. Orange County Sheriff Jerry Demings said the shooting had unfolded at the premises of Fiamma, which makes awnings for motor homes and camper vans. Most of the victims were shot in the head, some multiple times, he added. "He was certainly singling out the individuals that he shot," said Sheriff Demings. The victims included Robert Snyder, 69, Brenda Montanez-Crespo, 44, Kevin Clark, 53, Jeffrey Roberts, 57, and another unidentified man. Neumann reloaded his handgun at least once during the rampage, the sheriff said. The gunman had told an employee whom he did not know to leave the premises, and left about seven other staff members uninjured. Neumann - who lived alone in the area - killed himself as deputies were about to enter the warehouse, the sheriff said. Authorities say he did not have a permit for the weapon. He was honourably discharged from the army in 1999. He had a history of misdemeanour criminal offences, such as possession of marijuana and driving under the influence. Neumann attacked a member of staff in 2014, though no charges were filed, police said. In a statement, Florida Governor Rick Scott condemned a "senseless act of violence". "Over the past year, the Orlando community has been challenged like never before," he said. The shooting came a week before the first anniversary of the Pulse nightclub shooting that left 49 people dead in Orlando. In last June's attack, the deadliest mass shooting in modern US history, gunman Omar Mateen killed 49 people and injured dozens more at a gay nightclub before being shot dead by police.
Anthony Michael Bourdain (June 25, 1956 - June 8, 2018) was an American celebrity chef, author, and television personality, known as one of the most influential chefs in the world. Career Bourdain first became known for his 2000 book Kitchen Confidential: Adventures in the Culinary Underbelly. His first food and world-travel television series was A Cook's Tour, which ran for 35 episodes on the Food Network from 2002 through 2003. In 2005, Bourdain began hosting the Travel Channel's culinary and cultural adventure programs Anthony Bourdain: No Reservations (2005-2012) and The Layover (2011-2013). In 2013, he switched to CNN to host Anthony Bourdain: Parts Unknown. Death On June 8, 2018, Bourdain was found dead of an apparent suicide by hanging in his hotel room in Kaysersberg-Vignoble, Haut-Rhin, France. He was working on an episode of Anthony Bourdain: Parts Unknown in Strasbourg, France. He was 61 years old. At the time of his death, he was in a relationship with actress Asia Argento.
A bitter pill? German bishops take a bold step in allowing emergency contraception As the Catholic bishops in the United States continue to fight for their religious liberty by arguing that opposition to all artificial contraception is a deeply held belief of the Catholic faith, the German Catholic bishops have gone in the opposite direction, announcing they will now allow Catholic hospitals to give the "morning after" pill to rape victims. Archbishop Robert Zollitsch said a four-day meeting of German bishops in the western town of Trier had "confirmed that women who have been victims of rape will get the proper human, medical, psychological and pastoral care". "That can include medication with a 'morning-after pill' as long as this has a prophylactic and not an abortive effect," he said in a statement. "Medical and pharmaceutical methods that induce the death of an embryo may still not be used." This means the German bishops are drawing a distinction between contraceptives that prevent pregnancy and those that might halt the development of an already fertilized egg. That distinction maintains the church's teaching that human life begins at the moment of conception, but is a departure from the church's belief that any artificial means of preventing pregnancy is immoral. The German bishops' decision stems from a case in which a woman was denied treatment at a German Catholic hospital after being drugged at a party and raped. The bishops had to wade through some complex questions here, such as whether taking artificial steps to prevent pregnancy--which some see as God's will, even in rape cases--is ever morally justified. Their conclusion that it can be is an acknowledgement that contraception may not be as much of a black-and-white issue as some Catholics would believe. I don't expect the German bishops' decision to be the tidal wave that changes the church's stance on contraception. But it certainly is a ripple in the ocean that will get people talking. I'm sure that many Catholics would like to see the issue brought up for a serious debate in the global church, even if that discussion doesn't seem to fit the agenda of the U.S. bishops.
The University of Kansas is a public university in Lawrence, a hilly city in northeastern Kansas. It is often abbreviated as "KU". KU held its first classes in 1866. As of Spring 2011, over 30,000 students attended school there. History There was a plan to build a university in Kansas in 1855, but it didn't happen until Kansas became a state in 1861. The Kansas government needed to decide where to build the university. Their choices were Manhattan, Emporia, or Lawrence. On January 13, 1863, Kansas State University was established in Manhattan. The only cities left were Emporia and Lawrence. Amos A. Lawrence gave $10,000 and more than 40 acres (160,000 m2) of land for a university in Lawrence. The Kansas government liked that, so the government chose Lawrence. On February 20, 1863, Kansas Governor Thomas Carney signed into law a bill creating the state university in Lawrence. The law would take effect only if Lawrence gave a $15,000 endowment fund and a site for the university. The site had to be in or near the town and cover not less than forty acres (16 ha) of land. On November 2, 1863, Governor Carney said Lawrence had met the conditions to get the state university. In 1864, the university was officially organized. The university opened for classes on September 12, 1866, and the first class graduated in 1873. Academics School of Business The University of Kansas School of Business is a public business school on the main campus in Lawrence. The KU School of Business was created in 1924. It has more than 80 staff members, and it has about 1500 students. It was named one of the best business schools in the Midwest by the Princeton Review. The KU School of Business has been accredited by the Association to Advance Collegiate Schools of Business (AACSB) for its undergraduate and graduate programs in business and accounting. School of Law The University of Kansas School of Law was created in 1878. It was the top law school in the state of Kansas. The 2016 U.S. News & World Report "U.S. News Best Colleges Rankings" says that it was the 65th best law school in the United States. Classes are held in Green Hall at W 15th St and Burdick Dr, which is named after former dean James Green. School of Engineering The KU School of Engineering is a public engineering school on the main campus in Lawrence. The School of Engineering was officially created in 1891, although engineering degrees were awarded as early as 1873. The U.S. News & World Report "America's Best Colleges" 2016 issue says that KU's School of Engineering was the 90th best engineering school in the United States. Famous alumni include: Alan Mulally (BS/MS), former President and CEO of Ford Motor Company; Lou Montulli, co-founder of Netscape and author of the Lynx web browser; Brian McClendon (BSEE 1986), VP of Engineering at Google; and Charles E. Spahr (1934), former CEO of Standard Oil of Ohio. Edwards Campus The KU Edwards Campus is in Overland Park, Kansas. It was created in 1993 in order to provide adults with a chance to get college degrees and a better education. About 2,000 students go there. The average age of the students is 31. The Edwards Campus provides programs in developmental psychology, public administration, social work, systems analysis, engineering management and design. Tuition Students enrolled in 6 or more credit hours paid a yearly required campus fee of $888. The schools of architecture, music, arts, business, education, engineering, journalism, law, pharmacy, and social welfare charge additional fees.
The yearly tuition for 30 credit hours for a freshman is estimated by the university to be $10,182. This does not include room and board costs. Sports Kansas' athletics teams are called the Jayhawks. Kansas has 16 varsity teams, all of which compete in the Big 12 Conference. They are known for their men's basketball team, which most recently won a national championship in 2008. Other locations The KU Medical Center, which is one branch of the University of Kansas, is located in Kansas City, Kansas, which is east of Lawrence. Another branch of KU, called the Edwards Campus, is located in Overland Park, Kansas. Student Activities Debate The University of Kansas has had more teams (70) compete in the National Debate Tournament than any other university. Kansas has won the tournament 6 times (1954, 1970, 1976, 1983, 2009 and 2018). Media The University of Kansas's newspaper is The University Daily Kansan. Housing Famous alumni
// Copyright (c) 2014 AlphaSierraPapa for the SharpDevelop Team // // Permission is hereby granted, free of charge, to any person obtaining a copy of this // software and associated documentation files (the "Software"), to deal in the Software // without restriction, including without limitation the rights to use, copy, modify, merge, // publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons // to whom the Software is furnished to do so, subject to the following conditions: // // The above copyright notice and this permission notice shall be included in all copies or // substantial portions of the Software. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, // INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR // PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE // FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR // OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER // DEALINGS IN THE SOFTWARE. using System; using System.Collections.Generic; using ICSharpCode.Reporting.Interfaces; using ICSharpCode.Reporting.Interfaces.Export; using ICSharpCode.Reporting.PageBuilder.ExportColumns; namespace ICSharpCode.Reporting.Items { /// <summary> /// Description of ReportContainer. /// </summary> public class ReportContainer:PrintableItem,IReportContainer { public ReportContainer() { items = new List<IPrintableObject>(); } private List<IPrintableObject> items; public List<IPrintableObject> Items { get { return items; } } public override IExportColumn CreateExportColumn() { var export = new ExportContainer(); export.ToExportItem(this); return export; // return new ExportContainer(){ // Name = this.Name, // Size = this.Size, // Location = this.Location, // CanGrow = this.CanGrow, // BackColor = this.BackColor, // DesiredSize = this.Size // }; } } }
Framingham is a city in the U.S. state of Massachusetts. History John Oldham was the first European to set foot on the land that is now Framingham. In 1633 he led a group of explorers down a Native American trail called the Old Connecticut Path that is now Framingham's oldest road. The first European settler was John Stone, who built a home on the west bank of the Sudbury River in 1647. Starting in 1693, families from Salem came to escape the Salem witch trials. They settled in an area of Framingham that is still called Salem's End today. Framingham's original name was Danforth's Farm, named after Thomas Danforth, who owned the land. Danforth's family came to Massachusetts from Framlingham, England. The first petition to incorporate Framingham as a town was submitted to the General Court in 1693, and was denied because Thomas Danforth did not want Framingham to become a town. After Danforth died in 1699, the people made Framingham a town in 1700. In 1706 the first schoolhouse was built. The first schoolmaster was Deacon Joshua Hemenway. On February 12, 1775, British General Thomas Gage sent spies to Framingham. They reported that the Framingham minutemen were very strong and tough, so General Gage sent his troops to Lexington and Concord instead. But the Framingham minutemen marched over and helped the weaker minutemen from Lexington and Concord to fight the British. On April 18 and 19, 1775, two militia companies from Framingham, totaling about 130 men, fought in the Battles of Lexington and Concord. Only one of the men was wounded. Today, Framingham is known for the Framingham Heart Study. On April 4, 2017, residents of the Town of Framingham voted to become the City of Framingham in a 5,684 - 5,579 vote. Geography and Landmarks As of the census (a counting of people) in 2005-2009, there were 66,411 people living in 27,328 houses. In 2000, there were 66,910 people in the town. In 2010, there were 68,318 people. Framingham is sited on the ancient trail known as the Old Connecticut Path, the oldest road in Framingham. Framingham's oldest public building is the First Baptist Church, designed by the noted architect Solomon Willard. Framingham has three major business districts. The "Golden Triangle" was originally a three-square-mile district on the eastern side of Framingham, bordered by Route 9, Route 30, and Speen Street in Natick. The area is one of the largest shopping districts in New England. The Golden Triangle has expanded since 1993 with the construction of a BJ's Wholesale Club and a Super Stop & Shop just north of Route 30. Downtown Framingham is anchored by a town hall called the Memorial Building. An influx of Hispanic and Brazilian immigrants helped revitalize the district starting in the early 2000s. West Framingham is home to two of the town's seven auto dealerships. There are also several smaller business hubs in Framingham Center, Saxonville, Nobscot, and along the Route 9 corridor. Framingham Center is the physical and historic center of town. A dominating presence in education is Framingham State College. The Framingham Peace and 9/11 Memorials are located across the street from Farm Pond. Industry & Resources Framingham's economy is predominantly derived from retail and office complexes. Breyers, Leggat McCall, the American Heart Association, and the American Cancer Society all have facilities in the area. The Dennison Manufacturing Company was founded here in 1844 as a jewelry and watch box manufacturing company by Aaron Lufkin Dennison.
Recreation Nobscot Mountain is a private facility owned by the Knox Trail Council of the Boy Scouts of America. The Edward F. Loring Skating Arena, near Farm Pond, serves area communities and is open to the public. Winch Park has a basketball court, a tennis court, and two large fields used for football, baseball, and lacrosse. Framingham's Country Club, along Salem Rd. on the South Side, is a private club that features an 18-hole golf course.
Q: Google Maps InfoBox or Info Window click eventlistener? Is there a way of adding a click eventlistener for an info window or an InfoBox (I'm using that plugin)? Really the problem I'm having is that the window/box is sometimes getting in the way of a click eventlistener that should be triggered when the user clicks anywhere on the map. A: from the documentation: http://google-maps-utility-library-v3.googlecode.com/svn/trunk/infobox/docs/examples.html Using InfoBox to Create a Map Label This example shows how to use an InfoBox as a map label. One important step is to set the pane property to "mapPane" so that the InfoBox appears below everything else on the map. It's also necessary to set closeBoxURL to "" so that the label will not have a close box, to set disableAutoPane to true so that the map does not pan when the label is added, and to set enableEventPropagation to true so that events will be passed on to the map for handling. from the example referenced above: var myOptions = { content: boxText ,disableAutoPan: false ,maxWidth: 0 ,pixelOffset: new google.maps.Size(-140, 0) ,zIndex: null ,boxStyle: { background: "url('tipbox.gif') no-repeat" ,opacity: 0.75 ,width: "280px" } ,closeBoxMargin: "10px 2px 2px 2px" ,closeBoxURL: "http://www.google.com/intl/en_us/mapfiles/close.gif" ,infoBoxClearance: new google.maps.Size(1, 1) ,isHidden: false ,pane: "floatPane" ,enableEventPropagation: false }; var ib = new InfoBox(myOptions); ib.open(theMap, marker);
Betulaceae is a group of flowering plants also known as the birch family. This includes the birches, alders, hornbeams (Carpinus), and hazels. Betulaceae
package cabf_br /* * ZLint Copyright 2020 Regents of the University of Michigan * * Licensed under the Apache License, Version 2.0 (the "License"); you may not * use this file except in compliance with the License. You may obtain a copy * of the License at http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or * implied. See the License for the specific language governing * permissions and limitations under the License. */ /*If the Certificate asserts the policy identifier of 2.23.140.1.2.3, then it MUST also include (i) either organizationName or givenName and surname, (ii) localityName (to the extent such field is required under Section 7.1.4.2.2), (iii) stateOrProvinceName (to the extent required under Section 7.1.4.2.2), and (iv) countryName in the Subject field.*/ import ( "github.com/zmap/zcrypto/x509" "github.com/zmap/zlint/v2/lint" "github.com/zmap/zlint/v2/util" ) type CertPolicyRequiresPersonalName struct{} func (l *CertPolicyRequiresPersonalName) Initialize() error { return nil } func (l *CertPolicyRequiresPersonalName) CheckApplies(cert *x509.Certificate) bool { return util.SliceContainsOID(cert.PolicyIdentifiers, util.BRIndividualValidatedOID) && !util.IsCACert(cert) } func (l *CertPolicyRequiresPersonalName) Execute(cert *x509.Certificate) *lint.LintResult { var out lint.LintResult if util.TypeInName(&cert.Subject, util.OrganizationNameOID) || (util.TypeInName(&cert.Subject, util.GivenNameOID) && util.TypeInName(&cert.Subject, util.SurnameOID)) { out.Status = lint.Pass } else { out.Status = lint.Error } return &out } func init() { lint.RegisterLint(&lint.Lint{ Name: "e_cab_iv_requires_personal_name", Description: "If certificate policy 2.23.140.1.2.3 is included, either organizationName or givenName and surname MUST be included in subject", Citation: "BRs: 7.1.6.1", Source: lint.CABFBaselineRequirements, EffectiveDate: util.CABV131Date, Lint: &CertPolicyRequiresPersonalName{}, }) }
The voiced epiglottal trill is a sound used in some spoken languages. It is not used in English. Consonants
Natural Gas Drilling Technique Gets Congressional Attention Wyoming – Wyoming officials are fighting to keep Washington from regulating natural gas drilling. They say the state is perfectly capable of taking care of its own natural resources. Critics say drilling companies are polluting water supplies. Eric Niiler has more.
The Sierra Leone national under-17 football team is a team of football players under 17 from Sierra Leone. The team is controlled by the Sierra Leone Football Association and played in the 2003 FIFA U-17 World Championship in Finland. FIFA U-17 World Cup appearances FIFA U-17 World Cup record African national football teams National sports teams of Sierra Leone
DETROIT (Reuters) - General Motors Co on Wednesday warned leaders of Canada's Unifor labor union that it will start to wind down production of its popular Chevrolet Equinox sport utility vehicle at an Ontario factory unless workers there call off a month-long strike. The strike has been fueled by union opposition to the North American Free Trade Agreement. Unifor leader Jerry Dias told Reuters on Wednesday that GM officials said they would ramp up production of the vehicle at two plants in Mexico that build the Equinox and a similar model, the GMC Terrain, if the walkout is not called off. "GM just told us today that they are going to ramp up production in Mexico," Unifor President Jerry Dias said by phone from Washington. "They have declared war on Canada." GM has plants in the United States that are under-utilized, but retooling them to build the Equinox would be expensive. GM plans to study how quickly key suppliers to the Ontario Equinox plant could move their operations to accommodate a shift in the vehicle's production, a person familiar with the discussions said on Wednesday. GM's decision to build the Equinox and Terrain in Mexico is a major issue in the contract dispute between the automaker and the Canadian union. Dias said he would not call off the strike. "This is the big issue," Dias said of the strike. "Once we solve this, everything else will fall into place." About 2,500 workers at a factory in Ingersoll, Ontario, walked off the job on Sept. 18 after GM rejected Unifor's call for the automaker to designate the factory, known as CAMI, as the lead production site for the Equinox in North America. The automaker invested $800 million to retool the plant for the new model. The union also objected to GM's decision to lay off 600 CAMI workers as it phased out production of the last-generation GMC Terrain SUV, and launched production of new-generation Terrain models along with the Equinox in Mexico. The CAMI plant was projected to build about 210,000 vehicles in 2018, while two plants in Mexico together were projected to build about 150,000 vehicles next year, according to AutoForecast Solutions, a forecasting firm. Unifor's Dias has blamed NAFTA for the job losses, complicating Canadian Prime Minister Justin Trudeau's effort to promote the benefits of open trade in response to U.S. President Donald Trump's criticism of the deal. U.S., Canadian and Mexican negotiators began another round of talks this week to modernize the agreement. The Equinox was the second best-selling model in the United States Chevrolet lineup in September, and GM had just 41 days' worth of the vehicle in stock at the end of last month, according to Automotive News.
Clares is a small village in the province of Guadalajara. It belongs to the Molina-Alto Tajo region in the autonomous community of Castile-La Mancha, Spain. Fauna In 1989 a Special Protection Area (SPA), category A, was created near Clares. It covers 20,000 hectares. There are 12 protected species, including Montagu's harrier (Circus pygargus), little bustard (Tetrax tetrax), Dupont's lark (Chersophilus duponti), common treecreeper (Agateador norteno), Thekla lark (Galerida theklae), and Dartford warbler (Sylvia undata). Other species found in the area are foxes, badgers, wild cats, vultures, and raptors such as buzzards, kestrels and hawks. Clares also has a private game reserve managed by the Association of Neighbors and Friends, where people can go to hunt boar, deer, hare, rabbit, partridge and quail. Villages Settlements in Castile-La Mancha
Q: Ideas to replace Stored Procedure in Cash Flow report We have a cash flow report which is basically in this structure:
Date |Credit|Debit|Balance|
09/29| 20 | 10 | 10 |
09/30| 0 | 10 | 0 |
The main problem is the balance, and as we are using a DataSet for the data, it's kinda hard to calculate the balance on the DataSet, because we always need the balance from the previous day. Also, this data comes from several tables, and it's been hard to maintain this procedure, because the database metadata is changing frequently. Could anyone give me some possible solutions for the problem? This report is being displayed on a DataGrid. A: This may be too big a change or off the mark for you, but a cash flow report indicates to me that you are probably maintaining, either formally or informally, a general ledger arrangement of some sort. If you are, then maybe I am naive about this, but I think you should maintain your general ledger detail as a single table that has a bare minimum number of columns, like ID, date, account, source and amount. All of the data that comes from different tables suggests that there are several different kinds of events affecting your cash. To me, representing these different kinds of events in their own tables (like accounts receivable or accounts payable or inventory or whatever) makes sense, but the trick is to not have any monetary columns in those other tables. Instead, have them refer to the row in the general ledger detail where that data is recorded. If you enforce this, then the cash flow would always work the same regardless of changes to the other tables. The balance-forward issue still has to be addressed, and you have to take into account the number of transactions involved and the responsiveness required of the system, but at least you could make a decision about how to handle it one time and not have to make changes as the other parts of your system evolve.
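For what it's worth, a minimal sketch of the balance-forward calculation over such a single ledger-detail table is shown below. It is written in Go only because the question does not name a language, and the type and field names (LedgerEntry, Row, cashFlow) are hypothetical, not taken from the question.
package main

import (
	"fmt"
	"sort"
	"time"
)

// LedgerEntry is one row of the general-ledger detail table:
// a signed amount on a date (positive = credit, negative = debit).
type LedgerEntry struct {
	Date   time.Time
	Amount float64
}

// Row is one line of the cash flow report: Date | Credit | Debit | Balance.
type Row struct {
	Date                   time.Time
	Credit, Debit, Balance float64
}

// cashFlow groups ledger entries by day and carries each day's closing
// balance forward as the next day's opening balance.
func cashFlow(entries []LedgerEntry, opening float64) []Row {
	byDay := map[time.Time]*Row{}
	for _, e := range entries {
		day := e.Date.Truncate(24 * time.Hour)
		r, ok := byDay[day]
		if !ok {
			r = &Row{Date: day}
			byDay[day] = r
		}
		if e.Amount >= 0 {
			r.Credit += e.Amount
		} else {
			r.Debit -= e.Amount
		}
	}
	rows := make([]Row, 0, len(byDay))
	for _, r := range byDay {
		rows = append(rows, *r)
	}
	sort.Slice(rows, func(i, j int) bool { return rows[i].Date.Before(rows[j].Date) })
	balance := opening
	for i := range rows {
		balance += rows[i].Credit - rows[i].Debit
		rows[i].Balance = balance
	}
	return rows
}

func main() {
	d := func(s string) time.Time { t, _ := time.Parse("2006-01-02", s); return t }
	// Reproduces the table in the question: 09/29 credit 20, debit 10; 09/30 debit 10.
	rows := cashFlow([]LedgerEntry{
		{d("2017-09-29"), 20}, {d("2017-09-29"), -10}, {d("2017-09-30"), -10},
	}, 0)
	for _, r := range rows {
		fmt.Printf("%s |%6.2f |%6.2f |%8.2f |\n", r.Date.Format("01/02"), r.Credit, r.Debit, r.Balance)
	}
}
The point is simply that each day's closing balance becomes the next day's opening balance, so once the rows are sorted the report never needs to look up the previous day again.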
Brenda Buell Vaccaro (born November 18, 1939) is an American actress. She is known for her roles in Midnight Cowboy, Once Is Not Enough, Airport '77, Capricorn One, The Pride of Jesse Hallam, Supergirl, The Mirror Has Two Faces, Heart of Midnight, and Zorro, The Gay Blade. She received one Academy Award nomination, three Golden Globe Award nominations (winning one), four Primetime Emmy Award nominations (winning one), and three Tony Award nominations. Her best-known stage roles were in The Affair (1962), Cactus Flower (1965), How Now, Dow Jones (1967), The Goodbye People (1968), the female version of The Odd Couple (1985), and Jake's Women (1992).
Q: Installing Windows shell extension DLL with Inno Setup installer I'm developing a shell extension DLL. I want to install it using an Inno Setup installer. I have seen installers that ask whether you want to install the shell extension with the program, and I would like something similar with an Inno Setup installer. How do I go about doing this? If you can't answer directly, would you be able to point me down the right path? I have been searching for days for any info about this. A: The shell extension is just a DLL with a COM class. So just deploy it and register it using the regserver flag: [Files] Source: "myext.dll"; DestDir: "{app}"; Flags: regserver See also Register Explorer COM extension only if specific task was selected.
Kensington Palace is a royal residence set in Kensington Gardens, in the Royal Borough of Kensington and Chelsea in London, England. It has been a residence of the British Royal Family since the 17th century. Today, it is the main residence of The Duke and Duchess of Gloucester; the Duke and Duchess of Kent; and Prince and Princess Michael of Kent. Kensington Palace is also used on an unofficial basis by Prince Henry, as well as his cousin, Zara Phillips. Royal residences in the United Kingdom Palaces in the United Kingdom
API documentation (Foreman v2) - Images: create

POST /api/compute_resources/:compute_resource_id/images
Create an image

Example:
POST /api/compute_resources/980190962/images
{
  "image": {
    "name": "TestImage",
    "username": "ec2-user",
    "uuid": "abcdef",
    "password": "password",
    "operatingsystem_id": 309172073,
    "compute_resource_id": 928692541,
    "architecture_id": 331892513,
    "user_data": true
  }
}
201
{
  "operatingsystem_id": 309172073,
  "operatingsystem_name": "centos 5.3",
  "compute_resource_id": 980190962,
  "compute_resource_name": "bigcompute",
  "architecture_id": 331892513,
  "architecture_name": "sparc",
  "uuid": "abcdef",
  "username": "ec2-user",
  "created_at": "2019-02-20 13:38:01 UTC",
  "updated_at": "2019-02-20 13:38:01 UTC",
  "id": 980190963,
  "name": "TestImage"
}

Parameters:
location_id (optional) - Scope by locations. Validations: Must be an Integer.
organization_id (optional) - Scope by organizations. Validations: Must be an Integer.
compute_resource_id (required) - Validations: Must be an identifier, a string from 1 to 128 characters containing only alphanumeric characters, space, underscore (_), hyphen (-), with no leading or trailing space.
image (required) - Validations: Must be a Hash.
image[name] (required) - Validations: Must be a String.
image[username] (required) - Validations: Must be a String.
image[uuid] (required) - Template ID in the compute resource. Validations: Must be a String.
image[password] (optional, nil allowed) - Validations: Must be a String.
image[compute_resource_id] (optional, nil allowed) - ID of the compute resource. Validations: Must be a String.
image[architecture_id] (optional, nil allowed) - ID of the architecture. Validations: Must be a String.
image[operatingsystem_id] (optional, nil allowed) - ID of the operating system. Validations: Must be a String.
image[user_data] (optional, nil allowed) - Whether or not the image supports user data. Validations: Must be one of: true, false, 1, 0.
The Ronald and Nancy Reagan Research Institute, an affiliate of the National Alzheimer's Association in Chicago, Illinois, is an initiative founded by former United States President Ronald Reagan and First Lady Nancy Reagan to accelerate the progress of Alzheimer's disease research. The center was dedicated in 1995.
Mario & Luigi: Bowser's Inside Story A single-player action RPG (role-playing game), Mario & Luigi: Bowser's Inside Story takes players on the DS literally into the belly of the beast - that beast being Bowser. Set within an off-the-wall storyline that turns the world of Nintendo on its ear, and featuring the ability to toggle between playing as the team of Mario and Luigi and playing as Bowser himself, this is the third installment of the Mario & Luigi franchise.
Super 8 is a 2011 American science fiction movie that was produced by Steven Spielberg, J. J. Abrams and Bryan Burk and directed by J. J. Abrams. Super 8 was released on June 10, 2011 in North America. The movie received positive early reviews. Cast Kyle Chandler as Jackson Lamb Ron Eldard as Louis Dainard Noah Emmerich as Colonel Nelec Joel Courtney as Joe Lamb Riley Griffiths as Charles Kaznyk Elle Fanning as Alice Dainard Ryan Lee as Carey Zach Mills as Preston Josh McFarland as Tom Ashton Gabriel Basso as Martin Amanda Michalka as Jen Kaznyk Glynn Turman as Dr. Woodward Michael Hitchcock as Deputy Rosko Storyline In the summer of 1979, a group of friends in a small Ohio town witness a catastrophic train crash while making a Super 8 movie and soon suspect that it was not an accident. Shortly after, unusual disappearances and inexplicable events begin to take place in town, and the local deputy tries to uncover the truth - something more terrifying than any of them could have imagined.
Q: How are dates typically handled with testing? I am writing an app at the moment that has some pretty extensive business logic based around dates. I have several hundred thousand records to test and testers who want to see how those records are handled, which has worked well so far; however, some edge cases are difficult to replicate. The reason is that most of the logic is related to today's date, in one way or another. What is the best way to handle this with both unit testing and traditional testing? The only way I can really think of is allowing for today's date to be faked and fixed. A: Faking and fixing the date is the right way to do it. It can actually morph into a useful feature -- how many times have you wanted to be able to run a piece of logic using past data? A: You can use mocks (dependency injection) to make the "today" date whatever you want it to be. This will allow you to test problem dates and make sure that new additions will not break the old code. There are plenty of mocking frameworks around, and I am sure the $language that you are using has at least a couple of good ones.
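To make the mock/dependency-injection answer concrete, here is a minimal sketch of injecting a fake, fixed "today" into date-based logic. It is written in Go only because the question is language-agnostic; the Clock interface and every other name are hypothetical, not from the question.
package main

import (
	"fmt"
	"time"
)

// Clock abstracts "today" so production code reads the real date
// while tests inject any fixed date they need.
type Clock interface {
	Today() time.Time
}

// systemClock is what production code uses.
type systemClock struct{}

func (systemClock) Today() time.Time { return time.Now() }

// fixedClock is what tests use to pin "today" to an exact date.
type fixedClock struct{ t time.Time }

func (c fixedClock) Today() time.Time { return c.t }

// isOverdue stands in for business logic that depends on today's date.
func isOverdue(clk Clock, due time.Time) bool {
	return clk.Today().After(due)
}

func main() {
	due := time.Date(2020, 2, 29, 0, 0, 0, 0, time.UTC) // a leap-day edge case
	fmt.Println(isOverdue(systemClock{}, due))          // production: real clock
	fake := fixedClock{time.Date(2020, 3, 1, 0, 0, 0, 0, time.UTC)}
	fmt.Println(isOverdue(fake, due)) // test: "today" pinned to the day after the due date; prints true
}
Because the business logic depends only on the Clock interface, a test (or a tester-facing switch) can pin "today" to leap days, month ends, or year boundaries without touching the system clock, which also gives you the "run logic against a past date" feature mentioned in the first answer.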
The Villa Lario is a Renaissance villa in the commune of Mandello del Lario on the shores of Lake Como in northern Italy. Since 2015 the villa has been designated as a 5-star luxury hotel and is part of a resort complex. The Lario resort has a heliport and docking for boats and seaplanes. The hotel's chef is the Italian Enrico Derflingher (two Michelin stars), formerly chef at the White House for George Bush Senior and at Buckingham Palace for Queen Elizabeth II and Prince Charles.
DNA sequence analysis of the Olir2-76 and Ossr1-92 alleles of the Oli-2 region of the yeast Saccharomyces cerevisiae. Analysis of related amino-acid substitutions and protein-antibiotic interaction. Petite deletion mapping helped to generate a fine-structure genetic map of the Oli-2 region of the mitochondrial genome of Saccharomyces cerevisiae. Here we report the DNA sequence analysis of the Oli-2 region from two drug-resistant alleles (Olir2-76 and Ossr1-92) which are located in the gene for subunit-6 of mitochondrial ATPase, in agreement with their genetic locations on the mitochondrial genome. An analysis of the corresponding amino-acid substitutions is also presented in the context of protein-antibiotic interactions.
Artur "Atze" Brauner (born Abraham Brauner; 1 August 1918 - 7 July 2019) was a Polish-born German movie producer and entrepreneur. He created over 300 movies from 1946 through 2019. He was Jewish and many of his relatives were killed by Nazis in the 1940s. Brauner produced Sag' die Wahrheit, one of the first movies produced in Germany after World War II. He also produced Morituri, but received negative reviews and failed at the box office. He began to work with German Hollywood-based producers such as Robert Siodmak and later Fritz Lang who started a revival of Dr. Mabuse. Some of his movies dealt with the Holocaust such as Die Weisse Rose, The Plot to Assassinate Hitler (Der 20. Juli) and Man and Beast (Mensch und Bestie). Brauner died on 7 July 2019 in Berlin at the age of 100.
--bail --reporter spec --timeout 20s
Alès () is a commune in south central France in the region of Occitanie. It is a subprefecture of the Gard department. It is also the capital of the arrondissement of the same name. It was formerly known as Alais. History Since the creation of the Gard department on 4 March 1790, Alès has been a subprefecture of the department. Geography Alès is to the north-northwest of Nîmes, in a curve (meander) of the Gardon d'Alès river, which half surrounds it. It is at the foot of the Cévennes, near the Cévennes National Park. Alès has an area of , and its average altitude is ; at the city hall, the altitude is . The commune of Alès is surrounded by the communes Saint-Jean-du-Pin, Saint-Martin-de-Valgalgues, Saint-Privat-des-Vieux, Cendras, Saint-Christol-lès-Alès and Saint-Hilaire-de-Brethmas. Climate The climate of Alès, in the Köppen climate classification, is Csb - Mediterranean climate with warm summers. Population The inhabitants of Alès are known, in French, as Alésiens (women: Alésiennes). With a population of 39,993, Alès has a population density of inhabitants/km2. Evolution of the population in Alès Alès forms, with 21 other communes, the urban area of Alès, with a population of 94,622 inhabitants (2013) and an area of . This urban area is the centre of the metropolitan area of Alès, formed by 52 communes with a population of 114,137 inhabitants and an area of . Administration Alès is a subprefecture of the Gard department, the capital of the arrondissement of Alès and the administrative centre of the cantons Alès-1, with 32,289 inhabitants (2014); Alès-2, with 29,141 inhabitants (2014); and Alès-3, with 26,871 inhabitants (2014). It is part of the intercommunality Alès Agglomération. Sister cities Alès is twinned with: Bílina, Czech Republic. Kilmarnock, Scotland, UK. Herstal, Belgium. Gallery Related pages Arrondissement of Alès Communes of the Gard department
Fat extravasation due to unreamed and experimentally reamed intramedullary nailing of the sheep femur. To compare systemic fat extravasation in unreamed and experimentally reamed nailing. An osteotomy was created in the proximal third of the femoral shaft in 16 sheep, and intramedullary pressure increase and fat extravasation were monitored for the two nailing techniques. The highest intramedullary pressures, median 2700 mm Hg, and highest percentages of fat extravasation, peaking at almost 90% of fat, were found for the unreamed nailing technique. The values for the reamed group were significantly lower. The extravasation of intramedullary fat can be attributed to the great increase in intramedullary pressure that occurs during unreamed nailing. Correctly performed intramedullary reaming with the new reaming system produces lower pressures and much less systemic fat extravasation, reducing the risk for fat embolism.
The New York State Bar Association (NYSBA) is a voluntary bar association for the state of New York.
Arachidonic acid distribution in lipids of mammary glands and DMBA-induced tumors of rats. In the phospholipid fractions, arachidonic acid represented a several fold higher percentage of fatty acids from DMBA-induced tumors and in mammary glands from midpregnant rats when compared to mammary glands from virgin rats. Arachidonic acid was not present in measurable quantities in the neutral lipid fractions of mammary glands from virgin rats. The arachidonic acid in the neutral lipid fraction of mammary glands from midpregnant rats was only detectable in the triglyceride-sterol ester fraction, but in that fraction less than 1% of the fatty acids were arachidonic acid. In the neutral lipids of the DMBA-induced tumors, it was of particular interest that a high proportion (19%) of the fatty acids in the diglyceride fraction consisted of arachidonic acid; no arachidonic acid was detected in the diglycerides of the normal tissues.
A party is a person or group of persons that makes up a single entity for the purposes of the law. A party is a participant in a legal proceeding and has an interest in the outcome. Parties include the plaintiff, the defendant, a petitioner or a respondent. A party can also be a cross-complainant (a defendant who sues someone else in the same lawsuit) or a cross-defendant (a person sued by a cross-complainant). A person who appears in the case only as a witness is not considered a party. Courts use various terms to identify the role of a particular party in civil litigation. They usually call the party that brings a lawsuit the plaintiff or, in older American cases, the party of the first part. The party against whom the case is brought is called the defendant or, in older American cases, the party of the second part. Related pages Erga omnes Ex parte proceeding Inter partes proceeding Intervention (law)
// Copyright 2019 yuzu emulator team // Licensed under GPLv2 or any later version // Refer to the license.txt file included. #pragma once #include "core/loader/loader.h" namespace Core { class System; } namespace FileSys { class KIP; } namespace Loader { class AppLoader_KIP final : public AppLoader { public: explicit AppLoader_KIP(FileSys::VirtualFile file); ~AppLoader_KIP() override; /** * Returns the type of the file * @param file std::shared_ptr<VfsFile> open file * @return FileType found, or FileType::Error if this loader doesn't know it */ static FileType IdentifyType(const FileSys::VirtualFile& file); FileType GetFileType() const override; LoadResult Load(Kernel::Process& process, Core::System& system) override; private: std::unique_ptr<FileSys::KIP> kip; }; } // namespace Loader
Polybius is a fictitious arcade game. It supposedly causes weird side effects. Polybius is an urban legend, or a thing that may not exist. It may have originated on coinop.org, a website documenting arcade games. Etymology The name Polybius comes from the Greek words "poly" and "bios", meaning many, and lives, respectively. Polybius is also the name of a historian and a cipher. Description Polybius was a shooter with odd gameplay, like puzzles. The only screenshot is the title screen. It was supposedly developed by a "Sinneslochen Inc." in 1981. It was supposedly released in Portland, Oregon. Fiction Arcade games
Bank of America worker hit in crosswalk by Peter Pan bus. PROVIDENCE, R.I. — A 30-year-old Cumberland woman on her way to work Wednesday morning was killed when a Peter Pan bus struck her in a crosswalk in Kennedy Plaza and dragged her 50 to 60 feet. Providence police identified the victim as Michelle Cagnon. Cagnon was a Bank of America employee for nine years, according to company spokesman Trevor Koenig. According to Cagnon's Facebook page and LinkedIn profile, she was a 2004 graduate of Burrillville High School and studied at the New England College of Business and Finance. “We are deeply saddened to hear about Michelle’s passing," Koenig said Wednesday. "Our thoughts and prayers are with her family and friends at this difficult time.” Cagnon was walking east on Washington Street, in front of the Alex and Ani City Center ice skating rink, when she stepped into the crosswalk to cross East Approach a little after 8 a.m., according to Public Safety Commissioner Steven M. Paré. "Lady was in the crosswalk," said witness Charles Parker, "looked up, screamed as she got run over." "The minute she got into the middle of the crosswalk, Peter Pan came around and just hit her," said another witness, Michael "Casey" Lee, of Dartmouth, Mass. "All you heard was the smack, and that was it." Paré said that a Peter Pan bus turning left from Washington Street to East Approach struck Cagnon. "It appears that this person was in the crosswalk," he said. Paré said that the accident was caught on several cameras in Kennedy Plaza. "We're looking at all the cameras to reconstruct what happened." "Chaosness" is how Brandon Hong, of Pawtucket, who arrived on a bus moments after the accident, described the scene in the immediate aftermath. Several witnesses, including Parker and Jason Gomes, of Providence, said that passersby had to flag down the bus driver to tell him that he had hit someone. A Providence park ranger working in the area witnessed the accident and alerted emergency personnel. Police Maj. Thomas A. Verdi said that no one chased the bus. Providence police identified the bus driver as Matthew J. Reidy of Taunton, Mass. Christopher Crean, vice president for safety and security for Peter Pan Bus Lines, said Reidy was en route from Hyannis, Mass., to Providence. Five passengers were on the bus. Crean said Reidy "would have stopped at Kennedy Plaza, which is a pick-up/drop off area for us, and then proceeded to the Providence terminal," off Route 95 at the Pawtucket line. "He would have had a layover at the Providence terminal, and then driven the 9:45 a.m. route back to Hyannis," Crean said. No charges have been brought against Reidy at this time, Crean said. Reidy underwent a legally required drug and alcohol test after the accident, "which so far was negative." Reidy has been put on temporary leave, "for his sake, and for the sake of the investigation," Crean said. He added that Peter Pan Bus Lines is "working very closely with state and local police" in the investigation. "This driver is a stellar driver. He has a really good record with the company, and has been here a little over a year," Crean said. "Before you go on board with Peter Pan, we do vet our drivers, and they have to do a six-to eight-week training program." Crean said, "This is a very tragic accident. Today was a tragic day for the family of victim and for the driver as well. We're just going to try and deal with it in the most professional and sympathetic way. Our sympathy goes to the family, as well as our driver." 
Reidy was not physically injured, "but he was very distraught, very upset about the whole incident," Crean said. "He had some professional counseling through the police department, and in-house here as well, and we do have follow-up." Crean said state police inspected the bus, and found "no problems with it." It was released. Crean said Peter Pan Bus Lines scored a 'satisfactory' — the highest possible score — in a federal inspection "three or four months ago." Providence Bus Accidents 2016 Aug. 24: Cumberland woman is killed when she is hit by a Peter Pan bus at Kennedy Plaza. Aug. 13: A man was injured after he slipped while chasing a Rhode Island Public Transit Authority bus that had just departed from the Manton Avenue Stop & Shop. Police said his leg was run over by a rear tire. May 9: Driver and eight passengers taken to hospital after RIPTA bus struck bus shelter near the East Side bus tunnel. Driver cited for road lane violation. March 2: RIPTA driver and four passengers taken to Rhode Island Hospital after bus ran onto Francis Street sidewalk and hit a light post near junction of Francis and Gaspee streets. Driver cited for roadway violation. 2015 March 26: Ani Emdjian, 9, killed when struck by RIPTA bus on Smith Hill. Driver cleared after investigation. Sept. 16: Pedestrian struck by RIPTA bus in Kennedy Plaza suffers non-life-threatening injuries. 2014 May 28: Court security guard Frank McKnight, 69, of North Kingstown was struck by a RIPTA bus in Washington Street crosswalk near Kennedy Plaza, and died the next day. Nov. 10: RIPTA bus struck a 56-year-old woman as she attempted to cross Angell Street at Wayland Avenue.
Platte County is a county located in the northwestern portion of the U.S. state of Missouri and is part of the Kansas City metropolitan area. As of the 2020 census, the population was 106,718. Its county seat is Platte City.
Prékopa–Leindler inequality In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler. Statement of the inequality Let 0 < λ < 1 and let f, g, h : \mathbb{R}^n \to [0, +\infty) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space \mathbb{R}^n. Suppose that these functions satisfy
h((1-\lambda)x + \lambda y) \geq f(x)^{1-\lambda} g(y)^{\lambda} \qquad (1)
for all x and y in \mathbb{R}^n. Then
\int_{\mathbb{R}^n} h(x)\,\mathrm{d}x \geq \left( \int_{\mathbb{R}^n} f(x)\,\mathrm{d}x \right)^{1-\lambda} \left( \int_{\mathbb{R}^n} g(x)\,\mathrm{d}x \right)^{\lambda}.
Essential form of the inequality Recall that the essential supremum of a measurable function f : \mathbb{R}^n \to \mathbb{R} is defined by
\operatorname{ess\,sup}_{x \in \mathbb{R}^n} f(x) = \inf \left\{ t \in [-\infty, +\infty] : f(x) \leq t \text{ for almost all } x \in \mathbb{R}^n \right\}.
This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let
s(x) = \operatorname{ess\,sup}_{y \in \mathbb{R}^n} f\!\left( \frac{x - y}{1 - \lambda} \right)^{1-\lambda} g\!\left( \frac{y}{\lambda} \right)^{\lambda}.
Then s is measurable and
\| s \|_{1} \geq \| f \|_{1}^{1-\lambda} \| g \|_{1}^{\lambda}.
The essential supremum form was given in. Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form. Relationship to the Brunn–Minkowski inequality It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 − λ)A + λB is also measurable, then
\mu\big( (1-\lambda) A + \lambda B \big) \geq \mu(A)^{1-\lambda} \mu(B)^{\lambda},
where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 − λ)A + λB is also measurable, then
\mu\big( (1-\lambda) A + \lambda B \big)^{1/n} \geq (1-\lambda)\, \mu(A)^{1/n} + \lambda\, \mu(B)^{1/n}.
Applications in probability and statistics The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and independent summation of log-concave distributed random variables. Suppose that H(x,y) is a log-concave distribution for (x,y) ∈ Rm × Rn, so that by definition we have
H\big( (1-\lambda)(x_1, y_1) + \lambda (x_2, y_2) \big) \geq H(x_1, y_1)^{1-\lambda} H(x_2, y_2)^{\lambda}, \qquad (2)
and let M(y) denote the marginal distribution obtained by integrating over x:
M(y) = \int_{\mathbb{R}^m} H(x, y)\,\mathrm{d}x.
Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then equation (2) satisfies condition (1) with h(x) = H(x,(1 − λ)y1 + λy2), f(x) = H(x,y1) and g(x) = H(x,y2), so the Prékopa–Leindler inequality applies. It can be written in terms of M as
M\big( (1-\lambda) y_1 + \lambda y_2 \big) \geq M(y_1)^{1-\lambda} M(y_2)^{\lambda},
which is the definition of log-concavity for M. To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distribution. Since the product of two log-concave functions is log-concave, the joint distribution of (X,Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X+Y is a marginal over the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution. Notes References Category:Geometric inequalities Category:Integral geometry Category:Real analysis Category:Theorems in analysis
Cantril is a city in Van Buren County, Iowa, in the United States.
ras-independent induction of rat brain type II sodium channel expression in nerve growth factor-treated PC12 cells. Nerve growth factor (NGF) plays an important role in the development of the nervous system, and there is considerable interest in understanding the molecular mechanisms underlying its effects on neuronal differentiation. To determine if the activity of proteins of the ras gene family is necessary for the NGF-mediated induction of sodium channel expression in pheochromocytoma (PC12) cells, sodium channel expression was analyzed in PC12 sublines stably overexpressing the dominant inhibitory mutant c-Ha-ras(Asn-17). Northern blot analysis, RNase protection assays, and whole-cell patch clamp recordings indicate that the NGF-mediated increase in type II sodium channel mRNA and sodium current density can occur independent of ras activity and by doing so provide strong evidence for the importance of ras-independent mechanisms in NGF-mediated neuronal differentiation.
Rabbi Meir Zvi Bergman (born 1930) is the head of the Rashbi Yeshiva and a member of the Council of Torah Elders of Degel HaTorah.

Biography
He was born in Jerusalem, within the Old City walls, to Rabbi Moshe Bergman and Alta Liva Raizil, daughter of Rabbi Yona Ze'ev Hershler. He is a seventh-generation Jerusalemite and a descendant of Rabbi Eliezer Bergman. His mother died of typhus when he was seven years old. In the 1950s his father settled in Miron, and was among the founders of the Bnei Akiva yeshiva there. In the year 5575 his father moved to Bnei Brak and founded the Rashbi yeshiva there. In 1971, he studied for a year at a small yeshiva in the Mekor Haim neighborhood led by Rabbi Moshe Tikocinski, who later became the mashgiach (spiritual overseer) of the Slabodka Yeshiva. At the age of 11, he began studying at Yeshiva Tiferet Zion with Rabbi Michal Yehuda Lipkowitz. During his studies he stayed for about two years in the house of the "Chazon Ish", with whom he studied. After that he studied at the Lomza Yeshiva in Petach Tikva under Rabbi Reuven Katz, the mashgiach Rabbi Eliyahu Doshnitzer, and Rabbi Elazar Menachem Man Shach, who later became his father-in-law. At the Lomza Yeshiva he studied together with his friend Rabbi Chaim Kanievsky, who later said that he had studied more than twenty tractates with 'Rabbi Meir' (as he called Rabbi Bergman). On the 8th of Sivan 5771 he married Deborah, the daughter of Rabbi Shach. The kiddushin was officiated by a relative of the family, Rabbi Isser Zalman Meltzer. The matchmaker was the Chazon Ish. After his marriage he studied in the Kollel Chazon Ish, together with Rabbi Ya'akov Israel Kanievsky, with his friend from the Lomza Yeshiva Rabbi Gedaliah Nadel, and with Rabbi Chaim Kanievsky. Rabbi Bergman served as the prayer leader (shaliach tzibbur) on the High Holy Days at the Chazon Ish's minyan, and after the Chazon Ish's death at the synagogue named after him. Later, Rabbi Ya'akov Israel Kanievsky asked him to serve in this capacity at the Beit Meir yeshiva. He taught at the Kletsk Yeshiva in Rehovot (the "Yeshiva of the South"), where his father-in-law, Rabbi Shach, had served earlier. After that, at the behest of his father-in-law, he founded the Rashbi Kollel in the Zichron Meir neighborhood of Bnei Brak, where he gives lessons and talks to the avreichim (kollel fellows). When the kollel was established, Rabbi Shach financed the salaries of the avreichim, and later Rabbi Shach instructed him to travel abroad to raise support for maintaining the kollel. In the second decade of the 21st century he stopped traveling abroad, and his son Rabbi Ben Zion assists in managing and maintaining the kollel. Over the years he avoided involvement in public affairs, except in exceptional cases, such as in the year the Degel HaTorah movement was established, when he served as the emissary of his father-in-law, the movement's founder Rabbi Shach, in all matters related to the establishment and success of the new movement. Likewise, before the special election for Prime Minister in 2001, the newspaper Yated Ne'eman published his decision to vote for Ariel Sharon, even though that ballot did not involve choosing an ultra-Orthodox party. On the eve of Pesach 5773 he was appointed a member of the Council of Torah Elders of Degel HaTorah, together with Rabbi Shmuel Auerbach and Rabbi Gershon Edelstein. Since then he has begun to engage with and express his opinion on public matters.
During the controversy within the Lithuanian (Litvish) ultra-Orthodox public he refrained from taking an active position, but continued to support the Degel HaTorah party and the newspaper Yated Ne'eman. In his talks he usually addresses matters of public concern on the agenda of ultra-Orthodox Jewry, echoing the conservative positions of his father-in-law, Rabbi Shach, on questions such as Jewish ascent to the Temple Mount, the return of territories, and the conscription of yeshiva students. He is regarded as an authority by many Torah scholars as representing the opinion of Rabbi Shach, who is accepted in the yeshiva world as the father of the ultra-Orthodox outlook in the present generation, and from time to time he is consulted and his testimony about Rabbi Shach's instructions and positions is relied upon. In the 2020 elections in the United States he publicly supported President Donald Trump because he is "good for the Jews". After the attack in the city of Bnei Brak on March 29, 2022, he criticized the calls directed at the ultra-Orthodox sector to obtain licenses to carry weapons. After the disaster at the hillula of Rabbi Shimon bar Yochai he called for painful soul-searching in the ultra-Orthodox public, in view of the many disputes within it, saying that the disaster was measure for measure because "we push each other".

Family
He is married to Deborah, daughter of Rabbi Shach. The couple has nine children, eight sons and a daughter. His eldest son, Rabbi Ben Zion Bergman, is the son-in-law of Rabbi David Zingerevich, who was a mashgiach at the Ponevezh yeshiva; he was designated by his father as his successor in the leadership of the Rashbi yeshiva. Rabbi Isser Zalman heads the 'Mishnat Rabino' kollel on Sokolov Street in Bnei Brak. Shoshana is married to Rabbi Haim Fass, one of Rabbi Shmuel Auerbach's veteran students, who serves as a lawyer and a consultant in Lakewood, New Jersey. Rabbi Yishchar is the founder and director of the 'Ish Ways' institutions for Sephardi avreichim in Bnei Brak, and previously served as an appointed member of the Elad municipal council. In 1990 it was reported in the press that he was close to the Likud Police Minister Roni Milo, and that his grandfather, Rabbi Shach, forbade him to accept a government appointment in the Ministry of Social Affairs, so as not to give grounds for saying that he had brought about Rabbi Shach's stance on the 'stinking maneuver' and the establishment of Yitzhak Shamir's government. Rabbi Asher became known for publishing the "Letters and Articles" of his grandfather, Rabbi Shach, and for authoring dozens of other books. His brothers-in-law are Rabbi Yisrael Zvi Yair Danziger, the Rebbe of Alexander, and Rabbi Yaakov Goldman, rabbi of the Zweihel Chassidim in Bnei Brak and father of Rabbi Eliezer Goldman, the Rebbe of Zweihel in the United States.

His writings and books
'Gates of Ora' on Maimonides. 'Shaari Ora' on the Torah, two volumes. 'Shaari Ora - Essays', two additional volumes of outlook and mussar essays arranged according to the weekly Torah portions, compiled by his students. The 'Beit Midrash' books, in which a selection of his lessons on Talmud tractates, given over the years, were compiled and published by the Bnei Brak yeshiva. 'Introduction to Shearim', a bibliographic work on the Oral Torah, Bnei Brak 5644; it has since been published in many editions, including a first vowelized edition in Cheshvan 5655.
The book is considered a bestseller throughout the Jewish world; it was first translated into English and published by Mesorah Publications (ArtScroll), Brooklyn, in 1985, and the first French edition was published in Paris in 1997. Annotations concerning "Prohibited Slaughter", Bnei Brak 2005. 'Gates of Ora', an edited series containing Torah perspectives in English (Hebrew phrases with English translation), edited by Yaakov Levon and published by Feldheim Books, Jerusalem, 1997, including bibliographic references. 'Emma Shel Malchut', on the Scroll of Ruth, in English, edited by Menachem Greenberg and published in Lakewood, New Jersey, 2014. 'The Illuminator', on a tractate of kikat (published at his choice).
Service discovery and load balancing with DCOS and marathon-lb - manojbadam https://mesosphere.com/blog/2015/12/13/service-discovery-and-load-balancing-with-dcos-and-marathon-lb-part-2/ ====== manojbadam Is this compatible with mesos 0.25 (not DCOS) and marathon 0.14. I'm trying to run in my environment, but it is failing. ~~~ SEJeff If setup properly, marathon-lb works great with Marathon 0.14. I use it with Mesos 0.26 and it works fine, but I'm testing the blue green deployment feature of it
The 2022-23 UEFA Champions League will be the 68th season of Europe's premier club football tournament organised by UEFA. The final will be played at the Ataturk Olympic Stadium in Istanbul, Turkey. The stadium was originally appointed to host the 2020 UEFA Champions League Final, but both this, and the 2021 final which had been eventually re-allocated to the Ataturk, were moved due to the COVID-19 pandemic. The winner of the 2022-23 UEFA Champions League will automatically qualify for the 2023-24 UEFA Champions League group stage, and also earn the right to play against the winner of the 2022-23 UEFA Europa League in the 2023 UEFA Super Cup. Real Madrid are the defending champions, having won a record fourteenth title in the previous edition. Association team allocation A total of 78 teams from 53 of the 55 UEFA member associations participate in the 2022-23 UEFA Champions League (the exceptions being Russia, who are banned from participating due to 2022 Russian invasion of Ukraine, and Liechtenstein, which does not organise a domestic league). The association ranking based on the UEFA association coefficients is used to determine the number of participating teams for each association: Associations 1-4 each have four teams qualify. Associations 5-6 each have three teams qualify. Associations 7-15 (except Russia) each have two teams qualify. Associations 16-55 (except Liechtenstein) each have one team qualify. The winners of the 2021-22 UEFA Champions League and 2021-22 UEFA Europa League are each given an additional entry if they do not qualify for the 2022-23 UEFA Champions League through their domestic league. Association ranking For the 2022-23 UEFA Champions League, the associations are allocated places according to their 2021 UEFA association coefficients, which takes into account their performance in European competitions from 2016-17 to 2020-21. Apart from the allocation based on the association coefficients, associations may have additional teams participating in the Champions League, as noted below: - Additional berth for UEFA Europa League title holders Distribution The following is the access list for this season. Due to the suspension of Russia for the 2022-23 European season, and since the Champions League title holders (Real Madrid) have qualified via their domestic league, the following changes to the access list have been made: The champions of association 11 (Scotland) and 12 (Ukraine) enter the group stage instead of the play-off round (Champions Path). The champions of association 13 (Turkey) and 14 (Denmark) enter the play-off round instead of the third qualifying round (Champions Path). The champions of association 15 (Cyprus) and 16 (Serbia) enter the third qualifying round instead of the second qualifying round (Champions Path). The champions of associations 18 (Croatia), 19 (Switzerland), 20 (Greece) and 21 (Israel) enter the second qualifying round instead of the first qualifying round (Champions Path). The runners-up of associations 10 (Austria) and 11 (Scotland) enter the third qualifying round instead of the second qualifying round (League Path). 
Teams
The labels in the parentheses show how each team qualified for the place of its starting round:
TH: Champions League title holders
EL: Europa League title holders
1st, 2nd, 3rd, 4th, etc.: League positions of the previous season
Abd-: League positions of abandoned season as determined by the national association; all teams are subject to approval by UEFA
The second qualifying round, third qualifying round and play-off round are divided into Champions Path (CH) and League Path (LP).
CC: 2022 UEFA club coefficients.

Notes

Schedule
The schedule of the competition is as follows. All matches are played on Tuesdays and Wednesdays apart from the preliminary round final. Scheduled kick-off times starting from the play-off round are 18:45 and 21:00 CEST/CET. As the 2022 FIFA World Cup takes place in Qatar between 21 November and 18 December 2022, the group stage will commence in the first week of September 2022 and conclude in the first week of November 2022 to make way for the World Cup. All draws start at 12:00 CEST/CET and are held at the UEFA headquarters in Nyon, Switzerland.

Qualifying rounds
Preliminary round
First qualifying round
Second qualifying round
Third qualifying round
Play-off round

Group stage
The draw for the group stage will be held on 25 August 2022. The 32 teams will be drawn into eight groups of four. For the draw, the teams are seeded into four pots, each of eight teams, based on the following principles:
Pot 1 contains the Champions League and Europa League title holders, and the champions of the top six associations based on their 2021 UEFA country coefficients. Since the Champions League title holders, Real Madrid, are also the champions of Association 2 (Spain), the champions of Association 7 (Netherlands), Ajax, will also be seeded into Pot 1.
Pots 2, 3 and 4 contain the remaining teams, seeded based on their 2022 UEFA club coefficients.
Teams from the same association cannot be drawn into the same group.
Eintracht Frankfurt will make their debut appearance in the group stage after winning the Europa League. This season is also the first in which five German clubs will play in the group stage.

Pot 1
Real Madrid (CC: 124.000)
Eintracht Frankfurt (CC: 61.000)
Manchester City (CC: 134.000)
Milan (CC: 38.000)
Bayern Munich (CC: 138.000)
Paris Saint-Germain (CC: 112.000)
Porto (CC: 80.000)
Ajax (CC: 82.500)

Pot 2
Liverpool (CC: 134.000)
Chelsea (CC: 123.000)
Barcelona (CC: 114.000)
Juventus (CC: 107.000)
Atletico Madrid (CC: 105.000)
Sevilla (CC: 91.000)
RB Leipzig (CC: 83.000)
Tottenham Hotspur (CC: 83.000)

Pot 3
Borussia Dortmund (CC: 78.000)
Red Bull Salzburg (CC: 71.000)
Shakhtar Donetsk (CC: 71.000)
Inter Milan (CC: 67.000)
Napoli (CC: 66.000)
Sporting CP (CC: 55.500)
Bayer Leverkusen (CC: 53.000)

Pot 3 or 4
Marseille (CC: 44.000)
2 winners from the play-off round (League Path)
4 winners from the play-off round (Champions Path)

Pot 4
Club Brugge (CC: 38.500)
Celtic (CC: 33.000)
I’m not feeling well today, so I’m not good for much, but I can manage to journal, and am grateful for the chance to do it. Even if I have to be nauseated in order to get the downtime. I have to say, my introversion epiphany of a couple months ago was possibly the very best thing that’s happened to me in a long time, even though, as I keep exploring this, it’s bringing up some things for me that are kind of a bummer. For instance, I’ve been feeling like this is yet another place where I really got a bum deal by not being able to live full-time with my father before I did, the introvert of my two parents. I’m scrolling back in my life to even the weekend visits we spent together, and realizing what a great model they were for managing introversion well and not feeling like I had to conform to extroversion. Of the couple days we’d spend together, there was always just as much, if not more, quiet time as time spent out and about. Even the out-and-aboutness usually involved just the two of us or small groups. I’ve been thinking about the days where sometimes almost for the whole of a day, we’d hang out at his favorite deli, both of us with a book, where we’d read for a while then talk for a while, where people could stop, visit and chat us up and then move on, and if I wasn’t feeling open or chatty, I was never told to put my book down so as not to be rude. At my other home, there really wasn’t room for being introverted. About the only time I really got any kind of acceptance, or was even just left alone for a little bit without conflict was either around achievement or performance, and ideally, both. If I did some kind of dancing monkey routine, then I was marginally acceptable. But most often, my introversion was framed as rudeness, or trying to hide from people, or hide things from people; a need for privacy to refuel was often presented as a need for secrecy. Sometimes my need to be alone was framed as my not liking or loving people. Or, my desire to be slow in conflict or step away from it before reacting instead of quick and reactive was framed as not taking conflict seriously (when really, it was quite the opposite, and is still: it’s taking the time I need to react thoughtfully and well instead of getting caught up in a tidal wave of upset). Of course, in the worst of the worst of conflict, I tend to do what my Dad does when people won’t give him space, which is to just vanish altogether, which then winds up being seen as abandonment when all we are really going for is some space to ourselves so we don’t implode or explode or just get utterly lost in someone else’s drama. Suffice it to say, the wound around being way too separated from my Dad during a lot of my life is always one that stays a little bit raw, so more salt on it basically blows. It’s clear he would have done a bit better if we’d been full-time earlier, and in so, so many ways, I would have, too. This may be the least of them, really, but still. I’m sure this is something other folks who survived a lot of serious trauma can relate to, but it also always feels so strange and surprising to me to identify smaller — per my perspective, anyway – things in your life and upbringing that have messed you up or just steered you the wrong way. I feel like it’s so much harder to see them, hell, even to remember them, through the thick fog of much bigger trauma. 
That’s not helped, of course, by the cultural narrative we have around certain kinds of trauma that paints those of us who are survivors as, of course, so, so super-messed up by X-thing, with everything that isn’t right for us or okay as automatically attached to that trauma. But the big trauma itself obscures the smaller issues that sometimes maybe aren’t so small after all. In some weird way, it kind of makes me feel more connected to people who have NOT gone through some of the horrible shit I have, and who I’ve often had awkward conversations with when they feel bad about things like this having been traumatic for them, versus things like my living through rape or other abuses. I never felt like anyone needed to compare that way, or that there was any need to feel bad (and heck, I’m nothing but happy when I know people haven’t been through the mill so badly in their lives). But I have always felt a little disconnected, like we weren’t quite living in the same worlds, and these kinds of realizations make me feel a connectivity I really appreciate. I think this kind of connected feeling around the smaller stuff may be what people are actually seeking when they’ve been through The Big Awful and say they “just want to be normal.” I’m recognizing a lot of seemingly-smaller things around all of this. I don’t want to do that thing people do where they latch on to this One Big Thing to Explain Everything, but you know, this does explain a lot. Also? It’s really kind of cool to be learning brand new, shiny things about myself. As someone who has done a lot of reflection, got counseling way earlier in my life than most, I confess that I’m often a bit hungry for new growth. For instance, the more reading I do, the more I become aware of why friends with ADD have expressed that maybe I’m ADD: there are a bunch of introvert things that are a lot like ADD things. I’m starting to understand more and more why I sometimes feel so daft when I’m overstimulated, and how at times when the pressure is on to be so smart so fast it often IS in the context of overstimulation, and that just can’t work for me. That’s awesome for extroverts; it's a recipe for disaster for me, especially if I’m not doing what I can to dial everything down so I can step up. Longtime readers may recall that a bunch of years back, I felt utterly crippled by a sudden, inexplicable anxiety about public speaking. I’d never really liked doing it, especially with big groups, but I always could do it, but from outta left field, I suddenly really, really couldn’t. I’d get sick to my stomach, have panic attacks, the works. I could never figure out why it got so bad so suddenly. Then I took a look at that timeline, and noticed that it happened at a time when I was so, so very exposed on the whole, had so many people and so much work I was juggling, I was so visible, and it was all utterly nonstop. It didn’t even occur to me at the time — nor later, when it calmed down some, also fairly inexplicably — that it might have been about much too much happening all the time, with me having to be on almost 24/7, and was just to do with that business of straws, camels and their backs. In retrospect, now, it seems really obvious. Also? I had this idea that because so much of my work life anymore doesn’t have me with people in-person, that a breakneck pace, so long as it wasn’t face-to-face, could work just fine. Now I’m starting to see how marathoning direct service still isn’t so great, even when I don’t have people right in my face. 
In fact, I think what can happen is that I miss the cues I’d otherwise pick up in in-person interactions to know when I’ve hit a limit and need to recharge, so with online work, I need to create breaks and downtime in built-in ways, rather than only realizing I went over my limits once I am utterly wiped out. Anyone who knows me very well and has stayed talking with me for hours and hours and days and days has probably heard me go on at some point about my (apparent) very strange non-reaction to dopamine geekouts. Now, I can’t tell exactly how well-studied the neurochem around introversion I’ve been reading about is, but it seems that being introverted, all by itself, may be why I’m just all yeah-reward-neurochem-hit-that’s-nice-whatever-moving-on around dopamine, because the word is that that’s how introverts are with dopamine, and it’s acetylcholine we need and crave instead. Oddly enough, my nutritional deficits usually are also acetylcholine-related, and I’ve also had low blood sugar and low blood pressure all my life, which it seems may have something to do with it, too. Who knows how useful any of that may be, but more to geek out about, always fun. Unsurprisingly, a bunch of this involves Aha! moments for me that, when I bring them to Blue, are met with “Umm, I know.” I suppose it never does fail that all of us are often so much more aware of the behavior of those around us than of our own. I think that’s one of those things we’re supposed to magically outgrow with the wisdom of age and a lot of meditation. And yet. That said, my sweetheart has been beautifully patient with my process in this, making extra room for me to have extra room, when I’m already someone who errs on the side of more-time-alone than most as it is. Those “Umm, I knows” also are delivered with likely less boredom than I’d expect from someone who has already seen a lot of this from their side of the screen. I still, I’m sorry to say, have yet to come up with the miracle plan of how to change the world as it is right now so that there’s more room in it for introverts and for what we need to be who we are. I know, you’re disappointed. Me too. But my own plan for right now is to just keep reminding myself that when I feel like there’s no room for me and I need to conform that that’s not the deal: the deal is that I need to conform to this no more than I ever have with anything else in my life, and instead carve out the space and place I need and ask for room to be made. I’m still barely just starting with that, because it asks for quite a bit of revamping and revising, but I’m getting there. This includes asking myself for that space and place, or, perhaps more to the point, asking it of the part of myself that — quite counter to almost every other part of myself through my life, so I’m resistant to even acknowledge it sometimes — really bought the bill of goods that said I had to be a person in some ways I not only am not, but a person who often obscures the uniqueness of who I am and my best ways of being me. For that matter, it obscures a whole kind of people who’ve always had a lot to give the world, but who the world has to quiet down to hear, and slow down to see and really take in, people who I’ve probably appreciated most in my life far beyond the mere fact of having a temperament in common. P.S. Holy bananas, do I know how out of date some of the supporting pages of this journal are. Updating them is on my to-do list. 
But since that’s been on my to-do list for, oh, two years and change, I’m seeing if stating that intention where other people can see it — and thus, I’ll feel really embarrassed if I don’t get to it soon — helps. one comment so far I don’t know enough neurochem or biochem to get the dopamine / acetylcholine reference, but as another introvert with low blood sugar and blood pressure and the whole nicotine thing, I’ll be doing some research. Things are a bit off with my biology these days, so that would be useful to know. It took me a long time to learn that needing a lot of time to myself is okay, and longer to not feel bad about it when saying no to people. But it’s worth it, I’m so much happier this way.
Field Marshal Thanom Kittikachorn (Thai: thn`m kittikhcchr, Thai pronunciation: ; 11 August 1911 - 16 June 2004) was the leader of Thailand from 1963 to 1973. He illegally tried to stay in power. Public protests which became violent forced him to quit. His return from exile in 1976 caused protests. On October 6, 1976, many people in these protests were killed, and a military coup happened later that day. 1911 births 2004 deaths Prime Ministers of Thailand
OC Measurements Table Center is a position for which I haven’t been able to show any consistent trend between RAS and success in the NFL. Travis Frederick and Rodney Hudson, for instance, measured below 2.00 out of 10.00, while Jason Kelce and Alex Mack measured well above average. Still, depending on the type of offense you run, it can be more important to look at specific measurables and know what your thresholds are.
Jamshedpur is a city in the Indian state of Jharkhand. It has the most people of any city in Jharkhand. According to the 2011 census of India, Jamshedpur has a population of 1,337,131. The city was founded by the late Jamshedji Nusserwanji Tata. It is also called Steel City, Tatanagar or simply Tata. It is on the Chota Nagpur plateau and is surrounded by the picturesque Dalma Hills. The city is bordered by the rivers Subarnarekha and Kharkai on the north and west parts of the city. The largest factory is that of Tata Steel. It is almost at the center of the city. Tata Steel is the largest iron and steel producing plant in India, as well as the oldest. The other major factory in the city is Tata Motors. They make heavy vehicles and construction/earth moving equipment. Jamshedpur has a high literacy rate, close to the highest in India. The Steel City has 183 schools and 13 colleges.

List of notable people from Jamshedpur
Actor R. Madhavan
Miss India World 2000, Miss World 2000 and Bollywood actress Priyanka Chopra
Miss India Universe 2004 and Bollywood actress Tanushree Dutta
Imtiaz Ali, Hindi film director
Indian national cricket team players Saurabh Tiwary and Varun Aaron
Arjun Munda, former Chief Minister of Jharkhand
Allama Arshadul Qaudri
Thakur Ji Pathak, author and spiritual leader, Indian activist, politician and businessman
--- abstract: 'We present an algorithm to compute the minimum orbital intersection distance (MOID), or global minimum of the distance between the points lying on two Keplerian ellipses. This is achieved by finding all stationary points of the distance function, based on solving an algebraic polynomial equation of $16$th degree. The algorithm tracks numerical errors appearing on the way, and treats carefully nearly degenerate cases, including practical cases with almost circular and almost coplanar orbits. Benchmarks confirm its high numeric reliability and accuracy, and that, regardless of its error–controlling overheads, this algorithm promises to be one of the fastest MOID computation methods available to date, so it may be useful in processing large catalogs.' address: - 'Saint Petersburg State University, Faculty of Mathematics and Mechanics, Universitetskij pr. 28, Petrodvorets, Saint Petersburg 198504, Russia' - 'Central Astronomical Observatory at Pulkovo of the Russian Academy of Sciences, Pulkovskoje sh. 65/1, Saint Petersburg 196140, Russia' - 'Saint Petersburg State University, Faculty of Mathematics and Mechanics, Universitetskij pr. 28, Petrodvorets, Saint Petersburg 198504, Russia' author: - 'Roman V. Baluev' - 'Denis V. Mikryukov' bibliography: - 'distalg.bib' title: 'Fast error–controlling MOID computation for confocal elliptic orbits' --- close encounters, near-Earth asteroids, NEOs, catalogs, computational methods Introduction ============ The MOID parameter, or the minimum distance between points on two Keplerian orbits, is of considerable importance in various Solar System studies. It measures the closeness of two trajectories in the $\mathbb R^3$ space, and hence indicates whether two bodies are at risk of colliding. For example, if the MOID falls below the sum of the radii of two bodies, then such bodies may avoid a collision only if they orbit in a mean-motion resonance, or if a perturbing effect increases the MOID to a safe level before the bodies could actually collide. Otherwise, the bodies will necessarily collide at some point in the future. Therefore, computing the MOID is a very old task with applications to Potentially Hazardous Objects (PHOs) and Near-Earth Asteroids (NEAs). This problem has been investigated for decades already, see e.g. [@Sitarski68; @Dybczynski86] and more recent works by @Armellin10 [@Hedo18]. The MOID is a minimum of some distance or distance-like function $\rho(u,u')$ that depends on two arguments, determining positions on two orbits. The methods of finding the minima of $\rho(u,u')$ can be split into several general categories, depending on the dimensionality of the optimization task to be solved. This depends on how much work is pre-computed analytically. 1. Global optimization in 2D. As an ultimately simple example this includes e.g. the 2D brute-force (exhaustive) search of $\rho(u,u')$ on a 2D grid. Thanks to the existence of rigorous and finite upper limits on the gradient of $\rho(u,u')$, which appears to be a trigonometric polynomial, we can always limit the finite difference $\Delta\rho$ by $\Delta u \max |\rho'_u|$ and $\Delta u' \max |\rho'_{u'}|$. Thanks to such error predictability, algorithms of the 2D class appear to be the most reliable ones, because we can always determine the MOID, along with the orbital positions $u$ and $u'$, to any desired accuracy. An advanced method based on the 2D global optimization of $\rho$, which includes high-order Taylor models and interval arithmetic, was presented by @Armellin10. 
Nevertheless, even advanced methods of this type cannot be fast due to the need to consider 2D domains. 2. 1D optimization. Here we eliminate $u'$ from the numeric search by solving it from an analytic equation. The remaining orbital position $u$ is determined by numeric optimization of the 1D function $\tilde\rho(u) = \rho(u,u'(u))$. In general, this is faster than 2D minimization, but the derivative of $\tilde\rho(u)$ is no longer bounded, because $du'/du$ may sometimes become infinite. Such cases may appear even for very simple circular orbits. Therefore, in general this method cannot provide a strict mathematical guarantee of the desired numerical accuracy. However, it appears more reliable than the methods of the next class. The SDG method discussed by @Hedo18 basically belongs to this class. 3. Methods of the 0D class, in which both $u$ and $u'$ are solved for rather than found by numeric optimization. This includes the methods by @KholshVas99 and by @Gronchi02 [@Gronchi05], because they do not explicitly deal with any numeric optimization at all. The task is analytically reduced to solving a nonlinear equation with respect to $u$, with $u'$ then expressed analytically as well. Methods of this class are ultimately fast but relatively vulnerable to losing roots due to numerical errors (in nearly degenerate cases). This effect becomes important because the equation for $u$ is quite complicated and subject to round-off errors. Also, this equation often has close (almost multiple) roots that are always difficult for numeric processing. Here we present an efficient numeric implementation of the algebraic approach presented by @KholshVas99, similar to the one presented by @Gronchi02 [@Gronchi05]. This method belongs to the fast 0D class. It is based on analytic determination of all the critical points of the distance function, $|{\bm{r}} - {\bm{r}}'|^2$, augmented with an algebraic elimination of one of the two positional variables. Mathematically, the problem is reduced to a single polynomial equation of $16$th degree with respect to one of the eccentric anomalies. Recently, @Hedo18 suggested a method that does not rely on the determination of all the stationary points of the distance. It basically splits the problem into two tasks of 1D optimization, so this method belongs to the 1D class. Nevertheless, it proved $\sim 20$ per cent faster than the 0D Gronchi algorithm, according to the benchmarks. Though the performance differences appeared relatively moderate, occurrences were revealed in which Gronchi’s code suffered from numeric errors, reporting a wrong value for the MOID. Therefore, in this task the numeric reliability of the method is no less important than just the computing speed. Direct implementations of the methods by @KholshVas99 and @Gronchi02 might be vulnerable, because finding roots of a high-degree polynomial might sometimes be a numerical challenge. When dealing with large asteroid catalogs, various almost-degenerate cases appear sometimes, in which the equations to be solved contain almost-double or almost-multiple roots. Such roots are difficult to estimate accurately, because they are sensitive to numeric errors (even if there were no errors in the input orbital elements). Moreover, we have a risk of ambiguity: if the polynomial has two or more very close real roots then numeric errors may result in moving them to the complex plane entirely, so that we may wrongly conclude that there are no such real roots at all. 
Such effect of lost of real roots may potentially result in overestimating the MOID, i.e. it may appear that we lost exactly the solution corresponding to the global minimum of the distance. This issue can be solved by paying attention not only to the roots formally identified as real, but also to all complex-valued roots that appear suspiciously close to the real axis. To define formally what means ‘suspiciously close’ we need to estimate numeric error attached to a given root, not just its formal value. In other words, our task assignes an increased role to the numeric stability of the computation, because errors are known to dramatically increase when propagating through mathematical degeneracies. This motivated us to pay major attention to error control when implementing the method by @KholshVas99 in a numerical algorithm. The structure of the paper is as follows. In Sect. \[sec\_math\], we give some mathematical framework that our algorithm relies upon. Sect. \[sec\_alg\] describes the numeric algorithm itself. Sect. \[sec\_tols\] contains some guidelines on how to select meaningful error tolerances for our algorithm. Sect. \[sec\_perf\] presents its performance tests. In Sect. \[sec\_add\], we describe several auxiliary tools included in our MOID library. The C++ source code of our MOID library named [distlink]{} is available for download at `http://sourceforge.net/projects/distlink`. Mathematical setting {#sec_math} ==================== Consider two confocal elliptic orbits: $\mathcal E$ determined by the five geometric Keplerian elements $a,e,i,\Omega,\omega$, and $\mathcal E'$ determined analogously by the same variables with a stroke. Our final task is to find the minimum of the distance $|{\bm{r}} - {\bm{r}}'|$ between two points lying on the corresponding orbits, and the orbital positions $u,u'$ where this minimum is attained (here $u$ stands for the eccentric anomaly). According to @KholshVas99, this problem is reduced to solving for the roots of a trigonometric polynomial $g(u)$ of minimum possible algebraic degree $16$ (trigonometric degree $8$). It is expressed in the following form: $$\begin{aligned} g(u) &= K^2 (A^2-C^2) (B^2-C^2) + \nonumber\\ &+ 2 K C \left[NA (A^2-C^2) + MB (B^2-C^2)\right] - \nonumber\\ &- (A^2+B^2) \left[N^2(A^2-C^2)+M^2(B^2-C^2)-\right.\nonumber\\ &\left.\phantom{(A^2+B^2)}-2NMAB\right], \label{gdef}\end{aligned}$$ where $$\begin{aligned} A &=& PS' \sin u - SS' \cos u, \nonumber\\ B &=& PP' \sin u - SP' \cos u, \nonumber\\ C &=& e' B - \alpha e \sin u (1-e\cos u), \nonumber\\ M &=& PP' \cos u + SP' \sin u + \alpha e' - PP' e, \nonumber\\ N &=& PS' e - SS' \sin u - PS' \cos u, \nonumber\\ K &=& \alpha' e'^2, \label{ABC}\end{aligned}$$ and $\alpha=a/a'$, $\alpha'=a'/a$. The quantities $PP'$, $PS'$, $SP'$, $SS'$ represent pairwise scalar products of the vectors ${\bm{P}}$ and ${\bm{S}}$: $$\begin{aligned} {\bm{P}} = \{&\cos\omega\cos\Omega-\cos i\sin\omega\sin\Omega, \nonumber\\ &\cos\omega\sin\Omega+\cos i\sin\omega\cos\Omega, \nonumber\\ &\sin i\sin\omega\, \}, \nonumber\\ {\bm{S}} = \phantom{\{} &{\bm{Q}} \sqrt{1-e^2}, \nonumber\\ {\bm{Q}} = \{&-\sin\omega\cos\Omega-\cos i\cos\omega\sin\Omega, \nonumber\\ &-\sin\omega\sin\Omega+\cos i\cos\omega\cos\Omega, \nonumber\\ &\sin i\cos\omega\, \},\end{aligned}$$ with analogous definitions for ${\bm{P}}'$ and ${\bm{S}}'$. 
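To make the above formulae concrete, the following minimal C++ sketch shows how $g(u)$ could be evaluated directly from two sets of Keplerian elements via (\[gdef\]) and (\[ABC\]). It is an illustration only, not the actual interface of the [distlink]{} library; the `Orbit` structure and the function names are ours.

```cpp
// Minimal sketch (not the distlink API): evaluating g(u) of Eq. (gdef)
// for two confocal orbits given by (a, e, i, Omega, omega).
#include <array>
#include <cmath>

struct Orbit { double a, e, i, Omega, omega; };

// Orbital-frame direction vectors P and S = Q*sqrt(1 - e^2).
static std::array<double,3> vecP(const Orbit& o) {
    return { std::cos(o.omega)*std::cos(o.Omega) - std::cos(o.i)*std::sin(o.omega)*std::sin(o.Omega),
             std::cos(o.omega)*std::sin(o.Omega) + std::cos(o.i)*std::sin(o.omega)*std::cos(o.Omega),
             std::sin(o.i)*std::sin(o.omega) };
}
static std::array<double,3> vecS(const Orbit& o) {
    const double f = std::sqrt(1.0 - o.e*o.e);
    return { f*(-std::sin(o.omega)*std::cos(o.Omega) - std::cos(o.i)*std::cos(o.omega)*std::sin(o.Omega)),
             f*(-std::sin(o.omega)*std::sin(o.Omega) + std::cos(o.i)*std::cos(o.omega)*std::cos(o.Omega)),
             f*( std::sin(o.i)*std::cos(o.omega)) };
}
static double dot(const std::array<double,3>& x, const std::array<double,3>& y) {
    return x[0]*y[0] + x[1]*y[1] + x[2]*y[2];
}

// g(u): its real roots are the candidate eccentric anomalies on the first orbit.
double g_of_u(double u, const Orbit& o1, const Orbit& o2) {
    const auto P = vecP(o1), S = vecS(o1), Pp = vecP(o2), Sp = vecS(o2);
    const double PPp = dot(P,Pp), PSp = dot(P,Sp), SPp = dot(S,Pp), SSp = dot(S,Sp);
    const double alpha = o1.a/o2.a, alphap = o2.a/o1.a, e = o1.e, ep = o2.e;
    const double su = std::sin(u), cu = std::cos(u);
    const double A = PSp*su - SSp*cu;
    const double B = PPp*su - SPp*cu;
    const double C = ep*B - alpha*e*su*(1.0 - e*cu);
    const double M = PPp*cu + SPp*su + alpha*ep - PPp*e;
    const double N = PSp*e - SSp*su - PSp*cu;
    const double K = alphap*ep*ep;
    return K*K*(A*A - C*C)*(B*B - C*C)
         + 2.0*K*C*(N*A*(A*A - C*C) + M*B*(B*B - C*C))
         - (A*A + B*B)*(N*N*(A*A - C*C) + M*M*(B*B - C*C) - 2.0*N*M*A*B);
}
```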
When all the roots of $g(u)$ are found, for each $u$ we can determine the second position $u'$ from $$\cos u' = \frac{BC + mA\sqrt D}{A^2+B^2},\; \sin u' = \frac{AC - mB\sqrt D}{A^2+B^2}, \label{us}$$ where $$D = A^2+B^2-C^2, \quad m=\pm 1.$$ The sign of $m$ should be chosen to satisfy $$M \sin u' + N \cos u' = K \sin u'\cos u',$$ so there is only a single value of $u'$ that corresponds to a particular solution for $u$. Finally, after both the orbital positions $u$ and $u'$ were determined, the squared distance between these points is $|{\bm{r}} - {\bm{r}}'|^2 = 2aa'\rho(u,u')$, where $$\begin{aligned} \rho(u,u') = &\frac{\alpha+\alpha'}{2}+\frac{\alpha e^2+\alpha' e'^2}{4} - PP'ee' + \nonumber\\ &+ (PP'e'-\alpha e)\cos u + SP'e'\sin u + \nonumber\\ &+ (PP'e-\alpha'e')\cos u'+PS'e\sin u'- \nonumber\\ &- PP'\cos u\cos u' - PS'\cos u\sin u'- \nonumber\\ &- SP'\sin u\cos u' - SS'\sin u\sin u' + \nonumber\\ &+ \frac{\alpha e^2}{4}\cos 2u + \frac{\alpha'e'^2}{4} \cos 2u'. \label{rho}\end{aligned}$$ Therefore, our general computation scheme should look as follows: (i) find all real roots of $g(u)$; (ii) for each solution of $u$ determine its corresponding $u'$; (iii) for each such pair $u,u'$ compute $\rho(u,u')$; and (iv) among these values of $\rho$ select the minimum one. This will give us the required MOID estimate. As we can see, the most difficult step is finding all real roots of the trigonometric polynomial $g(u)$, while the rest of the work is rather straightforward. This trigonometric polynomial can be rewritten in one of the two standard forms: $$g(u) = a_0 + 2 \sum_{k=1}^N (a_k \cos ku + b_k \sin ku) = \sum_{k=-N}^N c_k {\mathrm e}^{iku}, \label{gcan}$$ where $N=8$. The coefficients $a_k$, $b_k$, and $c_{\pm k} = a_k\mp i b_k$ can be expressed as functions of the quantities $PP'$, $PS'$, $SP'$, $SS'$, and $\alpha$, $e$, $e'$. Most of such explicit formulae would be too huge and thus impractical, but nonetheless we computed an explicit form for the coefficient $c_8$: $$\begin{aligned} c_8 = c_{-8}^* &= \left(\frac{\alpha e^2}{16}\right)^2 M_1 M_2 M_3 M_4, \nonumber\\ M_1 &= PP'-SS' - ee' - i (SP'+PS'), \nonumber\\ M_2 &= PP'-SS' + ee' - i (SP'+PS'), \nonumber\\ M_3 &= PP'+SS' - ee' - i (SP'-PS'), \nonumber\\ M_4 &= PP'+SS' + ee' - i (SP'-PS'). \label{c8}\end{aligned}$$ Here the asterisk means complex conjugation. The number of real roots of $g(u)$ cannot be smaller than $4$ [@KholshVas99]. Also, this number is necessarily even, since $g(u)$ is continuous and periodic. But the upper limit on the number of real roots is uncertain. In any case, it cannot exceed $16$, the algebraic degree of $g(u)$, but numerical simulations performed by @KholshVas99 never revealed more than $12$ real roots. Here we reproduce their empirical upper limit: based on a test computation of $\sim 10^8$ orbit pairs from the Main Belt (see Sect. \[sec\_perf\]), we obtained approximately one $12$-root occurence per $\sim 4\times 10^6$ orbit pairs[^1]. No cases with $14$ or $16$ roots were met.[^2] Since the number of real roots of $g(u)$ is highly variable and a priori unknown, certain difficulties appear when dealing with $g(u)$ in the real space. In practice $g(u)$ often becomes close to being degenerate, e.g. in the case of almost circular or almost coplanar orbits, which is frequent for asteroids and typical for major planets in the Solar System. In such cases, real roots of $g(u)$ combine in close pairs or even close quadruples. The graph of $g(u)$ passes then close to the abscissa near such roots. 
This means that numeric computing errors affect such nearly-multiple roots considerably, implying increased uncertainties. Moreover, we might even be uncertain about the very existence of some roots: does the graph of $g(u)$ really intersect the abscissa, or does it pass slightly away from it, just nearly touching it? In practice this question may become non-trivial due to numerical errors, which might be significant because $g(u)$ is mathematically complicated. Therefore, treating $g(u)$ only in the real space might result in losing its roots due to numeric errors. But losing real roots of $g(u)$ could mean overestimating the MOID, because there is a risk that the minimum distance $|{\bm{r}}-{\bm{r}}'|$ happens to correspond to a lost root. It might then be safer to overestimate the number of real roots of $g(u)$, i.e. we should also test “almost-real” complex roots that correspond to a near-touching behaviour of $g(u)$, even if it does not apparently intersect the abscissa. This would imply some computational overheads and additional CPU time sacrificed for the algorithmic reliability and numeric stability. Also, this would mean treating $g(u)$ in the complex plane and finding all its complex roots, rather than just the real ones. We therefore need to switch to complex notation. By making the substitution $z={\mathrm e}^{iu}$ or $w={\mathrm e}^{-iu}$, we can transform $g(u)$ into an algebraic polynomial of degree $16$: $$g(u) = \sum_{k=-N}^N c_k z^k = \mathcal P(z) w^N = \mathcal Q(w) z^N. \label{gPQ}$$ So, the task of finding roots of $g(u)$ becomes equivalent to solving $\mathcal P(z)=0$ or $\mathcal Q(w)=0$.[^3] Among all these complex roots we must select those that, within numeric errors, lie on the unit circle $|z|=|w|=1$. Since all $a_k$ and $b_k$ are real, the complex coefficients satisfy the property $c_k = c_{-k}^*$. Hence, roots of $\mathcal P(z)$ obey the following rule: if $z=r{\mathrm e}^{i\varphi}$ is such a root then $1/z^*=r^{-1}{\mathrm e}^{i\varphi}$ is also a root of $\mathcal P$. Therefore, the entire set of these roots includes three families: (i) roots on the unit circle $|z|=1$ that correspond to real $u$, (ii) roots outside of this circle, $|z|>1$, and (iii) roots inside the unit circle, $|z|<1$. The roots with $|z|\neq 1$ are split into pairs of mutually inverse values that have $|z|<1$ and $|z|>1$. Numerical algorithm {#sec_alg} =================== Determining the polynomial coefficients and their uncertainty ------------------------------------------------------------- First of all, we must represent the polynomial $g(u)$ in its canonical form (\[gcan\]). For that, we need to compute the coefficients $c_k$. The explicit formulae for $c_k$ are too complicated and impractical, except for the case $k=\pm 8$ given in (\[c8\]). Instead of computing the $c_k$ directly, we determine them by means of the discrete Fourier transform (DFT hereafter): $$c_k = \frac{1}{2N+1} \sum_{m=0}^{2N} g(u_m)\, {\rm e}^{-ik u_m}, \quad u_m=\frac{2\pi m}{2N+1}. \label{dft}$$ Here, the $g(u_m)$ are computed using the relatively compact formula (\[gdef\]). Regardless of the use of the DFT, this approach appears computationally faster than computing all $c_k$ directly. We do not even use FFT algorithms for that, because the number of coefficients, $N=8$, is too small. For such a small $N$, the FFT technique did not give us any remarkable speed advantage in comparison with the direct application of the DFT (\[dft\]). However, the DFT may accumulate some rounding errors. 
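As an illustration (again, not the [distlink]{} code itself), the plain DFT of (\[dft\]) may be sketched as follows; here `g_of_u` is assumed to be a callable evaluating (\[gdef\]) for the given orbit pair, and choosing $N=10$ yields the extra coefficients with $|k|>8$ that feed the error estimate discussed below.

```cpp
// Plain-DFT recovery of the coefficients c_k of g(u) = sum_k c_k e^{i k u}
// (a sketch of Eq. (dft); not the distlink implementation).
#include <cmath>
#include <complex>
#include <functional>
#include <vector>

std::vector<std::complex<double>> dft_coefficients(int N, const std::function<double(double)>& g_of_u) {
    const double pi = std::acos(-1.0);
    const int M = 2*N + 1;                      // number of equidistant samples u_m
    std::vector<std::complex<double>> c(M);     // c[k + N] holds c_k for k = -N..N
    for (int k = -N; k <= N; ++k) {
        std::complex<double> sum(0.0, 0.0);
        for (int m = 0; m < M; ++m) {
            const double um = 2.0*pi*m/M;
            // conjugated exponential extracts c_k from the samples g(u_m)
            sum += g_of_u(um) * std::exp(std::complex<double>(0.0, -k*um));
        }
        c[k + N] = sum / static_cast<double>(M);
    }
    return c;                                   // with N = 10, the |k| > 8 terms should vanish up to round-off
}
```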
The accuracy of the $c_k$ determined in this way can be roughly estimated by comparing the DFT estimate of $c_{\pm 8}$ with its explicit representation (\[c8\]), which is still mathematically simple. We may assume that numerical errors inferred by the formula (\[c8\]) are negligible, and that all the difference between (\[dft\]) and (\[c8\]) is solely explained by the DFT errors. Moreover, we can compute the DFT (\[dft\]) for any $N>8$. In such a case, all the coefficients $c_k$ for $|k|>8$ must turn zero. However, due to numeric errors their DFT estimates may turn out to be non-zero, and in such a case the magnitude of this $c_k$ can be used as a rough error assessment. Based on this consideration, we adopted the following formula to estimate the average error in $c_k$: $$\varepsilon^2 = \left.\left(|c_8-c_8'|^2 + \sum_{k=9}^N |c_k'|^2 \right)\right/(N-7). \label{err}$$ Here, $c_8$ is determined from (\[c8\]), while the $c_k'$ are the DFT estimates from (\[dft\]). The formula (\[err\]) represents a statistical estimation that treats numerical errors in $c_k$ as random quantities. It is based on the assumption that errors in different $c_k$ are statistically independent (uncorrelated) and have the same variance. In such a case, $\varepsilon^2$ provides an estimate of that variance. In our algorithm, we set $N=10$, thus computing the DFT from $21$ points $u_m$. In practical computations we always obtained $\varepsilon$ not far from the machine precision, except for rare cases. We additionally notice that the error estimate (\[err\]) also includes, to a certain extent at least, the numeric error that appears when computing the values of $g(u_m)$ by formula (\[gdef\]), not just the DFT errors inferred by (\[dft\]). Root-finding in the complex plane --------------------------------- When all $c_k$ are determined, along with their probable numerical error, we can determine all complex roots of $\mathcal P(z)$. This is done via Newtonian iterations and obeys the following numeric scheme: 1. Initial approximations for the first $8$ roots are selected in a specific optimized manner as detailed below. 2. The initial approximation for each subsequent root $z_k$ is chosen according to the prediction $z_k^{(0)}=1/z_{k-1}^*$, where $z_{k-1}$ is the final estimate of the previous root. Thanks to such a choice, the algorithm will always extract a paired complex root $z_k=1/z_{k-1}^*$ immediately after $z_{k-1}$. The Newtonian iterations for $z_k$ converge in this case very quickly (in two iterations or so). This does not work if $z_{k-1}$ belongs to the family $|z|=1$ (such roots do not combine into inverse pairs), or if $z_{k-1}$ turns out to be that *second* root in the pair. Then such a starting approximation would be equal to either $z_{k-1}$ or $z_{k-2}$, so the next extracted root $z_k$ will likely appear close to one of these. 3. Each root is refined by Newtonian iterations (i) until some least required relative accuracy $\delta_{\max}$ is achieved, and then (ii) until we reach the desired target relative accuracy $\delta_{\min}$ or, at least, the maximum possible machine accuracy, if $\delta_{\min}$ is unreachable. In the first phase, we iterate until the last Newtonian step $|d_n|$ falls below $\delta_{\max}|z|$. The iterations are restarted from a different starting point if they are trapped in an infinite loop at this phase (this is the known curse of the Newton method). In the second phase, the stopping criterion relies on the last and second-to-last Newtonian steps, $|d_n|$ and $|d_{n-1}|$. 
The iterations are continued either until $|d_n|<\delta_{\min}|z|$, or until the relative step change, $\gamma_n=(|d_{n-1}|^2-|d_n|^2)/|d_n|^2$, drops below the machine epsilon $\epsilon$. The latter criterion is motivated as follows. In the middle of the iterations, whenever numeric round-off errors are not yet significant, the parameter $\gamma_n$ should remain large and positive, since each $|d_n|$ is much smaller than $|d_{n-1}|$. But in the end either $\gamma_n\to 0$, if the iterations finally get stuck at almost the same numeric value near the root, or $\gamma_n$ occasionally attains negative values, if the iterations start to jump randomly about the root due to numeric errors. A good practical assumption for the accuracy parameters might be $\delta_{\max}\sim \sqrt{\epsilon}$ and $\delta_{\min}=0$ or about $\epsilon$. 4. Whenever we have an accurate estimate of a root $z_k$, this root is eliminated from $\mathcal P(z)$ by dividing it by $(z-z_k)$ via the Horner scheme. The remaining polynomial has a reduced degree. For the sake of numerical stability, we either extract the multiplier $(z-z_k)$ from $\mathcal P(z)$, if $|z_k|>1$, or $(w-w_k)$ from $Q(w)$, if $|z_k|<1$. 5. The roots are extracted in this way until $\mathcal P(z)$ is reduced to a quadratic polynomial. Its two remaining roots are then obtained analytically. The order in which the roots are extracted is important. If we extract ‘easy’ roots first, we spend few Newtonian iterations on the high-degree $\mathcal P$. Also, such ‘easy’ roots are likely far from degeneracies and hence numerically accurate. Therefore, they should not introduce big numeric errors when the Horner scheme is applied. The ‘difficult’ roots that require a large number of Newtonian iterations are better extracted later, when the degree of $\mathcal P$ is reduced. If we act in the opposite manner, i.e. extract ‘difficult’ roots first, these difficult roots will inevitably increase numeric errors. After applying the Horner scheme, these increased errors are transferred to the coefficients $c_k$, reducing the accuracy of all the remaining roots. Also, bad roots always require a larger number of Newtonian iterations, which become even more expensive at the beginning, when the degree of $\mathcal P$ is still large and its computation is slower. After some tests we decided that the best way is to extract the extreme complex roots at the very beginning: those with $|z|\ll 1$ and their inversions with $|z|\gg 1$. Such roots are determined quickly and accurately, and the Horner scheme is very stable for them. Since in practical computations we always found at least $4$ complex roots, we try to extract these four roots at the beginning. The starting approximation for the first root, $z_1^{(0)}$, is always set to zero. This will likely give us the root with the smallest $|z_1|$. The next root, $z_2$, is started from $z_2^{(0)}=1/z_1^*$ and will be determined almost immediately. It will be the largest one. Initial approximations for the next two roots, $z_3$ and $z_4$, are set by our usual rule, $z_k^{(0)}=1/z_{k-1}^*$. Thanks to this, we obtain the next smallest root as $z_3$, and the next largest root as $z_4$. After these four extreme complex roots are removed from $\mathcal P$, we try to extract the four guaranteed roots that lie on the unit circle. We select their initial approximations such that $u$ is located at the orbital nodes or $\pm 90^\circ$ from them. 
This is motivated by the practical observation that the MOID is usually attained near the orbital nodal line, see Sect. \[sec\_add\]. Thanks to such a choice, these four roots are determined in a smaller number of Newtonian iterations. The ninth root is iterated starting from $z_9^{(0)}=0$ again, and for the rest of the roots we follow the general rule $z_k^{(0)}=1/z_{k-1}^*$. Thanks to such a choice, the algorithm tries to extract the remaining roots starting far from the unit circle $|z|=1$, approaching it in the end. Therefore, the most numerically difficult cases, which are usually located at $|z|=1$, are processed last, when the degree of $\mathcal P$ is already reduced in a numerically safe manner. Using this optimized sequence we managed to reduce the average number of Newtonian iterations from $7$–$8$ per root to $5$–$6$ per root, according to our benchmark test case (Sect. \[sec\_perf\]). Also, this allowed us to further increase the overall numeric accuracy of the roots and the numeric stability of the results, because highly accurate roots are extracted first and roots with poor accuracy do not affect them. Estimating root uncertainties and root selection ------------------------------------------------ When all complex roots of $\mathcal P(z)$ are obtained, we need to select those roots that satisfy $|z|=1$ and thus correspond to real values of $u$. However, in practice the equation $|z|=1$ will never be satisfied exactly, due to numerical errors. We need to apply some criterion to decide whether a particular $|z_k|$ is close to unity, within some admissible numeric errors, or not. We approximate the relative error of the root $z_k$ by the following formula: $$\varepsilon_z^2 = \frac{1}{|z_k|^2} \left( |d|^2 + \frac{\varepsilon_{\mathcal P}^2}{|\mathcal D|^2} \right). \label{zerr}$$ Its explanation is as follows. Firstly, $d$ is the smaller (in absolute value) of the roots of a quadratic polynomial that approximates $\mathcal P(z)$ near $z_k$: $$\begin{aligned} \frac{\mathcal P''(z_k)}{2} d^2 + \mathcal P'(z_k) d + \mathcal P(z_k) = 0, \nonumber\\ d=\frac{-\mathcal P' + \mathcal D}{\mathcal P''}, \quad \mathcal D = \pm \sqrt{\mathcal P'^2 - 2\mathcal P \mathcal P''}. \label{qapp}\end{aligned}$$ Thus, the first term in (\[zerr\]), or $|d|$, approximates the residual error of $z_k$ still remaining after the Newtonian iterations. It is zero if $\mathcal P(z_k)=0$ precisely. Here we use the initial polynomial $\mathcal P$ of $16$th degree, not the one obtained after dividing it by any of the $z-z_k$. For practical purposes, $d$ should be calculated using a numerically stabilized formula that avoids subtraction of close numbers whenever $\mathcal P\approx 0$. For example, we can use $$d = \frac{-2\mathcal P}{\mathcal P' + \mathcal D},$$ selecting the sign of $\mathcal D$ that maximizes the denominator $|\mathcal P' + \mathcal D|$. But $|d|$ alone is not enough to characterize the uncertainty of $z_k$ in full. In fact, most of this uncertainty comes from the numerical errors appearing in $\mathcal P(z)$ through the $c_k$. Inaccurate computation of $\mathcal P(z)$ leads to errors in the estimated root $z_k$. Using the quadratic approximation (\[qapp\]), the sensitivity of $z_k$ with respect to varying $\mathcal P$ is expressed by the derivative $\partial d/\partial\mathcal P = -1/\mathcal D$. 
Hence, the second error term in (\[zerr\]) appears, $\varepsilon_{\mathcal P}/|\mathcal D|$, where $\varepsilon_{\mathcal P}$ represents the error estimate of $\mathcal P(z)$: $$\varepsilon_{\mathcal P}^2 = \varepsilon^2 \sum_{n=0}^{16} |z_k|^{2n},$$ where $\varepsilon$ is given in (\[err\]). The quadratic approximation (\[qapp\]) is related to the iterative Muller method, which takes into account the second derivative of $\mathcal P$. We needed to take $\mathcal P''$ into account because in practice the real roots of $g(u)$ often combine into close pairs, triggering a close-to-degenerate behaviour with small $|\mathcal P'(z)|$. In such a case the linear (Newtonian) approximation of $\mathcal P(z)$ yields a too pessimistic error estimate for $z_k$. The use of the quadratic approximation (\[qapp\]) instead allows us to adequately treat such cases with nearly double roots. However, even with (\[qapp\]) it is still difficult to treat the cases in which the roots combine in close quadruples. Then $\mathcal P''(z_k)$ becomes small too, along with $\mathcal P'(z_k)$ and $\mathcal P(z_k)$. The error estimate (\[zerr\]) becomes too pessimistic again. Such cases are very rare, but still exist. They may need to be processed with an alternative computation method (see Sect. \[sec\_add\]). In the error estimate (\[zerr\]), we neglect numerical errors of $\mathcal P'(z_k)$ and of $\mathcal P''(z_k)$, assuming that these quantities do not vanish in general and thus always keep a satisfactory relative accuracy (this is typically true even for almost double paired roots). We use the following numeric criterion to identify roots lying on the unit circle: $$\Delta_z = \frac{\left|\log|z| \right|}{\nu \varepsilon_z} \leq 3. \label{thr}$$ Here, $\nu$ is an auxiliary scaling parameter controlling the tolerance of the threshold. Normally, it should be set to unity; its purpose is to heuristically correct the estimated $\varepsilon_z$ in case there are hints that this error estimate is systematically wrong. The threshold $3$ is supposed to represent the so-called three-sigma rule. It was selected well above unity in order to increase the safety of the root selection and hence the reliability of the entire algorithm. After selecting all the roots $z_k$ that lie close enough to the unit circle, we may determine the corresponding eccentric anomaly $u_k=\arg z_k$, then its corresponding $u_k'$ from (\[us\]) and $\rho_k = \rho(u_k,u_k')$ from (\[rho\]). The minimum among all computed $\rho_k$ yields the required MOID estimate. In general, the discriminant $D$ in (\[us\]) is non-negative if $u$ is a root of $g(u)$, but this can be violated in some special degenerate cases [@Baluev05]. Formally, negative $D$ means that the MOID cannot be attained at the given orbital position $u$, even if $g(u)=0$. This is quite legal, meaning that some roots of $g(u)$ may be parasitic, i.e. corresponding to a critical point of $\rho(u,u')$ for some complex $u'$ (even if $u$ is real). However, $D$ may also turn negative due to numeric errors perturbing an almost-zero $D>0$. We could distinguish such cases based on some uncertainty estimate of $D$, but in practice it appears easier to process them all by just forcing $D=0$. In the first case (if $D$ is indeed negative), this would imply just a negligible computation overhead because of unnecessary testing of an additional $\rho_k$ that cannot be the MOID. But in the second case (if $D$ appeared negative due to numeric errors) we avoid losing a potential MOID candidate $\rho_k$. 
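A simplified sketch of this final selection stage is given below. It is an illustration only: `Orbit`, `abc_quantities` and `squared_distance` are assumed helper routines (evaluating the quantities of (\[ABC\]) and the squared distance $|{\bm{r}}-{\bm{r}}'|^2$, respectively) and are not part of the [distlink]{} interface.

```cpp
// Sketch: turning the roots accepted by the unit-circle test (thr) into MOID candidates.
#include <algorithm>
#include <cmath>
#include <complex>
#include <limits>
#include <vector>

struct Orbit { double a, e, i, Omega, omega; };

// Assumed helpers (not distlink API): Eq. (ABC) quantities and |r - r'|^2 for (u, u').
void abc_quantities(double u, const Orbit& o1, const Orbit& o2,
                    double& A, double& B, double& C, double& M, double& N, double& K);
double squared_distance(double u, double up, const Orbit& o1, const Orbit& o2);

double moid_from_selected_roots(const std::vector<std::complex<double>>& unit_circle_roots,
                                const Orbit& o1, const Orbit& o2) {
    double best = std::numeric_limits<double>::infinity();
    for (const auto& z : unit_circle_roots) {
        const double u = std::arg(z);            // eccentric anomaly on the first orbit
        double A, B, C, M, N, K;
        abc_quantities(u, o1, o2, A, B, C, M, N, K);
        double D = A*A + B*B - C*C;
        if (D < 0.0) D = 0.0;                    // force D = 0: either a parasitic root or a round-off artifact
        const double sqD = std::sqrt(D);
        // Pick the sign m = +/-1 that best satisfies M sin u' + N cos u' = K sin u' cos u'.
        const int signs[2] = {+1, -1};
        double bestResid = std::numeric_limits<double>::infinity(), up = 0.0;
        for (int m : signs) {
            const double cosUp = (B*C + m*A*sqD) / (A*A + B*B);
            const double sinUp = (A*C - m*B*sqD) / (A*A + B*B);
            const double resid = std::abs(M*sinUp + N*cosUp - K*sinUp*cosUp);
            if (resid < bestResid) { bestResid = resid; up = std::atan2(sinUp, cosUp); }
        }
        best = std::min(best, squared_distance(u, up, o1, o2));
    }
    return std::sqrt(best);                      // the MOID estimate prior to the 2D refinement below
}
```

In this sketch the sign $m$ is chosen by minimizing the residual of the condition $M \sin u' + N \cos u' = K \sin u'\cos u'$, which tolerates small round-off in nearly degenerate configurations.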
Refining the MOID by 2D iterations ---------------------------------- Now we have quite a good approximation for the MOID and for the corresponding positions in $u$ and $u'$. However, their accuracy is typically $1$–$2$ significant digits worse than the machine precision (even if we iterated the roots $z_k$ to the machine precision). The loss of numeric precision appears in multiple places: in rather complicated formulae like (\[gdef\]), in (\[us\]) if $D$ appeared small, in the DFT computation (\[dft\]), and so on. As a result, whenever working in the standard [double]{} precision, we may obtain average numeric errors of $\sim 10^{-14}$ instead of the relevant machine epsilon $\sim 10^{-16}$. Although this average accuracy is quite good for most practical needs, in poorly-conditioned cases the errors may increase further. But fortunately, the results can be easily refined to the machine precision $\sim 10^{-16}$ at the cost of negligible overheads. This can be achieved by applying the 2D Newton iteration scheme to the function $\rho(u,u')$. Let us expand it into a Taylor series: $$\rho(u,u') = \rho_0 + {\bm{g}} \cdot {\bm{d}} + \frac{1}{2} {\bm{d}}^{\rm T} {\bm{\mathsf{H}}} {\bm{d}} + \ldots \label{rhoTayl}$$ Here, ${\bm{g}}$ is the gradient and ${\bm{\mathsf{H}}}$ is the Hessian matrix of $\rho$, considered at the current point $(u,u')$, while ${\bm{d}}$ is the 2D step in the plane $(u,u')$. We need to find the ${\bm{d}}$ for which the gradient of (\[rhoTayl\]) vanishes: $$\nabla \rho = {\bm{g}} + {\bm{\mathsf{H}}} {\bm{d}} + \ldots = 0,$$ therefore the necessary 2D step is $${\bm{d}} = - {\bm{\mathsf{H}}}^{-1} {\bm{g}}. \label{step}$$ To compute $\rho$, ${\bm{g}}$, and ${\bm{\mathsf{H}}}$, we do not rely on the formula (\[rho\]), because it is poorly suited for practical numeric computations. It may generate a precision loss due to the subtraction of large numbers. Such a precision loss appears when the MOID is small compared to $a$ and $a'$. The derivatives of $\rho$ can be computed using the following formulae, obtained by direct differentiation: $$\begin{aligned} \rho = \frac{({\bm{r}} - {\bm{r}}')^2}{2aa'}, \; g_u = \frac{({\bm{r}} - {\bm{r}}') {\bm{r}}_u}{2aa'}, \; g_{u'} = -\frac{({\bm{r}} - {\bm{r}}') {\bm{r}}'_{u'}}{2aa'}, \nonumber\\ H_{uu} = \frac{({\bm{r}} - {\bm{r}}') {\bm{r}}_{uu} + {\bm{r}}_u^2}{2aa'}, \; H_{u'u'} = \frac{{{{\bm{r}}}'_{u'}}^2 - ({\bm{r}} - {\bm{r}}') {\bm{r}}'_{u'u'}}{2aa'}, \nonumber\\ H_{uu'} = -\frac{{\bm{r}}_u {\bm{r}}'_{u'}}{2aa'}, \qquad {\bm{r}}_u = \frac{d{\bm{r}}}{du}, \quad {\bm{r}}'_{u'} = \frac{d{\bm{r}}'}{du'}. \label{vgH}\end{aligned}$$ In fact, the effect of the precision loss is present in (\[vgH\]) too, due to the difference ${\bm{r}} - {\bm{r}}'$, but formula (\[rho\]) would exacerbate it further, because it implicitly involves subtraction of *squares* of these quantities. Now, according to @Baluev05, the radius-vector ${\bm{r}}$ on a Keplerian elliptic orbit is $$\frac{{\bm{r}}}{a} = {\bm{P}} (\cos u - e) + {\bm{S}} \sin u, \label{rvec}$$ with a similar expression for ${\bm{r}}'$. The corresponding derivatives with respect to $u$ and $u'$ are obvious. The stopping criterion for the 2D iterations (\[step\]) is similar to the one used in the Newton-Raphson scheme for the roots $z_k$. It applies the tolerance parameter $\delta_{\min}$ to $|{\bm{d}}|$. 
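For illustration, a single refining step (\[step\]) can be written out as below. This is a self-contained sketch under our own naming (an `Orbit` holding $a$, $e$ and the vectors ${\bm{P}}$, ${\bm{S}}$ of (\[rvec\])), not the [distlink]{} interface; it evaluates (\[vgH\]) and solves the $2\times2$ system directly.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// x*A + y*B and the dot product of two 3-vectors.
static Vec3 lin(double x, const Vec3& A, double y, const Vec3& B) {
    return {x*A[0] + y*B[0], x*A[1] + y*B[1], x*A[2] + y*B[2]};
}
static double dot(const Vec3& A, const Vec3& B) {
    return A[0]*B[0] + A[1]*B[1] + A[2]*B[2];
}

struct Orbit { double a, e; Vec3 P, S; };   // the quantities entering Eq. (rvec)

// Performs one Newton step (u,u') <- (u,u') - H^{-1} g and returns the step length.
double newton2d_step(const Orbit& o1, const Orbit& o2, double& u, double& u1)
{
    // Position and its u-derivatives on each orbit, from Eq. (rvec).
    Vec3 r    = lin(o1.a*(std::cos(u)  - o1.e), o1.P,  o1.a*std::sin(u),  o1.S);
    Vec3 ru   = lin(-o1.a*std::sin(u),          o1.P,  o1.a*std::cos(u),  o1.S);
    Vec3 ruu  = lin(-o1.a*std::cos(u),          o1.P, -o1.a*std::sin(u),  o1.S);
    Vec3 r2   = lin(o2.a*(std::cos(u1) - o2.e), o2.P,  o2.a*std::sin(u1), o2.S);
    Vec3 r2u  = lin(-o2.a*std::sin(u1),         o2.P,  o2.a*std::cos(u1), o2.S);
    Vec3 r2uu = lin(-o2.a*std::cos(u1),         o2.P, -o2.a*std::sin(u1), o2.S);

    double s  = 2.0 * o1.a * o2.a;
    Vec3 diff = lin(1.0, r, -1.0, r2);          // r - r'

    // Gradient and Hessian of rho(u,u'), Eq. (vgH).
    double gu    =  dot(diff, ru)  / s;
    double gu1   = -dot(diff, r2u) / s;
    double Huu   = ( dot(diff, ruu)  + dot(ru,  ru))  / s;
    double Hu1u1 = (-dot(diff, r2uu) + dot(r2u, r2u)) / s;
    double Huu1  = -dot(ru, r2u) / s;

    // Solve the 2x2 system H d = -g.
    double det = Huu * Hu1u1 - Huu1 * Huu1;
    double du  = -( Hu1u1 * gu - Huu1 * gu1) / det;
    double du1 = -(-Huu1  * gu + Huu  * gu1) / det;
    u += du;  u1 += du1;
    return std::sqrt(du*du + du1*du1);
}
```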
The iterations are therefore continued either until this accuracy $\delta_{\min}$ is reached by the angular variables $u$ and $u'$, or until we reach the maximum possible numeric precision, so that further iterations are unable to increase it. The other control parameter $\delta_{\max}$ is not used here. In practice it appears enough to make just one or two refining iterations (\[step\]) to reach almost the machine accuracy in $\rho$. In rare almost-degenerate cases we may need $n=3$ iterations or more, but the fraction of such occurrences is small and quickly decreases for larger $n$. Estimating uncertainties of the MOID and of its orbital positions ----------------------------------------------------------------- The numeric errors in $u$ and $u'$ come from three sources: the floating-point ‘storage’ errors, the residual errors appearing due to an inaccurate fit of the condition $\nabla\rho=0$, and the numeric errors appearing when computing ${\bm{g}}$. Each of these error components propagates to $\rho$. The first error part in $u,u'$ can be roughly approximated as $$\sigma_{u,u'}^{(1)} = \pi\nu\epsilon, \label{uu1}$$ assuming that $\nu$ is our universal error scaling factor. Since the gradient ${\bm{g}}$ is negligible near the MOID, the Taylor decomposition (\[rhoTayl\]) implies the following error in $\rho$: $$\Delta\rho \simeq \frac{1}{2} {\bm{d}}^{\rm T} {\bm{\mathsf{H}}} {\bm{d}},$$ where ${\bm{d}}$ has the meaning of the 2D numeric error in $u,u'$. From (\[uu1\]), we know only the typical length of ${\bm{d}}$, but the direction of this vector can be arbitrary. Then we can use the Rayleigh inequality $$|{\bm{d}}^{\rm T} {\bm{\mathsf{H}}} {\bm{d}}| \leq |\lambda_{\max}| {\bm{d}}^2,$$ where $\lambda_{\max}$ is the maximum (in absolute value) eigenvalue of ${\bm{\mathsf{H}}}$. Since the size of ${\bm{\mathsf{H}}}$ is only $2\times 2$, it can be computed directly: $$|\lambda_{\max}| = \left|\frac{H_{uu}+H_{u'u'}}{2}\right| + \sqrt{\left(\frac{H_{uu}-H_{u'u'}}{2}\right)^2 + H_{uu'}^2}.$$ So, the indicative uncertainty in $\rho$ coming from this error source is $$\sigma_\rho^{(1)} \simeq \frac{|\lambda_{\max}|}{2} \left(\sigma_u^{(1)}\right)^2. \label{urho1}$$ The second error part in $u,u'$ can be derived from (\[step\]), substituting the residual gradient ${\bm{g}}$ that appears after the last refining iteration: $$\left( \sigma_u^{(2)}\atop \sigma_{u'}^{(2)} \right) = - {\bm{\mathsf{H}}}_{\rm rsd}^{-1} {\bm{g}}_{\rm rsd}. \label{uu2}$$ From (\[rhoTayl\]), this implies the following error in $\rho$: $$\sigma_\rho^{(2)} \simeq \frac{1}{2} {\bm{g}}_{\rm rsd}^{\rm T} {\bm{\mathsf{H}}}_{\rm rsd}^{-1} {\bm{g}}_{\rm rsd}. \label{urho2}$$ The third error source comes from possible round-off errors in ${\bm{g}}$, which may perturb the computed position of the local minimum of $\rho$ via (\[step\]). Let us assume that the numeric uncertainty of ${\bm{g}}$ is $\sigma_g$, which has the meaning of a typical length of the computed vector ${\bm{g}}$ at the strict (algebraic) stationary point of $\rho$, where ${\bm{g}}$ must vanish. 
Then from (\[step\]) one may derive that $$|{\bm{d}}| \leq \frac{|{\bm{g}}|}{|\lambda_{\min}|},$$ where the $\lambda_{\min}$ is the minimum eigenvalue of ${\bm{\mathsf{H}}}$: $$\frac{1}{|\lambda_{\min}|} = \frac{|\lambda_{\max}|}{|\det{\bm{\mathsf{H}}}|}.$$ Hence, the corresponding uncertainty in $u,u'$ is estimated by $$\sigma_{u,u'}^{(3)} \simeq \frac{|\lambda_{\max}|}{|\det{\bm{\mathsf{H}}}|}\sigma_g, \label{uu3}$$ The corresponding uncertainty for $\rho$ can be expressed using (\[urho2\]), but now ${\bm{g}}$ is basically an unknown random vector of average length $\sim \sigma_g$. We can again apply the Rayleigh inequality to obtain $$\sigma_\rho^{(3)} \simeq \frac{|\lambda_{\max}|}{2} \frac{\sigma_g^2}{|\det{\bm{\mathsf{H}}}|}. \label{urho3}$$ The quantity $\sigma_g$ appears more complicated and is derived below. The fourth error source in $\rho$ appears when applying the first formula of (\[vgH\]). If MOID is small then the relative error in $\rho$ increases due to the precision loss, appearing because of the subtraction of close vectors ${\bm{r}}$ and ${\bm{r}}'$. If these vectors have relative ‘storage’ errors about $\nu \epsilon$ then their absolute numeric errors are $\sigma_{{\bm{r}}} \sim \nu\epsilon r$ and $\sigma_{{\bm{r}}'} \sim \nu\epsilon r'$. Hence, the inferred cumulative uncertainty of the difference is about their quadrature sum: $$\sigma_{{\bm{r}}-{\bm{r}}'} \simeq \nu\epsilon \sqrt{r^2+{r'}^2}. \label{sdiff}$$ Let us compute the inferred uncertainty in $\rho$ using the so-called delta method. Since $2aa'\rho=({\bm{r}}-{\bm{r}}')^2$ then the error $\Delta\rho$ appearing due to a small perturbation $\Delta_{{\bm{r}}-{\bm{r}}'}$ is $$2aa' \Delta\rho = 2({\bm{r}}-{\bm{r}}') \Delta_{{\bm{r}}-{\bm{r}}'} + \Delta_{{\bm{r}}-{\bm{r}}'}^2.$$ By replacing the terms above with their uncertainties or with absolute values of vectors, and assuming that different terms are always added in the worst-case fashion, the final error component in $\rho$ may be estimated as follows: $$\sigma_\rho^{(4)} \simeq 2\rho \frac{\sigma_{{\bm{r}}-{\bm{r}}'}}{\sqrt{2aa'}} + \frac{\sigma_{{\bm{r}}-{\bm{r}}'}^2}{2aa'}. \label{urho4}$$ Notice that we cannot in general neglect the last term in (\[urho4\]), because it may appear significant if $\rho$ is small. Now we can also estimate the numeric uncertainty of ${\bm{g}}$ by computing its finite difference from (\[vgH\]), analogously to $\Delta\rho$: $$2aa' \Delta {\bm{g}} \simeq \left(\Delta_{{\bm{r}}-{\bm{r}}'}{\bm{r}}_u + ({\bm{r}}-{\bm{r}}')\Delta {\bm{r}}_u \atop \Delta_{{\bm{r}}-{\bm{r}}'}{\bm{r}}'_{u'} + ({\bm{r}}-{\bm{r}}')\Delta {\bm{r}}'_{u'}\right).$$ By applying the same approach as for $\Delta\rho$, we may obtain the following for the uncertainties in ${\bm{g}}$: $$2aa' \left(\sigma_{{\bm{g}}_u}\atop \sigma_{{\bm{g}}_{u'}}\right) \simeq \left(\sigma_{{\bm{r}}-{\bm{r}}'} r_u + \rho\sqrt{2aa'} \sigma_{{\bm{r}}_u} \atop \sigma_{{\bm{r}}-{\bm{r}}'} r'_{u'} + \rho\sqrt{2aa'}\sigma_{{\bm{r}}'_{u'}} \right).$$ Since $\sigma_g^2 = \sigma_{{\bm{g}}_u}^2 + \sigma_{{\bm{g}}_{u'}}^2$, and $\sigma_{{\bm{r}}_u} \sim \nu\epsilon r_u$, and using (\[sdiff\]), one can obtain $$\sigma_g \simeq \frac{\nu\epsilon}{2aa'}\left(|{\bm{r}}-{\bm{r}}'|+\sqrt{r^2+r'^2}\right) \sqrt{r_u^2+{r'_{u'}}^2}.$$ But now we can clearly see that the term $|{\bm{r}}-{\bm{r}}'|$ is either small or of the same order as $\sqrt{r^2+r'^2}$, so to simplify the formula we can simply neglect it and leave only the latter term. 
We therefore put: $$\sigma_g \simeq \frac{\nu\epsilon}{2aa'} \sqrt{r^2+r'^2} \sqrt{r_u^2+{r'_{u'}}^2}.$$ This should be substituted into formula (\[urho3\]) above. Finally, summing up all four error components in $\rho$ yields the cumulative uncertainty $$\sigma_\rho \sim \sigma_\rho^{(1)}+\sigma_\rho^{(2)}+\sigma_\rho^{(3)}+\sigma_\rho^{(4)}.$$ Of course, some of these error terms may often become negligible, but it is difficult to predict in advance which terms would dominate in this sum. Since any of the terms may appear large in certain conditions, we need to preserve all of them for the sake of reliability. After that, an indicative uncertainty for the $\mathrm{MOID}=\sqrt{2aa'\rho}$ is approximated by using the delta method as $$\sigma_{\mathrm{MOID}} \sim \frac{aa'\sigma_\rho}{\mathrm{MOID}}.$$ This formula is valid only if $\rho$ is not close to zero (compared to $\sigma_\rho$). Otherwise, the MOID uncertainty is $$\sigma_{\mathrm{MOID}} \sim \sqrt{2aa'\sigma_\rho}.$$ The two latter formulae can be combined into a single approximate one: $$\sigma_{\mathrm{MOID}} \sim \frac{aa'\sigma_\rho}{\sqrt{\mathrm{MOID}^2 + aa'\sigma_\rho/2}}.$$ Self-testing numerical reliability ---------------------------------- Finally, our algorithm includes a self-diagnostic test that verifies the following post-conditions: 1. All roots that passed (\[thr\]) must comply with the requested least accuracy: $\nu\varepsilon_z<\delta_{\max}$. 2. The minimum of $\Delta_z$ among all the roots that failed (\[thr\]) must exceed $10$, meaning that there are no other suspicious root candidates. That is, the families $|z|=1$ and $|z|\neq 1$ must be separated by a clear gap. 3. The number of roots that passed (\[thr\]) must be even and greater than four (necessary algebraic conditions following from the theory). 4. After the 2D refining, the Hessian ${\bm{\mathsf{H}}}_{\rm rsd}$ is strictly positive-definite, so we are indeed at a local minimum (rather than a maximum or a saddle point). 5. On the 2D refining stage, the total cumulative change in $u$ satisfies the condition $|\Delta u| < \delta_{\max}$. In part this condition duplicates condition 1, ensuring that the initial approximation of the corresponding root did not have an unacceptable accuracy. But it also ensures that the 2D refining did not switch us to a completely different root of $g(u)$ (another stationary point of $\rho$). We pay no attention to $u'$ here, because it is always derived from $u$ using (\[us\]), so its numeric error, even if large, is not indicative regarding the selection of a correct root of $g(u)$. If any of these conditions is violated, the algorithm sets a warning flag. When such a signal is received, the results should be considered unreliable. In practice such cases are very rare (see the next section), but still possible. Then the following sequence can be used to verify the results: (i) run the same algorithm on the same orbits $\mathcal E$ and $\mathcal E'$, but swap them with each other; (ii) if it failed again, run the same algorithm using the [long double]{} precision instead of [double]{}; (iii) run the [long double]{} computation on swapped orbits; (iv) if everything failed, invoke an alternative MOID algorithm, e.g. the one from Sect. \[sec\_add\]. We notice that since the task is solved asymmetrically, the algorithm may yield slightly different results when computing $\mathrm{MOID}(\mathcal E,\mathcal E')$ and $\mathrm{MOID}(\mathcal E',\mathcal E)$. 
If the orbital configuration does not imply degeneracies, both computations should offer the same MOID value, within the reported uncertainties. If they differ unacceptably, this can serve as an additional indicator that something went wrong. However, this notice does not apply to the estimated MOID uncertainty. This uncertainty can appear different when computing $\mathrm{MOID}(\mathcal E,\mathcal E')$ and $\mathrm{MOID}(\mathcal E',\mathcal E)$, because the polynomials $g(u)$ and $g(u')$ may have (and usually do have) different algebraic properties. In fact, if the goal is accuracy rather than speed, one may always compute the MOID in both directions, $\mathcal E \to \mathcal E'$ and $\mathcal E' \to \mathcal E$, selecting the value offering the better accuracy. On the choice of error tolerances {#sec_tols} ================================= The algorithm involves three main parameters related to the error control: $\delta_{\min}$, $\delta_{\max}$, and $\nu$. The primary error tolerance is $\delta_{\min}$. It controls the resulting accuracy of the roots $z_k$, and of the eccentric anomalies $u,u'$, but not of the MOID itself and not even of the dimensionless function $\rho$. By setting $\delta_{\min}$ to a larger or smaller value we may obtain a less or more accurate result (in terms of $u,u'$). We can set $\delta_{\min}=0$, meaning to seek the maximum precision possible with the hardware (though this probably requires the use of the [long double]{} arithmetic, see below). The auxiliary error tolerance $\delta_{\max}$ does not actually control the accuracy of the results. Setting a smaller $\delta_{\max}$ won’t result in a more numerically accurate MOID estimate. This parameter has two aspects: (i) it is used in the root-finding part to control the initial ‘burn-in’ stage of the Newton scheme and (ii) it is used as an indicative threshold to separate numerically ‘reliable’ cases from ‘unreliable’ ones. Therefore, common sense requires that $\delta_{\max}$ must be greater (preferably, significantly greater) than $\delta_{\min}$. Forcing $\delta_{\max}$ too small may result in the following undesired effects. First, the Newton root-finding scheme may drastically slow down, because its ‘burn-in’ stage does not expect to reach the machine precision and may keep iterating the roots until the internal iteration limit is hit. The output precision would then be worse than $\delta_{\max}$ anyway. Second, a too small $\delta_{\max}$ may trigger an unnecessary increase in the number of unreliability warnings. Concerning the unreliability warnings, the practical value of $\delta_{\max}$ can be selected based on the observational uncertainty of the orbital elements (the relative uncertainty in terms of $a,a'$ or the absolute one in terms of the angular elements). This input uncertainty is typically considerably larger than the machine precision. With $\delta_{\max}$ selected in this way, the warning flag would indicate that the numeric accuracy of $z_k$ might be worse than the errors inferred from the input observational uncertainties. Then the intermediate (inferred from $z_k$) numeric uncertainty of $u$ and $u'$ may appear larger than what we can trigger by varying the orbital elements within their error boxes. In this case the warning is physically reasonable. But if the numeric uncertainty always remains below the observational one then signaling any warning does not make sense. In any case, it does not make sense to set $\delta_{\max}$ much below the physical sizes of the objects (relative to $\sim \max(a,a')$). 
For the Main Belt, this is $\sim 10^{-11}$ AU corresponding to the smallest known asteroids of $\sim 1$ m in size. If the uncertainty of the orbital elements is unknown or irrelevant then a good choice of $\delta_{\max}$ is $\sqrt{\delta_{\min}}$, which follows from the properties of the Newton-Raphson method. In such a case, for each root $z_k$ we need to make just one or two iterations after the ‘burn-in’ part of the Newton scheme. This is because the number of accurate digits is roughly doubled after each Newtonian iteration. For example, if the accuracy of $\sqrt\epsilon$ has been reached, on the next iteration we will likely have $\epsilon$. As to the last control parameter, $\nu$, it may be used to manually scale up all the error assessments. So far in our tests we did not find practical reasons to set it to something different from $\nu=1$. But whenever necessary, it can be used to disentangle the value $\delta_{\max}$ used by the Newtonian root-finding scheme from the threshold used in the error control part. Since $\nu$ scales all the error estimates up, its effect is equivalent to reducing the error threshold from $\delta_{\max}$ to $\delta_{\max}/\nu$, but the Newton scheme always uses $\delta_{\max}$ and ignores the scale factor $\nu$. Summarizing, in the general case it appears reasonable to set $\delta_{\min}$ about the machine epsilon, $\delta_{\max}\sim\sqrt{\delta_{\min}}$, and to select such a $\nu$ that $\delta_{\max}/\nu$ is about the physically justified MOID uncertainty (relative to $\sim \max(a,a')$). Practical validation and performance benchmarks {#sec_perf} =============================================== We tested our algorithm on the first $10000$ numbered asteroids from the Main Belt, implying $\sim 10^8$ orbit pairs. The orbital elements were taken from the catalog `astorb.dat` of the Lowell observatory.[^4] Our algorithm succeeded nearly always. When using the standard [double]{} floating-point arithmetic, the self-test conditions listed above failed only once per $25000$ orbit pairs. In case of such a warning the first attempt was to rerun the same algorithm interchanging the orbits $\mathcal E$ and $\mathcal E'$. Since the method treats orbits asymmetrically, this usually helps. Double warnings occurred in our test once per $2.5\times 10^6$ orbit pairs. We note that if the algorithm returns a bad self-diagnostic flag, this does not yet mean that it failed to compute the MOID and that the result is necessarily wrong or just absent. One of the reasons for a warning is that some root $z_k$ (not even necessarily related to the global minimum) is worse than the required least accuracy $\delta_{\max}$. But worse does not necessarily mean useless. This just means that the result needs attention and probably a more detailed investigation using other methods to confirm or refine it. Occurrences when the resulting MOID appears entirely wrong and has unacceptable accuracy represent only a small fraction of all those cases when the warning was reported. ![The difference between the MOID values computed by Gronchi’s code and by our new algorithm (labelled as [distlink]{}). Top: comparing results improved by orbit interchanging and selecting the best MOID of the two. Bottom: using only a single MOID value for comparison. To reduce the figure file size, we removed from both graphs all differences below $10^{-13}$ AU in absolute magnitude. In these conditions, no points were revealed below the abscissa, i.e. 
[distlink]{} always provided a smaller MOID value than Gronchi’s code. See text for a detailed explanation.[]{data-label="fig_Gtest"}](compar.eps){width="\linewidth"} We also tested the Gronchi FORTRAN code in the same setting. We found only two orbit pairs for which it failed with an error and no result, and swapping the orbits did not help. A single-fail case, when the first attempt to compute the MOID failed but swapping the orbits did help, occurred once per $\sim 3\times 10^5$ MOID computations. For the majority of orbit pairs this algorithm gave at least some numeric MOID value, but in itself this does not guarantee that all these values are accurate. We provide a comparison of our new algorithm with the Gronchi code in Fig. \[fig\_Gtest\]. We compute the differences of the Gronchi MOID minus the MOID obtained by our new algorithm in two settings. In the first case, we run both algorithms for each orbit pair twice, to compute $\mathrm{MOID}(\mathcal E, \mathcal E')$ and $\mathrm{MOID}(\mathcal E', \mathcal E)$. With the Gronchi code, we select the minimum MOID between the two, and for our new code we select the best-accuracy MOID. If the Gronchi algorithm failed with no result in one of the two runs, the corresponding MOID value was ignored, and only the other one was used. If both values of the MOID obtained by Gronchi’s algorithm failed, this orbit pair was ignored entirely. Additionally, if our new algorithm reported a warning, we either ignored this MOID in favour of the other value, or invoked the fallback algorithm from Sect. \[sec\_add\], if that second MOID estimate appeared unreliable too. The MOID difference between the Gronchi code and the new algorithm was then plotted in the top panel of Fig. \[fig\_Gtest\]. In the second setting, we simply performed a single MOID computation for each orbit pair without orbit interchange, either using the Gronchi code or our new algorithm. Orbit pairs for which the Gronchi code failed or our algorithm reported a warning were ignored and removed. The corresponding MOID difference is plotted in the bottom panel of Fig. \[fig\_Gtest\]. We may see that there are multiple occurrences when the Gronchi code obtained a clearly overestimated MOID value (i.e., it missed the true global minimum). But all the cases in which the Gronchi algorithm produced a smaller MOID than our library correspond to MOID differences of $\sim 10^{-13}$ AU at most, with $\sim 10^{-16}$ AU on average. So all these occurrences look like remaining round-off errors (possibly even in the Gronchi code rather than in [distlink]{}). Therefore, we did not find an occurrence in which [distlink]{} would yield a clearly wrong MOID value without setting the unreliability flag. ![Distribution of the estimated uncertainties $\sigma_\mathrm{MOID}$ versus an empiric error measure $\left|\mathrm{MOID}(\mathcal E, \mathcal E') - \mathrm{MOID}(\mathcal E', \mathcal E)\right|$, for the test case of $10^8$ orbital pairs (see text). The inclined line labels the main diagonal (abscissa equals ordinate). All simulated dots fell below this line. The computations were done in the [double]{} floating-point arithmetic, AMD FX configuration.[]{data-label="fig_testerr"}](errors_rasterized.eps){width="49.00000%"} In Fig. 
\[fig\_testerr\] we compare the quadrature sum of the reported MOID uncertainties, $\sigma_\mathrm{MOID}=\sqrt{\sigma_\mathrm{MOID(\mathcal E,\mathcal E')}^2 + \sigma_\mathrm{MOID(\mathcal E',\mathcal E)}^2}$, with the difference $|\mathrm{MOID}(\mathcal E,\mathcal E')-\mathrm{MOID}(\mathcal E',\mathcal E)|$ that can be deemed an empiric estimate of the actual MOID error. We may conclude that our algorithm provides a rather safe and realistic assessment of the numeric errors, intentionally a bit pessimistic. We did not meet a case with the empiric error exceeding the predicted uncertainty. ![image](iterations.eps){width="49.00000%"} ![image](iterations_2D.eps){width="49.00000%"} From Fig. \[fig\_testiter\] one can see that we spend, on average, about $n=5$–$6$ Newtonian iterations per root. One way to further increase the speed of the computation is to reduce this number. However, this number is already quite small, so there is not much room to reduce it significantly. On the refining stage the algorithm usually performs just one or two 2D Newtonian iterations in the plane $(u,u')$. The fraction of occurrences when three or more refining iterations are made is very small and decreases quickly for larger numbers of iterations. The maximum number of refining iterations made in this test was $7$.

  ------------------- -------------- -------------- -------------- --------------
  Hardware            [distlink]{}   Gronchi code   [distlink]{}   Gronchi code
                      (fast alg.)                   (fast alg.)
  Intel Core i7       $24$ $\mu$s    $36$ $\mu$s    $77$ $\mu$s    NA
  Supermicro & Xeon   $31$ $\mu$s    $61$ $\mu$s    $100$ $\mu$s   NA
  AMD FX              $44$ $\mu$s    $70$ $\mu$s    $357$ $\mu$s   NA
  ------------------- -------------- -------------- -------------- --------------

\[tab\_bench\] In Table \[tab\_bench\], we present our performance benchmarks for this test application. They were done for the following hardware: (i) Intel Core i7-6700K at $4.0$ GHz, (ii) a Supermicro server with an Intel Xeon CPU E5-2630 at $2.4$ GHz, and (iii) an AMD 990FX chipset with an AMD FX-9590 CPU at $4.4$ GHz. The second configuration is rather similar to one of those used by @Hedo18. We used either the 80-bit [long double]{} floating-point arithmetic or the 64-bit [double]{} one, and requested the desired accuracy of $2\epsilon$: $\delta_{\min}\sim 2.2\times 10^{-19}$ or $\delta_{\min}\sim 4.4\times 10^{-16}$, respectively. We did not use $\delta_{\min}=0$, because in the [double]{} case many CPUs internally perform much of the local computation in [long double]{} precision instead of the requested [double]{}. Newtonian iterations are then continued to this undesirably increased level of precision, if $\delta_{\min}=0$, thus introducing an unnecessary minor slowdown. The least required accuracy $\delta_{\max}$ was set to $\sqrt\epsilon$ in all of the tests. All the code was compiled with [GCC]{} (either `g++` or `gfortran`) and optimized for the local CPU architecture (`-O3 -march=native -mfpmath=sse`). The Gronchi primary computing subroutine `compute_critical_points_shift()` was called from our C++ program natively, i.e. without any intermediary file IO wrapping that would be necessary if we used the main program `CP_comp.x` from the Gronchi package. 
To accurately measure the time spent purely inside the MOID computation, and not on the file IO or other algorithmic decorations around it, we always performed three independent runs on the test catalog: (i) an ‘empty’ run without any calls to any MOID algorithm, only the iteration over the catalog; (ii) computation of all MOIDs using the algorithm of this paper, without writing results to a file; (iii) the same for the Gronchi algorithm. The time differences (ii)-(i) or (iii)-(i) gave us the CPU time spent only inside the MOID computation. We never included the CPU time spent in the kernel mode. We assume this system time likely refers to some memory page manipulation or other similar CPU activity that appears when the program iteratively accesses data from a big catalog. In any case, this system time would be just a minor addition to the result ($\sim 1-2$ per cent at most). The reader may notice that the hardware can generate huge performance differences, not necessarily owing to just the CPU frequency. Moreover, even the performance on the same AMD machine differs drastically between the [double]{} and [long double]{} tests. This puzzling difference appears mainly due to the slow 80-bit floating-point arithmetic on AMD, not because of e.g. a different number of Newtonian iterations per root (which appeared almost the same in all our tests, $5$–$6$ iterations per root). We conclude that our algorithm looks quite competitive and probably even outperforms the benchmarks obtained by @Hedo18 for their set of tested algorithms ($60$–$80$ $\mu$s per orbit pair on Supermicro/Xeon hardware). They used the [double]{} precision rather than [long double]{}. Therefore, our algorithm may claim to be the fastest one available to date, or at least it belongs to the family of the fastest ones. In the majority of cases it yields considerably more accurate and reliable results, usually close to the machine precision, and its accuracy may seriously degrade only in extraordinarily rare nearly-degenerate cases, which are objectively hard to process. Additional tools {#sec_add} ================ Our main algorithm based on determining the roots of $g(u)$ is fast but might become vulnerable in the rare cases of lost roots. Whenever it signals a warning, alternative algorithms should be used, trading computing speed for better numeric resistance with respect to degeneracies. In addition to the basic 0D method based on $g(u)$ root-finding, our library implements a “fallback” algorithm of the 1D type, based on the brute force-like minimization of $\tilde\rho(u)$. This method is numerically reliable thanks to its simplicity, and its slow speed is not a big disadvantage, because it needs to be run only if the basic fast method failed. In our benchmarking test it appeared $\sim 6$ times or $\sim 4$ times slower than our fast algorithm or the Gronchi code, respectively. But this is likely sensitive to its input parameters. First of all, the algorithm scans only a restricted range in the $u$ variable, discarding the values where the MOID cannot be attained. The required $u$ range is determined as follows. Using e.g. the formulae from [@KholshVas99lc], compute the minimum internodal distance $d_\Omega$. Since the MOID is usually attained near the orbital nodes, this quantity and its corresponding orbital positions already provide a rather good approximation to the MOID. Then consider two planes parallel to the orbit $\mathcal E'$, and separated from it by $\pm d_\Omega$. 
We need to scan only those pieces of the orbit $\mathcal E$ that lie between these planes, i.e. lie within the $\pm d_\Omega$ band around the $\mathcal E'$ plane. The points on $\mathcal E$ outside of this set are necessarily more distant from $\mathcal E'$ than $d_\Omega$, so the MOID cannot be attained there. This trick often reduces the $u$ range dramatically. This optimization was inspired by the discussion given in [@Hedo18]. The detailed formulae for the reduced range of $u$ are given in \[sec\_urange\]. Moreover, this algorithm automatically selects the optimal orbit order $(\mathcal E,\mathcal E')$ or $(\mathcal E',\mathcal E)$, so as to have a smaller angular range to scan. If the cumulative ranges appear equal (e.g. if we occasionally have the full circle $[0,2\pi]$ in both cases) then the user-supplied order is preserved. The efficiency of this approach is demonstrated in Fig. \[fig\_urange\], where we plot the distribution density of the total range length obtained, as computed for our test case of $10^4\times 10^4$ asteroid pairs. The fraction of the cases in which this range could not be reduced at all (remained at $[0,2\pi]$) is only $\sim 2\%$, and in the majority of occurrences it could be reduced to something well below $1$ rad. The efficiency of the reduction increases if the MOID is small. Then the total scan range may be reduced to just a few degrees. ![The distribution density of the reduced angular range in $u$, as obtained for $\sim 10^8$ asteroid pairs.[]{data-label="fig_urange"}](urange.eps){width="\linewidth"} The minimization of $\tilde\rho(u)$ is based on successive sectioning of the initial angular range for $u$. The user can set an arbitrary sequence of integer numbers $n_1,n_2,n_3,\ldots,n_p$ that define how many segments are considered at the different stages. The initial angular range is partitioned into $n_1$ equal segments separated by the equidistant nodes $u_k$, and the node with the minimum $\tilde\rho(u_k)$ is determined. We note that the input parameter $n_1$ is always interpreted as if it corresponded to the entire $[0,2\pi]$ range, even if the actual scan range is reduced as described above. The segment length on the first step is normally set to $h_1=2\pi/n_1$ regardless of the scan range, unless this scan range is itself smaller than $h_1$. On the second stage, the segment $[u_{k-1},u_{k+1}]$ surrounding the minimum $u_k$ is considered. It is sectioned into $n_2$ equal segments, and the node corresponding to the minimum $\tilde\rho(u_k)$ is determined again. On the third stage, the new segment $[u_{k-1},u_{k+1}]$ is sectioned into $n_3$ smaller segments, and so on. On the $k$th stage, the length of the segment between subsequent nodes is reduced by the factor $2/n_k$, so only $n_k\geq 3$ are meaningful. Starting from the stage number $p$, the segments are always partitioned into $n_p$ subsegments, until the global minimum of $\tilde\rho(u)$ is located with a desired accuracy in $u$ and $\rho$. It is recommended to set $n_1$ large enough, e.g. $\sim 1000$, in order to sample the objective function, with its potentially intricate variations, densely enough, whereas the last $n_p$ can be set to $4$, meaning the bisection. We notice that this method was not designed for standalone use. It was not even supposed to be either more reliable or more accurate in general than our primary fast method. It was supposed to provide just yet another alternative in rare special cases when the primary method did not appear convincingly reliable. 
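The sectioning scheme just described is simple enough to show in full. The following C++ sketch is only an illustration under our own simplified interface (for instance, it ignores the convention that $n_1$ refers to the full circle), not the library code.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Minimizes f on [ua, ub] by successive sectioning: the range is split into
// n[0] segments, then the bracket around the best node is split into n[1]
// segments, and so on; the last count is reused until the bracket length
// drops below tol_u.  Returns the located minimum position.
double section_minimize(const std::function<double(double)>& f,
                        double ua, double ub,
                        const std::vector<int>& n, double tol_u)
{
    double best_u = ua;
    for (std::size_t stage = 0; ub - ua > tol_u; ++stage) {
        int m = static_cast<int>(n[std::min(stage, n.size() - 1)]);  // n_k, then n_p
        double h = (ub - ua) / m, fbest = INFINITY;
        for (int k = 0; k <= m; ++k) {          // equidistant nodes u_k
            double u = ua + k * h, fu = f(u);
            if (fu < fbest) { fbest = fu; best_u = u; }
        }
        // Next stage: the bracket [u_{k-1}, u_{k+1}] around the best node
        // (it may slightly overshoot the ends; harmless for this sketch).
        ua = best_u - h;
        ub = best_u + h;
    }
    return best_u;
}
```

With $n_1\sim 1000$ and the last count set to $4$ this mirrors the recommended settings: a dense initial scan followed by bisection-like shrinking of the bracket.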
Its practical reliability depends very much on the input parameters: a too small $n_1$ may lead to frequently losing local minima of $\tilde\rho(u)$, if they are narrow. Hence we may sometimes miss the correct global minimum. But this effect can always be suppressed by selecting a larger $n_1$. In our tests, with $n_1=1000$ this algorithm generated one wrong MOID per $\sim 3000$ trials, so it is not recommended for general standalone use. This could be improved by implementing an adaptive sampling in the $u$ variable, e.g. depending on the derivative $|\tilde\rho'|$, but we did not plan to go that far with this method. We note that narrow local minima of $\rho$ are, informally speaking, in some sense antagonistic to almost-multiple critical points, so this fallback algorithm is vulnerable to rather different conditions than our primary fast method. Therefore it can serve as a good complement to the latter. Also, we include in the library several fast tools that may appear useful whenever we need to actually compute the MOID only for those objects that are close to an orbital intersection. These tools may help to eliminate most of the orbit pairs from the processing. The first one represents an obvious pericenter–apocenter test: $\mathrm{MOID} \geq a(1-e)-a'(1+e')$, and $\mathrm{MOID} \geq a'(1-e')-a(1+e)$. If any of these quantities appears positive and above some upper threshold $\mathrm{MOID}_{\max}$ then surely $\mathrm{MOID}>\mathrm{MOID}_{\max}$, and one may immediately discard such an orbital pair from the detailed analysis. Our library also includes functions for computing the so-called linking coefficients introduced by @KholshVas99lc. The linking coefficients are functions of two orbits that have the dimension of squared distance, like $|{\bm{r}}-{\bm{r}}'|^2=2aa'\rho$, and they are invariant with respect to rotations in $\mathbb R^3$. @KholshVas99lc introduced three linking coefficients that should be selected depending on whether the orbits are (nearly) coplanar or not. See all the necessary formulae and discussion in that work. For our goals it might be important that at least one of these linking coefficients, $l_1 = d_1 d_2$, can be used as an upper limit on the MOID. It represents a signed product of the two internodal distances from (\[ind\]), so the squared MOID can never be larger than $|l_1|$. This allows us to limit the MOID from the other side, in contrast to the pericenter-apocenter test. Moreover, based on $l_1$ we introduce yet another linking coefficient defined as $$l_1' = \min\left(|d_1|,|d_2|\right)^2 {\mathop{\text{sign}}\nolimits}l_1.$$ This modified $l_1$ provides an even tighter upper limit on the squared MOID, but still preserves the sign that indicates the orbital linkage in the same way as $l_1$ did. It is important for us that all linking coefficients are computed very quickly in comparison to any MOID algorithm, because they are expressed by simple elementary formulae. The linking coefficients were named so because their original purpose was to indicate whether or not two orbits are topologically linked like two rings in a chain. The intermediate case between these two is an intersection, when the linking coefficient vanishes together with the MOID. Therefore, these indicators can potentially be used as computationally cheap surrogates of the MOID. But in addition to measuring the closeness of two orbits to an intersection, linking coefficients carry information about their topological configuration. 
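As an illustration of how these cheap filters can be wired together, here is a small C++ sketch; the function names are ours, and $d_1$, $d_2$ are assumed to be already computed from (\[ind\]).

```cpp
#include <algorithm>
#include <cmath>

// Pericenter-apocenter test: returns a lower bound on the MOID (possibly
// negative, in which case the test is inconclusive).
double moid_lower_bound(double a, double e, double a1, double e1)
{
    return std::max(a  * (1 - e)  - a1 * (1 + e1),
                    a1 * (1 - e1) - a  * (1 + e));
}

// Modified linking coefficient l1' = min(|d1|,|d2|)^2 * sign(d1*d2):
// its absolute value bounds the squared MOID from above, and its sign
// indicates whether the orbits are topologically linked.
double linking_l1_prime(double d1, double d2)
{
    double m = std::min(std::fabs(d1), std::fabs(d2));
    double s = (d1 * d2 > 0) ? 1.0 : ((d1 * d2 < 0) ? -1.0 : 0.0);
    return m * m * s;
}
```

An orbit pair may then be skipped whenever `moid_lower_bound` exceeds the chosen $\mathrm{MOID}_{\max}$, while $|l_1'|$ below $\mathrm{MOID}_{\max}^2$ already guarantees that the pair deserves a full MOID computation.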
Also, these quantities can be used to track the time evolution of the mutual topology of perturbed non-Keplerian orbits, for example to locate the moment of their intersection without computing the MOID. Further development and plans ============================= Yet another possible way to extend our library is to implement the method by @Baluev05 for computing the MOID between general confocal unperturbed orbits, including hyperbolic and parabolic ones. This task can be also reduced to finding real roots of a polynomial similar to $\mathcal P(z)$. In a future work we plan to provide statistical results of applying this algorithm to the Main Belt asteroids, also including the comparison of the MOID with linking coefficients and other indicators of orbital closeness. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the Russian Science Foundation grant no. 18-12-00050. We express gratitude to the anonymous reviewers for the fruitful comments and useful suggestions on the manuscript. Reducing the scan range for the eccentric anomaly {#sec_urange} ================================================= Let us introduce ${\bm{R}}' = {\bm{P}}' \times {\bm{Q}}'$, which is a unit vector orthogonal to the orbital plane of $\mathcal E'$. The vectors ${\bm{P}}'$, ${\bm{Q}}'$, ${\bm{R}}'$ form an orthonormal basis in $\mathbb R^3$. Then from (\[rvec\]) let us compute the dot-product $$({\bm{r}} - {\bm{r}}') {\bm{R}}' = a PR' (\cos u - e) + a SR' \sin u,$$ which represents a projection of the distance vector ${\bm{r}} - {\bm{r}}'$ on the basis vector ${\bm{R}}'$. Note that the dot-product ${\bm{r}}' {\bm{R}}'$ is always zero. Now, we need this distance projection to be within $\pm d_\Omega$ from zero, because otherwise the absolute distance can be only larger than $d_\Omega$. This yields two inequality constraints $$e PR' - \frac{d_\Omega}{a} \leq PR' \cos u + SR' \sin u \leq e PR' + \frac{d_\Omega}{a}, \label{ineq}$$ implying an elementary trigonometric equation that can be solved via arcsines. The final set of computing formulae can be expressed as follows. Let us introduce the vector $${\bm{W}} = {\bm{R}} \times {\bm{R}}', \quad W=|{\bm{W}}|=\sin I,$$ which is directed to the ascending node of $\mathcal E'$ assuming reference $\mathcal E$. The angle $I$ is the mutual inclination between the orbits. Then determine the angle $\theta$ from $$\cos\theta = (PW)/W, \quad \sin\theta = (QW)/W.$$ It represents the true anomaly on $\mathcal E$, where that ascending node is located. Basically, $\theta$ is the angle between ${\bm{P}}$ and ${\bm{W}}$, counted positive in the direction of ${\bm{Q}}$. The location on the other orbit $\theta'$ can be determined in a similar way. Explicit formula for the scalar product $PW$ is given in [@KholshVas99lc] via orbital elements, though we prefer to multiply the vectors directly, using the following expression for ${\bm{W}}$: $$\begin{aligned} {\bm{W}} = \{&\cos i \sin i' \cos \Omega' - \sin i \cos i' \cos \Omega, \nonumber\\ &\cos i \sin i' \sin \Omega' - \sin i \cos i' \sin \Omega, \nonumber\\ &\sin i \sin i' \sin(\Omega'-\Omega)\, \}. \nonumber\\\end{aligned}$$ After that let us compute $$\begin{aligned} d_1 &= \frac{p}{1+e\cos\theta} - \frac{p'}{1+e'\cos\theta'}, \nonumber\\ d_2 &= \frac{p}{1-e\cos\theta} - \frac{p'}{1-e'\cos\theta'}, \nonumber\\ d_\Omega &= \min(|d_1|,|d_2|), \label{ind}\end{aligned}$$ where $p$ and $p'$ are orbital parameters, $p=a(1-e^2)$. 
Now, the inequalities (\[ineq\]) may be simplified if we decompose the vectors ${\bm{W}}$ and ${\bm{R}}'$ in the basis $\{{\bm{P}}, {\bm{Q}}, {\bm{R}}\}$: $$\begin{aligned} {\bm{W}} &= \{PW,\, QW,\, RW=0\,\}, \nonumber\\ {\bm{R}}' &= \{PR',\, QR',\, RR'=\cos I\,\}.\end{aligned}$$ Writing down the orthogonality condition between ${\bm{W}}$ and ${\bm{R}}'$ and the norm of ${\bm{R}}'$ in these coordinates, we have $$\begin{aligned} WR' &= PW\; PR' + QW\; QR' = 0, \nonumber\\ R'^2 &= 1 \implies PR'^2 + QR'^2 = W^2.\end{aligned}$$ Therefore, we may set $PR' = \mp W\sin\theta$ and $QR' = \pm W\cos\theta$ in (\[ineq\]), and the sign choice is not important here. Finally, let us define the quantity $k\geq 0$ and the angle $\varphi$ from $$\begin{aligned} A^2 = 1 - e^2 \cos^2\theta, \quad k = \frac{d_\Omega}{a W A}, \nonumber\\ \sin\varphi = \frac{\sin\theta}{A}, \quad \cos\varphi = \sqrt{1-e^2}\, \frac{\cos\theta}{A},\end{aligned}$$ and (\[ineq\]) becomes $$e\sin\varphi - k \leq \sin(\varphi-u) \leq e \sin\varphi + k.$$ In general, we have three types of solution for $u$. 1. If $|e\sin\varphi| < |1-k|$ and $k<1$ then we have two small segments for $u$ near the nodes, defined as $[\varphi-\arcsin(e\sin\varphi+k), \varphi-\arcsin(e\sin\varphi-k)]$ and $[\varphi+\pi+\arcsin(e\sin\varphi-k), \varphi+\pi+\arcsin(e\sin\varphi+k)]$; 2. If $|e\sin\varphi| < |1-k|$ and $k\geq 1$ then we have the entire circular range $[0,2\pi]$ for $u$. 3. If $|e\sin\varphi| \geq |1-k|$ then there is just one big segment for $u$ that covers angles roughly from one node to another, defined as either $[\varphi+\arcsin(e\sin\varphi-k), \varphi+\pi-\arcsin(e\sin\varphi-k)]$, if $\sin\varphi>0$, or $[\varphi-\arcsin(e\sin\varphi+k), \varphi+\pi+\arcsin(e\sin\varphi+k)]$, if $\sin\varphi<0$; In practice, the first type of occurence is the most frequent one, so the speed improvement is dramatic. Notice that for $W\to 0$ (coplanar orbits) the angle $\theta$ formally becomes undefined, but this is not important because then $k\to\infty$ and we just obtain the full-circle range $[0,2\pi]$ for $u$. So the degenerate case $W\approx 0$ is not a big numeric issue in practice. [^1]: This is actually an upper limit on that rate, because our algorithm may intentionally count some complex roots with small imaginary part as real ones. This estimate is sensitive to the selected floating-point precision and to subtle details that affect overall numeric accuracy of the algorithm. It may even be possible that all these potential $12$-root occurences contain only $10$ real roots. [^2]: We had one $14$-root occurence using the standard [double]{} precision, but this case appeared to have only $12$ real roots with [long double]{} arithmetic. [^3]: Since $u$ can take only real values, we always have $z\neq 0$ and $w\neq 0$. [^4]: See url ftp://ftp.lowell.edu/pub/elgb/astorb.html.
Bourbon County is a county in the U.S. state of Kentucky. As of the 2010 census, the population was 19,985. Its county seat is Paris.
Nutritional status of home hemodialysis patients. We studied the nutritional status of 32 patients (23 men), aged 50 (SD 14) yr, on home hemodialysis (HHD) for 1–138 months. No formal dietary restrictions were imposed. Anthropometric measurements were made using standard techniques, diet was assessed by a three-day dietetic diary and interview, and plasma concentrations of nutrients were measured. Mean caloric intake was 29.4 (SD 10.7) kcal/kg; 24 (75%) patients had lower energy intakes than recommended for normals. Protein, vitamin C and folate intakes were above the recommended minimum safe intakes. Intakes were less than recommended for calcium in four (13%) patients, iron in one (3%) and vitamin B12 in two (6%). One-third of both sexes had body mass indices (kg/m2) less than the 25th percentile for normals, but none was less than 80% of ideal bodyweight. Arm muscle circumference was less than the 10th percentile for normals in six men and three women. Triceps skin fold thickness was less than the 10th percentile in four men (17%) and five women (55%). No anthropometric measurements were correlated with energy, protein or fat intake. Biochemical measurements were not useful in predicting protein intake. Neither nutritional intake nor anthropometric measurements were correlated with the duration of HHD. There was little evidence of malnutrition and wasting in this group of well rehabilitated HHD patients.
Georges-Louis Leclerc, Comte de Buffon (7 September 1707 - 16 April 1788), usually called Buffon, was a French naturalist. He was also a mathematician, cosmologist and encyclopedic author. His collected information influenced the next two generations of naturalists, including Jean-Baptiste de Lamarck and Georges Cuvier. Buffon published 35 volumes of his Histoire naturelle during his lifetime, and nine more were published after his death, for a total of 44 volumes. "Truly, Buffon was the father of all thought in natural history in the second half of the 18th century". p330 Buffon held the position of Intendant (Director) of the Jardin du Roi, now called the Jardin des Plantes; it is the French equivalent of Kew Gardens. The Lycee Buffon in Paris is named after him. His work Natural history Buffon is best remembered for his Histoire naturelle. "Written in a brilliant style, this work was read ... by every educated person in Europe".p330 It was translated into many different languages, making him one of the most widely read authors of the day, a rival to Montesquieu, Rousseau, and Voltaire. In the opening volumes of the Histoire naturelle Buffon questioned the usefulness of mathematics, criticized Carl Linnaeus's taxonomical approach to natural history, outlined a history of the Earth with little relation to the Biblical account, and proposed a theory of reproduction which ran counter to existing ideas. The early volumes were condemned by the Faculty of Theology at the Sorbonne. Buffon published a retraction, but he continued publishing the offending volumes without any change. Buffon noted that despite similar environments, different regions of the world have distinct plants and animals. This observation, later known as Buffon's Law, may be the first principle of biogeography. Common descent Buffon understood the idea of common descent, and discussed it a number of times. This does not mean he believed in it. Probably he did not, but he discussed it fairly openly on a number of occasions. Interpreting his ideas is not simple, for he returned to topics many times in the course of his work. "...all the animals might be regarded as constituting but a single family... one could say... that the ape is of the family of man... that man and ape have a common origin: that, in fact, all the families, among plants as well as animals, have come from a common stock, and that all animals are descended from a single animal, from which have sprung in the course of time... all other races of animals"... "But this is by no means a proper representation of nature. We are assured by the authority of revelation that... the first pair of every species issued fully formed from the hands of the Creator".p332 In volume 14 he argued that all the world's quadrupeds had developed from an original set of just thirty-eight quadrupeds. On this basis, he is sometimes considered a "transformist" and a precursor of Darwin. Earth science In Les epoques de la nature (1778) Buffon discussed the origins of the solar system, speculating that the planets had been created by a comet's collision with the sun. He also suggested that the earth originated much earlier than the 4004 BC of Archbishop James Ussher. Basing his figures on the cooling rate of iron tested at his Laboratory le Petit Fontenet at Montbard, he calculated that the age of the earth was 75,000 years. Once again, his ideas were condemned by the Sorbonne, and again he issued a retraction to avoid further problems. 
Relevance to modern biology Charles Darwin wrote in his preliminary historical sketch added to the third edition of On the Origin of Species: "Passing over... Buffon, with whose writings I am not familiar." Then, from the fourth edition onwards, he amended this to say that "the first author who in modern times has treated it [evolution] in a scientific spirit was Buffon. But as his opinions fluctuated greatly at different periods, and as he does not enter on the causes or means of the transformation of species, I need not here enter on details." The paradox of Buffon is that, according to Ernst Mayr: He was not an evolutionist, yet he was the father of evolutionism. He was the first person to discuss a large number of evolutionary problems, problems that before Buffon had not been raised by anybody.... he brought them to the attention of the scientific world. Except for Aristotle and Darwin, no other student of organisms [whole animals and plants] has had as far-reaching an influence. He brought the idea of evolution into the realm of science. He developed a concept of the "unity of type," a precursor of comparative anatomy. More than anyone else, he was responsible for the acceptance of a long-time scale for the history of the earth. He was the founder of biogeography. And yet, he hindered evolution by his frequent endorsement of the immutability of species. He provided a criterion of species, fertility among members of a species, that was thought impregnable.
Background ========== An increasing number of people are using complementary and alternative medicine (CAM). According to recent studies, 42.1 % of the American population uses some form of CAM, with 39% of the older population using CAM \[[@B1],[@B2]\]. In 1997, total spending on CAM was estimated at \$32.7 billion dollars, up from \$22.6 billion in 1990, a substantial increase that indicates an escalating portion of the population is seeking CAM \[[@B2]\]. Patients may choose to use CAM as a substitute or in conjunction with conventional medicine for a variety of reasons, including 1) dissatisfaction with health care providers and medical outcomes, 2) side effects of drugs or treatments, 3) high health costs (specifically medications), 4) lack of control in their own health care practices, and 5) impersonal and technological health care \[[@B3]-[@B5]\]. In reviewing the literature, research studies have not reported on CAM use among rural residents, older adults in culturally diverse groups. In rural settings, limited access to medical care often leads to late diagnosis, postponement of treatment, and greater impairments \[[@B6]\]. The older population is a group that has more chronic illnesses, takes longer to recover when sick, and often needs more health care services than their younger cohorts \[[@B7]\]. This may result in CAM use, often influenced by folklore and cultural beliefs. Understanding choices of CAM use is critical to provide optimal care to older, rural patients as certain remedies may be harmful or interfere with conventional medicine. With the increase in the older population and the number of persons who are choosing CAM, there has been a demand for research to examine the feasibility, benefits, clinical usefulness and development of CAM interventions in older adults. A large proportion of older adults are interested in learning more about CAM and the benefits to health \[[@B3]\]. However, there is very little research that describes CAM use in minority older adults. Therefore, the purpose of this study was to compare older African Americans (AA) and Caucasian Americans (CA) over the age of 50 on 1) use of CAM and 2) self-reported overall satisfaction with CAM being used. Rural Health and CAM -------------------- Patients in rural areas experience a variety of unmet needs partly due to limited access to primary care, fewer resources to choose from, lower income, less comprehensive health coverage, ill-equipped or poorly staffed health care agencies, and geographic isolation \[[@B8]\]. Rural health care providers often have difficulty in delivering services that target persons with special health care needs, like older adults \[[@B9]\]. Poverty is more widespread in rural areas and even higher among rural minorities, with 35.2% of rural AA living in poverty compared to 26.9% of urban AA. Private health insurance coverage is more common for residents of urban areas while Medicare spends more per capita on urban beneficiaries (\$5,288) than rural beneficiaries (\$4,375) \[[@B6]\]. All of these factors may contribute to the use of complementary and alternative therapies that may not be widely accepted in conventional medicine. An estimated 29.5% of community dwelling older adults use at least one form of CAM with women more likely than men to use CAM \[[@B10]\]. Older adults and CAM -------------------- By the year 2030, older adults will make up 22% of the total population \[[@B7]\]. 
Because of a predicted increase in chronic conditions, older adults may be choosing to use CAM more often than previously to help manage their health. There is lack of information regarding specific costs, benefits, risks, or precautions pertinent to the older adult. Few CAM therapies have federal regulations to guide choices made about CAM. The most commonly used CAM by the older adult has been reported as chiropractic medicine, herbal remedies, relaxation techniques, megavitamins and religious or spiritual healing \[[@B11]\]. Several reports describe clinically significant interactions between herbals/supplements and prescription medications \[[@B1],[@B12]\]. There is a lack of studies related to appropriate dosage and mechanisms of CAM practices in older adults \[[@B13]\]. The use of herbal remedies (ginkgo biloba and ginseng), vitamins, music therapy, touch, massage therapy, and neurofeedback have benefit in the older adult with implications for improved cognitive function \[[@B14]\]. The demographic characteristics that predict CAM use are gender (females use more CAM then men) and education (the higher educated use CAM more often) \[[@B5]\]. In a recent study describing members of Shield 65, a Blue Shield Medicare supplement that offers CAM coverage for people over 65, 41% of older adults used some form of CAM, with 80% reporting some improvement in their health conditions. Of the older adults who did use CAM, 58% reported they did not discuss CAM use with their medical doctor or health care practitioner \[[@B5]\]. In a study examining use of CAM in the older adult, more women and fewer African Americans and Hispanics were represented in the sample. These older adults who use CAM cited arthritis, back pain, heart disease, allergies, and diabetes as reasons using CAM \[[@B11]\]. Cultural Diversity and CAM -------------------------- Cultural diversity and the health care practices specific to a culture can shape the system of health care in a country. The assumption that conventional medical practice is the choice for all races is incorrect. CAM health care practices in the United States have broadened due to an influx of cultures, values, and beliefs \[[@B4],[@B15]\]. Exposure of U.S. citizens to other cultures and cultural healing methods, as well as documented effectiveness of CAM used in different cultures, has spawned interest in CAM in this country. There is little research addressing CAM modalities with origins in racial healing practices and folklore. Most of the CAM surveys include middle class, Caucasian, educated subjects, excluding how race may influence CAM use. The inclusion of folk remedies is often ignored or not discussed. The findings of one study on ethnic minority and CAM use showed no differences between ethnicities but recognized the need to consider CAM practices separately to get an accurate picture of ethnic minority use \[[@B16]\]. Race has been reported to affect the choices of CAM \[[@B17]\]. Older women in all cultural groups have expressed more satisfaction with use of CAM than younger women \[[@B10],[@B18],[@B19]\]. It is estimated that 83% of minority patients who use CAM do not report it to their physician \[[@B19]\]. There is a gap in the literature on CAM use by older adults of different racial backgrounds, specifically in rural areas. This survey was undertaken to provide preliminary information for future research on the measurements of outcomes and evidenced based practice in relation to CAM use in older adults of different races. 
The research questions were:

• Is there a difference in CAM use between AA and CA older adults?

• Is there a difference in satisfaction with CAM use between AA and CA older adults?

• Overall, what CAM modalities are being used by rural older adults?

Definition of Terms
-------------------

The following terms were operationally defined for the survey:

• Complementary and Alternative Medicine (CAM): for the purpose of this study, CAM, as defined by the National Center for Complementary and Alternative Medicine (NCCAM), \"is a group of diverse medical and health care systems, practices, and products that are not presently considered to be part of conventional medicine\" \[[@B20]\].

• CAM modality use: for the purpose of this study, CAM use was identified in demographic data as a self-report. CAM modality use was presented as a list based on the five modalities of NCCAM \[[@B20]\] and the CAM most frequently reported in the literature. Participants were asked to circle every CAM modality they used. An open-ended option of \"other\" was provided for CAM not listed (see Table [2](#T2){ref-type="table"} for the list). Participants could circle the use of herbs or vitamins but did not specify which herbs or vitamins were being used.

###### CAM use by AA and CA rural older adults

| CAM Used              | African Americans (n = 40) | Caucasian Americans (n = 143) |
|-----------------------|----------------------------|-------------------------------|
| Acupuncture           | 0                          | 1 (.7%)                       |
| Aroma Therapy         | 1 (2.5%)                   | 3 (2.1%)                      |
| Art Therapy           | 1 (2.5%)                   | 1 (.7%)                       |
| Biofeedback           | 2 (5%)                     | 0                             |
| Chelation             | 0                          | 1 (.7%)                       |
| Chiropractic Medicine | 3 (7.5%)                   | 27 (19%)                      |
| Chondroitin           | 0                          | 13 (9.2%)                     |
| Exercise              | 23 (57.5%)                 | 94 (66.2%)                    |
| \*Glucosamine         | 0                          | 28 (19.7%)                    |
| Herbs                 | 10 (25%)                   | 33 (23.2%)                    |
| Hypnosis              | 0                          | 1 (.7%)                       |
| Journal Writing       | 2 (5%)                     | 12 (8.5%)                     |
| Magnetic Therapy      | 2 (5%)                     | 9 (6.3%)                      |
| Massage Therapy       | 4 (10%)                    | 16 (11.3%)                    |
| Meditation            | 12 (30%)                   | 32 (22.5%)                    |
| Melatonin             | 0                          | 7 (4.9%)                      |
| Metal Therapy         | 0                          | 2 (1.4%)                      |
| Music Therapy         | 6 (15%)                    | 15 (10.6%)                    |
| Naturopathic Medicine | 0                          | 4 (2.8%)                      |
| Prayer                | 36 (90%)                   | 119 (83.8%)                   |
| Qi                    | 1 (2.5%)                   | 1 (.7%)                       |
| Reiki                 | 0                          | 1 (.7%)                       |
| Tai Chi               | 0                          | 2 (1.4%)                      |
| Therapeutic Touch     | 2 (5%)                     | 1 (.7%)                       |
| Vitamins              | 29 (72.5%)                 | 122 (85.9%)                   |
| Visual Imagery        | 2 (5%)                     | 4 (2.8%)                      |
| Yoga                  | 1 (2.5%)                   | 4 (2.8%)                      |
| Other                 | 0                          | 9 (6.3%)                      |

\*Significant difference by race at p = .05

• Satisfaction with CAM: for the purpose of this study, an overall level of satisfaction with the CAM modality (or modalities) being used was reported as satisfaction with CAM. Satisfaction was not measured separately for each specific CAM the older adults used.

Methods
=======

Sample and Setting
------------------

The survey design was descriptive, comparative, and cross-sectional. A convenience sample of rural AA and CA older adults (over 50 years old) was recruited from Gulfport, Biloxi, Laurel, Hattiesburg, Natchez, Jackson, and Meridian during 10 community service organization meetings in the state of Mississippi, including the American Association of Retired Persons, the retired employees of the Southern Pine Electric Power Association known as the Golden Pine Cones Club, and the Retired Seniors Volunteer Group from the local community hospital. Limitations of the study included the convenience sampling of older adults through these support groups and those who attended, sampling bias, subject effect, and self-report. The advertised topic of the meetings might have attracted people who use CAM more than others. Institutional Review Board (IRB) approval for human subjects protection was obtained at the University of Southern Mississippi, Hattiesburg, MS.
Instruments
-----------

Demographic data included age, gender, race, marital status, socioeconomic status (SES), education, and out-of-pocket expenses spent on CAM. CAM use in the older adult\'s health care practices was measured using the five modalities identified by the National Center for Complementary and Alternative Medicine (alternative medical systems, mind-body interventions, biologically based therapies, manipulative/body-based methods, and energy therapies). Based on the review of the literature and the most often used CAM, a list was provided for the participants to circle the CAM used. An \"other\" option was given to capture any CAM use not listed. An overall rating of satisfaction with the CAM used was measured with a Likert scale (1--11); the higher the score, the more satisfied the participant was with the CAM. The survey was developed for this study, which is a limitation of the study.

Procedure
---------

Community agencies for older adults were contacted throughout the state and asked to participate in the project. At the beginning of the support group meetings, a total of 378 participants were asked to indicate whether or not they had used CAM. This was done by asking whether the participants were currently using anything for their own health that was not prescribed by their family physician. Participants who indicated that they had used CAM were then invited to participate in the survey. An educational program about CAM use entitled \"What Everyone Should Know About CAM\" was presented after the questionnaires were answered; the program was presented only after the collection of data. Topics discussed included a brief overview of the following: 1) definition of CAM, 2) history of CAM, 3) choosing to use CAM, 4) safety and effectiveness of CAM, 5) contraindications in CAM, 6) choosing a practitioner in CAM, 7) cost, and 8) consulting a health care provider. Information was presented in an unbiased format (neither supportive of nor opposed to CAM) with the goal of providing general information on CAM to the older adults.

Participation was strictly voluntary and written consent was obtained. Confidentiality was assured by assigning identification numbers matching the consent form with the questionnaire. If the older adult agreed to be in the study, the survey was filled out at the beginning of the meeting. There was no penalty for deciding not to participate in the research project. All were invited to stay for the presentation, whether or not they participated in the study. After the older adults had answered the questionnaire, the educational presentation was started.

Results
=======

Data were collected on the older adults who chose to participate and agreed to answer the questionnaire. Of the 378 support group attendees, 183 indicated that they had used CAM. All of these 183 volunteered to participate in the survey and returned completed questionnaires. The sample consisted of 40 AA and 143 CA older adults. Demographic data were collected on age, gender, race, marital status, SES, education, and out-of-pocket expenses spent on CAM (Table [1](#T1){ref-type="table"}). Significant overall differences between the AA and CA were found on SES (p = .008, F = 3.049, df = 6) and marital status (p = .042, F = 4.201, df = 1). In the SES ranking, only 13 CA older adults reported an income of \$40,000 or greater, compared to only one AA (3%).
A higher percentage of older AA adults were single (62.5%) compared to CA (57%).

###### Demographics of older adults who reported use of CAM

| Demographic Variable                | Intervals            | Overall     | AA n = 40  | CA n = 143 |
|-------------------------------------|----------------------|-------------|------------|------------|
| \*Age                               | 50--59               | 7 (3.8%)    | 0          | 7 (5%)     |
|                                     | 60--69               | 52 (28%)    | 19 (47.5%) | 33 (23%)   |
|                                     | 70--79               | 75 (40%)    | 19 (47.5%) | 55 (38%)   |
|                                     | 80 \>                | 52 (28%)    | 2 (5%)     | 48 (34%)   |
| Gender                              | Male                 | 45 (24%)    | 9 (23%)    | 33 (23%)   |
|                                     | Female               | 139 (76%)   | 30 (77%)   | 109 (77%)  |
| Marital Status                      | Married              | 78 (42%)    | 15 (37.5%) | 61 (43%)   |
|                                     | Single               | 107 (58%)   | 25 (62.5%) | 81 (57%)   |
| Socioeconomic Status (SES)          | \$0 -- \$19,999      | 95 (51.1%)  | 28 (72%)   | 64 (49%)   |
|                                     | \$20 -- \$39,999     | 64 (34.4%)  | 10 (25%)   | 54 (41%)   |
|                                     | \$40 -- \$59,999     | 8 (4.4%)    | 0          | 8 (6%)     |
|                                     | \$60,000 and \>      | 6 (3.2%)    | 1 (3%)     | 5 (4%)     |
| \*Education                         | \< 8^th^ grade       | 16 (8.7%)   | 8 (21%)    | 7 (5%)     |
|                                     | 9 -- 12 grade        | 106 (57.6%) | 25 (64%)   | 79 (55.5%) |
|                                     | College              | 57 (30.6%)  | 5 (13%)    | 52 (36.5%) |
|                                     | Graduate School      | 5 (2.7%)    | 1 (2%)     | 4 (3%)     |
| Out-of-pocket expenses spent on CAM | Under \$100.00       | 84 (48.8%)  | 22 (55%)   | 60 (47%)   |
|                                     | \$101.00 -- \$500.00 | 56 (32.6%)  | 7 (17.5%)  | 48 (37%)   |
|                                     | \$501.00 -- \$1,000  | 17 (9.9%)   | 5 (12.5%)  | 12 (9%)    |
|                                     | \$1001.00 -- \$1500  | 7 (4.1%)    | 3 (7.5%)   | 4 (3%)     |
|                                     | \$1501.00 -- \$2000  | 5 (2.9%)    | 2 (5%)     | 3 (2.5%)   |
|                                     | \$2000 and more      | 3 (1.7%)    | 1 (2.5%)   | 2 (1.5%)   |

\*Significant difference by race at p = .05

To answer Research Question 1 (Is there a difference in CAM use between AA and CA older adults?), participants were asked to circle the CAM they used on the list provided. The mean number of CAM used by the participants who answered the questionnaire was 3.8 (SD = 2.14), with a range of 1--12 per participant. Overall, CA used more CAM products than AA, with CA using an average of 4 CAM products and AA using 3 CAM products per person. AA did not use glucosamine as often as CA (p = .002). CAM use by AA and CA is shown in Table [2](#T2){ref-type="table"}. Chi-square analyses of race by the demographic variables were calculated for CAM users, with significant findings for age (p = .003) and education (p \< .001). AA in the group were older and less educated than CA. CAM use was more prevalent in the 50--59 year old age group for the CA (n = 7) than for the AA (n = 0). A significant difference was seen for education, with 40% of CA (n = 56) educated beyond high school compared to only 15% (n = 6) of AA. The actual CAM use of those who did not respond could modify the estimate of CAM use among the sample attending the workshops.

To answer Research Question 2 (Is there a difference in satisfaction with CAM use between AA and CA older adults?), participants were asked to rate their satisfaction on a Likert scale of 1--11, with 11 being the most satisfied with CAM. Using t-test analysis, no difference in satisfaction with CAM use was found between the groups. AA mean satisfaction was 8.34 (SD = 2.25) and CA mean satisfaction was 8.32 (SD = 2.25). To compare the AA and CA rural older adults, bivariate correlations were calculated by pairs using Pearson\'s product moment correlations, with r^2^ at .08 and .16 for AA and CA, respectively. No correlation existed between the number of CAM used and satisfaction with CAM.

To answer Research Question 3 (Overall, what CAM modalities are being used by rural older adults?), the number of participants using each CAM was calculated. The most commonly used CAM reported were prayer (n = 155), vitamins (n = 151), exercise (n = 117), meditation (n = 44), herbs (n = 43), chiropractic medicine (n = 30), glucosamine (n = 28), and music therapy (n = 21).
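As an illustrative aside (not part of the original analysis), the reported racial difference in glucosamine use can be re-checked from the Table 2 counts alone: 0 of 40 AA versus 28 of 143 CA users. The sketch below assumes only those published cell counts; the use of SciPy and the 2 × 2 layout are the editor's choices, not the authors' method.

```python
# Minimal sketch, not the authors' code: 2 x 2 chi-square test on Table 2 counts.
from scipy.stats import chi2_contingency

# Rows: AA, CA; columns: used glucosamine, did not use glucosamine.
observed = [[0, 40 - 0],
            [28, 143 - 28]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# A p-value below .05 here is consistent with the significant difference
# in glucosamine use by race reported in the text (p = .002), although the
# exact value depends on the test and continuity correction used.
```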
Overall, 378 older adults were invited to answer the questionnaire for the research project. It is not certain whether the 195 attendees (51%) who did not complete the questionnaire used CAM; demographic data and information related to CAM use are not known for those who chose not to answer. In summary, the study found that rural older adults who used CAM reported satisfaction with its use. Differences in CAM use were identified by age and education. CA used more CAM per person than AA, with a significant difference in the use of glucosamine by CA.

Discussion
==========

This study is a survey; therefore, the generalizability of the findings should be considered. Differences between CA and AA were found in age and education. AA using CAM were older than the CA. This may be due to this generation of older adults having had to rely more on folklore practices in their lives from the early 1900\'s, when health care access was not available to rural residents or minorities. Very few of these AA were educated past high school, while a larger number of CA were educated at the college level and higher. The AA may have had difficulty in comprehending the questions and may have been less willing to participate in research. Race may also have played a role in the low response from AA due to the mistrust of research activities that the AA community has experienced.

The top five uses of CAM in both AA and CA were prayer, vitamins, exercise, meditation, and herbs. The most common uses of CAM identified in previous studies were chiropractic medicine, herbal remedies, relaxation techniques, megavitamins, and religious or spiritual healing \[[@B11]\]. Since Mississippi is considered the \"Bible Belt\" of the South, the commitment to spirituality may have contributed to prayer being the most frequently used CAM among these older adults. Prayer may also be used more frequently by the elderly who are facing issues related to chronic health conditions, socioeconomic status, sociocultural limitations, and end-of-life decisions.

It should be noted that the differences found in the use of vitamins (p = .06) and chiropractic medicine (p = .08) were near significance at the .05 level by race; with a larger sample size, significant differences might have been obtained. These findings may be due to the lack of insurance that poverty-stricken minority older adults face in rural areas like Mississippi. Chiropractic services are often covered by health insurance in this country, and middle-class citizens are more likely to be covered by such insurance. There was no difference in satisfaction with the CAM used by CA or AA; both groups were satisfied with the CAM being used.

All of the older adults who answered this questionnaire lived in southern Mississippi, considered to be 100% rural. Mississippi has a high poverty rate, with 19.9% of the population living below poverty, compared to 12.4% nationwide \[[@B21]\]. Rural older adults may feel the need to use more CAM and folklore practices than conventional medications because of the perceived reliability of these interventions, based on folklore or family traditions, and because of the availability of these practices. Of all the CAM, excluding nutritional supplements, exercise had one of the highest percentages of use. Being in a rural area, many of the older adults spent their lives doing physical labor, working on farms, and doing blue-collar jobs. This may predispose these older adults to include some form of exercise in their daily lives.
Of the 378 older adults who were asked if they used CAM, 183 (49%) reported CAM use and answered the survey. This number may be an underestimate for the population, as there may have been older adults who used CAM but did not want to participate in the study. However, the figure is consistent with previously reported results of 41% to 75% of older adults using some form of CAM \[[@B5]\]. The majority of the participants were female, in a low SES bracket, and with no more than a high school education. Many of the participants used a variety of CAM products, with an average CAM use of 3.8 interventions per participant. One hundred and fifty-five (85%) participants used prayer, 151 (83%) used vitamins, and 117 (64%) used exercise as CAM interventions.

The workshop title may have attracted a certain population more than others. It should also be noted that a number of persons who chose not to participate in the study may use CAM. Fewer AA attended the community programs than CA. The African American population represents 36.3% of Mississippi residents, a high percentage compared to the national average of 12.3% \[[@B21]\]. AA may avoid these meetings due to feelings of racial discrimination still present between AA and CA in the \"deep South\". Community churches may be better sources of recruitment for AA.

Despite the fact that almost half of the group who attended the meetings used CAM, only a small amount of money was being spent by the older adults on CAM. Forty-five percent of the participants reported spending less than \$100/year on CAM. This may be due to the limited income available to spend on CAM (51.1% of the older adults in this study made less than \$20,000/year). Insurance often does not cover CAM, so CAM may not be considered because of cost. There may also be limited availability of CAM in rural areas, meaning that rural residents may have to travel to receive CAM, adding an extra burden of expense.

Conclusions
===========

Health care providers should be aware of CAM use in older adults, specifically those who live in rural areas and who may be familiar with folklore and other alternative interventions in their daily health care practices. Patients in poverty may not have the benefit of expensive CAM interventions, including those reimbursed by health care insurance, as many rural older adults are either uninsured or have limited reimbursement policies through state and government agencies. Health care providers must recognize that CAM may be used very differently among a variety of racial backgrounds. Recommendations for future research include:

• use of herbs and specific concerns related to aging and metabolism, including absorption, distribution, metabolism, and excretion, that can affect interactions with medications among AA and CA elders;

• pharmacological studies specifically for older adults to determine potential interactive effects of CAM with standard treatment medications between AA and CA; and

• evaluation of the safety and efficacy of CAM practices in AA and CA older adults, specifically herbs and vitamins.

In summary, health care providers must be aware of different uses of CAM by race. Older adults, both AA and CA, may have specific concerns because of gerontological issues that may increase susceptibility to CAM interventions. The responsibility of the health care provider regarding CAM use, side effects, and benefits must be acknowledged. Differences in CAM use by race must also be considered when advising patients.
Competing Interests
===================

None declared.

Authors\' contributions
=======================

NC was the P.I. of this study. She was responsible for designing and coordinating the study, including analyzing the data, submitting the manuscript for publication, and presenting the findings at an international conference. TA participated in data collection and read and approved the final manuscript. BC participated in development of the methodology, data collection, and coordination of the study. She assisted in standardizing the presentation of materials for all the groups as well as developing the program presentation, and she edited and approved the final manuscript. JF participated in development of the methodology, data collection, and coordination of the study. She assisted in developing the CAM presentation. All authors read and approved the final manuscript.

Pre-publication history
=======================

The pre-publication history for this paper can be accessed here:

<http://www.biomedcentral.com/1472-6882/3/8/prepub>

Acknowledgements
================

This study was funded by the University of Southern Mississippi Research Council and Sigma Theta Tau International Honorary Nursing Society, Gamma Lambda Chapter, Hattiesburg, MS. \*Partial support was from grant award T32-AT-00052, CAM Research Training Program, funded by the National Center for Complementary and Alternative Medicine, National Institutes of Health.
The Lieutenant Governor of New Mexico is an elected official in the state of New Mexico who ranks just below the Governor of New Mexico. The lieutenant governor is first in the order of succession in New Mexico's executive branch.
/* Copyright (C) 2014, The University of Texas at Austin This file is part of libflame and is available under the 3-Clause BSD license, which can be found in the LICENSE file at the top-level directory, or at http://opensource.org/licenses/BSD-3-Clause */ #include "FLAME.h" #ifdef FLA_ENABLE_NON_CRITICAL_CODE FLA_Error FLA_Trinv_ln_opt_var4( FLA_Obj A ) { FLA_Datatype datatype; int mn_A; int rs_A, cs_A; datatype = FLA_Obj_datatype( A ); mn_A = FLA_Obj_length( A ); rs_A = FLA_Obj_row_stride( A ); cs_A = FLA_Obj_col_stride( A ); switch ( datatype ) { case FLA_FLOAT: { float* buff_A = FLA_FLOAT_PTR( A ); FLA_Trinv_ln_ops_var4( mn_A, buff_A, rs_A, cs_A ); break; } case FLA_DOUBLE: { double* buff_A = FLA_DOUBLE_PTR( A ); FLA_Trinv_ln_opd_var4( mn_A, buff_A, rs_A, cs_A ); break; } case FLA_COMPLEX: { scomplex* buff_A = FLA_COMPLEX_PTR( A ); FLA_Trinv_ln_opc_var4( mn_A, buff_A, rs_A, cs_A ); break; } case FLA_DOUBLE_COMPLEX: { dcomplex* buff_A = FLA_DOUBLE_COMPLEX_PTR( A ); FLA_Trinv_ln_opz_var4( mn_A, buff_A, rs_A, cs_A ); break; } } return FLA_SUCCESS; } FLA_Error FLA_Trinv_ln_ops_var4( int mn_A, float* buff_A, int rs_A, int cs_A ) { float* buff_m1 = FLA_FLOAT_PTR( FLA_MINUS_ONE ); int i; for ( i = 0; i < mn_A; ++i ) { float* A00 = buff_A + (0 )*cs_A + (0 )*rs_A; float* a10t = buff_A + (0 )*cs_A + (i )*rs_A; float* A20 = buff_A + (0 )*cs_A + (i+1)*rs_A; float* alpha11 = buff_A + (i )*cs_A + (i )*rs_A; float* a21 = buff_A + (i )*cs_A + (i+1)*rs_A; float* A22 = buff_A + (i+1)*cs_A + (i+1)*rs_A; int mn_ahead = mn_A - i - 1; int mn_behind = i; /*------------------------------------------------------------*/ // FLA_Scal_external( FLA_MINUS_ONE, a21 ); // FLA_Trsv_external( FLA_LOWER_TRIANGULAR, FLA_NO_TRANSPOSE, FLA_NONUNIT_DIAG, A22, a21 ); bl1_sscalv( BLIS1_NO_CONJUGATE, mn_ahead, buff_m1, a21, rs_A ); bl1_strsv( BLIS1_LOWER_TRIANGULAR, BLIS1_NO_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_ahead, A22, rs_A, cs_A, a21, rs_A ); // FLA_Ger_external( FLA_MINUS_ONE, a21, a10t, A20 ); bl1_sger( BLIS1_NO_CONJUGATE, BLIS1_NO_CONJUGATE, mn_ahead, mn_behind, buff_m1, a21, rs_A, a10t, cs_A, A20, rs_A, cs_A ); // FLA_Trmv_external( FLA_LOWER_TRIANGULAR, FLA_TRANSPOSE, FLA_NONUNIT_DIAG, A00, a10t ); bl1_strmv( BLIS1_LOWER_TRIANGULAR, BLIS1_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_behind, A00, rs_A, cs_A, a10t, cs_A ); // FLA_Invert( FLA_NO_CONJUGATE, alpha11 ); bl1_sinverts( BLIS1_NO_CONJUGATE, alpha11 ); /*------------------------------------------------------------*/ } return FLA_SUCCESS; } FLA_Error FLA_Trinv_ln_opd_var4( int mn_A, double* buff_A, int rs_A, int cs_A ) { double* buff_m1 = FLA_DOUBLE_PTR( FLA_MINUS_ONE ); int i; for ( i = 0; i < mn_A; ++i ) { double* A00 = buff_A + (0 )*cs_A + (0 )*rs_A; double* a10t = buff_A + (0 )*cs_A + (i )*rs_A; double* A20 = buff_A + (0 )*cs_A + (i+1)*rs_A; double* alpha11 = buff_A + (i )*cs_A + (i )*rs_A; double* a21 = buff_A + (i )*cs_A + (i+1)*rs_A; double* A22 = buff_A + (i+1)*cs_A + (i+1)*rs_A; int mn_ahead = mn_A - i - 1; int mn_behind = i; /*------------------------------------------------------------*/ // FLA_Scal_external( FLA_MINUS_ONE, a21 ); // FLA_Trsv_external( FLA_LOWER_TRIANGULAR, FLA_NO_TRANSPOSE, FLA_NONUNIT_DIAG, A22, a21 ); bl1_dscalv( BLIS1_NO_CONJUGATE, mn_ahead, buff_m1, a21, rs_A ); bl1_dtrsv( BLIS1_LOWER_TRIANGULAR, BLIS1_NO_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_ahead, A22, rs_A, cs_A, a21, rs_A ); // FLA_Ger_external( FLA_MINUS_ONE, a21, a10t, A20 ); bl1_dger( BLIS1_NO_CONJUGATE, BLIS1_NO_CONJUGATE, mn_ahead, mn_behind, buff_m1, a21, rs_A, a10t, cs_A, A20, rs_A, 
cs_A ); // FLA_Trmv_external( FLA_LOWER_TRIANGULAR, FLA_TRANSPOSE, FLA_NONUNIT_DIAG, A00, a10t ); bl1_dtrmv( BLIS1_LOWER_TRIANGULAR, BLIS1_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_behind, A00, rs_A, cs_A, a10t, cs_A ); // FLA_Invert( FLA_NO_CONJUGATE, alpha11 ); bl1_dinverts( BLIS1_NO_CONJUGATE, alpha11 ); /*------------------------------------------------------------*/ } return FLA_SUCCESS; } FLA_Error FLA_Trinv_ln_opc_var4( int mn_A, scomplex* buff_A, int rs_A, int cs_A ) { scomplex* buff_m1 = FLA_COMPLEX_PTR( FLA_MINUS_ONE ); int i; for ( i = 0; i < mn_A; ++i ) { scomplex* A00 = buff_A + (0 )*cs_A + (0 )*rs_A; scomplex* a10t = buff_A + (0 )*cs_A + (i )*rs_A; scomplex* A20 = buff_A + (0 )*cs_A + (i+1)*rs_A; scomplex* alpha11 = buff_A + (i )*cs_A + (i )*rs_A; scomplex* a21 = buff_A + (i )*cs_A + (i+1)*rs_A; scomplex* A22 = buff_A + (i+1)*cs_A + (i+1)*rs_A; int mn_ahead = mn_A - i - 1; int mn_behind = i; /*------------------------------------------------------------*/ // FLA_Scal_external( FLA_MINUS_ONE, a21 ); // FLA_Trsv_external( FLA_LOWER_TRIANGULAR, FLA_NO_TRANSPOSE, FLA_NONUNIT_DIAG, A22, a21 ); bl1_cscalv( BLIS1_NO_CONJUGATE, mn_ahead, buff_m1, a21, rs_A ); bl1_ctrsv( BLIS1_LOWER_TRIANGULAR, BLIS1_NO_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_ahead, A22, rs_A, cs_A, a21, rs_A ); // FLA_Ger_external( FLA_MINUS_ONE, a21, a10t, A20 ); bl1_cger( BLIS1_NO_CONJUGATE, BLIS1_NO_CONJUGATE, mn_ahead, mn_behind, buff_m1, a21, rs_A, a10t, cs_A, A20, rs_A, cs_A ); // FLA_Trmv_external( FLA_LOWER_TRIANGULAR, FLA_TRANSPOSE, FLA_NONUNIT_DIAG, A00, a10t ); bl1_ctrmv( BLIS1_LOWER_TRIANGULAR, BLIS1_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_behind, A00, rs_A, cs_A, a10t, cs_A ); // FLA_Invert( FLA_NO_CONJUGATE, alpha11 ); bl1_cinverts( BLIS1_NO_CONJUGATE, alpha11 ); /*------------------------------------------------------------*/ } return FLA_SUCCESS; } FLA_Error FLA_Trinv_ln_opz_var4( int mn_A, dcomplex* buff_A, int rs_A, int cs_A ) { dcomplex* buff_m1 = FLA_DOUBLE_COMPLEX_PTR( FLA_MINUS_ONE ); int i; for ( i = 0; i < mn_A; ++i ) { dcomplex* A00 = buff_A + (0 )*cs_A + (0 )*rs_A; dcomplex* a10t = buff_A + (0 )*cs_A + (i )*rs_A; dcomplex* A20 = buff_A + (0 )*cs_A + (i+1)*rs_A; dcomplex* alpha11 = buff_A + (i )*cs_A + (i )*rs_A; dcomplex* a21 = buff_A + (i )*cs_A + (i+1)*rs_A; dcomplex* A22 = buff_A + (i+1)*cs_A + (i+1)*rs_A; int mn_ahead = mn_A - i - 1; int mn_behind = i; /*------------------------------------------------------------*/ // FLA_Scal_external( FLA_MINUS_ONE, a21 ); // FLA_Trsv_external( FLA_LOWER_TRIANGULAR, FLA_NO_TRANSPOSE, FLA_NONUNIT_DIAG, A22, a21 ); bl1_zscalv( BLIS1_NO_CONJUGATE, mn_ahead, buff_m1, a21, rs_A ); bl1_ztrsv( BLIS1_LOWER_TRIANGULAR, BLIS1_NO_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_ahead, A22, rs_A, cs_A, a21, rs_A ); // FLA_Ger_external( FLA_MINUS_ONE, a21, a10t, A20 ); bl1_zger( BLIS1_NO_CONJUGATE, BLIS1_NO_CONJUGATE, mn_ahead, mn_behind, buff_m1, a21, rs_A, a10t, cs_A, A20, rs_A, cs_A ); // FLA_Trmv_external( FLA_LOWER_TRIANGULAR, FLA_TRANSPOSE, FLA_NONUNIT_DIAG, A00, a10t ); bl1_ztrmv( BLIS1_LOWER_TRIANGULAR, BLIS1_TRANSPOSE, BLIS1_NONUNIT_DIAG, mn_behind, A00, rs_A, cs_A, a10t, cs_A ); // FLA_Invert( FLA_NO_CONJUGATE, alpha11 ); bl1_zinverts( BLIS1_NO_CONJUGATE, alpha11 ); /*------------------------------------------------------------*/ } return FLA_SUCCESS; } #endif
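The four routines above are type-specialized, unblocked variants of in-place inversion of a lower-triangular matrix with a non-unit diagonal. As a rough cross-check of what such a routine should produce, here is a small NumPy sketch of a plain forward-substitution triangular inverse. It is an editor-added reference only: the function name is illustrative, and it does not follow the loop invariant of libflame's variant 4, it merely computes the same mathematical result for comparison.

```python
# Reference sketch only (not a port of FLA_Trinv_ln_opt_var4): invert a
# lower-triangular matrix L with a non-unit diagonal, column by column.
import numpy as np

def trinv_ln_reference(L):
    n = L.shape[0]
    X = np.zeros_like(L, dtype=float)
    for j in range(n):
        X[j, j] = 1.0 / L[j, j]
        for i in range(j + 1, n):
            # Row i of L @ X[:, j] = e_j gives L[i, i] * X[i, j] = -L[i, j:i] . X[j:i, j].
            X[i, j] = -np.dot(L[i, j:i], X[j:i, j]) / L[i, i]
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)  # well conditioned
    X = trinv_ln_reference(L)
    assert np.allclose(L @ X, np.eye(n), atol=1e-10)
```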
Donkey Kong Country (also called DKC) is the first installment of the Donkey Kong Country series and the first game in the original Donkey Kong Country trilogy overall. The video game was released in 1994 for the Super Nintendo Entertainment System. It is a platformer developed by Rare and published by Nintendo. The playable characters in this game are Donkey Kong and Diddy Kong. Donkey Kong Country was followed by two sequels, Donkey Kong Country 2: Diddy's Kong Quest and Donkey Kong Country 3: Dixie Kong's Double Trouble!, in 1995 and 1996. The game was remade for the Game Boy Color in 2000 and was remade again for the Game Boy Advance in 2003.

Donkey Kong Country was ported to the Wii's Virtual Console in 2006 and 2007. On November 25, 2012, for reasons unknown, Donkey Kong Country and both its sequels were delisted from the Wii's Virtual Console worldwide except in South Korea, but on October 30, 2014, the games were relisted only in Europe and Australia. Around the same time, the games were released on the Wii U's Virtual Console in Europe and Australia, then in Japan on November 26, 2014, and in the United States and Canada on February 26, 2015. For handhelds, Donkey Kong Country was ported exclusively to the New Nintendo 3DS's Virtual Console in March 2016. It is one of the 21 games included on the Super NES Classic Edition and was made available on Nintendo Switch Online on July 15, 2020.

Story

The story of Donkey Kong Country is about the Kremlings stealing Donkey Kong's bananas and kidnapping Donkey Kong's nephew Diddy by putting him in a barrel. After Donkey Kong frees Diddy Kong from the barrel, the two go together to fight the Kremlings and other enemies so they can get back their large collection of stolen bananas.

Worlds

There are six worlds and a final boss battle on Gangplank Galleon, which is sometimes referred to as a world. The worlds are:

Kongo Jungle
Monkey Mines
Vine Valley
Gorilla Glacier
Kremkroc Industries Inc.
Chimp Caverns
Gangplank Galleon (sometimes considered a world)

Bosses

There are seven bosses in Donkey Kong Country. These bosses are:

Very Gnawty
Master Necky
Queen B.
Really Gnawty
Dumb Drum
Master Necky Snr.
King K. Rool

Changes between the ports

Donkey Kong Country has received two remakes: one for the Game Boy Color and one for the Game Boy Advance. The Game Boy Advance version has a larger number of changes, while the Game Boy Color version does not have as many. The changes are listed below.

Game Boy Color

This version contains three different title screens, one chosen at random.
Even in the North American version, the game can have English, Spanish, French, German, or Italian chosen as the language. This is not present in the Japanese version.
The mode selection screen is made to resemble that of Donkey Kong 64.
Only one Kong can be present at once. When one of them is hit, the other one appears. The other Kong is shown at the bottom-left part of the screen with a DK Barrel icon to indicate that the Kong is there.
The victory dance and victory music are taken out of the game.
The world map can be viewed by pressing SELECT and then using the left or right buttons on the D-Pad to scroll through the overworld of a world.
The Game Boy Color version added compatibility with the Game Boy Printer, and Sticker Pads were also added.
Winky's Walkway has been extended from the original version and even features another Mini-Necky.
A new stage named Necky Nutmare has been added in this version.
It appears in Chimp Caverns as the fourth level of the world, located after Funky's Flights and before Loopy Lights.
Two new minigames were added, Funky Fishing (which reappears in the Game Boy Advance version under the title Funky's Fishing) and Crosshair Cranky.
Candy's Save Point has been renamed Candy's Challenge. This makes the game autosave after a level is completed.
Some of the music has been reused from Donkey Kong Land. Otherwise, it is an 8-bit remix of the SNES version's music.
The Cast of Characters is no longer featured before the end credits and is replaced with various screenshots which feature levels not seen in normal gameplay.
Donkey or Diddy transforms into the Animal Buddies, rather than riding them. If the Animal Buddy is hit or if SELECT is pressed, the animal transforms back into Donkey or Diddy, depending on which monkey is being controlled.
Two additional difficulty settings have been added: one that removes DK Barrels, and another that removes Star Barrels. These modes can be used once the game is completed for the first time to gain extra Sticker Pads.
There is no sunset in the level Orang-utan Gang and no rain in Ropey Rampage. Orang-utan Gang takes place during the day and Ropey Rampage only at nighttime.
The fifth world, Kremkroc Industries, Inc., is simply renamed Kremkroc Industries.
The temple in Temple Tempest is blue instead of yellow.
In Slipslide Ride, the purple ropes are replaced with blue ropes which slide the player downwards, while white ropes which replace the blue ones slide the player upwards.
When a stage is completed, its name on the stage-select screen turns pink instead of being marked with a "!" at the end of the stage's name.
Gnawties ride on top of the millstones, instead of inside them. The Gnawties on the millstones are also recolored to a regular red palette because of the Game Boy Color's limited color palette.
All of the Zingers in the game are yellow and do not appear in various colors.

Game Boy Advance

The world maps are redone to zoom in more. Some layout changes have also been made.
Starting from Vine Valley onwards, some of the stages have been rearranged, such as Croctopus Chase and Ice Age Alley.
Funky Fishing is renamed Funky's Fishing. An all-new minigame named Candy's Dance Studio also appears, which replaces Candy's Challenge and Candy's Save Point from the previous two versions.
A new time attack mode called DK Attack has been added as a mode on the main menu.
There is a Hero Mode where players can play the game as a yellow Diddy Kong. Donkey Kong is excluded, though.
Rock Kroc can be defeated by the use of Donkey Kong's hand slap while it is dormant. This also happens in the Japanese version of the original Donkey Kong Country.
Some enemies have changed colors. For example, the standard Kritter is changed from green to purple, and purple Klaptraps are changed to red.
A scrapbook feature has been added, with hidden cameras placed all over Donkey Kong Island.
Several bosses have new strategies. These include:
Queen B. - Uses smaller Zingers as a shield when she is attacked.
Really Gnawty - Causes stalagmites to fall with a giant pounce.
Dumb Drum - Can only be defeated by the use of a TNT Barrel rather than all the enemies having to be defeated.
Master Necky Snr. - During the first round, Master Necky aids Master Necky Snr. in battle. After one is defeated, the other one becomes more ferocious.
The game can be saved at any time on the world map by pressing START.
Funky can be summoned on the world map when needed. This only happens after Funky is visited first.
Smaller animals appear as background elements, such as spiders and rats.
The game saves the number of extra lives the player has collected. This feature was criticized by many as it removed the challenge from the game.

1994 video games Game Boy Color games Game Boy Advance games Virtual Console games Platform games Super Nintendo Entertainment System games
76 B.R. 857 (1987) In re William Noah KNIGHT, Debtor. Charles A. GOWER, Trustee, Plaintiff, v. HOTEL RAMADA OF NEVADA, d/b/a Tropicana Hotel and Country Club, Defendant. Bankruptcy No. 84-40483-COL, Adv. P. No. 86-4051-COL. United States Bankruptcy Court, M.D. Georgia, Columbus Division. August 13, 1987. *858 Fife M. Whiteside, Columbus, Ga., for plaintiff. Nolan B. Harmon, Atlanta, Ga., for defendant. MEMORANDUM OPINION JOHN T. LANEY, III, Bankruptcy Judge. The above-captioned Adversary Proceeding is before the Court on cross motions for summary judgment. The Trustee has objected to proofs of claim filed by a creditor, Hotel Ramada of Nevada, d/b/a Tropicana Hotel and Country Club (hereinafter "Tropicana"). The Trustee has also filed a four count counterclaim. The Trustee admits that Tropicana is entitled to summary judgment as to Counts One, Three and Four. The remaining Count of the counterclaim, Count Two involves an alleged post-petition transfer in the amount of $5,000.00. The undisputed facts show that the Debtor owed a considerable amount to Tropicana as a result of gambling debts prior to filing a Chapter 11 petition, which was later converted to Chapter 7. While the case was pending under Chapter 11 the Debtor, without authorization from the Court, purchased two cashier's checks from a bank, each in the amount of $5,000.00. He then traveled to Nevada, where gambling is admittedly legal, and delivered these cashier's checks to Tropicana. By affidavit the Debtor contends that he paid these funds on his past due account with Tropicana, as a result of which he received new credit. Tropicana does not admit applying these funds to the past due account, but does admit receipt of the cashier's checks and contends that as a result of the receipt of the same, the Debtor was extended postpetition credit, presumably for further gambling. Apparently at the request of the Debtor the bank stopped payment on the cashier's checks. However, Tropicana either sued or threatened suit against the bank and settled for receipt of $5,000.00. It is this $5,000.00 which is in dispute in Count Two. The Trustee contends that the $5,000.00 was a post petition transfer of funds of the estate out of the ordinary course of business which is avoidable by the Trustee under Section 549 of the Bankruptcy Code and recoverable under Section 550. Tropicana contends that since stop payment orders were honored on the cashier's checks it received no funds from the estate of the Debtor, but that the $5,000.00 it received was from the bank, a third party and therefore that it is not an avoidable postpetition transaction. Since the affidavit of the Debtor shows that the bank subsequently set off against funds in his account an amount of approximately $3,500.00, Tropicana contends that if it is liable its liability is limited to $3,500.00, the amount by which the estate was diminished. First, with regard to the objection to the proofs of claim, the Court notes that Claims No. 1, 16, and 31 were filed by Tropicana. Each claim is for $40,000.00 and is concededly for gambling debts incurred *859 by the Debtor prepetition. It is conceded that there is only one such pre-petition debt totalling $40,000.00 and that the other claims are duplicates. Since Claim No. 31 is the most complete and the Trustee has stipulated that it may be considered an amendment of one of the earlier claims which was filed before the bar date, objections to claims numbered 1 and 16 are sustained and said claims are stricken. 
Claim Number 31 is allowed and considered timely filed. The Trustee argues that Tropicana's claim should be disallowed because of the public policy of Georgia against gambling. The pertinent provision of the Bankruptcy Code is Section 502(b), which provides in part that: ". . . if such objection to a claim is made, the court, after notice and a hearing, shall determine the amount of such claim in lawful currency of the United States as of the date of the filing of the petition, and shall allow such claim in such amount except to the extent that— (1) such claim is unenforceable against the debtor and property of the debtor, under any agreement or applicable law for a reason other than because such claim is contingent or unmatured. . . ." (Emphasis added.) The Trustee contends that "applicable law" refers to the law of the forum, which is Georgia. The Trustee contends that in a Georgia court, Georgia law would be applied and that Georgia has a strong public policy against enforcing gambling debts. The Trustee contends that even though the gambling debt would have been legally enforceable in Nevada, Georgia courts would not allow its enforcement in Georgia. Tropicana responds that the "applicable law" referred to in Section 502(b) is the law of the state where the contract originated, to-wit Nevada. It relies on In re Smith, 66 B.R. 58 (Bankr.D.Md.1986). That case cited legislative history giving examples of applicable defenses which bear upon the execution, interpretation, and validity of the contract as indicating that Congress intended the phrase "applicable law" to be the place of the making of the contract, not of the forum, unless the parties have agreed to the contrary. Ibid, 59 (footnote), 61. The Smith decision was affirmed in an unreported decision by the District Court that did not reach the question of the meaning of the phrase "applicable law," since it held that the alternative ground of the holding that a Maryland court would enforce the gambling contract if it was legal where incurred was correct. Maryland has legalized various forms of gambling and has its own state lottery. (In re Smith, 77 B.R. 33 (D.C.Md.1987). The Trustee cites a number of Georgia statutes that make gambling illegal, to-wit: O.C.G.A. Sections 16-12-21, 16-12-22, 16-12-23, 16-12-24, and 16-12-28. O.C.G.A. Section 13-8-2 provides in part: "(a) A contract which is against the policy of the law cannot be enforced. Contracts deemed contrary to public policy include but are not limited to: * * * (4) Wagering contracts. . . . " O.C.G.A. Section 13-8-3 provides that gambling contracts are void and money paid or property delivered as consideration for gambling may be recovered. The Trustee relies upon a diversity case from the United States District Court for the Southern District of Georgia, Gulf Collateral, Inc. v. Morgan, 415 F.Supp. 319 (1976). In that case, an action was brought to collect a gambling debt which arose in Nevada where gambling was legal. The debtor was granted summary judgment, Judge Lawrence holding that the public policy of Georgia rendered such obligations unenforceable in the state. Tropicana argues that Gulf Collateral is not controlling because the public policy of Georgia has changed since 1976 and also because the Bankruptcy Court is not bound to apply the law of Georgia as was the District Court in the diversity action. In support of a change in public policy, Tropicana cites a 1977 law regulating and licensing nonprofit bingo games, codified as O.C.G.A. Sections 16-12-50 et seq. 
and a 1985 law allowing manufacturing, processing, selling, possessing or transporting equipment, devices and materials used in lotteries *860 conducted by other states, codified as O.C.G.A. Section 16-12-35. The Supreme Court of the United States recognized in Vanston Bondholders Protective Committee v. Greene, 329 U.S. 156, 67 S.Ct. 237, 91 L.Ed. 162 (1946) that a Bankruptcy Court is not required to adjudicate controversies as if it were a state court in the state in which it sits. Instead, "bankruptcy courts must administer and enforce the Bankruptcy Act as interpreted by this Court in accordance with authority granted by Congress to determine how and what claims shall be allowed under equitable principles." Ibid, 329 U.S. at 162-63, 67 S.Ct. at 240. This Court is not persuaded that the public policy of Georgia has been changed to the extent that a Georgia court or a Federal District Court in a diversity case would enforce a gambling obligation. The Georgia Court of Appeals, citing O.C.G.A. Section 13-8-2(a)(4), has recently stated in dicta that it is "fully aware that gambling is against the public policy of Georgia. . . ." Hargreaves v. Greate Bay Hotel & Casino, 182 Ga.App. 852, 357 S.E.2d 305 (1987). However, this Court agrees with Judge Mannes in In re Smith, supra, that the phrase "applicable law" in Section 502(b) of the Bankruptcy Code means the place of making the contract, not the place of the forum, unless the contract provides to the contrary. Therefore, Tropicana's proof of claim No. 31 is allowed and the Trustee's objection to the same is overruled. With regard to Count Two of the counterclaim, the Court finds that the net result of the postpetition dealings between the Debtor and Tropicana was an unauthorized transfer of property of the estate in the net amount of $3,500.00. Even though Tropicana actually received $5,000.00 and even though such receipt was not the immediate result of the honoring of the post-petition unauthorized cashier's checks, the eventual result was that the estate was diminished by $3,500.00 as a result of a postpetition unauthorized transfer. This transfer may be avoided under Section 549 of the Bankruptcy Code and the Trustee may recover that amount for the benefit of the estate under Section 550. An order will be entered in accordance herewith. ORDER In accordance with the Opinion of the Court rendered on today's date, IT IS ORDERED as follows: (1) The Trustee's objections to claims Numbered 1 and 16 are sustained and said claims are disallowed. (2) The Trustee's objection to Claim No. 31 of Hotel Ramada of Nevada, d/b/a Tropicana Hotel and Country Club ("Tropicana") is overruled and said claim is allowed as an unsecured claim, timely filed, in the amount of $40,000.00. (3) Tropicana is granted summary judgment in its favor as to Counts One, Three and Four of the Trustee's counterclaim. (4) The Trustee is granted judgment on Count Two of its counterclaim against Tropicana in the amount of $3,500.00, the Court finding that Tropicana received an unauthorized postpetition transfer from the Debtor in said amount and that said amount is recoverable in this action by the Trustee against Tropicana. (5) Said judgment in favor of the Trustee and against Tropicana shall bear interest postjudgment at the rate of 6.98 per cent per annum.
Lubos Kohoutek (born 29 January 1935) is a Czech astronomer. He is known as a discoverer of minor planets and comets, including Comet Kohoutek, which was visible to the naked eye in 1973. He also discovered comets 75D/Kohoutek and 76P/West-Kohoutek-Ikemura. The main-belt asteroid 1850 Kohoutek was named after him.
May 11, 2007 Mark Leat, the Longton North BNP councillor on Stoke Council has decided he's had enough of the fascist BNP and stumbled across the council chamber to become a non-aligned Independent, thus wiping out in a single embarrassing go the staggering single extra seat that the BNP made at last week's local government elections. So, after putting up the record number of 750 candidates and ploughing a phenomenal amount of money into the campaign - delivering up to six campaign leaflets in some areas, we're told - the party has ended up precisely where it started. Leat's only claim to fame is a bit of BNP self-promotion when a pre-election Voice of Freedom article claimed the party was achieving credibility following his award of chairman of the Health Commission. Following this, the party claimed it was 'working towards the day when the BNP hold the majority on Stoke Council'. That's put an end to that plan.
Jok Richard Church (November 28, 1949 - April 29, 2016) was an American cartoonist. He was born in Akron, Ohio. He created the Universal Press Syndicate comic strip You Can With Beakman and Jax, which was later made into the TV series Beakman's World. Church died of a heart attack in San Francisco, California on April 29, 2016, aged 66.
Q: Why isn't the action for my newly created UISwitch being called (Objective-c, Xcode 7.0.1)? So I have a UIswitch (firstSwitch) which when it is ON calls an action where it creates another UISwitch (secondSwitch). Then I want to repeat this step where the newly created UISwitch (secondSwitch) calls an action where it creates a new UISwitch (thirdSwitch) when it is switched ON. But the problem is, the first newly created switch (secondSwitch) does not detect the ON state so it cannot create the new UISwitch (thirdSwitch). Any advice or guidance on this would be greatly appreciated. Thank you in advance. Below are snippets of code to get a better idea (please disregard the positioning of x, y, width, height): - (void)viewDidLoad { [super viewDidLoad]; [self loadFirstSwitch]; [firstSwitch addTarget:self action:@selector(switchIsChanged:) forControlEvents:UIControlEventValueChanged]; //This secondSwitch does not detect the change in state to ON [secondSwitch addTarget:self action:@selector(switchIsChanged:) forControlEvents:UIControlEventValueChanged]; } - (void)loadFirstSwitch { firstSwitch = [[UISwitch alloc] initWithFrame:CGRectMake(screenWidth - 50 - buttonPadding, 527.5 + buttonWidth, 50, 27)]; [scrollView addSubview:firstSwitch]; } - (void)loadSecondSwitch{ secondSwitch = [[UISwitch alloc] initWithFrame:CGRectMake(screenWidth - 50 - buttonPadding, 527.5 + buttonWidth, 50, 27)]; [scrollView addSubview:secondSwitch]; } - (void)loadThirdSwitch{ thirdSwitch = [[UISwitch alloc] initWithFrame:CGRectMake(screenWidth - 50 - buttonPadding, 527.5 + buttonWidth, 50, 27)]; [scrollView addSubview:thirdSwitch]; } - (void) switchIsChanged:(UISwitch *)paramSender{ if(paramSender == firstSwitch){ if([paramSender isOn]){ [self loadSecondSwitch]; }else{ NSLog(@"Switch is off"); } } if(paramSender == secondSwitch){ if([paramSender isOn]){ [self loadThirdSwitch]; }else{ NSLog(@"Switch is off"); } } } A: You are adding target to secondSwitch before it is created, it should be done after it is created, better to put in the function loadSecondSwitch, so you code looks like - (void)loadSecondSwitch{ secondSwitch = [[UISwitch alloc] initWithFrame:CGRectMake(screenWidth - 50 - buttonPadding, 527.5 + buttonWidth, 50, 27)]; [scrollView addSubview:secondSwitch]; [secondSwitch addTarget:self action:@selector(switchIsChanged:) forControlEvents:UIControlEventValueChanged]; } Remove [secondSwitch addTarget:self action:@selector(switchIsChanged:) forControlEvents:UIControlEventValueChanged]; from viewDidLoad. I hope it works for you. Cheers.
Ried im Zillertal is a municipality of the district of Schwaz in the Austrian state of Tyrol.
A.1. Releases Meat Scented Candles Just in Time for Father’s Day File this under weird things you buy just to see what they are like. A.1. is introducing the world to meat-scented candles. According to the A.1. website the candles smell like coming home to a nice, juicy, hearty dinner with the sweet and tangy taste of A.1. sauce. Who doesn't want their home and all their belongings to smell like beef? The meat candles are only $14.99 and would make a unique Father's Day gift if your dad is a grillmaster, or just really likes red meat. There are three different "flavors" to choose from on the website: Original Meat Candle, Backyard BBQ Candle, and the Burger Candle. I love to put A.1. on everything, but to be honest I can't say that I love the smell of it. It does make broccoli way better, and it is even delicious with Salmon so maybe in the future we will be able to submit for new A.1. scents? I might have to order myself a meat candle and just burn it in my office at work to see how many co-workers get very hungry all of the sudden. If you want to order yourself or your Dad a meat candle you can do so on the A.1. website, here.
Hugo de Leon (born 27 February 1958) is a former Uruguayan football player. He has played for Uruguay national team. Club career statistics |- |1978||rowspan="3"|Nacional||rowspan="3"|Primera Division|||| |- |1979|||| |- |1980|||| |- |1981||rowspan="4"|Gremio||rowspan="4"|Serie A||22||0 |- |1982||21||0 |- |1983||20||1 |- |1984||20||1 |- |1985||rowspan="2"|Corinthians Paulista||rowspan="2"|Serie A||24||0 |- |1986||0||0 |- |1987||Santos||Serie A||0||0 |- |1987/88||Logrones||La Liga||16||0 |- |1988||rowspan="2"|Nacional||rowspan="2"|Primera Division|||| |- |1989|||| |- |1989/90||River Plate||Primera Division||12||0 |- |1990||Nacional||Primera Division|||| |- |1991||Botafogo||Serie A||12||0 |- |1992||Nacional||Primera Division|||| || 97||2 16||0 12||0 125||2 |} International career statistics |- !Total||48||0 |}
Knicks star Carmelo Anthony said Saturday that he didn't think he deserved to be thrown out of Friday's game against the Celtics and hinted that a previous history with the referee who ejected him may have played a role in his dismissal. "I always feel it's something. Every time we ... I don't want to say it's personal, but I always feel like it's something," Anthony said of referee Tony Brothers, who ejected him on Friday. Carmelo Anthony leaves the court after being ejected by referee Tony Brothers in Friday's game against the Celtics. Bob DeChiara/USA TODAY Sports Anthony, speaking to reporters in Toronto before Knicks' 118-107 loss, added: "I didn't think [Friday] night it called for a tech or an ejection at that point of time. I really don't know what to say about the situation." Anthony's wife, La La Anthony, tweeted on Friday night that Brothers "hates" Anthony and has a "personal" issue with the Knicks star. He hates Mel. It's personal. Always has. https://t.co/oHGrrOe30Q — LA LA (@lala) November 12, 2016 Brothers told a pool reporter in Boston that he had no history with Anthony, a 13-year veteran. La La Anthony, however, reiterated on Twitter that Brothers has a personal issue with Anthony after Brothers' denial. Carmelo Anthony was tossed with 4 minutes, 44 seconds left in the second quarter after consecutive technical fouls issued by Brothers. Anthony received the first technical when he said something to Brothers after a loose ball foul call. Brothers walked away, but Anthony trailed behind and kept talking. He was then hit with the second technical. Brothers later told a pool reporter that he whistled Anthony for the technical due to "bad language." "One, I don't feel I said anything on getting a tech -- and two -- getting ejected,'' Anthony said. He added: "There's nothing for me to say to him. It ain't personal with me from my end. I don't have anything to say to him. He's a ref. I play. I'll keep my mouth shut next time.'' Anthony reportedly argued with Brothers during the Knicks' road loss to Detroit earlier this season. Brothers was also officiating the Knicks-Celtics game in 2013 in which Anthony and Kevin Garnett had a run-in on the court that escalated into a confrontation after the game in the bowels of Madison Square Garden. ESPN's Ian Begley contributed to this report.
100 Women is a BBC series started in 2013. It looks at the role of women in the twenty-first century. It organised events in London and Mexico. After the women are named, the BBC has three weeks of information about women. Women from all over the world make comments on Twitter about the interviews and debates. History After the 2012 Delhi gang rape case, BBC Controller Liliane Landor, BBC editor Fiona Crack, and other journalists started a series about the issues and successes of women. Women told the BBC there was not enough information about issues women face. In March 2013, BBC received a "flood of feedback from female listeners" that asked for more information "from and about women." The BBC started the series in 2013 because there were not enough women represented in the media. The BBC used a survey in 26 languages to choose women for the first program. There were programs for one month, then there was a conference on 25 October. Women from different countries talked about issues they shared. There were many subjects, like work, feminism, motherhood, and religion. The series looked at cultural and social problems women have in life. After the first program, there were many other subjects, like education, health, equal pay, genital mutilation, domestic violence, and sexual abuse. The series tries to give women a place to talk about how to make the world better and stop sexism. Women on the list are from many countries and many professions. Some of the women are famous, and some are not well known. Names of the 100 women 2016 The 2016 theme was Defiance. Part of the 100 Women festival was in Mexico City. The 2016 list was in alphabetical order. 2015 The BBC News 100 Women list in 2015 was made up of many notable international names, as well as women who were unknown, but who represent issues women face. The women of 2015, included representatives from 51 countries and were not necessarily those who would traditionally have been seen as role models--a woman suffering from depression, a woman who advocates for equal access to bathroom facilities, a woman who encourages other women to avoid make-up, and a reindeer nomad. 2014 The BBC News 100 Women list in 2014 continued the efforts of the first year's initiative. 2013 The 2013 event was a month-long BBC series that took place in October. The series examined the role of women in the 21st century and culminated in an event held at BBC Broadcasting House in London, United Kingdom on 25 October 2013 involving a hundred women from around the world, all of whom came from different walks of life. The day featured debate and discussion on radio, television and online, in which the participants were asked to give their opinions about the issues facing women. The event held on 25 October 2013 featured 100 women from all walks of life. Initiatives by year 2013: 2014: 2015: 2016: Other participants
Q: Reverse a string and get an error

The error is:

    The best overloaded method match for 'string.String(char[])' has some invalid arguments

My code:

    string reverseValue = new string(value.Select((c, index) => new { c, index })
                                          .OrderByDescending(x => x.index)
                                          .ToArray());

A: The Select projects each character into an anonymous type, so OrderByDescending(...).ToArray() yields an array of anonymous objects rather than a char[], which is why the string(char[]) constructor overload rejects it. Either select the character back out after ordering, or simply reverse the characters:

    char[] chars = value.ToCharArray();
    Array.Reverse(chars);
    string reverseValue = new String(chars);

Or (somewhat slower):

    string reverseValue = new String(value.Reverse().ToArray());

Note that this won't handle UTF32 surrogate pairs, nor combining characters.
Couloume-Mondebat is a commune in the Gers department. It is in southwestern France.
Tissue pharmacokinetics of fleroxacin in humans as determined by positron emission tomography. The delivery of fleroxacin, a new broad-spectrum fluoroquinolone, to the major organs of the body was studied in 12 normal human volunteers (nine men and three women), utilizing positron emission tomography (PET). Following the infusion of 20 mCi of [(18)F]fleroxacin in conjunction with a standard therapeutic dose of 400 mg, images were acquired over 8 h. Beginning the next day, the subjects received unlabeled drug at a dose of 400 mg/day for 3 days, with a repeat PET study on the fifth day. Fleroxacin is distributed widely throughout the body, with the notable exception of the central nervous system, with stable levels achieved within 1 h after completion of the infusion. Especially high peak concentrations (18 µg/g) were achieved in the kidney, liver, lung, myocardium, and spleen. The mean plateau concentrations (2-8 h post-infusion, µg/g) were: brain, 0.83; myocardium, 4.53; lung, 5.80; liver, 7.31; spleen, 6.00; bowel, 3.53; kidney, 8.85; bone, 2.87; muscle, 4.60; prostate, 4.65; uterus, 3.87; breast, 2.68; and blood, 2.35. Repetitive dosing had no significant effect on the pharmacokinetics of the drug. Since the MIC(90)'s of the family Enterobacteriaceae and Neisseria gonorrhoeae are <2 µg/ml, with the great majority of the individual species 1 µg/ml, these results suggest that a single daily dose of 400 mg of fleroxacin should be effective in the treatment of infections such as urinary tract infection and gonorrhea.
Mpika District is a district of Zambia in the Northern Province. The capital is Mpika. As of the 2000 Zambian Census, the district had a population of 146,196 people.
Checklist for the evaluation of low vision in uncooperative patients. To present a checklist for the evaluation of low vision in uncooperative patients; in this specific case, children with neurological deficits. The checklist includes several behavioral indicators obtainable with a standard clinical examination. Each test is assigned a score (0=failure, 1=success). The final visual quotient score is obtained by dividing the partial score by the total number of tests performed. Eleven children with cerebral visual impairment were studied using behavioral and preferential looking techniques. Visual quotient was >0 in all patients, indicating that residual visual function was always detectable. Average visual quotient was 0.74. Visual quotient can be useful both for follow-up examinations and comparison and integration with other evaluation methods (behavioral and instrumental) of residual visual capacity. In particular, if combined with preferential looking techniques, visual quotient testing permits characterization of the entire spectrum of low vision.
"Barbie Girl" is a dance-pop hit song from the Danish music group Aqua. The single was released in 1997. Its lyrics are about Barbie and Ken, the toys from Mattel. The song was very popular. It was #1 across the music charts of certain countries. It was in the Top 20 charts in the United States. Barbie Girl also caused controversy at various times. 1997 songs Pop songs Eurodance songs
JACKSONVILLE, Fla. -- Jacksonville Jaguars fans might be divided on whether management made a mistake by re-signing quarterback Blake Bortles or failing to draft an offensive lineman with the team's first-round draft pick, but they all agree on this: Myles Jack wasn't down. It has been nearly five months since officials made a huge mistake in the AFC Championship Game at Gillette Stadium, when they blew the whistle to stop Jack from returning a fumble for a touchdown, but Jaguars fans still (understandably) won't let it go. That's why "Myles Jack wasn't down" has become a thing around the city. A really, really big thing. Two local breweries have named a pair of microbrews with the phrase. High schoolers have decorated their mortarboards with it at graduation. T-shirts have been printed. Signs have cropped up on television at other sporting events, such as "WWE Monday Night Raw" in Albany, New York. "Myles Jack wasn't down" even made its way into Rohan Bansal's valedictorian speech at Jacksonville's Atlantic Coast High School. "I love it," Jack told ESPN. "I'm a B-list celebrity on this team. We got Blake, Leonard [Fournette], Jalen [Ramsey]. Anywhere I can get my name in there, I'm cool with it." Jack, a linebacker, admits that the fun people are having certainly eases some of the pain of what happened on the sixth play of the fourth quarter, with the Jaguars leading the New England Patriots 20-10. The Patriots used a trick play -- receiver Danny Amendola threw a pass to running back Dion Lewis -- but Jack ran Lewis down after a 22-yard gain and ripped the ball out of Lewis' grasp as they went to the ground. Jack ended up with the ball, got up and headed for the end zone, but officials blew the play dead and stopped what would have been a touchdown. After reviewing the play, the officials ruled that Jack was down by contact, and the Jaguars took over on their 33. "At that moment, at that time, when I picked the ball up and ran and why I slammed the ball down, like, I knew I wasn't down," Jack said. "So I was screaming at the ref, like, 'Why the eff are you blowing the play down when I know it's not down?' Enough people at home know I wasn't down. People at the stadium know I wasn't down. "It [people having fun with the phrase] is comforting, I guess. Therapeutic." The Jaguars went three-and-out after the turnover, and the Patriots responded with the first of their two fourth-quarter touchdowns. Had officials not prematurely blown Jack's return dead, the Jaguars would have been ahead 27-10, and the entire complexion of the game would have changed. That is why Jaguars fans won't -- and can't -- let it go. That includes guys such as Intuition Ale Works founder Ben Davis, who decided to brew a small batch of a Belgian Tripel and call it "Myles Jack Wasn't Down." Davis said he got the idea from seeing social media posts with the phrase. He started selling the beer June 7 at his brewery and tap room two blocks from TIAA Bank Field. "We want to promote the Jaguars," Davis said. "A lot of our biggest drinkers and supporters are Jags fans and are the demographic that kind of gets behind them. And I truly hate the Patriots. "... The older you get as a brewery and the more beers you brew, it definitely gets harder and harder to come up with names. It's something that we thought was fun." So did Eric Luman, who owns Green Room Brewing in Jacksonville Beach. 
During the NFL playoffs, his company brewed two beers named Sacksonville in honor of the Jaguars' defensive nickname, and it went over so well that he wanted to try another brew. It was barroom manager Brendan Davis who came up with the idea to name the IPA they brewed two months ago Straight Facts -- Myles Jack Wasn't Down. Luman said it was a small batch, and there isn't any left. "It went over really well," Luman said. "Those two Sacksonville beers we did and Myles Jack just flew off our shelves." Makes sense. There's no better way for Jaguars fans to drown their sorrows than with a beer named after the play that might have robbed the franchise of its first Super Bowl appearance. Or maybe play a joke on the president of the United States by asking his Twitter account to DM you because you have information that proves the Russia investigation is a witch hunt -- and answer the message with: Myles Jack wasn't down in the AFC Championship Game against the Patriots. That, by the way, is Jack's favorite. This is the funniest thing I've seen all year 😂😂😂 https://t.co/sDCZIYR22V — Myles Jack (@MylesJack) April 29, 2018 "I got on Twitter and [saw] that, and I really laughed out loud," he said. The players can do that now, but that doesn't mean they no longer believe they were robbed by the officials. "We saw it live, and I thought it then, and I still think it now," Bortles said. "I think it's tough to argue with, but there's definitely things we could have done in that game outside of that play, offensively, that could have won us the game, so it's tough to point out that or use that [as an excuse], but he definitely wasn't down."
Claro was a municipality of the district Riviera in the canton of Ticino in Switzerland. On 2 April 2017, the former municipalities of Camorino, Claro, Giubiasco, Gnosca, Gorduno, Gudo, Moleno, Monte Carasso, Pianezzo, Preonzo, Sant'Antonio and Sementina merged into the municipality of Bellinzona.
Data circuit-terminating equipment A data circuit-terminating equipment (DCE) is a device that sits between the data terminal equipment (DTE) and a data transmission circuit. It is also called data communication(s) equipment and data carrier equipment. Usually, the DTE device is the terminal (or computer), and the DCE is a modem. In a data station, the DCE performs functions such as signal conversion, coding, and line clocking and may be a part of the DTE or intermediate equipment. Interfacing equipment may be required to couple the data terminal equipment (DTE) into a transmission circuit or channel and from a transmission circuit or channel into the DTE. Usage Although the terms are most commonly used with RS-232, several data communication standards define different types of interfaces between a DCE and a DTE. The DCE is a device that communicates with a DTE device in these standards. Standards that use this nomenclature include: Federal Standard 1037C, MIL-STD-188 RS-232 Certain ITU-T standards in the V series (notably V.24 and V.35) Certain ITU-T standards in the X series (notably X.21 and X.25) A general rule is that DCE devices provide the clock signal (internal clocking) and the DTE device synchronizes on the provided clock (external clocking). D-sub connectors follow another rule for pin assignment. DTE devices usually transmit on pin connector number 2 and receive on pin connector number 3. DCE devices are just the opposite: pin connector number 2 receives and pin connector number 3 transmits the signals. When two devices, that are both DTE or both DCE, must be connected together without a modem or a similar media translator between them, a crossover cable must be used, e.g. a null modem for RS-232 or an Ethernet crossover cable. See also Networking hardware References External links Data Terminating Equipment or Data Circuit-Terminating Equipment speeds, IBM Category:Data transmission Category:Telecommunications equipment
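The pin-2/pin-3 rule above is easiest to see as the crossover a null-modem cable has to perform when two DTE devices are joined directly. The sketch below is purely illustrative: it assumes the 25-pin numbering used in this article (DTE transmits on pin 2, receives on pin 3) and shows only the data lines plus signal ground; handshake lines are omitted.

    using System;
    using System.Collections.Generic;

    class NullModemSketch
    {
        static void Main()
        {
            // Each DTE transmits on pin 2, so pin 2 at one end must be wired
            // to pin 3 (receive) at the other end, and vice versa. Signal
            // ground (pin 7 on the 25-pin connector) passes straight through.
            var crossover = new Dictionary<int, int>
            {
                { 2, 3 },  // TxD of end A -> RxD of end B
                { 3, 2 },  // RxD of end A <- TxD of end B
                { 7, 7 },  // signal ground, straight through
            };

            foreach (var wire in crossover)
            {
                Console.WriteLine("pin {0} (A) <-> pin {1} (B)", wire.Key, wire.Value);
            }
        }
    }

A DTE-to-DCE cable, by contrast, is wired straight through, because the DCE already receives on pin 2 and transmits on pin 3.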
Vercel-Villedieu-le-Camp is a commune. It is in the Bourgogne-Franche-Comté region, in the Doubs department, in eastern France.
Open-end spinning devices with such rotor bearing arrangements are known in various different embodiments, and have been extensively described, for example in German Patent Publications DE 195 43 745 A1, DE 196 01 034 A1, DE 197 05 607 A1 or DE 41 17 175 A1. For example, German Patent Publication DE 195 43 745 A1 describes an embodiment in which the spinning rotor is supported both radially and axially by means of a magnetic bearing arrangement. The magnetic bearing at the end of the rotor shaft described in German Patent Publication DE 195 43 745 A1 has a magnetic rotor ring as well as a magnetic stator ring. Here, the two magnetic rings are aligned magnetically in such a way that a bearing gap is created between them. To suppress the radial oscillations of the magnetically seated spinning rotor, which occur in particular during a starting phase, the stator magnet is furthermore seated with its movements limited in the radial direction. Here, the radial deflections of the magnetic stator ring are damped by a mechanical friction device. However, it is disadvantageous with this known installation that the static charge, which in particular occurs during the spinning of synthetic materials in the area of the spinning rotor, cannot be dissipated in a defined manner, since the spinning rotor is electrically insulated from the grounded components of the open-end spinning device. This electric charge has an interfering effect on the spinning process. In the open-end spinning device in accordance with German Patent Publication DE 197 05 607 A1, the spinning rotor is supported with its rotor shaft in the wedge gaps of a support ring bearing, and rests axially against an aerostatic bearing. The rotor shaft is made of carbide, at least in the area of the bearing surface which cooperates with the axial bearing. Because of the electrically insulated bearing of the spinning rotor, it is also possible with this known bearing arrangement for a static charge of the spinning rotor to appear during spinning of synthetic material in particular, leading to disadvantageous effects on the spinning process. Even temporary unintentional contact between the rotor shaft bearing surface and the bearing plate of the axial bearing, made of a carbon material, cannot produce a sufficient removal of the charge. An open end spinning device is furthermore known from German Patent Publication DE 196 01 034 A1, in which the spinning rotor is aerostatically seated both radially and axially. As with the above described bearing arrangements, the bearing arrangement in accordance with German Patent Publication DE 196 01 034 A1 also has the problem that a permanent electric insulation of the spinning rotor during the spinning operation is produced because of the air gap of the aerostatic bearing, and therefore no sufficient removal of the electrostatic charge created during spinning takes place. The problem of insufficient grounding of the spinning rotor, in particular in the course of processing synthetic feed materials, is also present in bearing arrangements of open-end spinning devices, such as those known from German Patent Publication DE 41 17 175 A1. With these bearing arrangements, which per se have proven themselves in actual use, the radial seating of the rotor shaft is customarily provided by plastic-coated support rings. 
Since the rotor shaft has a non-conducting, for example oxide-ceramic, insert in the area of its axial support, this also leads to an electric insulation and therefore to a static charge buildup of the spinning rotor.
The Inbe clan (Inbe-shi) were a strong family in Japan a long time ago, during a time called the Kofun period (250-538 CE) and the Asuka period (538-710 CE). They came from an area called Kibi Province, which is now Okayama Prefecture. They were prominent for their links to religion. They claimed descent from Futodama. History The clan started off as low class but gained power due to religious reasons. During the reign of Emperor Kotoku, the Inbe, along with the Nakatomi and Urabe families, were tasked with supervising the Department of Divinities. During the Asuka period, the Inbe clan became more prominent and was involved in the political and military affairs of the Yamato court. In 587 CE, the Inbe clan was part of the allied forces that fought against the powerful Soga clan in the Battle of Shigisan. The battle resulted in the defeat of the Soga clan and the Inbe clan's rise in power. During the 7th century, the Inbe clan had important roles in the Yamato court, including the position of O-omi, responsible for managing the court's affairs. In 645 CE, the clan supported Prince Naka no Oe, who later became Emperor Tenji, in a successful coup against the Soga clan at the court of his mother, Empress Kogyoku. During the Nara period (710-794 CE), the Inbe clan's influence declined, and they were gradually overtaken by the Nakatomi clan and their descendant clan, the Fujiwara clan. In 807 their leader wrote the Kogo Shui to complain to the Emperor over their exclusion. They migrated to the east soon after and built the Inbe Shrine. Legacy The Inbe clan was an influential group in Japan during the Kofun and Asuka periods. They were based in the Kibi Province, and were involved in politics and military affairs. Their legacy is still present in Japanese culture today, as they were responsible for building the Inbe Shrine, an important religious site. The Inbe clan's name can also be found in literary works such as the Man'yoshu. The Japanese government recognizes the Inbe clan as an important historical group, designating them as such in 1967. Related pages Inbe Shrine
Juvenile court a place of hope, despair, second chances Judge John Williams presides over cases in Hamilton County Juvenile Court on Wednesday.(Photo: The Enquirer/Amanda Rossmann) Story Highlights According to the 2013 annual report, last year Hamilton County Juvenile Court handled 720 complaints for assault, 216 for menacing, 75 for sexual offenses, 309 for robbery, 434 for burglary, 975 for theft, 363 for vandalism and damage and 3 for homicide, among other charges. At 800 Broadway, chief magistrate Carla Guenthner steps into an elevator and pushes the button marked 1. “Everything quiet on the first floor?” she asks the security guard beside her. The woman looks at her with a knowing smile. “We’ll see. It’s subject to change minute to minute.” Probably nothing truer has ever been said of 800 Broadway, which is shorthand for the Hamilton County Juvenile Court. About 30,000 new cases are heard here each year, all of them involving that segment of the population whose brains are still under development, along with their driving ability, decision-making skills and manners. It’s why, brought into court in handcuffs and shackles, a female teenage defendant has been known to pass by a young court worker and whisper brightly, “I really like your skirt!” They may be facing charges for menacing, theft, marijuana, chronic truancy or assault – and they may have a rap sheet longer than the “Loyalty” tattoo running down their forearm – but they’re still kids. Which is what makes juvenile court a setting of both endless hope and latent despair. Wednesday morning, Judge John Williams’ docket started with a 16-year-old and 17-year-old charged in a Madisonville killing, and a 13-year-old charged with reckless homicide in the shooting death of his 13-year-old friend. His parents and grandfather watching red-eyed from chairs along the wall, the boy – hardly 5 feet tall – was led into court by a sheriff’s deputy. He curled forward in his chair, made eye contact with no one, and hung his head. In his chambers later, Williams says, “What I’m always struck by – even in some of the really violent cases – is that they’re so small.” A law student interning at the court says the first thing she notices is this: “They’re not scary – they may be intimidated, confused sometimes, but not scary.” Which cannot always be said of their parents. Court officials say parents’ support is crucial for helping a wayward kid. But some moms and dads don’t agree with that, or seemingly anything else a judge or magistrate tells them. Thursday, as she was supposed to be answering visiting Judge Sylvia Hendon’s questions about why her 16-year-old son was living unsupervised and apart from his family, a mother rolled out her own list of complaints and rebuttals instead. Asked to release the boy on probation to his mother, Hendon said he should have been there for the last two years. “Excuse me? There’s no point in coming in here. We’ll be fighting this,” his mother said, and left the courtroom. Moments later, glancing back to see his mother, the 16-year-old saw only her empty chair. Wednesday, when Williams gave her 14-year-old son additional days in juvenile detention and probation instead of sending him to the state’s youth prison, Regina Owens sat nodding her head and wiping tears from her eyes. 
When Williams asked her to let the court know if there were infractions, she said, “I’ll tell on him in a heartbeat.” Afterward, in the hallway outside, Owens’ older son, Demetrius Harris, remembered being in the same courtroom five years ago, when he was a 14-year-old who had accidentally shot and wounded a friend. He could have been sentenced to 17 years in prison but served a short term in detention instead. He still remembers then-Judge Thomas Lipps’ admonitions. They stuck. “I got my GED, I got a job and I’m going to the Marines next year – and I never came back here,” he says. It’s the best outcome the juvenile court judges and magistrates could hope for and the reason that, faced with packed dockets, they still take time to advise, warn, encourage, scold and sometimes cajole the young people before them. “If you loved your mom, would you walk around with a 45 (caliber handgun)?” Williams asks one young defendant facing weapon charges. “I’m not going to put up with this, with guns. Do you understand if you don’t listen, what I’m going to do? I’m going to send you to DYS (the Department of Youth Services). I’m not going to let you fail.” Later, Williams says that if only two out of 100 youth heed his warnings, “I’m still always going to say it.” From his corner of the courtroom, Williams’ bailiff, John Englert, says the power of juvenile court is that the youths who come before it still have time to change. “Sometimes the judge only has a few minutes to talk to them, but sometimes that’s the minute that works.” ■
Peter Kern (13 February 1949 - 26 August 2015) was an Austrian actor, movie director, screenwriter and producer. He appeared in more than 70 movies from 1957 onwards and directed about 25 movies. He starred in the 1978 movie Flaming Hearts.
<?php

namespace Kanboard\Subscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Kanboard\Core\Security\AuthenticationManager;
use Kanboard\Core\Session\SessionManager;
use Kanboard\Event\AuthSuccessEvent;
use Kanboard\Event\AuthFailureEvent;

/**
 * Authentication Subscriber
 *
 * @package subscriber
 * @author Frederic Guillot
 */
class AuthSubscriber extends BaseSubscriber implements EventSubscriberInterface
{
    /**
     * Get event listeners
     *
     * @static
     * @access public
     * @return array
     */
    public static function getSubscribedEvents()
    {
        return array(
            AuthenticationManager::EVENT_SUCCESS => 'afterLogin',
            AuthenticationManager::EVENT_FAILURE => 'onLoginFailure',
            SessionManager::EVENT_DESTROY => 'afterLogout',
        );
    }

    /**
     * After Login callback
     *
     * @access public
     * @param AuthSuccessEvent $event
     */
    public function afterLogin(AuthSuccessEvent $event)
    {
        $this->logger->debug('Subscriber executed: '.__METHOD__);

        $userAgent = $this->request->getUserAgent();
        $ipAddress = $this->request->getIpAddress();

        $this->userLockingModel->resetFailedLogin($this->userSession->getUsername());

        $this->lastLoginModel->create(
            $event->getAuthType(),
            $this->userSession->getId(),
            $ipAddress,
            $userAgent
        );

        if ($event->getAuthType() === 'RememberMe') {
            $this->userSession->validatePostAuthentication();
        }

        if (session_is_true('hasRememberMe')) {
            $session = $this->rememberMeSessionModel->create($this->userSession->getId(), $ipAddress, $userAgent);
            $this->rememberMeCookie->write($session['token'], $session['sequence'], $session['expiration']);
        }
    }

    /**
     * Destroy RememberMe session on logout
     *
     * @access public
     */
    public function afterLogout()
    {
        $this->logger->debug('Subscriber executed: '.__METHOD__);

        $credentials = $this->rememberMeCookie->read();

        if ($credentials !== false) {
            $session = $this->rememberMeSessionModel->find($credentials['token'], $credentials['sequence']);

            if (! empty($session)) {
                $this->rememberMeSessionModel->remove($session['id']);
            }

            $this->rememberMeCookie->remove();
        }
    }

    /**
     * Increment failed login counter
     *
     * @access public
     * @param AuthFailureEvent $event
     */
    public function onLoginFailure(AuthFailureEvent $event)
    {
        $this->logger->debug('Subscriber executed: '.__METHOD__);

        $username = $event->getUsername();

        if (! empty($username)) {
            // log login failure in web server log to allow fail2ban usage
            error_log('Kanboard: user '.$username.' authentication failure');

            $this->userLockingModel->incrementFailedLogin($username);

            if ($this->userLockingModel->getFailedLogin($username) > BRUTEFORCE_LOCKDOWN) {
                $this->userLockingModel->lock($username, BRUTEFORCE_LOCKDOWN_DURATION);
            }
        } else {
            // log login failure in web server log to allow fail2ban usage
            error_log('Kanboard: user Unknown authentication failure');
        }
    }
}
West Punjab was a former province of Pakistan which existed from 1947 to 1955. The province covered much of the current Punjab province and the Islamabad Capital Territory, but excluded the former princely state of Bahawalpur. The capital was the city of Lahore, and the province had four divisions (Lahore, Sargodha, Multan and Rawalpindi). The province was bordered by the former Indian state of East Punjab to the east, the princely state of Bahawalpur to the south, the provinces of Balochistan and Sindh to the southwest, Khyber Pakhtunkhwa to the northwest, and Azad Jammu and Kashmir and Indian-administered Kashmir to the northeast. History The independence of Pakistan in 1947 led to the division of the old Punjab province into two new provinces. East Punjab, where most people were Sikh and Hindu, became part of the new nation of India, while the mainly Muslim West Punjab became part of the new nation of Pakistan. The name of the province was shortened to Punjab in 1950. West Punjab was merged into the province of West Pakistan in 1955 under the One Unit policy announced by Prime Minister Chaudhry Mohammad Ali. When that province was dissolved, the area of the former province of West Punjab was combined with the former state of Bahawalpur to form a new Punjab Province. People At independence there was a Muslim majority in West Punjab, with significant minorities of Hindus and Sikhs. Nearly all of these minorities left West Punjab for India, to be replaced by large numbers of Muslims fleeing in the opposite direction. The official language of West Punjab was Urdu, but most of the population spoke Punjabi, which was written in the Shahmukhi script. Government The offices of Governor of West Punjab and Chief Minister of West Punjab lasted from 15 August 1947 until 14 October 1955. The first Governor was Sir Francis Mudie, with Iftikhar Hussain Khan as the first Chief Minister. Both offices were abolished in 1955, when the province of West Pakistan was created. The last Governor of West Punjab, Mushtaq Ahmad Gurmani, became the first Governor of West Pakistan. Related pages Old Punjab region Western Punjab (Pakistan) Punjab (India) Haryana Bahawalpur (princely state)
The Size Dependence of Phytoplankton Growth Rates: A Trade-Off between Nutrient Uptake and Metabolism. Rates of metabolism and population growth are often assumed to decrease universally with increasing organism size. Recent observations have shown, however, that maximum population growth rates among phytoplankton smaller than ∼6 μm in diameter tend to increase with organism size. Here we bring together observations and theory to demonstrate that the observed change in slope is attributable to a trade-off between nutrient uptake and the potential rate of internal metabolism. Specifically, we apply an established model of phytoplankton growth to explore a trade-off between the ability of cells to replenish their internal quota (which increases with size) and their ability to synthesize new biomass (which decreases with size). Contrary to the metabolic theory of ecology, these results demonstrate that rates of resource acquisition (rather than metabolism) provide the primary physiological constraint on the growth rates of some of the smallest and most numerically abundant photosynthetic organisms on Earth.
The British Rail Class 100 diesel multiple units were built by Gloucester Railway Carriage & Wagon Company Limited from 1956 to 1958, designed and built in collaboration with the Transport Sales Dept. of T.I. (Group Services) Ltd. 100 BREL locomotives
What's next for dysfunctional Titans family? I think it's safe to say the Adams family's Thanksgiving dinner doesn't exactly resemble a Norman Rockwell painting. The events of the past week have given us insight into the level of dysfunction surrounding the extended family of late Titans founder Bud Adams. We knew it was a mess. But we had no idea how big a mess. There's less backstabbing in a Shakespeare tragedy. A couple of weeks ago, Tommy Smith seemed comfortable in his role as president/CEO of the Titans. He even made the Nashville sports radio rounds and talked about how things were going to improve with both the Titans' on-field and off-field operations. A few days later, he was the target of a palace coup that effectively ended his involvement with the organization beyond the one-third stake his wife, Susie Adams Smith, has in the team. He went from being the leader of the franchise to persona non grata. Those with working knowledge of the situation are not all that surprised. They say it was only a matter of time before the family turned on itself. Apparently, matriarch Nancy Adams was able to maintain some degree of order within the family until her death in February 2009. After that, things fell apart. For one thing, it's no secret that Smith and his father-in-law were not close during Bud's later years. There are stories out there that the two would not dine within eyeshot of each other. Look, family squabbles are nothing new. All of us, regardless of our tax brackets, have issues. Normally, though, our dirty laundry is not aired so publicly. Then again, most of our families are not in possession of an NFL franchise. Smith's hasty "retirement" with the Titans was the result of a family feud, one that very well could lead to the sale of the team in the not-too-distant future. As long as Bud Adams was alive, the franchise was off-limits to potential buyers. It would have to be pried from his cold, dead hands. Seventeen months after his death, it's just another corporate entity that is jointly held by three heirs and might be available if the price is right. And that price could be in the neighborhood of $2 billion. It's interesting that the family members apparently could agree on one thing: Bringing back Steve Underwood to oversee day-to-day operations. Underwood long was a trusted lieutenant to Bud Adams. He was his chief legal counsel and one of his confidantes. Beyond that, Underwood is a smart man with great people skills and remarkable common sense — characteristics this organization desperately lacks. I suspect Underwood was brought out of retirement with two objectives: Stabilize things in the short term and get the franchise positioned for sale in the longer term. For his part, Underwood told Tennessean beat writer Jim Wyatt the team is not for sale. Fine. But with so much instability, I contend that everything is subject to change. And that includes ownership of the team. Besides, if you're trying to get top dollar for something, what do you do? Say it's not for sale, of course. As far as potential buyers are concerned, can we please dismiss this idea that Jimmy Haslam somehow might wind up owning the Titans? This just in: Haslam owns the Cleveland Browns and is running them from his base of operations at Pilot/Flying J headquarters in Knoxville. Even so, some people simply can't let go of the idea that the owner of one NFL franchise could swap it for another. This isn't like trading football cards. 
It's not quite as simple as giving up a couple of draft picks and a player to be named later in order to exchange one team for another. Ownership by Haslam would only take this franchise from bad to worse. On Haslam's watch, the Browns have been a dumpster fire. It's one of the very few NFL organizations that is more poorly run than the Titans. Besides, Haslam is worried about two other things right now: What is he going to do about Johnny Manziel? Is he going to escape the federal investigation of Pilot/Flying J relatively unscathed? Not necessarily in that order. David Climer's columns appear on Wednesday, Friday, Sunday and Monday. Reach him at 615-259-8020 and on Twitter @DavidClimer.
Euskirchen (Kölsch: Öskerche) is a town in North Rhine-Westphalia, Germany. It is the capital of the district (Kreis) of Euskirchen.
Acute adaptation in adrenergic control of lipolysis during physical exercise in humans. During prolonged exercise, the free fatty acids derived from adipocyte lipolysis are the principal fuel utilized by muscles. In humans, the lipid mobilization from adipose tissue is mainly regulated by insulin and catecholamines: the latter hormones have both beta-adrenergic stimulatory and alpha 2-adrenergic inhibitory effects on lipolysis. The aim of this study was to determine whether rapid alterations in the peripheral action of the regulatory hormones occur during physical work and whether they are of importance for the enhanced lipid mobilization. The acute effects of exercise on the regulation of lipolysis were investigated in isolated adipocytes removed from the gluteal region of 14 healthy volunteers before and immediately after the exercise period. Exercise induced a 20-35% significant increase in the lipolytic response to noradrenaline alone and in combination with the selective alpha 2-antagonist yohimbine and to the pure beta-agonist isoproterenol in isolated adipocytes. The antilipolytic effects of both the alpha 2-agonist clonidine and insulin were unaffected by exercise. Exercise did not influence the specific adipocyte receptor binding of 125I-cyanopindolol (beta-adrenergic receptor), [3H]yohimbine (alpha-adrenergic receptor), and mono-125I-[Tyr A14]insulin (insulin receptor). In conclusion, a single period of submaximal exercise increases adipocyte lipolytic responsiveness to catecholamines through an increased beta-adrenoceptor-mediated effect at steps distal to the receptor binding. Thus the increased peripheral action of catecholamines may be of importance for the observed enhanced lipid mobilization during physical work.
The Mesostigmatophyceae is a type of basal green algae. They are usually found in fresh water. The Mesostigmatophyceae can be placed as a sister group to all green algae, or as sister to all Streptophyta. Categorization There are many ways to place the Mesostigmatophyceae into a category. One popular way is to have it contain only one genus, Mesostigma. A different way is to have two clades: Chlorokybus and Spirotaenia. It is also common to put Chlorokybus in a different class, Chlorokybophyceae.
Kinetic hindrance of Fe(II) oxidation at alkaline pH and in the presence of nitrate and oxygen in a facultative wastewater stabilization pond. To better understand the dynamics of Fe²⁺ oxidation in facultative wastewater stabilization ponds, water samples from a three-pond system were taken throughout the period of transition from anoxic conditions with high aqueous Fe²⁺ levels in the early spring to fully aerobic conditions in late spring. Fe²⁺ levels showed a highly significant correlation with pH but were not correlated with dissolved oxygen (DO). Water column Fe²⁺ levels were modeled using the kinetic rate law for Fe²⁺ oxidation of Sung and Morgan [5]. The fitted kinetic coefficients were 5 ± 3 × 10⁶ M⁻² atm⁻¹ min⁻¹; more than six orders of magnitude lower than typically reported. Comparison of four potential Fe redox couples demonstrated that the ρε was at least 3-4 orders of magnitude higher than would be expected based on internal equilibrium. Surprisingly, measured nitrate and DO (when present) were typically consistent with both nitrate (from denitrification) and DO levels (from aerobic respiration) predicted from equilibrium. Although the hydrous Fe oxide/FeCO₃ couple was closest to equilibrium and most consistent with the observed pH dependence (in contrast to predicted lepidocrocite), Fe²⁺ oxidation is kinetically hindered, resulting in up to 10⁷-fold higher levels than expected based on both kinetic and equilibrium analyses.
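For context, the Sung and Morgan rate law cited above is usually written in the following form; this is quoted from the general literature on abiotic Fe(II) oxygenation rather than from this abstract, so treat the exact expression as an assumption about what the authors fitted:

\[
-\frac{d[\mathrm{Fe(II)}]}{dt} = k\,[\mathrm{Fe(II)}]\,[\mathrm{OH^-}]^{2}\,P_{\mathrm{O_2}}
\]

This form explains the units of the fitted coefficient (M⁻² atm⁻¹ min⁻¹: inverse molarity squared for the two hydroxide terms and inverse atmosphere for the oxygen partial pressure) and, through the second-order dependence on [OH⁻], why the oxidation rate is so sensitive to pH.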
A Travelcard is a ticket used for most transport in London, including buses and the underground. The ticket is issued by Transport for London and National Rail and can be used on the services of either. History Before Travelcards were used, travellers in Greater London had to pay every time they changed between London Transport's bus and Underground services. This was a problem as it led to delays while purchasing the next ticket. One ticket for all transport in London was one of the key promises made in 1981 by the newly-elected Labour Greater London Council, headed by council leader Ken Livingstone. The policy, with the slogan "Just The Ticket", meant that only one ticket was needed for London Transport bus and Underground services. They also reduced the price of transport. The price cut was then ruled illegal, but the one pricing scheme was mostly a success and was extended during the 1980s and 1990s as new transport routes were opened in London. By 1985, there were two travelcards: the Travelcard itself, which covered only London Buses and the Underground, and the Capitalcard, which covered most bus, Underground and local British Rail (BR) services. The Travelcard replaced the original Capitalcard, while including its greater availability, around 1989. The original "zonal" system was mainly in Greater London. The zone areas used both letters and numbers for the outermost zones such that bus availability ignored the letter while BR and Underground availability depended also upon the letter(s) - e.g. a Travelcard or Capitalcard valid in zone 3a (but not 3b or 3c) was valid on buses in zones 3a, 3b and 3c but only in zone 3a when used on the railway services. Transport modes A Travelcard entitles the holder to use the following modes of transport within Greater London: London Buses, including some services that terminate outside of Greater London London Underground London Overground Docklands Light Railway National Rail except for the Heathrow Express Tramlink Travelcards also entitle the holder to a 33% discount on some London River Services.
By Share Blog Roll Happy Hour with a Big Mama Jun 07th 2012 Delray has always been my “downtown,” but the beach up here in Ocean Ridge is what tends to ground me. Especially now, in the summer, when it’s perfect for swimming, the days are long, and turtles are nesting. We’ve always had a high number of turtle nests, and I always go on turtle walks this month—usually after 10 p.m. and always when it’s a little murky out there, salt smelling, turtle-y, as I like to call it. This summer I’ve been lazy and I haven’t been once. Each morning I see all the new nests and I swear I’m going to go that night but it just has not happened. I am too deep into a book, or I’ve found one lone “Criminal Minds” rerun I have never seen before, or I’m scoring a pair of Jumbu adventure shoes on Zappo’s. Until last weekend when the phone rang at about 6:30 p.m., right before Brian Williams and right before I was about to make a ceremonial Tito’s martini (shaken, not stirred, of course.) “There’s a turtle on the beach,” my neighbor Lynn said. “Now??” I said, having almost never seen a turtle laying her eggs in daylight. So I dropped everything and walked to the end of my street, and there were all my neighbors, a couple of Ocean Ridge police keeping everyone at bay and a very handsome leatherback turtle deeply immersed in laying her eggs. Watching this for the umpteenth time is sort of like watching paint dry; it takes for-e-ver for her to dig the nest, drop the eggs, bury the eggs, cover the nest and then manage to actually turn that gi-normous body around with those awkward flippers kind of uselessly sweeping out plumes of sand. By the time she was ready to try to drag herself off the nest, I’d missed Brian Williams, and my martini was talking to me from my kitchen counter up the street. I was ready for this to be over. And then she began that that lurching slow move toward the ocean. A giant heave, then a heavy pause, then another lurch toward the shoreline. That’s when I couldn’t take my eyes off her, the way she was pulled to the water, and how she labored over every foot of sand. And then she was in, the massive shell washed by the tide, then slipping deeper. The last we saw of her was her head rising out of the water as she swam away, the late sun glancing off the top of her shell. So. There it was. Just another Friday night in a South Florida summer. The kind of happy hour you almost never get to have.
Jean-François de Galaup, comte de La Pérouse (1741–1788) was a French navigator who explored the Pacific Ocean. He died when his ships were wrecked near Vanikoro, in the Santa Cruz Islands, in 1788. He met the English when they arrived in Australia with the First Fleet in January 1788. After a short stay in Botany Bay, he sailed away into the Pacific.
Abstract A method for investigating the nature of thermally activated relaxations in terms of their cooperative character is tested in both polymer and low molecular weight crystal systems. This approach is based on analysis of the activation entropy in order to describe thermally activated relaxations. The betaine arsenate/phosphate mixed system of low molecular weight crystals was selected for investigation because pure compounds of this system show ferro-/antiferroelectric phase transitions and the mixed crystals undergo different kinds of relaxation processes involving both dipole–dipole and dipole–lattice interactions. The polymer chosen was a side chain liquid-crystalline polysiloxane, which shows the β-relaxation characteristic of disordered systems and amorphous materials. The cooperative versus local character of the relaxations is described in terms of “complex” and “simple” relaxations based on calculations of the activation entropies. The initial assumptions of the theory, as well as the resulting equations, were found to be applicable to the systems studied.
Archer is a city in O'Brien County in the state of Iowa in the United States. Around 120 people were living in Archer as of 2000. Cities in Iowa
Hey everyone! Before you continue, no, this isn't a post about getting MonoGame to work with C++! Now that everyone didn't run away, I have a question that i've been debating for a few days now. I've been programming for 4-5 years now, and the entire time I have had game programming in my sights. Only recently I have been able to start development on a full game, not a sample game you make from a book where the game runs through and ends. I mean a full game, like one you would buy. I've been working on it for a couple weeks using XNA 4.0, even though I have been aware that XNA is basically dead in Microsofts eyes. I was using this as a learning experience, and I did learn a lot. Now here comes the question. I have two options here to continue, because I feel like as though I know i'm learning, continuing with XNA is counter productive if I were to want to deploy and sell my games (key word, IF. I know not everyone is going to be able to, or want to deploy their games to sell). I could port my game over to MonoGame, which I looked through and scoured the internet for info on. The problem here, is that since it's still in development, there is no content pipeline that you get with XNA, which was a huge part of it. I know MonoGame is basically the go-to thing for XNA devs as well. My other option is to switch to C++. I did what every hobby dev does, and started my tenure of programming trying to learn C++, so I do have SOME experience. (Some = getting a sprite to move with DirectX after I learned the language itself of course). Another thing to keep in mind is that I do all of this by myself, and do not work in a team. More thinking about the future, would it be worth it to port my game to MonoGame, continue developing it, and deal with all the tricky workarounds that MonoGame has as of now, (Version 3.0 or 3.1, can't remember which), or would it be better to just start learning C++ again, and get back into that for game development? Before everyone comes out with the, "no language is right for every situation, choose what works for you, etc", i'm not looking for advice on a personal level. I'm looking for advice at an industry level. Basically what i'm asking is, would it be worth it for a one man programming team to deal with the MonoGame stuff, or would it be worth it to make the switch back to C++? Taking the time to learn the language isn't a problem (I'm a third year student at college, I have some time before the real world!) What would be more advantageous in the long term is what I want to know. I'm only looking for opinions here, as I am aware of how many factors can affect a decision like this. I'm not looking for anyone to tell me how hard one is, or how easy another is, as i've had at least some experience in both. Not saying i'm a pro, far from it actually. Just stuck in this tough decision that I can't figure out which side to go with! If it helps, I only work in 2D. Not really working on any 3D games, at least in the foreseeable future. Thanks everyone! I'm in a similar situation to you in that I'm considering switching to MonoGame. However, C++ is out of the question for me because I've been working on my game for about a year in C#. Fortunately for you, your game is in the very early stages of development so switching to either shouldn't be much trouble. If MonoGame doesn't have a content pipeline, does it have something similar? How will you import your assets with it? Furthermore, if you switch to C++, which engine will you use? 
MonoGame strives to be an open source XNA 4.0, so if you're more familiar with game development in XNA then chances are you're more familiar with C#, so I'd stick with that language. I have also heard that porting to MonoGame from XNA is pretty easy, but I don't have first-hand experience doing it so don't take my word for it. Overall, I think you should look at the big picture. Which platforms would you want to sell your game on? Is it a PC game or a mobile game? Regarding languages: C++ is a powerful language, but it takes a while to get used to. Since it's your very first original game, I'd stick with C# since Visual Studio is a great developer tool and you'll have more problems with C++'s intricacies. Basically what i'm asking is, would it be worth it for a one man programming team to deal with the MonoGame stuff, or would it be worth it to make the switch back to C++? Yes and yes. I know you said you didn't want that advice, but unfortunately it's the correct answer. Either of those are viable options with advantages and disadvantages, and at the end of the day the biggest factor is going to be your own personal preference. Small teams and individual developers have been successful with both C# and C++, and both languages are more than sufficient for what you're trying to do. Personally, given you're significantly more familiar with and also making progress with C# and XNA I would tend towards sticking with C# and learning to use MonoGame when or if it becomes necessary. I would suggest you simply stick with XNA for now, as for the time being it is still perfectly usable and the functionality and usability of MonoGame is being improved all the time to ensure switching is as painless as possible. You can then make the change only once you actually need to do so. For reference, it's my understanding that most people currently just use the XNA content pipeline along with MonoGame but that the MonoGame replacement is coming along nicely and should be able to take over that role sometime in the near future. You can still continue to use XNA if you like it - It won't stop working or refuse to install on people's computers, it's just not going to be developed any further. If you want to get it onto Windows 8 or any other unsupported platform, you could use MonoGame to port it over when you need to. For MonoGame, you just need to use XNA's content pipeline to compile your assets before adding them to your project. There's also a Content Compiler project on CodePlex that might simplify things, but I haven't tried it yet. If you're comfortable using those frameworks, and enjoy it, there's no harm in continuing to do so. Any concepts you learn in one language usually transfer to a new one, so you're not going to "waste" any time now even if you change later: When you're certain you'd rather be working with C++, and feel you're ready, you can do so whenever you want. I hate to 'leave it up to you' since you specifically asked to avoid that, but I don't know that anyone's opinions would be relevant to your decision, the big picture is the one you're painting yourself. I could advise you to go with C++, just because that's what Epic used for the Unreal engine, but that's irrelevant if you have no interest in working there. (As well as a poor basis for advice in the first place!) Basically, you got caught up in having to make an unexpected decision, while feeling like you don't have enough knowledge to do so. 
It's not as dire as it seems, and truthfully in the longterm, you'll be fine no matter which way you go. It's really only a matter of which feels more comfortable for you.
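One practical note on the content-pipeline question raised above: while MonoGame's own pipeline tooling matures, raw assets can also be loaded at run time instead of through compiled .xnb files. The sketch below shows the Texture2D.FromStream approach, which works in both XNA 4.0 and MonoGame; the class name, field name and "Content/player.png" path are placeholders for illustration only.

    using System.IO;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class Game1 : Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        Texture2D playerSprite;   // hypothetical asset

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);

            // Load a PNG directly from disk instead of a compiled .xnb file.
            // Unlike pipeline-processed textures, FromStream does not
            // premultiply alpha, so blending may need adjusting.
            using (var stream = File.OpenRead("Content/player.png"))
            {
                playerSprite = Texture2D.FromStream(GraphicsDevice, stream);
            }
        }
    }

This keeps a 2D prototype unblocked either way: if the project later moves back to a working pipeline, the same assets can be loaded again through Content.Load<Texture2D> with minimal changes.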
Survivor Series (2013) was a professional wrestling pay-per-view show made by WWE. It was held on November 24, 2013 at the TD Garden in Boston, Massachusetts. It was the twenty-seventh Survivor Series event held by the WWE.
After having typically appeared in the very hallowed pages of Baseball Think Factory, Dan Szymborski’s ZiPS projections have been released at FanGraphs the past couple years. The exercise continues this offseason. Below are the projections for the Seattle Mariners. Szymborski can be found at ESPN and on Twitter at @DSzymborski. Other Projections: Atlanta / Baltimore / Cincinnati / Kansas City / New York AL / Philadelphia / Pittsburgh / Texas / Toronto. Batters The arrival of Jerry Dipoto in Seattle has been accompanied by considerable turnover within the club’s roster — some of which is represented in the major-league depth chart. Nori Aoki, Chris Iannetta, and Adam Lind all receive projections in the one-win range. Not unexpected, that, but also not a source of great inspiration to the people of Seattle. Then there’s the case of Leonys Martin. Rendered more or less redundant in Texas, the center fielder is projected to produce 2.5 wins in roughly two-thirds playing time, one of the best marks among the club’s position players. Elsewhere, the strengths of the club remain the same. Robinson Cano, Nelson Cruz, and Kyle Seager are all forecast to record three or more wins. This is particularly encouraging for Cano. After posting a 2.1 WAR in nearly 700 plate appearances this past season, Cano is expected to approach the four-win threshold in 2016. Pitchers If one is inclined to put stock in ZiPS, one is inclined also to believe that the version of reality in which Hisashi Iwakuma signs officially with the Dodgers is an unfortunate reality for the Mariners. After Felix Hernandez, the recipient (predictably) of a strong projection, the next best non-Iwakuma mark belongs to left-hander Wade Miley, who barely passes the one-win threshold. The numbers after that aren’t particularly encouraging, either. With regard to the bullpen, one finds that it features almost an entirely different cast than that which ended the 2015 season. No coincidence, that, in light of how the club’s relief corps finished 26th in the majors by WAR. The prognosis for the 2016 version of the bullpen isn’t wildly encouraging, but suggests an improvement over its predecessor. Bench/Prospects While Dan Szymborski’s computer algorithms might seem pessimistic with regard to some of the Mariners’ newest acquisitions, one player for whom that’s not the case is outfielder Boog Powell. The author of precisely zero major-league plate appearances, Powell is forecast by ZiPS to produce 1.5 wins in roughly 500 plate appearances on the strength of reasonable plate-discipline numbers and slightly above-average center-field defense. Among pitchers, the returns aren’t quite so promising. New acquisition Tony Zych is projected for roughly half a win. Depth Chart Below is a rough depth chart for the present incarnation of the Mariners, with rounded projected WAR totals for each player. For caveats regarding WAR values see disclaimer at bottom of post. Click to embiggen image. Ballpark graphic courtesy Eephus League. Depth charts constructed by way of those listed here at site and author’s own haphazard reasoning. *** *** *** *** *** *** Disclaimer: ZiPS projections are computer-based projections of performance. Performances have not been allocated to predicted playing time in the majors — many of the players listed above are unlikely to play in the majors at all in 2016. ZiPS is projecting equivalent production — a .240 ZiPS projection may end up being .280 in AAA or .300 in AA, for example. 
Whether or not a player will play is one of many non-statistical factors one has to take into account when predicting the future. Players are listed with their most recent teams unless Dan has made a mistake. This is very possible as a lot of minor-league signings are generally unreported in the offseason. ZiPS is projecting based on the AL having a 3.93 ERA and the NL having a 3.75 ERA. Players that are expected to be out due to injury are still projected. More information is always better than less information and a computer isn’t what should be projecting the injury status of, for example, a pitcher with Tommy John surgery. Regarding ERA+ vs. ERA- (and FIP+ vs. FIP-) and the differences therein: as Patriot notes here, they are not simply mirror images of each other. Writes Patriot: “ERA+ does not tell you that a pitcher’s ERA was X% less or more than the league’s ERA. It tells you that the league’s ERA was X% less or more than the pitcher’s ERA.” Both hitters and pitchers are ranked by projected zWAR — which is to say, WAR values as calculated by Dan Szymborski, whose surname is spelled with a z. WAR values might differ slightly from those which appear in full release of ZiPS. Finally, Szymborski will advise anyone against — and might karate chop anyone guilty of — merely adding up WAR totals on depth chart to produce projected team WAR.
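To make Patriot's point concrete, here is a quick worked example of why ERA+ and ERA- are not mirror images. The definitions below are the standard ones (ignoring park adjustments) and are added here for illustration rather than taken from the article:

\[
\mathrm{ERA^{+}} = 100 \times \frac{\mathrm{lgERA}}{\mathrm{ERA}}, \qquad
\mathrm{ERA^{-}} = 100 \times \frac{\mathrm{ERA}}{\mathrm{lgERA}}
\]

A pitcher with a 3.00 ERA in a 4.00-ERA league has an ERA+ of 133 (the league's ERA is 33% higher than his) but an ERA- of 75 (his ERA is 25% lower than the league's); the two values are not symmetric around 100, which is exactly the distinction Patriot draws.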
Land vertebrates have feet. The organization of their feet varies. Two factors come into play: weight and lifestyle. 1. Plantigrade: heavy animals usually put their heels down to support their weight. 2. Unguligrade: large animals with hooves. 3. Digitigrade: usually reserved for lighter animals; it means to walk on the toes. Weight Most heavy animals walk on four legs. There are some exceptions. Some really heavy birds move or moved on two legs. Moas are one example. It is quite clear that an ostrich is a very effective bird running on two legs. Birds in general are an example of the change from four legs (originally, as dinosaurs) to two legs. Humans are the end product of changes that started in arboreal apes. The study of such things is called "comparative foot morphology".
The end goal of the proposed program is to provide training through a truly multidisciplinary design course where engineering students and physical therapist students at the University of North Florida work together to design, fabricate, and test adaptive technology targeting postural control, mobility, social participation, and quality of life for children with developmental disabilities. As part of this program students will gain an increased appreciation of the diverse roles and contributions from different disciplines in the context of advanced rehabilitation technology development for pediatric applications. This program will enhance students' training through a hands-on, interprofessional, and translational design experience focused on complete working prototypes that meet clinical and community needs. The specific program aims are: (1) To identify and describe various assistive technology solutions for functional limitations for a variety of developmental disabilities. (2) To develop skills to function as a member of a multidisciplinary team, including effective communication across disciplines and people-first language, behavior and sensitivity during clinical observations with clients. (3) To identify, formulate, and solve engineering problems utilizing a family-centered functional approach to the assessment of assistive technology needs, including cultural uniqueness as applied to ethnic and cultural minorities. (4) To understand the professional, legal, and ethical responsibility of interacting with a client in a professional capacity related to providing assistive technology devices and services. (5) To describe the process utilized to develop product concepts, specifications, prototyping, testing, and fabrication of an assistive technology finished product. (6) To compare and contrast the gaps between engineering and rehabilitation theory and reality.