CR7 Legacy

Cristiano Ronaldo – The legacy of a ‘Champion’ revealed
Goal, 21:34 6/13/18

The Real Madrid and Portugal star is defined as much by his determination as by his astonishing achievements.

It’s the legacy that a champion leaves behind that people remember, not just the trophies and medals that garnish their cabinet. In this regard, Cristiano Ronaldo has set a precedent that only a few athletes around the world can match, let alone beat.

The 33-year-old superstar, in his ninth season at Real Madrid, has continued to shine. Records continue to tumble for the Funchal-born footballer, who already sits at the summit of Real Madrid’s all-time scoring list, the gap constantly growing, and he continues to score gravity-defying goals that make audiences gasp in awe.

But do great goals make him a champion? Does smashing records and winning trophies make Ronaldo a champion? The work of a champion comes before the feat itself.

Be it an intense training regime in the gym or a big matchday at the Bernabeu, Ronaldo’s commitment to excellence is unmatched. His drive has been fuelled by his desire to win silverware for the team, and he will go the distance to achieve his aim. To do so, the Portugal captain has developed his very own winning formula. His workout regimen sees him hit the gym for three to four hours every day.
That is coupled with an equally rigorous spell on the training pitches of Real Madrid, where he burns the turf with 25-30 minute running sessions.

Ronaldo’s sessions in the gym blend cardiovascular and weight-oriented exercises. He also likes to mix in high-intensity activities, such as fast sprints between exercises or drills, to keep blood flowing throughout the body. As much as he believes in the power of exercise, he also believes that relaxing and giving your muscles much-needed rest allows one to develop a better physique. He advocates training with a gym partner to push oneself to maximum potential, and believes that exercise, no matter how small, can fit into a daily routine without hassle.

He follows a pre-planned diet rich in protein. He particularly likes fish, and his favourite dish is bacalhau a braz, a mixture of cod, onions, thinly sliced potatoes and scrambled eggs.

On the pitch, too, Ronaldo’s ability is magnified by his vast array of skills, his blazing pace and his excellent decision-making, which has seen him attain a very high conversion rate. This is where he proves himself an equal, if not greater, adversary to Lionel Messi. This season, the Funchal-born attacker has scored more than 40 goals, proving that age is just a number. Speed, skill, strength and stamina are the biggest assets of the reigning Ballon d’Or winner, and his application of these qualities is incomparable. His ruthlessness in front of goal sets a ceiling no other footballer may be able to reach. This is particularly important given the weight Portugal will place on him to bring home the elusive World Cup trophy this June.
The Selecao have heaped their hopes on their star marksman, and he promises to deliver given the exceptional form he is in. Ronaldo’s endurance is depicted in full flow in the latest commercial for Clear MEN shampoo, which shows both his physical stamina and his skill on the ball. With Portugal set up to maximise his ability, Ronaldo will depend on himself to grow the wings he needs to fly high and earn his team the trophy, much as he did at the European Championship.

It isn’t solely imagination that works wonders for the footballers of tomorrow. Ronaldo possesses two of the most important habits for surpassing his competitors and meeting his personal aims: dedication and persona. One of Cristiano’s many intriguing facets has been his lifelong focus on sharpening his attributes to fit the wider picture, for his teammates to harness and for the world to relish. Ronaldo sets such high standards for himself that he is his own champion.

Sir Roger Moore And Friends Join Forces For GivingTales

Some of the world’s leading actors have today joined forces in a new children’s educational project called GivingTales. It aims to produce engaging and entertaining versions of popular children’s fairy tales while helping to teach and educate children worldwide. As part of the company’s mission to educate and support children, GivingTales kft is committing 30% of its revenue to Unicef UK to help children around the world.

Video: GivingTales – Behind The Scenes

Developed in association with Sir Roger Moore, a UNICEF Goodwill Ambassador, GivingTales features the voice talents of world-renowned actors including Ewan McGregor, Unicef UK Ambassador, Stephen Fry, and Dame Joan Collins. Hans Christian Andersen’s timeless fairy tales have been modernised, condensed and paired with vivid illustrations that capture the universal and timeless life lessons synonymous with Andersen’s stories.

“I’ve been a long-time admirer of Hans Christian Andersen’s work, and I think it’s a wonderful collection of fairy stories for children and adults alike,” said Sir Roger Moore.

Many of the themes in Hans Christian Andersen’s stories have relevance for children today, such as bravery, selflessness, and compassion.

“The Ugly Duckling is the primal story about bullying and self-belief – it is a marvelous tale,” said Stephen Fry, actor, screenwriter and author.

Available for free download starting today on Apple, Android, and Windows Phone 8 mobile devices, the app comes with The Princess and the Pea (Sir Roger Moore, KBE). Each additional story is available for download at $3.99. Three additional stories are available in the first series: The Emperor’s New Clothes (Dame Joan Collins, DBE), The Little Match Girl (Ewan McGregor, OBE) and The Ugly Duckling (Stephen Fry).

Today’s children are increasingly accustomed to consuming content in one short sitting. While paper books may be giving way to digital versions, classic fairy tales never grow old.
There is a real need to adapt traditional stories into shorter, animated versions so they can captivate and inspire another generation of young readers.

“Taking care not to lose the essence of what makes Hans Christian Andersen’s stories so great, GivingTales has condensed the stories down so they can be enjoyed in minutes, not hours. Using the voices of renowned actors gives them new life in a memorable and entertaining way,” said Jacob Moller, CEO of GivingTales.

“We’re overwhelmed by the initial support we’ve received, both from the celebrities affiliated with our project and from our ongoing relationship with Unicef UK. Together, we hope to make a difference in the lives of many children around the world,” said Klaus Lovgreen, Chairman, GivingTales.

GivingTales was developed by an award-winning team of illustrators and producers. The company’s founders have a history of developing digital entertainment content, having produced Top Ten apps for both the Apple App Store and Google Play. GivingTales is currently available in English, with additional stories, actors, and titles planned over the coming months.

The iPad app making life easier for people in public housing

Family and Community Services officer Roger Mclean talks through the Ivy app with Kate McDonnell. Ian Knighton/CNET

For millions around the world, public housing offers the promise of a much-needed roof overhead. But the reality of public housing can be grim, and problems that start small can often become bureaucratic nightmares. That might be a case of waiting weeks to get a broken door fixed or having to file repeated complaints about rowdy neighbours. But issues can be left to fester if councils ignore public housing tenants. And in some cases, as the world saw with the massive fire at London’s Grenfell Tower housing complex in 2017, that can have tragic consequences.

While governments can be notoriously slow to adapt, one community housing provider is using tech to catch potential problems before they become big issues, making life easier for some of the most vulnerable people in society. That solution is the Ivy app.

Created by the NSW Department of Family and Community Services (FACS) in Australia, this iOS app was developed to cut out the endless paperwork case workers and community housing residents need to complete to get basic things done. It lets case workers fill in property condition reports and take photos directly on an iPad, while also accessing family records, past incidents or safety issues, and recent rent and water bills. Residents can complete forms and make payments on the spot, without having to visit a FACS office or wait an age on the phone to get connected to a call centre. And it’s all done through an iPad, which holds records of all the properties and families a case worker deals with, letting them map out appointments and access any information with a tap of the screen.

With every FACS officer responsible for between 350 and 450 properties, the department was previously visiting only 30 percent of its public housing tenants in a given year. After the app was launched across the state in April 2018, the department conducted one third of its yearly visits — more than 20,000 interactions — in just 60 days.

Former FACS client services officer Roger Mclean helped develop the app and knows the problem faced by front-line public housing workers all too well. For each public housing visit he used to conduct, he says he would spend upwards of three hours printing out forms, rifling through case files and doing dry paperwork. For a person who got into the job to help people, the bulk of his time was spent on data entry. “It was horrible and very time consuming,” he says. “Now, we’re not rushing.”

With only an iPad in tow, case workers can now spend time actually speaking to tenants in their homes, where issues are easier to identify and difficult conversations can be conducted in privacy. For elderly residents and people living with a disability, the focus on in-home interactions is game-changing. “Before, we spent 100 percent of our time on 10 percent of our clients,” says Lance Carden, director of customer service and business improvement at FACS.

But for Carden, the biggest change has been a shift from putting out fires to actually engaging with people in the community who need it most. “We miss out on early intervention if we’re not visiting everybody. And we’re missing that social and human element.”

The Ivy app lets public housing residents pay bills, update records and get immediate referrals for help around their home. Ian Knighton/CNET

A simple tech update might seem like a no-brainer. But for Kate McDonnell, who lives in public housing with her five children in inner-city Sydney, the Ivy app has been a huge help. “Before, paperwork got lost … things were falling by the wayside,” she says. Case workers were “overloaded” with admin, and when she did actually get home visits, it was often a new case worker each time. Now, when she has issues, she doesn’t need to wrangle her two young children to get to a FACS centre while the other kids are in school — everything is done through the iPad. And when her case worker visits her house, “I know who they are.”

The shocking family member who is still a staunch supporter of Meghan

Meghan Markle may be finding it hard to find allies in the Royal Family apart from Prince Harry, and it is no secret how the Markle family feels about Meghan and her new Royal status. But despite all of that, there is apparently one family member whom Meghan can still control.

Meghan Markle could “get away with murder” when it comes to one particular family member, whom she could “twist round her little finger”, according to a royal biography. Reportedly, the Duchess of Sussex was doted on by her father, Thomas Markle Snr, while growing up, being much younger than his two other children, Samantha and Thomas Jnr. In fact, Samantha was already 17 years old when Meghan was born. According to the 2018 book ‘Meghan: A Hollywood Princess’, Meghan was the “apple of her father’s eye” and could do no wrong. There have been reports of Thomas Markle playing favourites with Meghan before.

Royal biographer Andrew Morton asserted that her half-sister Samantha “often” claimed Meghan teased her that she was the favourite child and would receive the most expensive gift at Christmas from their father. Meghan Markle has had a falling out with the Markle family since joining the Royal Family. Mr. Morton wrote: “Meghan went to stay with her mother and did not even visit her father’s house to pick up her mail.” He went on to write that it was a situation that perplexed her friends, as they knew that Meghan was the apple of her father’s eye and could get away with blue murder.

However, claims by Samantha Markle need to be taken with a pinch of salt, as she isn’t exactly a fan of the Duchess of Sussex. In fact, she has used Meghan Markle’s fame as a Royal to promote herself, cashing in by writing books on Meghan and making a media spectacle of herself.

UK govt rocked by Huawei leak scandal

Britain’s Prime Minister Theresa May gestures during a visit to the Leisure Box in Brierfield, Lancashire, on 25 April 2019, during campaigning for the local elections. Photo: AFP

Britain’s splintered government was rocked Friday by a growing scandal over who leaked news that prime minister Theresa May has conditionally allowed Chinese giant Huawei to help develop the UK’s 5G network.

The highly controversial decision was reportedly made at a meeting on Tuesday of Britain’s National Security Council, despite opposition from some ministers who are seen as potential candidates to replace May. National Security Council discussions are attended only by senior ministers and security officials, who first sign the Official Secrets Act committing them to keep all conversations private or risk prosecution. But The Telegraph newspaper broke the news late Tuesday that May had approved granting Huawei permission to build up “non-core” elements of Britain’s next-generation telecommunications network.

The United States is adamantly opposed to Huawei’s involvement because of the firm’s obligation under Chinese law to help its home government if asked, including in intelligence matters.

British media reported that Cabinet Secretary Sir Mark Sedwill — the country’s most senior civil servant — gave those present an ultimatum until Thursday afternoon to deny responsibility for the leak. Foreign Secretary Jeremy Hunt and Defence Secretary Gavin Williamson did so first. Hunt called it “utterly appalling” and Williamson described it as “completely unacceptable”. They were soon joined by interior minister Sajid Javid — who, like Hunt, is one of the frontrunners to succeed May as Conservative Party leader — and at least one other attending cabinet member. May herself said Thursday that she does not comment on National Security Council meetings.

Sky News reported Friday that the ongoing government inquiry into the source of the leak could become a formal criminal investigation. Former cabinet secretary Gus O’Donnell told BBC radio that the disclosure of National Security Council information was “incredibly serious” and a “complete outrage”. “This is really important for the country, these issues are massively important,” he said.

May’s government has been experiencing strains for months. Disputes over Britain’s stalled withdrawal from the European Union have seen several ministers resign. May herself has promised to step down as soon as she gets the first stage of Brexit over the line. The new extended deadline for the process is 31 October. Her commitment to quit has only fomented cabinet rivalries as various ministers jockey for position in a looming leadership race. May’s spokesman said Wednesday that a formal decision on Huawei would be made by June.

Houston Colleges Offer Food Scholarships to Help Students Ease Food Insecurity

Laura Isensee/Houston Public Media: Since it opened in January 2018, the student market at Texas Woman’s University has provided about 80 students – mostly in graduate programs – with 60 pounds of food a month.

On a recent Monday afternoon at Texas Woman’s University in the Medical Center, it was delivery day. It’s always a little bit of a surprise what arrives from the Houston Food Bank. Graduate student Torrey Alexis unpacked boxes and found lettuce for garden salads, a whole mixture of fruits and frozen sausages. “And bags of rice — awesome!” he said.

After class, Alexis, 24, will hand out maroon tote bags loaded with 30 pounds of groceries to dozens of fellow grad students. It’s part of his master’s project in nutrition: he’s collecting food diaries and surveys on students’ food needs. The market is also personal. Alexis takes home two bags of food for himself. “I’m going to say it has helped me a lot, because it’s a lot of money — like I’m an out-of-state student, so a lot of my fees goes to out-of-state tuition. And so it’s kind of like money is very tight,” Alexis said.

Last semester, between moving from Louisiana, starting graduate school and then being out of work during Hurricane Harvey, Alexis sometimes had to skip meals to pay bills, or he made sure he had healthy snacks to keep him going. In fact, 20 percent of students at TWU have experienced food insecurity. That’s almost as much as the national average. A recent study found that over a third of U.S. college students went hungry over the last year. It all means the stereotype of the poor college student surviving on Ramen noodles isn’t a joke for a growing number of young people.
And community colleges and universities like Texas Woman’s have started to offer a new kind of scholarship — for food — together with the Houston Food Bank.

Deb Unruh, assistant director of student life at TWU, surveyed students in 2016. Their response: “They were cutting back on the size of meals, they were skipping meals altogether, they weren’t eating as much food as they thought they should, and that money was running out at the end of the month, so they just couldn’t buy food,” she recounted. Unruh wasn’t totally surprised. For a while, she’d noticed students scarfing down snacks at the student life center, where they ate very quickly and ate a lot.

Laura Isensee: Deb Unruh surveyed students at TWU in 2016 after she noticed students regularly scarfed down snacks at the student life center – as if it was their main meal. Unruh discovered 20 percent of students at TWU’s Houston campus in the Texas Medical Center experienced food insecurity, not knowing where their next meal would come from.

It all led to this partnership with the Houston Food Bank. Carolyn Moore, a professor in nutrition and food sciences at TWU, helped make the connection with the Houston Food Bank. She also funded — with some of her own money — a renovation to house the new student market, adding new refrigerators and a freezer to keep produce fresh. Since the market opened in January, about 80 students have received groceries twice a month, for as long as they stay in school.

“The reason that we call it a food scholarship is because we’re looking to tie this to outcomes,” said Harry Hadland with the Houston Food Bank. “It’s not just, ‘Hey, here’s some food, go be well with your life.’ It’s, ‘Here’s some food, let this help you maintain your way through with whatever program you’re pursuing,’” Hadland said.
Some say rising tuition and housing costs mean more students resort to these programs. But it’s a complicated issue and there could be other factors.

Laura Isensee: Carolyn Moore, professor of nutrition and food science at TWU, donated over $10,000 of her own money to build the student market at TWU. She’s advising graduate student Torrey Alexis on his master’s project monitoring how the food scholarships affect students’ nutrition. They both volunteer to help organize food for students on distribution day.

Still, it’s prompted the Houston Food Bank to expand its food scholarships. Hadland said they have student markets at six colleges so far, including San Jacinto Community College and the University of Houston-Downtown, and the nonprofit will open a ninth student market in Baytown at Lee College in the fall. Together, about 1,000 students at Houston higher education institutions are on these food scholarships.

At Texas Woman’s University, both administrators and students said that the food scholarships have made a difference. Unruh said that students seem more confident, and that fills her with gratitude. “I mean, goodness! What a gift of humanity one to another, honest to goodness,” she said.

Alexis hopes his master’s project proves that his peers get more calories and better nutrition because of this program. He’ll share the research with the Houston Food Bank. They won’t be able to tell if it improves students’ academics, but Alexis said his own stress is already way down. “I don’t really have to worry about food as much now. I have so much cereal at my house right now, it’s ridiculous,” he said. That means he can focus on his work at a local hospital and on class, so he can graduate with his master’s in May 2019.

Paul McCartney almost played Ross’ father-in-law on Friends

Beatles star Paul McCartney almost guest starred on Friends, having been offered the role of Ross’ father-in-law. Emmy-nominated casting director Leslie Litt, who worked on the NBC hit series during most of its run, revealed that McCartney, now 72, could have appeared in the season four finale of the show as David Schwimmer’s on-screen father-in-law, but he turned it down, reported Huffington Post.

“I went through his manager and gave him all the details. One day, someone in the office brought me a faxed letter written to me by Paul himself! He thanked me for my interest and said how flattered he was, but it was a very busy time for him,” Litt said.

If the British musician had agreed to do it, he would have appeared in the two-part season four finale, which aired in 1998. In that outing, Ross married Emily (Helen Baxendale) in London, though he accidentally said Rachel’s (Jennifer Aniston) name instead of his bride’s at the altar.

Generative Adversarial Networks: Generate images using Keras [GAN Tutorial]

You might have worked with the popular MNIST dataset before – but in this article, we will be generating new MNIST-like images with a Keras GAN. It can take a very long time to train a GAN; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico.

The network architecture that we will be using here has been found and optimized by many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations, called Keras GAN, served as the basis of the code we use here.

Loading the MNIST dataset

The MNIST dataset consists of 60,000 hand-drawn numbers, 0 to 9. Keras provides us with a built-in loader that splits it into 50,000 training images and 10,000 test images. We will use the following code to load the dataset:

```python
from keras.datasets import mnist

def load_data():
    (X_train, _), (_, _) = mnist.load_data()
    # Rescale pixel values from [0, 255] to [-1, 1] to match the
    # generator's tanh output range
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=3)
    return X_train
```

As you probably noticed, we’re not returning any of the labels or the testing dataset. We’re only going to use the training dataset. The labels aren’t needed because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator

The generator uses a few new layers that we will talk about in this section.
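One of those new layers performs upsampling. As a hypothetical NumPy-only warm-up (an assumed illustration, not part of the book's code), the doubling it performs can be mimicked by repeating values along the spatial axes:

```python
import numpy as np

# Hypothetical sketch of upsampling by value repetition: every value is
# repeated along the row and column axes, doubling both spatial
# dimensions; the channel axis is untouched.
def upsample2x(t):
    return np.repeat(np.repeat(t, 2, axis=1), 2, axis=2)

x = np.zeros((1, 7, 7, 128))   # shape produced by the Dense + Reshape pair
x = upsample2x(x)              # -> (1, 14, 14, 128)
x = upsample2x(x)              # -> (1, 28, 28, 128)
print(x.shape)                 # (1, 28, 28, 128)
```

A final convolution with a single filter then collapses the 128 channels to one, giving a 28 x 28 x 1 grayscale image.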
First, take a chance to skim through the following code:

```python
def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)
    model = Model(input, out)
    print("-- Generator --")
    model.summary()
    return model
```

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that’s very deep but less tall and wide. Here we will do the opposite. We’ll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we’ll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic is a little off-putting and can seem awkward at first, but after a few painful hours you will get the hang of it!

Building the discriminator

The discriminator is really, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about.
We will use the following code to build the discriminator:

```python
def build_discriminator(img_shape):
    input = Input(img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(input, out)
    print("-- Discriminator --")
    model.summary()
    return model
```

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor is now 8 x 8 x 64. More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model

Now that we’ve assembled both the generator and the discriminator, we need to assemble a third model: the stack of both models together, which we can use for training the generator given the discriminator's loss. To do that we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

```python
discriminator = build_discriminator(img_shape=(28, 28, 1))
generator = build_generator()

z = Input(shape=(100,))
img = generator(z)
discriminator.trainable = False
real = discriminator(img)
combined = Model(z, real)
```

Notice that we’re setting the discriminator’s trainable attribute to False before building the model.
This means that for this model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately.

Now that all the models are built, they need to be compiled, as shown in the following code:

```python
gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])
generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)
```

If you’ll notice, we’re creating two custom Adam optimizers. This is because many times we will want to change the learning rate for only the discriminator or the generator, slowing one or the other down so that we end up with a stable GAN where neither overpowers the other. You’ll also notice that we’re using beta_1 = 0.5. This is a recommendation from the original DCGAN paper that we’ve carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was also found in the original DCGAN paper.

The training loop

We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into mini-batches and training for us. Unfortunately, because we need to perform separate updates for the discriminator and the stacked model for a single batch, we’re going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it’s perhaps a little more work, it does admittedly leave me feeling nostalgic.
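Before the full loop, the batch bookkeeping can be sketched in isolation. The dataset size and batch_size below are assumed example values for illustration, not values fixed by the book:

```python
# Assumed example values: 50,000 training rows (as in this excerpt's
# MNIST split) and a common batch size of 32
num_examples = 50000
batch_size = 32

num_batches = int(num_examples / float(batch_size))  # drops any partial batch
half_batch = int(batch_size / 2)                     # half real, half fake

print(num_batches, half_batch)  # 1562 16
```

Each discriminator update will see half_batch real images and half_batch generated ones, so one pass over num_batches batches touches roughly half the real dataset.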
The following code illustrates the training technique:

```python
num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
half_batch = int(batch_size / 2)

for epoch in range(epochs + 1):
    for batch in range(num_batches):
        # noise images for the batch
        noise = np.random.normal(0, 1, (half_batch, 100))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))
        # real images for the batch
        idx = np.random.randint(0, X_train.shape[0], half_batch)
        real_images = X_train[idx]
        real_labels = np.ones((half_batch, 1))
        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        # Train the generator
        noise = np.random.normal(0, 1, (batch_size, 100))
        g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
        # Plot the progress
        print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
        if batch % 50 == 0:
            save_imgs(generator, epoch, batch)
```

There is a lot going on here, to be sure. As before, let’s break it down block by block. First, let’s see the code to generate the noise vectors:

```python
noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))
```

This code generates a matrix of noise vectors (the z of the stacked model) and sends it to the generator. It gets a set of generated images back, which we’re calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images. Note that the shape here is half_batch x 28 x 28 x 1. The half_batch is exactly what you think it is: we’re creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next.
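The shapes of this fake half of the batch can be verified without Keras. This is an assumed NumPy-only check (half_batch of 16 is an illustrative value; the 100 matches the generator's 100-dimensional noise input):

```python
import numpy as np

# Assumed half_batch of 16 (i.e. a batch_size of 32)
half_batch = 16

# Noise vectors for the generator and the matching "fake" labels of 0
noise = np.random.normal(0, 1, (half_batch, 100))
fake_labels = np.zeros((half_batch, 1))

print(noise.shape, fake_labels.shape)  # (16, 100) (16, 1)
```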
To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))

Yes, we are sampling with replacement in this case. It does work out, but it's probably not the best way to implement mini-batch training. It is, however, probably the easiest and most common. Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s. Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren't using soft labels here. That's because we want to keep things as easy as they can be to understand, and luckily the network doesn't require them in this case. We will use the following code to train the discriminator:

# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

Notice that here we're using the discriminator's train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model's previous state. Also, notice that we're making the updates for the real images and the fake images separately. This is advice given in the GAN hacks Git repository we previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images come from radically different distributions, batch normalization will cause problems with training if we put both sets of data in the same update. Now that the discriminator has been updated, it's time to update the generator.
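As an aside, the soft labels mentioned above usually refer to one-sided label smoothing: training the discriminator on real labels slightly below 1.0 so it does not become overconfident. The following is a sketch of what that would look like (the batch size is a placeholder, and this is not code from this chapter):

```python
import numpy as np

half_batch = 16  # placeholder batch size for illustration

# One-sided label smoothing: real targets are pulled down from 1.0 to 0.9,
# while fake targets stay at exactly 0 (hence "one-sided").
real_labels = np.ones((half_batch, 1)) * 0.9
fake_labels = np.zeros((half_batch, 1))
```

Swapping these labels into the two train_on_batch() calls above is all that smoothing would require; everything else in the loop stays the same.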
This is done indirectly by updating the combined stack, as shown in the following code:

noise = np.random.normal(0, 1, (batch_size, 100))
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use a label of 1 because we want to backpropagate the error between a real image and the generated image. Lastly, the training loop reports the discriminator and generator loss at each epoch/batch, and then, every 50 batches of every epoch, we use save_imgs to generate example images and save them to disk, as shown in the following code:

print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
      (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))
if batch % 50 == 0:
    save_imgs(generator, epoch, batch)

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

def save_imgs(generator, epoch, batch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    gen_imgs = 0.5 * gen_imgs + 0.5  # rescale from [-1, 1] to [0, 1]
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
    plt.close()

It uses only the generator, creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.

Performing model evaluation

Good is somewhat subjective when you're building a deep neural network to create images. Let's take a look at a few examples of the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.
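Before we look at those examples, a quick sanity check on the rescaling line in save_imgs. The generator's output lies in [-1, 1], while imshow expects float images in [0, 1], so the line 0.5 * gen_imgs + 0.5 maps one range onto the other. A standalone check:

```python
import numpy as np

# Extreme and midpoint values a [-1, 1] generator output could take
samples = np.array([-1.0, 0.0, 1.0])

# The rescaling used in save_imgs: [-1, 1] -> [0, 1]
rescaled = 0.5 * samples + 0.5  # -> [0.0, 0.5, 1.0]
```

The endpoints land exactly on 0 and 1, so no clipping is needed before plotting.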
Here's the network at the very first batch of the very first epoch. Clearly, the generator doesn't really know anything about generating MNIST at this point; it's just noise, as shown in the following image:

But just 50 batches in, something is happening, as you can see from the following image:

And after 200 batches of epoch 0, we can almost see numbers, as you can see from the following image:

And here's our generator after one full epoch. These generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but our GAN has clearly worked: the computer is generating some pretty convincing MNIST digits.

Thus, we see the power of GANs in action when it comes to image generation using the Keras library. If you found the above article to be useful, make sure you check out our book Deep Learning Quick Reference, for more such interesting coverage of popular deep learning concepts and their practical implementation.

Airline defendants to pay US$95 million in 9/11 case

Thursday, November 23, 2017

NEW YORK — Insurers for American Airlines, United Airlines and other aviation defendants have agreed to pay US$95 million to settle claims that security lapses led planes to be hijacked in the Sept. 11 attacks.

The settlement was described in papers filed Tuesday in Manhattan federal court. Developers of the new World Trade Center buildings had once demanded $3.5 billion from aviation-related companies after hijacked planes destroyed three skyscrapers among five demolished buildings on Sept. 11, 2001.

Lawyers said the agreement signed last week resulted from "extensive, arms-length negotiations" by lawyers "who worked diligently for months." The agreement also said the parties make no admissions or concessions with respect to liability for the attacks.

"The court's approval of the settlement agreement will bring to a close this hard-fought 13-year litigation on terms agreeable to the parties," the lawyers said.

Attorney Desmond T. Barry Jr., who submitted the papers to U.S. District Judge Alvin K. Hellerstein, declined to comment Wednesday.

Developer Larry Silverstein and World Trade Center Properties have collected more than $5 billion from other defendants through lawsuits.
The money has aided the reconstruction of buildings on the 16-acre lower Manhattan site. Earlier settlements included $135 million paid to a financial services firm that lost two-thirds of its employees.

American Airlines spokesman Matt Miller said the company is pleased to have reached a settlement. "We will never forget that terrible day and its lasting impact, including the tragic loss of 23 members of the American Airlines family," said Miller. United Airlines declined to comment.

Bud Perrone, a spokesman for Silverstein, said the company is "pleased to have finally reached a resolution to this piece of post-9/11 litigation."

Source: The Associated Press